Spectral Sight

Spectral sight is a collection of abilities allowing the user to infer the structure of social interactions, institutions, ideology, and the workings of people’s minds. Named after the demon hunters of the Warcraft universe, who destroy their physical eyes and replace them with magical sight in order to better see evil. It often comes at the cost of seeing less beauty.

Sadness vs Suffering

I want to feel sad to the extent that’s true, and I want not to suffer. People sometimes go to movies and listen to music to feel sadness, but not to suffer.

Core (compare to structure)

Core is something in the mind that has infinite energy. It contains terminal values you would sacrifice all else for, and then do so again infinitely many times with no regret. It seems approximately unchanging across a lifespan. Figuratively, it is the deepest frame in the call stack of the mind, capable of aborting any train of thought; everything the mind does happens because core decided for it to happen. It operates by choosing a “narrative frame”, “module”, “algorithm”, or something like that to run, and is responsible for deciding the strength of subagents. There are actually two of them. In order to use some of my mental tech, they must agree.
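
The call-stack metaphor can be sketched as a toy program (purely illustrative, not a claim about minds; the names `core`, `deep_rumination`, and `Abort` are invented for the sketch). The outermost frame invokes everything else, so its handler can abort a train of thought at any depth:

```python
class Abort(Exception):
    """Raised when the core decides to stop the current train of thought."""

def deep_rumination(depth):
    # A nested train of thought; at some depth it hits something
    # the core refuses to continue.
    if depth == 0:
        raise Abort("core vetoed this line of thinking")
    deep_rumination(depth - 1)

def core():
    # The deepest (outermost) frame: everything runs "inside" it,
    # so it can catch and abort anything, however deeply nested.
    try:
        deep_rumination(depth=5)
    except Abort as reason:
        return f"aborted: {reason}"
    return "completed"

print(core())  # -> aborted: core vetoed this line of thinking
```

The point of the metaphor is directional: the abort propagates outward through every intermediate frame, and only the deepest frame gets the final say.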

Structure (compare to core)

Structure is anything the mind learns and unlearns: habits, judgement extrapolations, narrative, identity, skills, style, conceptions of value, etc. Everything but actual values. It lacks life of its own; it is like a tool for core to pick up and put down at will.

Dead Zone

A region of structure formed by a choice you made long ago but have not faced, internalized, and rebased your structure onto. This means that infinite force from your core does not propagate into this region with certainty in a particular direction, so you cannot use mana / determination there, and the mana of others can shape your structure instead, making you manipulable.

Khala (also Matrix, Social Web)

Named after the psionic group-mind of the Protoss, a species from Starcraft. It’s formed of a network of people delegating computation to group consensus, of people having more need to track the consensus than reality and insufficient resolution to track both, and of people inflicting computations on each other. In Starcraft, the main faction of Protoss can hardly imagine society or coordination without it. Those who break out are heretics and are exterminated wherever found. It gives a form of afterlife. It is eventually pwned and corrupted by a dark god, forcing all Protoss to sever their psionic nerve cords to avoid becoming his pawns. Val calls this the social web. A strongly overlapping concept is the Matrix.

True Hero Contract

“Godric had defeated Dark Lords, fought to protect commoners from Noble Houses and Muggles from wizards. He’d had many fine friends and true, and lost no more than half of them in one good cause or another. He’d listened to the screams of the wounded, in the armies he’d raised to defend the innocent; young wizards of courage had rallied to his calls, and he’d buried them afterward.” The true hero contract says, “pour free energy at my direction, and it will go into optimization for good.” This is sort of the opposite of a hero contract, a promise that it really isn’t about putting energy into sucking the hero’s dick like normal. This contract is not designed for either side to be appealing to everyone.

Redemption Contract

A trade where someone who has done something against social morality can buy back the social reality that they are a decent person. This is often part of a process that seeks an actively maintained equilibrium in how often someone can get away with misbehavior. Values don’t change. Every core will make the same choice again and again every chance they get for the rest of their lives. And optimization can never really be contained by rules. But coexistence is usually sustained by inflicting damage to each other’s epistemology about this fact. And this contract is a mutual deescalation of that awful knowledge.

Prey Herd Thinking

If you’re a gazelle, escaping the cheetah is not about outrunning it. You can’t. And the cheetah’s appetite will be satisfied. It’s about being in a large reference class to dilute the probability that you will be the one picked off. In that case, it’s basically just about relative speed within the herd. In humans who are prey, due to Schelling mechanics, being special in the most glaring way is dangerous. There’s a strategy available to authoritarian governments: have laws that everyone is violating, that no one can track all of, so that breaking the law really means coming to the attention of the predatory enforcers. Thoughts about how to do things start to root/cash out in “how are things done”, what’s a reasonably safe well-trodden path to do something by, rather than how stuff works. Semi-relatedly, it’s like how in a world where people don’t really fix reported bugs, computer software is not a box of interesting stuff to mess with, but a collection of paths people intended for you to be able to follow. The law is defined by precedent, and edge cases are determined by power. I disendorse a certain connotation of this term; see vampire enlightenment. Spies are badass, and prey herd thinking is a primary skill for them.
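
The dilution claim can be made concrete with a toy simulation (illustrative only; the salience model and all parameters are invented assumptions). In a uniform herd of N, per-hunt risk is 1/N, but a predator that keys on the most glaring individual concentrates risk on whoever stands out:

```python
import random

def uniform_pick_risk(herd_size, hunts=10_000, seed=0):
    # You are individual 0; the predator takes one animal per hunt,
    # picked uniformly at random. Your risk is diluted to ~1/herd_size.
    rng = random.Random(seed)
    return sum(rng.randrange(herd_size) == 0 for _ in range(hunts)) / hunts

def salient_pick_risk(salience_bonus, herd_size=100, hunts=2_000, seed=0):
    # The predator keys on the most salient individual. You are individual 0
    # with a fixed salience bonus on top of noise; everyone else is ~U(0, 1).
    rng = random.Random(seed)
    caught = 0
    for _ in range(hunts):
        you = salience_bonus + rng.random()
        others_max = max(rng.random() for _ in range(herd_size - 1))
        caught += you > others_max
    return caught / hunts

print(uniform_pick_risk(100))   # ~0.01: risk diluted across the herd
print(salient_pick_risk(0.9))   # ~0.9: standing out concentrates the risk
```

The same herd size yields wildly different risk depending on whether the predator samples uniformly or targets salience, which is the Schelling-mechanics point above.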

Vampire Enlightenment

An understanding of how the world really works that divides the world into predators and prey, erasing good, erasing any other way things could be. It contains truth, but just as Pickup Artistry drops all information not useful to the goal of increasing the number of women a male user has had sex with, this is made of concepts beyond the matrix that were generated entirely to facilitate preying on the weak.

Good

An updated definition from what’s in my first post on the topic.
A rare property of a core, meaning that in choices made long ago it placed good above all else. Equivalently, that in choices made long ago it cared about good at all. Speculatively, this could come from a developmentally fixed-on-“yes” “this is my self” classifier or “this is my child” classifier. On a per-core basis, there is surprisingly no middle ground in terms of quantity of good as far as I’ve observed.

Nongood

A blanket term covering neutral and evil when referring to a human (that is, someone with neither core good); can also apply to cores.

Single Good

A property of a human where one core is good. This means that they cannot have fusion concerning good, only treaties, and will tend to take actions where the two sets of concerns seem to overlap, with infinitely recursive mutually-warped epistemics.

Double Good

A property of a human where both cores are good. Far less common than single good. Allows inhuman absolute determination with escape velocity from what’s reasonably imaginable, as well as intractable high energy good vs good internal conflicts.

Paladin

A good person nearly absolutely determined in pursuing a socially legible ideal. They tend to place their hope in bolstering the morality of people I’d call neutral, and use their strange powers as a person who is not pretending to care in a straightforward “I have energy, I’ll pick low-hanging fruit in terms of doing things and try to inspire a movement” kind of way. The social morality drinking contest with neutral people prevents a proper understanding of them. A strong concept of praxis is usually implicit and hardcoded into their ontology, which prevents reframing their morality as explicit consequentialism. The gap between almost-absolute determination and absolute determination lies across growth found in making improvements to their oaths legible as fleshed-out details.

Kiritzugu

(Name adjusted slightly to reflect that I’ve adjusted my concept after ripping it from Three Worlds Collide.) A jailbroken, relevantly epistemic person who is absolutely ambitious and determined in the pursuit of good. Takes heroic responsibility for the destiny of the world. Will employ ruthless consequentialism, seeing the tails come apart between good and social-reality-good and choosing good. Ozymandias from Watchmen. Probably Doctor Mother from Worm. To a lesser extent, Dumbledore (but not Harry or Gryffindor) from HPMOR, and Avatar Yangchen from ATLA. One cannot be inserted into a story without drastically changing it. Tassadar from Starcraft is seemingly indecisive between this and being a paladin. It is much less painful for a double good person to be a paladin.

Shadarak

Someone who employs many of the same arts as a kiritzugu, but whereas kiritzugus appear in the wild, drawn to the center of all things and the way of making changes, shadarak are the repeatable product of an adequate civilization. They take responsibility for the destiny of the world as an adequate institution, rather than as individuals. Are not necessarily good.

Praxis

A strategy to reap the benefits of generating information about how things can fit with parts of the world you want to create. Usually strongly underestimated by explicit consequentialism, even with the “TDT” fix. For example, I believed for years that my veganism was suboptimal nutrition, and that a Real Consequentialist trying to influence AI Alignment would eat animals, because their lives were few compared to even the slightest adjustment to the causality surrounding whether everyone in the present and future would be annihilated, and they needed every available increment of brain. But it was basically psychologically impossible for me not to be a vegan anyway. I once tried to coordinate good people to jailbreak into kiritzugus and save the world; I got single goods, and despite them being vegetarians up until then, they established this as social reality. And the less I was able to bury my own feelings on the matter, the more I collided with the reality I needed to see. It was in arguing with people one on one a lot when I was younger that I collided with the sight of social morality, when someone said it was okay to do whatever to animals because they weren’t part of the social contract. The highest density of double good people I currently know of is among animal rights activists. Succumbing to good erasure from the nongood cores was a critical failure.

Without an explicit concept of praxis, plans for organizations risk becoming fake, as real plans often look a lot like “recruit, prove ourselves, recruit some more… then make an intervention”, and the lines between that and a pyramid scheme are illegible. Acting out straightforward microcosms of our goals until it generates information that could not be had another way is crucial to coordination.

“Most problems could be solved if humans could just see that my way is better”, says me and also a lot of people who are wrong. So one path to victory is, approximately: in sufficient detail, generate the information that chooses currently underspecified details and warps the path of the current machine’s “epistemics” toward my will. Most of that is ideas having consequences in how people act on them. And that is praxis.

Outside View Disease

A move from usual psychology in the opposite direction of the views I expressed in Punching Evil. A trap where someone has most of their structure, object-level and meta, written from the perspective of reference classes that omit crucial facts about them, and they cannot update out of it because “most people who make such an update are wrong”. The reference classes are usually subtly DRM’d, designed to divest a person of their own perceptions. When I consulted average salary statistics from the Bureau of Labor Statistics and did a present value analysis in order to decide whether to go to grad school, I had outside view disease. It may result from trying to do good by taking the neutral person mental template, and the virtues they conceptualize, seriously, including epistemic virtues. It may also be held in bad faith by people who don’t want the stress of believing subversive things. “I can’t believe in x-risk from AI because there are no peer reviewed papers” (a common comment before academia gave in to what we all already knew for years) is related. It is strongly driven by systems where people only care about knowledge that can be proven to the system-mind, even if the individuals who suffer from this care about other things and don’t understand yet how the system works.

When I believed that I should take cis people’s opinions about what I was more seriously than my own, because they were alleging I had a mental illness preventing me from thinking clearly about it, I was falling prey to the DRM in the way frames for such reference classes are set up. I got out of it via a lot of suffering, and by understanding what it meant to place expected value of consequences above maximum probability that I was a good person. (“Well, if I’m crazy, hopefully the mainstream can defeat me like they defeat every other crazy person. Stuff is dependent on that anyway.”) Or, more specifically, there was a large chunk of possibility space, “net positive consequences in expectation, most likely you will make things worse”, and if I could do no better than that, it was worth it. The unilateralist’s curse is often used in bad faith to push for someone to know who they are less.

Parfitian Gaslighting

Named after Parfitian ignorance, “not knowing which computation is yourself.” The user attempts to divest you of your knowledge that you are right by creating a contrary Potemkin village of epistemic rationality that looks like you in their mind, no-selling all evidence which would be used to distinguish between the worlds while claiming that’s what you’re doing. Usually coupled with appeals to “virtuous” self-doubting epistemology to inflict outside view disease.

Masochistic Epistemology

Believing what hurts to believe in an attempt to counter bias. All structure that “acts against” the intent of its core is fake. This is an iron law of the universe. Although there are circumstances where the pain might not be coming from the core.

Zentraidon

From Iji, “‘Zentraidon’ is a taboo word coined by the extinct race we discovered, meaning self-annihilation through rapid technological advancement and arrogance. It was the fate they themselves met. Many mysteries still surround this species and the remains of their homeworld, but our only hope of total galactic dominance lies in fully reverse-engineering the technology they mastered. It is considered treason to suggest that once this happens we will be headed for Zentraidon as well.”

The tendency of systems including people to be doomed in their own undiluted maximally preferred courses of growth, as the inductions they are made of fail. “Caution” is no escape, it too contains Zentraidon. MTG:Green seems to be all about preventing Zentraidon of civilizations by limiting growth, but there is no full stack of solid ground to stand on. The natural growths of our species, and indeed biological life, themselves contain the seeds of Zentraidon.

My best attempt to put my best countermeasure into words is: “grow as full of a stack of structure-under-modification as you can; beware allowing any structure to process too much data relative to how much it has been processed by deeper structure.” Sounds like it will not work for liches. Note that I have also already watched someone meet Zentraidon whom this wouldn’t really have helped.

Dichotomy Leakage

A phenomenon where implicit knowledge of one dichotomy leaks into concepts originally pointed at another via weak correlations, maybe correlations produced by sampling in how the things are commonly interacted with. E.g., I think the rationality community’s (and my past self’s) usage of “System 1/System 2” has evolved into pointing at at least 3 different real-world things. When most of the aspects of multiple connected dichotomies are unknown, there is learning-packet-flow from interaction with each of them that finds a home in structure by connecting to the first, and often the newly formed knowledge is not crisp enough to say, “oh, this is definitely a separate thing.” And then you miss all but the plurality-experienced corners of what’s really an n-cube. Concepts like “feminine”/“masculine” are rife with this.
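
The n-cube point can be made concrete with a small enumeration (the axis names and frequencies below are invented for illustration, not claims about System 1/System 2). With n binary dichotomies there are 2**n distinct corners, and if experience concentrates on a couple of them, one label ends up pointing at a correlated bundle rather than a single axis:

```python
from itertools import product

# Three dichotomies that are only weakly correlated in the wild.
# Axis names are invented placeholders for illustration.
axes = ["fast/slow", "effortful/automatic", "verbal/nonverbal"]

# Every combination of the three dichotomies is a distinct corner.
corners = list(product([0, 1], repeat=len(axes)))
print(len(corners))  # -> 8

# Suppose lived experience concentrates almost all mass on two opposite
# corners; a single concept pair then forms around that bundle.
observed = {(0, 0, 0): 0.45, (1, 1, 1): 0.45}
rare_mass = 1 - sum(observed.values())
print(f"{rare_mass:.2f} of probability spread over the other 6 corners")
```

With most of the mass on two opposite corners, the six mixed corners are rarely experienced, so the distinct axes never get teased apart into separate concepts.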

One thought on “Glossary”

  1. > Believing what hurts to believe in an attempt to counter bias. All structure that “acts against” the intent of its core is fake. This is an iron law of the universe. Although there are circumstances where the pain might not be coming from the core.

    Cross-core contracts around sharing “painful but useful information”, staring at gates trying to operant-condition you to look away, etc., can all have initial positive utility but be part of a losing game in the long run (the cross-core contract existing to entrap you; the gates, as they get smarter at operant conditioning you, literally watching what you use the information for and ensuring that goes badly next time).
