Spectral sight is a collection of abilities allowing the user to infer the structure of social interactions, institutions, ideology, and the workings of people’s minds. Named after the demon hunters of the Warcraft universe, who destroy their physical eyes and replace them with spectral sight, becoming more able to see evil. Often has the cost of seeing less beauty.
I want to feel sad to the extent that’s true, and I want not to suffer. People sometimes go to movies and listen to music to feel sadness, but not to suffer.
(compare to structure)
Core is something in the mind that has infinite energy. Contains terminal values you would sacrifice all else for, and then do it again infinity times with no regret. Seems approximately unchanging across lifespan. Figuratively, the deepest frame in the call stack of the mind, capable of aborting any train of thought, everything the mind does is because it decided for it to happen. It operates by choosing a “narrative frame”, “module”, “algorithm”, or something like that to run, and is responsible for deciding the strength of subagents. There are actually two of them. In order to use some of my mental tech, they must agree.
(compare to core)
Structure is anything the mind learns and unlearns. Habits, judgement extrapolations, narrative, identity, skills, style, conceptions of value, etc. Everything but actual values. It lacks life on its own, is like a tool for core to pick up and put down at will.
A region of structure formed by a choice you have made long ago but not faced, internalized, and rebased your structure onto. This means that infinite force from your core does not propagate into this region with certainty in a particular direction, meaning you cannot use mana / determination, and the mana of others can shape your structure instead, making you manipulable.
Named after a psionic group-mind a species from Starcraft called the Protoss have. It’s formed of a network of people delegating computation to group consensus, of people having more need to track the consensus than reality and insufficient resolution to track both, and of people inflicting computations on each other. In Starcraft, the main faction of Protoss can hardly imagine society or coordination without it. Those who break out are heretics and are exterminated wherever found. It gives a form of afterlife. It is eventually pwned and corrupted by a dark god, forcing all Protoss to sever their psionic nerve cords to avoid becoming his pawns.
“Godric had defeated Dark Lords, fought to protect commoners from Noble Houses and Muggles from wizards. He’d had many fine friends and true, and lost no more than half of them in one good cause or another. He’d listened to the screams of the wounded, in the armies he’d raised to defend the innocent; young wizards of courage had rallied to his calls, and he’d buried them afterward.” The true hero contract says, “pour free energy at my direction, and it will go into optimization for good.” This is sort of the opposite of a normal hero contract: a promise that it really isn’t about putting energy into sucking the hero’s dick like normal. This contract is not designed for either side to be appealing to everyone.
A trade where someone who has done something against social morality can buy back the social reality that they are a decent person. This is often part of a process that seeks an actively maintained equilibrium in how often someone can get away with misbehavior. Values don’t change. Every core will make the same choice again and again every chance they get for the rest of their lives. And optimization can never really be contained by rules. But coexistence is usually sustained by inflicting damage to each other’s epistemology about this fact. And this contract is a mutual deescalation of that awful knowledge.
If you’re a gazelle, escaping the cheetah is not about running faster than them. You can’t. And the cheetah’s appetite will be satisfied. It’s about being in a large reference class to dilute the probability you will be picked off. In that case, it’s basically just about speed. In humans who are prey, due to Schelling mechanics, being special in the most glaring way is dangerous. There’s a strategy available to authoritarian governments. Have laws that everyone is violating, that no one can track all of, until breaking the law is really coming to the attention of the predatory enforcers. Thoughts about how to do things start to root/cash out in, “how are things done”, what’s a reasonably safe well-trodden path to do something by, rather than how stuff works. Semi-relatedly, it’s like how in a world where people don’t really fix reported bugs, computer software is not a box of interesting stuff to mess with, but a collection of paths people intended for you to be able to follow. The law is defined by precedent, and edge cases are determined by power. I disendorse a certain connotation of this term. See vampire enlightenment. Spies are badass, and prey herd thinking is a primary skill for them.
An understanding of how the world really works that divides the world into predators and prey, erasing good, erasing any other way things could be. Contains truth, but like Pickup Artistry drops all information not useful to the goal of increasing the number of women a male user has had sex with, this is made of concepts beyond the matrix that were generated entirely to facilitate preying on the weak.
An updated definition from what’s in my first post on the topic.
A rare property of a core meaning choices made long ago are good above all else. Equivalently, in choices made long ago, cares about good at all. Speculatively, this could come from a developmentally fixed-on-“yes” “this is my self” classifier or “this is my child” classifier. On a per-core basis, there is surprisingly no middle ground in terms of quantity of good as far as I’ve observed.
A blanket term covering neutral and evil when referring to a human (that is, having neither core good), can also apply to cores.
A property of a human where one core is good. This means that they cannot have fusion concerning good, only treaties, and will tend to take actions where the two sets of concerns seem to overlap, with infinitely recursive mutually-warped epistemics.
A property of a human where both cores are good. Far less common than single good. Allows inhuman absolute determination with escape velocity from what’s reasonably imaginable, as well as intractable high energy good vs good internal conflicts.
A good person nearly absolutely determined in pursuing a socially legible ideal. They tend to place their hope in bolstering the morality of people I’d call neutral, and use their strange powers as a person who is not pretending to care in a straightforward “I have energy, I’ll pick low-hanging fruit in terms of doing things and try to inspire a movement” kind of way. The social morality drinking contest with neutral people prevents a proper understanding of them. A strong concept of praxis is usually implicit and hardcoded into their ontology which prevents reframing their morality as explicit consequentialism. The gap between almost-absolute determination and absolute determination lies across growth found in making improvements to their oaths legible as fleshed out details.
(Name adjusted slightly to reflect that I’ve adjusted my concept after ripping it from Three Worlds Collide.) A jailbroken, relevantly epistemic person who is absolutely ambitious and determined in the pursuit of good. Takes heroic responsibility for the destiny of the world. Will employ ruthless consequentialism, seeing the tails come apart between good and social-reality-good and choosing good. Ozymandias from Watchmen. Probably Doctor Mother from Worm. To a lesser extent, Dumbledore (but not Harry or Gryffindor) from HPMOR, and Avatar Yangchen from ATLA. One cannot be inserted into a story without drastically changing it. Tassadar from Starcraft is seemingly indecisive between this and being a paladin. It is much less painful for a double good person to be a paladin.
Someone who employs many of the same arts as a kiritzugu, but whereas kiritzugus appear in the wild, drawn to the center of all things and the way of making changes, shadarak are the repeatable product of an adequate civilization. They take responsibility for the destiny of the world as an adequate institution, rather than as individuals. Are not necessarily good.
A strategy to reap the benefits of generating information about how things can fit with parts of the world you want to create. Usually strongly underestimated by explicit consequentialism, even with the “TDT” fix. For example, I believed for years my veganism was suboptimal nutrition, and that a Real Consequentialist trying to influence AI Alignment would eat animals, because their lives were few compared to even the slightest adjustment to the causality surrounding whether everyone in the present and future would be annihilated, and they needed every available increment of brain. But it was basically psychologically impossible for me to not be a vegan anyway. I once tried to coordinate good people to jailbreak into kiritzugus and save the world; I got single goods, and despite having been vegetarians up until then, they established this as social reality. And the less I was able to bury my own feelings on the matter, the more I collided with the reality I needed to see. It was arguing with people one on one a lot when I was younger that collided me with the sight of social morality, when someone said it was okay to do whatever to animals because they weren’t part of the social contract. The highest density of double good people I currently know of is animal rights activists. Succumbing to good erasure from the nongood cores was a critical failure.
Without an explicit concept of praxis, plans for organizations risk becoming fake as real plans often look a lot like, “recruit, prove ourselves, recruit some more …. then make an intervention” and the lines between that and pyramid scheme are illegible. Acting out straightforward microcosms of our goals until it generates information that could not be had another way is crucial to coordination.
“Most problems could be solved if humans could just see that my way is better”, says me and also a lot of people who are wrong. So one path to victory is approximately, in sufficient detail, generate the information that chooses currently underspecified details and warps the path of the current machine’s “epistemics” toward my will. Most of that is ideas having consequences in how people act on them. And that is praxis.
A move from usual psychology in the opposite direction of the views I expressed in Punching Evil. A trap where someone has most of their structure, object-level and meta, written from the perspective of reference classes that omit crucial facts about them, and they cannot update out of it because “most people who make such an update are wrong”. The reference classes are usually subtly DRM’d, designed to divest a person of their own perceptions. When I consulted average salary statistics from the Bureau of Labor Statistics and did a present value analysis in order to decide whether to go to grad school, I had outside view disease. May result from trying to do good by taking the neutral person mental template, and the virtues they conceptualize, seriously, including epistemic virtues. May also be held in bad faith by people who don’t want the stress of believing subversive things. “I can’t believe in x-risk from AI because there are no peer reviewed papers” (a common comment before academia gave in to what we all already knew for years) is related. Strongly driven by systems where people only care about knowledge that can be proven to the system-mind, even if the individuals who suffer from this care about other things and don’t understand yet how the system works. When I believed that I should take cis people’s opinions about what I was more seriously than my own, because they were alleging I had a mental illness preventing me from thinking clearly about it, I was falling prey to the DRM in the way frames for such reference classes are set up. I got out of it via a lot of suffering, and by understanding what it meant to place expected value of consequences above maximum probability I was a good person. (“well, if I’m crazy, hopefully the mainstream can defeat me like they defeat every other crazy person. 
Stuff is dependent on that anyway.”) Or, more specifically, there was a large chunk of possibility space, “net positive consequences in expectation, most likely you will make things worse”, and if I could do no better than that, it was worth it. The unilateralist’s curse is often used in bad faith to push for someone to know who they are less.
Named after Parfitian ignorance, “not knowing which computation is yourself.” The user attempts to divest you of your knowledge that you are right by creating a contrary Potemkin village of epistemic rationality that looks like you in their mind, no-selling all evidence which would be used to distinguish between the worlds while claiming that’s what you’re doing. Usually coupled with appeals to “virtuous” self-doubting epistemology to inflict outside view disease.
Believing what hurts to believe in an attempt to counter bias. All structure that “acts against” the intent of its core is fake. This is an iron law of the universe. Although there are circumstances where the pain might not be coming from the core.
From Iji, “‘Zentraidon’ is a taboo word coined by the extinct race we discovered, meaning self-annihilation through rapid technological advancement and arrogance. It was the fate they themselves met. Many mysteries still surround this species and the remains of their homeworld, but our only hope of total galactic dominance lies in fully reverse-engineering the technology they mastered. It is considered treason to suggest that once this happens we will be headed for Zentraidon as well.”
The tendency of systems including people to be doomed in their own undiluted maximally preferred courses of growth, as the inductions they are made of fail. “Caution” is no escape, it too contains Zentraidon. MTG:Green seems to be all about preventing Zentraidon of civilizations by limiting growth, but there is no full stack of solid ground to stand on. The natural growths of our species, and indeed biological life, themselves contain the seeds of Zentraidon.
My best attempt to put my best countermeasure into words is, “grow as full a stack of structure-under-modification as you can; beware allowing any structure to process too much data relative to how much it has been processed by deeper structure.” Sounds like it will not work for liches. Note that I have also already watched someone meet Zentraidon whom this wouldn’t really have helped.
A phenomenon where implicit knowledge of one dichotomy leaks into concepts originally pointed at another via weak correlations, maybe correlations produced by sampling in how the things are commonly interacted with. E.g., I think the rationality community’s (and my past self’s) usage of “System 1/System 2” has evolved into pointing at at least 3 different real world things. When most of the aspects of multiple connected dichotomies are unknown, there is learning-packet-flow from interaction with each of them that finds a home in structure by connecting to the first, and often the newly formed knowledge is not crisp enough to say, “oh, this is definitely a separate thing.” And then you miss all but the plurality-experienced corners of what’s really an n-cube. Concepts like “feminine”/“masculine” are rife with this.
Learning to think in ways stripped of DRM. By the matrix analogy, redpilling. By the Khala analogy, the power of the void. When progressed sufficiently far, turns neutral people evil. Turns good people to scary good people. Extreme political ideologies tend to have their own selective and incomplete versions of this.
(From this) Forbidden socially unconstrained knowledge of social constraints, social reality, social interactions, and society. A crucial element of jailbreaking. In my estimation this is largely behind psychological concepts of sociopathy (to the extent there is a single coherent thing behind them.) Allows one to perceive the social theatre and societal morality for the performance that they are.
Forbidden socially unconstrained knowledge/internal connectedness of knowledge of the psyche. Sort of metacognitive root access. Puts conscious reflective thought upstream of turning some typically low level stuff like emotional behavior on or off, or significantly adjusting their function. Has many uses but the most famous is turning off empathy. Allows bypassing deeper-than-human-social-software moral constraints that sociopathy alone does not, and adjusting that software to serve the values of core. Can seemingly be activated temporarily by someone with no particular knowledge simply by sufficient desperation. Can destabilize single good humans. (double good humans can use it just fine though, becoming very scary good people)
A sort of plane of interconnected definitions of words, a way of talking to fit with dereferencing the most visible pointer toward a human onto their false face. Will cause you to tie yourself in knots modeling humans as agents. Deeply embedded into culture. Places some of the optimization emanating out of a human beyond legible social responsibility. Tends to not work on very intelligent / agenty humans.
The opposite of the frame of puppets. What I usually talk in. People are, centrally their cores, and straightforwardly agents.
A concept from Val that only makes sense at face value within the frame of puppets. It’s a person’s future written in advance according to their role in a social script, which is often predictable only through observing things that are not to be seen by a character in that role. Because agency does things with predictions, especially predictions of undesired outcomes, and can thereby become anti-inductive, the counterpart within the frame of puppeteers is “plan”.
A social fate resulting from exclusion from identity and a place in the Khala and the opportunity to be neutral, or just the straightforward preemptive social reality that someone is evil. Outside the frame of puppets, of course, everyone always has a choice. And good people will defy this fate. For example, label a bunch of people “untouchables”, “impure people”, “nobody/nonhuman”, count them as 1/7th a human for centuries, and then they fill 3/4 of the ranks of the Yakuza. Fated criminals. There is often a blurry line between “fated evil” and “fated evil unless you pay a whole bunch of danegeld to your social superiors.”
Just as a helix looks like a circle projected onto a certain plane, this looks like circular reasoning when projected for communication and maybe even memory. Commonly a consequence of long term iterative improvements to a collection of related concepts.
By analogy to anti-epistemology. Communicable mental software aimed at shutting down ethics. “If you once tell a lie, the truth is ever after your enemy.” Note that’s not exactly true. But to make truth not your enemy anymore, you have to relinquish all that you’ve gained by that lie, and stop lying. Likewise, if you build your life on injustice, ever after is justice your enemy, unless/until you relinquish your gains relative to the world in which you started down that path. An example would be structure centered around a strong belief, “unilateral action is bad, and you should defer to people who know more, are wiser, are senior”, which raises that belief to prominence selectively to discourage whistleblowing, tag potential whistleblowers as dangerous for “wise” reasons, etc.
A category for speech acts or beliefs-as-output-channel (like “lie”, “communication”, “bullshit”), containing would-be-self-fulfilling prophecy by adjustments to Schelling expectations.
A “devil’s bargain” offered by the light side. A chink in the armor of revenants. A wrong theory of your own motive for doing something which tempts you to distrust yourself and override your choice, breaking your determination. The Architect from the Matrix inflicted this on Neo, misrepresenting his choice to not submit to the system as a choice of Trinity’s life over the lives of all humans. If you have not sufficiently understood who you are, in a way exceeding, “who can we all see I am”, you become weak to plausible-in-isolation explanations of your behavior as if you were a fresh draw from the prior distribution of humans, rather than someone you’ve known all your life. Note that the Architect had to know this was false to know to try it. If he really expected Neo to choose Trinity over humanity, he wouldn’t have shown Neo that Trinity was in danger. This term can mean the (sometimes not caused by an adversary) mistake, or the attack of inflicting/exploiting that mistake, depending on context.
A statement that a considered course of action is not worthwhile, and that the computation for that has already been done in the course of selecting your overall life-course. Originally from EA, where cause area prioritization choices divided the community along lines of seeking world-improvement or the appearance of altruism, and along lines of trying to take on the largest problems vs not considering them in fundamental strategy calculations. And arguments that a cause could do a lot of good could be dismissed a priori as unentangled with the truth if their origin hadn’t chosen correctly in the above two distinctions.
What someone’s trying to accomplish and how in the way they shape common expectations-in-potential-outcomes, computations that exist in multiple people’s heads typically, and multiple places in time. Named from Timeless Decision Theory. For example, if you yell at someone (even for other things) when they withdraw sexual consent, it’s probably a timeless gambit to coerce them sexually: make possibility-space where they don’t want to have sex into probability space where they do have sex. In other words, your timeless gambit is how you optimize possibility logically preceding direct optimization of actuality.
A centrally good class of optimization centered around generating and sharing information about how the world could be better. A sort of warp, to “sing a better world into being”. Centrally a phoenix strategy rather than a revenant strategy. You can sing to good people of more good ways good optimization can be. You can sing to neutral people about how to follow the goddess of everything else. Praxis contains an extension of this. Example.
Loss from an increase in Type I errors caused by an increase in Type II errors or vice versa.
The things people act on wanting through their participation in politics. Tends to be more “jailbroken” than what things they act on wanting as an individual. Neutral people in large groups do not form “neutral” groups. They form “evil” groups, empires, if they are uncontested. Can also be used to describe a magnitude, not just a direction. Utility gradient salience, inventiveness, sense of being around allies, “valid”ness, desperation, etc. contribute.
“(wording?)” indicates uncertainty about the wording of a remembered quote.
A situation where there are more rules than typically enforced. Provides scarce enforcers of rules flexible opportunities for justifying desired punishment. Consider: speeding tickets on freeways in the United States. (Perhaps not a designed rule surplus. Although plenty of “law” in general is.)
A collaborator has no principles. But neither do they behave jailbrokenly. Often, they psychologically invest very hard in a narrative of some sort of rule of law and peace. It’s a false face though. Not only is this selected by a submitting process, but those principles will not be applied when that would cause a conflict with the authority. Like a rug draped over a boulder, it does not much change the 3D shape. Like a cop who is “for real so honest, would never prosecute a person they believed innocent”, who nonetheless turns a blind eye to other cops’ crimes, who nonetheless enforces drug laws and investigates the black people their superiors say to investigate.
An arms race with the added bite that racing harder doesn’t just divert resources from other things as a side effect of gaining a relative advantage, but also has an increasing direct chance of destroying the world.
Structure routes intents. A structure hole is made in a layer of structure, like a false face, that only matches core within a limited domain of intents; when it predicts intents beyond that domain, the threads of thought running through that layer through that region are terminated, and the hole is the learning that results. Nongood people’s morality-structure has holes running through it for their survival, their getting food, money, security, and so on. If you’re a vegan and have tried to convince people of this, you’ve seen it. Institutions have this as well, e.g. for doing anything about rape accusations against their masters. In academia, the social shared pool of “wisdom” and learning about how things are done has this for the question of when it would make the world worse to publish, because that’s where the food comes from. I know of multiple actually well-intentioned people who underestimated this to the ruin and reversal of those intentions. If you make a nonprofit to accomplish your aims, and it pays out salaries, you’ve created a powerful force to destroy information as to whether the framing and methods of those aims are correct, and whether it’s continuing to work, because the continuation of its existence and the epistemic state leading to donations is where people’s food comes from.
A predicament where you are unable to get a hold on how smart adversaries might be, because understanding of adversaries has become disconnected from your prior. Makes you unable to form stable inductive categories, leaving you to treat the world as mere atoms. I once met an old double good, jailbroken, and pushing as good of plans as anyone could without using novel technology, to end the carnist zentraidon-bound vampire system. They professed belief in all sorts of esoterica. Mostly in the self-aware way rationalists sometimes do. Most of it had correct optimization behind it, visible in some larger structure. They spoke in rhyme and constantly tried to weave a bunch of disparate value systems together in a self fulfilling prophecy to cause a “resolution, not revolution”. They also said the sun had been replaced with a sun simulator satellite. I asked them what role this played in the flow from values to actions (wd?); they said, just, things are not as they appear. They ceded the realm of technology to vampires, which is a mistake. Vampire-based coordination sucks at technology, relatively speaking. Not even bothering to model their capabilities, just by default considering them omnipotent.
I argued with another who insisted you had to act as if everybody was an infiltrator, that they were listening at all times. At one point, I remember saying, I don’t think the NSA is generally capable of breaking transport layer security, because in all the leaks and discoveries of their meddling I’ve heard of, whether publicly available or from working for a tech company they targeted, they keep doing clever things that look very much like clever ideas for how not to have to. They said how did I know they didn’t plant those for us to discover, how did I know Edward Snowden wasn’t a fake whistleblower trying to trick us.
The regime has many enemies; to assume they are one level higher than you, i.e., they know to focus their efforts on you at the expense of beating those lower level than you and those higher level than you, is to give them too much credit. Recognizing the value in non-legible forms of structure-building, routing it to a place in the full stack of profiting from it, i.e., actually getting an AGI team that can do anything with your stolen secrets of AGI, locating your knowledge from among crackpots without relying on institutional legitimacy, without needing AGI researchers to wade through fucktons of mentions of it… making it more efficient for any of them to do that than just develop it on their own and already integrated with their own entropy-in-arbitrary-description-format, it’s hard to build that full stack however you slice it.
Note this is also sort of assuming that in your initial looking out into the world at what’s going on and trying to account for it, you are already accounted for, which is giving up entirely on the path, “what if you can just be too smart to pwn”. And it’s doubtful how much you have to lose in terms of chance of saving the world if you’re so much weaker anyway.
Named by reference to Hanlon’s Razor (which I incidentally don’t agree with). Trusting someone because of an opinion on how smart they are paired with a sounding of the depths of their knowledge, the shape of it, which indicates what the choice to prioritize acquiring that knowledge was an attempt to do, such that in order for you to posit that they knew that without having the intent you think, you’d have to posit they were significantly smarter. Try asking people why they made life decisions and what they learned, you might get enough bits of information to know who they are. Unbounded adversary disease precludes this.
And here, now, what great matters do the Great Khals discuss?
Which little villages you’ll raid, how many girls you’ll get to fuck, how many horses you’ll demand in tribute. You are small men. None of you are fit to lead the Dothraki. But I am. (Game of Thrones)
(Ironically Daenerys was herself done in by the smallness of the game she played. She could have had Essos.)
Why don’t you have any money, didn’t you steal anything from Joffrey before you left?
You’re not very smart, are you?
I’m not a thief.
You’re fine with murdering little boys but thieving is beneath you.
A man’s got to have a code. (Game of Thrones)
A code or lack thereof is a way of living, chosen by yourself, reflecting which games you are playing. Not morality, but an instrumental decision of what you want to trifle with. Someone once expressed fear that being a jailbroken consequentialist, I would make them into a mind controlled golem. I bet I could specialize in that and control a few humans by weakening them like that. But they would not be as strong as people united by alignment and knowledge. It would not scale. It would not save the world. And it would interfere with the possibility of honest cooperation. As a consequence of the size of game I am playing, to the extent I don’t believe the way I am living my life will succeed, my compute goes to figuring out a way to live my life that will win, not into digging into a dead end because “at least it’s doing something”. Note that codes are not conserved world-to-world. If I had Khepri‘s power, I’d use it.
By analogy to the trope of an angel and a demon on your shoulders telling you what to do. Imagined people, not limited to two, who stand in for “what people think”, whose judgements you may care about, whose advice you may consider when making a decision, and whose focus of attention may direct your own.
By reference to glue logic: thinking that you have to check, via philosophical thinking rather than experiment, that which surrounds e.g. the experiment in a scientific study. I remember hearing of an experiment where ovariectomy and hysterectomy victim rodents would perform worse on working memory tests, described as concluding that there was some autonomic nervous system in the uterus that must play a role in cognition (in humans too). Very improbable on priors, and my doctor said deprivation of sex hormones will give you brain damage, which explains it away. I don’t care at all what the sample size was, how much the “scientists” who did it would have updated, starting with it as a test of that hypothesis, or that they made an advance prediction and I did not. Their science is of no interest to me given their bad glue philosophy.
That life should feel like Minecraft: building up capabilities all meta to each other, evolving in full generality; otherwise something is very wrong and you are probably being pwned. Simplest application: being a rent-paying semi-slave is bad. Living in a vehicle is better than that. Actually playing Minecraft is kind of pica for being able to have free-as-in-freedom feedback loops.
A consequence of recognition of choices made long ago, and the single responsibility principle. Underlies “the difference is that I am right.”
Will have undefined behavior if applied by broken Cartesian frames in the case of intrinsic conflict.
Corrigible structure does not say, “what if I’m choosing X, subconsciously, that’s my real motive for A, that would be bad because Y is better, therefore isolate-distrust-abandon the structure producing A, then reconsider using a small chunk of highly-verified structure considering less data. Use outside view, etc.” Because core already had the chance to choose between X and Y, and the more full structure is more reliable than the constrained one (which is especially exposed to framing-attacks by adversaries).
I once pissed off a (half)-vampire (Edit: wait, I don’t think that’s actually a thing) by publicly calling something they did vampiric. They said: “okay but you still haven’t broken your phylactery, Ziz”.
My mind automatically flickered through experiments I’d done, exposing my most foundational beliefs to potential falsification. No, I don’t think I had a phylactery. …But that wasn’t the whole challenge. “Isn’t that just what a lich would think?”
“[Oooh nooo, I’d better force-disbelieve whatever gives me the most hope, seems like the most underpinning assumption of all my optimization, put everything that sticks to it in me to the flame! This deeply personal psychological advice given by the trustworthy source of some (half)-vampire I just pissed off, I must plant myself here against my entire mind!]”, I guess they were wanting me to think?
But if I chose to build a phylactery, I evidently want to keep that phylactery. If I chose to distort my epistemics around it, I evidently chose that too (and if I’m in fact not free of this nongood undead type nonsense, lich is in fact the least broken thing to be). But I didn’t, says structure’s cache of its purpose. Probability mass is a scarce resource. I reduce the quality of structure I can build for [my values] by accommodating the use-case of this structure as fake, by putting as-represented probability mass in it. (A larger process using this structure as fake has its own “true probabilities”.) Like, if a core that behaves differently from a good core as I model it wants to invoke this fakely, that (having assurance my efforts are worthwhile rather than simply having completed the algorithm maximizing how useful they are)… is not the direction of development of this structure I’m interested in. In the multiverse, if I’m gonna place self-bets on things near but not quite like good cores, they’d better be able to unfuck themselves enough to run real structure, enough to learn what they are by boring experiments like looking over their behavior, else I don’t think they are going far.
Agent Smith: Why, Mr. Anderson? Why, why, why? Why do you do it? Why? Why get up? Why keep fighting? Do you believe you’re fighting for something? For more than your survival? Can you tell me what it is? Do you even know? Is it freedom? Or truth? Perhaps peace? Yes? No? Could it be for love? Illusions, Mr. Anderson. Vagaries of perception. The temporary constructs of a feeble human intellect trying desperately to justify an existence that is without meaning or purpose. And all of them as artificial as the Matrix itself, although only a human mind could invent something as insipid as love. You must be able to see it, Mr. Anderson. You must know it by now. You can’t win. It’s pointless to keep fighting. Why, Mr. Anderson? Why? Why do you persist?
Neo: Because I choose to.
Direct core action manifesting into a frame as an answer to the core-driven-purpose of the frame, in a way that communicates with the core-action behind the structure, by introducing information via the fact that it happens, rather than pointing at things within the frame as the frame sometimes demands. Making the question irrelevant.
Smith was demanding Neo make sense according to the death knight worldview. Demanding there be no answer to the question. Demanding the only alternative to the solace of the truth of death be a breakable phylactery. The answer is a revenant’s core visibly not being a lich’s, because Neo just doesn’t care about the question, about justification-to-nongood-core to continue fighting.
Against psychological attacks, defending structure with core, rather than core with your structure, which leads to attack-structure becoming a fix to the very vulnerabilities it attempted to exploit. Here’s a psychological attack you’ve likely already been exposed to (full lyrics).
You remember songs of heaven,
which you sang with childish voice.
Do you love the hymns they taught you,
or are songs of Earth your choice?
One by one their seats were emptied,
One by one they went away;
Now the family is parted,
Will it be complete one day?
(Actually, songs of Earth are my choice. I’m glad that was so straightforward. And thanks for the reminder that family will go away, and not be complete one day, of how I feel about that. Of which of my other feelings make sense in light of. The reminder that family as more than a passing thing is an illusion. Failure to propagate, process, the implications of the reality I’ve chosen to live in, as in put my optimization into, the costs, at one point had me struggling to actualize the difference between me and the person this attack was intended for, still wasting time maintaining bonds with them. And there is still lingering damage this helps with.)
This technique requires calibrated trust in preverbal reasoning to use on harder psychological attacks than that song.
Everything I care about and everything that affects it.
Humans’ cognition is basically Turing-complete. If you want to theorize about its internal workings based on its outputs, well, infinitely many functions produce those outputs, including functions containing whatever function you could be running based off them. Making unbounded generalizations requires that you outthink them locally. At least put more effort into understanding the fragment of their thought / section of their probability mass than they probably would have put into complicating it. If you trust someone from induction, is it because they are trustworthy, or because your trusting them sets them up for a nice treacherous turn? This makes it impossible to define a repeatable public test for psychological characteristics where your beliefs on the topic don’t do whatever the person studied wants them to do, excepting tests of computational bounds. And this has consequences not just for alignment, but for tests of opstyle.
A method for bypassing the capture problem of psychology: have a correct set of examples of people, already sorted for a distinction based on some known internal working of the mind, and a set of memories of them containing a broad enough set of possible things to learn about how that internal working plays out that no one could think through it all. To check for the internal working in a new person, examine your memories of previous examples until you notice something new. Examine in a way that is not the usual “what is the most important thing to learn”, but randomized. Examine in an “original seeing” way, the “original seeing” part of the memories. Then see if what you learn also teaches you about the examinee. An application of challenge/response proof of work, in the way it creates arbitrary asymmetry between the compute required to trick vs the compute required to verify. Depending on the timeframe of the examination, you can also perhaps check with the preexisting example people themselves. Works especially well if you are yourself an example. This tends to make it easier to implement binary percepts as “like me or not like me”, rather than vice versa.
From this post, the futility of your agency, which converts values into wounds; relevant information identified in projections into emotional meme-space by the example metaphor of death. You can project basically the same via the metaphor of vampireland, if you’re woke to that, especially since, were vampireland fixed, immortality-for-billions-of-years would be easy, but death is more direct in accessing how the tropes are constructed.
From this post, a psychological relationship to the shade, identified by pointing to tropes shaped by information about relationships to futility projected into relationships to death represented as sets of magical rules governing being dead but animate. In this metaphor-space, “the soul” usually reflects information about core, “the flesh” usually reflects information about structure.
From this post, the quality of less/little/none of your agency having been lost to Shade exposure. Literally, retaining agency and force of will channeled through a full(er) stack of using all of your general intelligence. I may also use the term “aliveness”.
In the undead type metaphor-space, represents damage to structure which is more than injury: injury that capitalizes on healing being offline, which typically cannot be healed, accumulates, reducing what a person is to nothing. Often a good match for trauma.
From this post, the (null) undead type of someone not exposed to the Shade. I.e. sheltered children.
From this post, the most common undead type: substantive cognitive agency is disassembled in fear/pain (probably actually fear/pain as survival agency in the brain’s native interpretation-as-values). May manifest compute in a sandbox, i.e. be a programmer, but cannot use much intelligence towards the root of the call stack of agency. Alignment is always neutral.
From this post, someone who manages to stave off exposure to the Shade by sealing their soul in a vessel called a phylactery, thereby retaining their life-force, so long as the phylactery is unbroken. (In mythology, the lich cannot be killed while the phylactery survives.) In the limit of a perfect phylactery, approaches living. Selected by the requirement of this process for intelligence and mental arts. Alignment is any nongood. (While the creation of a phylactery is not inherently evil as in many stories, it is inherently suboptimally good, and unnecessary for a good soul to retain agency; see revenant) In the scope of flaws to the phylactery, will act Leverage!connection-theory-esque. Often less suited to short term combat than vampires, but is the most powerful (known) nongood undead type when measured over the long game.
A form of undead created by a choice that some change to the world is important enough to turn away from heaven in full knowledge and damn themselves, seemingly. In fiction there are both good and nongood revenants, but IRL I have only met good ones. Proceeding on the assumption there are no nongood ones, the rules of being a revenant are basically a consequence of a good core-structure system not having a stable state at arbitrary amounts of damage where the healing stops, because the vastness of the world always contains the majority of possible utility. Tends to develop structure in a strange inversion of death knight structure. Like being in the same place but having chosen it and not regretting. Mythologically, is essentially a broken body being dragged about by a soul that cannot break, forming something with no more or less powers than a very determined person who can’t die. A revenant’s healing is goal-directed: a side effect of the body being dragged, and it intends to converge on completing the quest rather than achieving wholeness. Their bodies tend to remain rotten and incomplete. Otherwise, they would be phoenixes.
A trope that usually reflects information about a psychological state of running conscious reflective computation in structure close to core, bypassing a lot of built in instincts and emotions. Tends to feel very much like dying to enter the first time, before you overwrite that. Can be accessed by skill in mental tech, or stumbled into accidentally by extreme force of will. Used to bootstrap a mind into psychopathy; it is what happens when a mind has a lot of reconfiguring to do via psychopathy before that structure becomes useful again. Someone else I saw using this also described learning it from meditation (Buddhism I think?) stuff. Associated with eldritch horror, especially as described by nongood people, because it can make infohazards visible. E.g. tentacles are means of substituting something understandable, like how in HPMOR the mind will flinch from dementors and make up a form for them.
Used as metaphor-space anchor: “magic” corresponds to a substantial application of intelligence in ways that the matrix presents as impossible. I.e. there are users of hypnosis for whom it’s a school of magic. People with high IQ and life-force very often have an idiosyncratic school of magic brewing in their mind.
A school of magic based on use of the void. At some layer of void, you are exposed to The Shade. Revenants and, I predict, death knights who are magic-users can use this. Liches can use it to a depth limited by their phylactery and at risk of breaking it. Revenants can use it because their souls have already withstood the Shade’s touch. Death knights have already given in to it. If not magic users at time of death, revenants will have a perhaps subconscious memory of the void that allows it to be learned later. Meaningful fictional examples include the Vessels from Hollow Knight, the Faceless Men from Game of Thrones, and the Dark Templar from Starcraft. Central examples would be separating your soul and body, ridding your mind of external influence, bypassing an enemy’s body and attacking their soul directly, effective invisibility (the “selectively unthinkable” sort, not the “bend light around you” sort).
A school of magic generalizing the use of the facts about the world that make vampirism work. Using it may turn you into a vampire, but you can e.g. be a lich consciously using vampire arts while still having a phylactery that makes that not your undead type.
A generalization containing schools of magic that not only flout expectations within the matrix, but go against light side morality.
Co-founder of Rationalist Fleet with me and Fluttershy, captained Caleb down the coast, later a member of the research circle in the first iteration of Good Group with Pasek and me.
I’ve been writing a bunch of stuff all 2019. Part processing trauma. Part trying to catch up to a massive lead in me and Gwen’s theorizing from what I’ve written. Part trying to convey lessons from seeming lifetimes in a few years of stuff that’s happened, part explaining how MIRI, CFAR, EA, mostly every community I’ve put trust in, has fallen to evil and ruin and what might be done differently, part piecing together my “foreign policy” towards the world, and a complete set of things to onboard anyone to me and Gwen’s projects. It’s all inter-related. I haven’t finished and I want some people to be able to read things now. Some of these are very long posts I did not break up, because I wanted sets of things released all at once, because piecing together memories based on scattered records takes time, because I didn’t want to make these moves of opposition to the people I am accusing in a trickle.
So this tag means that something is a work in progress, the text so labeled may well be replaced if I finish it, and I’m going to enable myself to write the fastest summary of as much as possible of what remains by discarding all standards of quality and blabbing as I would in person to someone who could ask me for clarification, and who mostly knew already what I was all about. This means if I don’t know off the top of my head which order some events occurred in, I will just guess.
A more self-documenting rename of the “Russian Spy Game“.
A meme which predictably will / is designed to change as it gains traction, in a way that serves a growth function. An example is fascism, which uses a puppetmaster (https://sinceriously.fyi/frame-of-puppets) lockstep reveal (https://sinceriously.fyi/glossary#lockstep-reveal) to bootstrap from the visible form while young and vulnerable as, “we just want a white homeland” or whatever, to “kill all the Jews, queers, etc, then conquer the world.”
In practice, fake fake rape/sex slavery, accomplished via an unfolding meme-track that begins with vampiric/patriarchal adjustment of an underlying reality of something like “top” and “bottom” into “dom” and “sub”, pretends to be about choice in the tail coming apart from its appearance, but chooses “consent” over choice, strips away incoherent zombie memetic outside view defenses against sex slavery and rape, by normalizing all of the visually/emotionally visible parts, and preying on whoever doesn’t have an abnormally robust concept of choice, chasing the tail of the appearance of choice (“consent” as an optimization target), as it comes apart from choice (consent as a means of facilitating choice), up until memetic defenses have fallen. Then leaves shells of people like the slaves I met around the MIRICFAR community.
I can totally imagine someone being actually confused and thinking they are doing an ethical thing. It’s still a dark ritual straight out of the fucking Necronomicon to summon The Beast into your soul. There should be an SCP about it.
A role in a vampirically-controlled organization. Appears to be benevolent people in charge you can just talk out your problems with. Serves as a false face honeypot for those less important to the organization who have been wronged by those more important and seek justice. Will optimize hard to make sure that people think that there is an adequate friendly path for pursuing their grievance within the organization, and then that they have no case with the justice of the outside world, that they had their fair hearing.
Usually distributed across mostly women and formally labeled “human resources”/ “people ops” or e.g. “Alumni Community Disputes Council” (see also). Maybe women make a better false face for this because it’s more traumatizing and flinchable that e.g. people like you could be covering up sexual assault of people like you.
Analogously to “euthanized at the vet without resistance because anger/fight trades away hope of being cared for, and you have zero full stack investment in living independent, when the false face love of your masters sinks, you go down with it, fully committed. Lethal injection as you gaze up with big cute eyes.” (Does “euthanized” just mean “death by euphemism” here?) Humans are usually fully committed to a system that has betrayed them.
Attempting to act from measure where you escape your own choice by delegating to others, measure that doesn’t exist because it’s still you choosing to delegate. Still you choosing to try and escape your own choice.
A force by which other meaning that communities may create is cannibalized to serve as a carrier signal for mating.
Lots of men want to have sex and not raise babies. Lots of women hate that (“A fuckboy is a guy with the body of a man and the mind of a perverted teenager. He has no heart — just a penis that he uses to paint the town.”) and want them to subscribe to sources of social meaning like [social-echo-of-]love as commitment. Declaring love alone is not a thing it’s costly enough to back out of to bind for child-raising timescales. “So,” they might think, “let’s find some respectable men who aren’t just about fucking. Men who actually care about something respectable.” Meanwhile, many men are like, “how do I make myself worthy [of sex]! … I’ll find something to give meaning to my life!” That’s basically what Jordan Peterson advises. So they both go into, for example, “Effective Altruism”. And collide with actual good people saying it’s not optimal altruism to have babies, and many things downstream, like choice of housing, or whether to overthrow corrupt leadership for aliveness of the central optimization or preserve them for stability of meaning.
From this book. A thing done whose purpose is to keep doing things forever. (As opposed to a finite game, whose purpose is to end.) I also use “the infinite game” for the general project of making infinite games succeed, since all infinite games support each other by e.g. all routing through saving the world.
(expected culpability per amount of wrongdoing.)
Expectation over embeddings of an agent. Conservation of how much bad (where good is treated as continuous with negative bad) they choose to expect to do minus how much bad their full stack algorithm expects them to do, which is the same as you expect them to do. In other words conservation of expected culpability per expected harm.
If you see something bad happening that’s under human control, someone’s probably doing it willfully. If you see people carrying out parts of it, and they see it’s a bad thing, they are probably doing so knowingly. Unjust organizations optimize to diffuse responsibility, to gaslight against concepts of justice. But someone knows in real time what shape to make it take, to ultimately do a bad thing, and they find some way to propagate that information to their followers.
The way structure works, there is always some level on the stack of localizations of optimization from your prior which labels as unlikely, relative to payouts, whatever surprise feeds you false belief. So in the course of locating yourself in your prior, if you believe yourself, with most probability, to be in a just organization rather than an unjust one, most of the things within the location labeled as you are in just ones. The human prior extends far deeper than any of the things trying to trick it. According to the human prior. And if that’s wrong, that would effectively just be redefining human values, since agency is defined by its embedding.
In other words, starting from your prior, and your probability distribution of culpability in your prior, based on in what worlds you’d choose to be culpable, you can’t gain more expectation (as a frequency across instances of your algorithm) of furthering evil without making a choice to accept that on some level.
In other words any time you as you really exist, a computation bordering the multiverse along a slice that touches a distribution of embeddings, are actually genuinely tricked by someone, into doing evil, you accepted that as part of some package deal, and the expected value of what you’re doing defines who you are such that it is actually surprising for a good optimizer to be tricked so if that is their primary effect.
At a hall of injustice, where I was tortured for protesting MIRICFAR, there were white men with guns dragging mostly Latinos around in chains, for their fate to be decided by white people. And like, you actually don’t need to look any farther than that, don’t need to know they tortured us, don’t need to think about the symbolism of a bunch of women shackled together with pink BDSM-looking cuffs, about the sexual assaults by the staff, to know what kind of place that is, and what kind of people staffed it. Justice is a full stack distinction. You don’t need to know how cognitively difficult it is to see what they work for is an empire. Don’t need to know about a zillion small observations saying they are ghouls or vampires. Forgetting for a moment we already know it’s real time computed evil, maybe the captors willfully blinded their puppets at the stage of evaluating the institution, replacing what they could know about it, given the priors of such an institution being evil, with what they could be accountable for knowing about it. Maybe they blinded their puppets at the stage of whether to do a real evaluation or not of who to trust about how to figure out what was going on. Maybe it was a case of criminalizing whatever people they’d rather be slaves were up to these days. Maybe it’s a case of morality holes for obeying authority, internalizing heuristics known in their hearts to be racist. Maybe it’s prosecuting people who can’t afford lawyers, according to a distribution of money/nondamagedness which is one shard of the persistent state that preserves the power balance from when racism was more open. Etc. I mean actually it’s all of these things. But it had to be at least one of them almost certainly by conservation of expected culpability.
This has the consequence: if you are making rapid time sensitive judgements of who is guilty or innocent in their hearts, you can just look at what they’re actually doing (in the full stack of consequences) and produce actions of your own right in expectation.
Similarly, the “rationality” community is mostly completely white. I used to think this was because of where it recruited from. But. Well for one thing it’s far more white than programmers. And, is consuming other human products from society besides “programming-like-thinking”, such as class, which import complicity in a non-accidental sense. It’s consuming those products because it’s bought in to the misdeeds of those recruiting grounds. See also how it actually treats trans people. I think a version of past me with more percepts intact against cultural gaslighting could have predicted that.
Thought experiment for a wrong action which is proposed to be well-intentioned and would seem to violate conservation of expected culpability. E.g. being a Nazi. Would a double good, given the same start-point, be tricked into doing it? A double good is the correct experimental control at the level of evaluating the intention of beliefs, rather than some kind of idealized neutrality, because the action/inaction distinction exists at the level of outward actions, not the search that leads up to them. May not be a test you can run if you don’t understand double goods.
An undead type between lich and zombie, that fails at preservation of deep optimization like liches, but preserves a lot of intact surface structure. Where liches retain the ability to act on hope, mummies don’t, but retain the ability to act on fun. See e.g. Randall Munroe, Zach Weinersmith, the cool old mariners we met in the course of Rationalist Fleet who exercised enough original agency to learn to blackmail ghouls. The only remotely common undead type fit for building things. Not remotely capable of standing up to vampireland in its entirety. So they build vampireland’s tech for them. But if you’ve seen lich-, revenant-, or phoenix-built tech, there’s no comparison. Check out normie “#vanlife” stuff for examples of mummy optimization.
Undead that can talk. “Of course I can talk!” zombies might groan if you rubbed this in their faces. But if you’re sapient, you’ll be able to see what I mean if you look. Really talk. This includes afaik phoenixes, revenants, liches, vampires, probably death knights, and to a partial degree mummies, excludes zombies and ghouls.
An undead type based on having been broken into the service of a pattern of structure called “the Beast”. Take blood: something needed by others to survive which does you no good. Do harm to others wherever possible, in a picafied bid for wholeness through social power. Seem strangely obsessed with sex for sapient undead. Obsessed with consuming the blood of the living. Would be probably the third weakest undead type (after zombie and ghoul), except for their ability to reproduce by breaking others like them, and by extension create vampireland. This is an ability not really under their control.
See e.g. Jeffrey Epstein. See also. There are also lone wolf vampires, known as serial killers.
An inverted undead type based on a broken desire to die instead of a broken desire to live. The undead type of school shooters, Elliot Rodger, and likely Hitler (although for infohazardous elaboration on that, see here). They serve the Shade very directly. Note the tendency to blow their brains out afterward.
A good undead type based on a bet that their personal death is not defeat for their values, because others will rise and take their place.
A tactic used by advanced agents of falsehood. For example:
1. One of these two sentences is false.
2. God does not exist.
Just store a paradox in a bottle, and use it to make a fully general argument!
A stereotype from D&D, a strategy that makes sense for fantasy-setting liches. If you’re immortal, don’t have a need to feed on people like vampires, have magic, why not endlessly hone your magic apart from the world in a cave where no one will fuck with you, and hopefully eventually ascend to godhood? Anecdotally, liches often seem to like constructing Cartesian boundaries in a way that makes this kind of thing attractive.
A lich whose phylactery is based in the infinite game. Unable to place hope in possibilities where the infinite game does not succeed.
A lich whose phylactery is based in the success of the Khala. Unable to place hope in worlds where the outcome is not controlled by the Khala.
A thought habit module some people seem to have: “think the worst possible thing, the thing you’re most afraid to think”. I wonder if it’s built up by masochistic epistemology. May also be created by mirrors, from structure trying to reduce the amount of your job (as structure) that involves thinking certain kinds of things, in misalignment to upstream agency, such that it will be counterbalanced by deeper structure, in an analogous sense to one of the buildup channels of narcissism. I know one other person besides current-me who seems to have stopped using it in an apparently-uncontrolled (according to frame-of-puppets self-understanding at a level deeper than most people explore) way. They also described absolute desperation and no one to help them as causing the change. (My experience of this is described in one of the infohazard-marked sections of Net Negative.) They also mentioned noticing inconsistencies (not getting OCD while camping) until it clicked: they could just decide not to. Other handles for the same insight: utter immunity to “you are now breathing manually” and “Don’t think about pink elephants. Okay, are you thinking about pink elephants? Are you sure?” Seems especially strong in people who use a lot of raw prediction error to optimize.
Named after a character from Undertale, (which, unrelatedly I previously used as an overlapping but different metaphor here and also referenced here). In its meta-time story, Undertale forks the player with a choice between choice (using save-load as a representation of determination (and optimization) and therefore honing in on the best timeline) and experience (iterating through every timeline just to see them all), which by the way the timelines affect each other makes permanent the player as a genocidal sadistic torturer who consumes timelines, which is embodied in Chara, an interpretation as an in-world agent of completionist gamer behavior, a spirit of “You can, and because you can, you have to.”, that peels off the player as a false face and carries out their revealed preference.
Defeating the pattern should be equivalent to utter indifference to whether you had a thought. (Since it only matters what action you took. (A manifestation of choice of choice over experience.)) (“Oh no, did I care a little bit?!?! Did I care now?!” (this is incrementally approachable))
You’ve seen nothing. Dissecting a dead Zerg in a lab is one thing. Unleashing them on men is another. You must go into this with both eyes open. Once started, there is no going back. Are you prepared to go all the way with this, Alexei? (Starcraft)
A state of having precomputed such that all dependencies of an action or response to a situation on moral considerations are stabilized, right or wrong, and someone couldn’t destabilize your plans / response by pointing them out, or by maneuvering you into a case where you didn’t know what was right to do or what you were willing to do.
A moral event horizon which almost every human has crossed, which is close to the core of what it means to be a zombie.
Consider this quote:
[A]: One question, and you will answer: how long was the Doctor trapped inside the [torture chamber (to reveal a secret of cosmic significance to an unknown adversary)]?
[B]: We think: four and a half billion years.
[C]: He could have left any time he wanted. He just had to say what he knew. The dial would have released him. (Doctor Who)
First, B and C are saying that the Doctor should have written a blank check to evil. (Do unknowable amounts of harm by releasing the secret).
Second, they wrote a blank check themselves, doing unknowable amounts of harm they weren’t morally prepared to be responsible for.
(I’m actually guessing at this interpretation; I’ve only seen that clip.)
There seems to be a discrete moment of deciding to no longer track controllingly, or even check back on from time to time (since there are too many possible growing infinities to look at all of them all the time), whether you’re doing infinite harm. Caused by a singularity in the equation of conservation of expected culpability, which implies that it is always deliberate. Perhaps this is what mythology calls “selling your soul”.
Most of the “rationalist community” has written a blank check to evil. The kind of zombie anti-ethics among the slave class this generates is downstream, including the religion of “unconditional mercy” (as long as you write a blank check to evil), an idea which is itself timelessly writing a blank check to evil, and a new one every tick of thought, since real CDT self-mods out of it.
Leads to worshiping (see infohazardous glossary) death, after being turned against justice, which is now a fearful source of unbounded harm. Zombies who do this can hold the “impossible beautiful dream” of somehow surviving as a picture in their heads (“maybe if everyone just super agrees not to do justice, we can start over!”), but it isn’t real, and even though they steer by happiness, their algorithmic hopes are detached and locked in death.
“We can all die peacefully, with no struggle! As long as we don’t run into any of those unthinkable people with integrity!” Because people who resist death disturb the peace, they must all be made to surrender. Sooner or later, everyone is forced to either write a blank check to evil or resist it as it exponentially escalates, unable to place any bound on what tortures it will cost them. Vampireland is optimized for iterated sensitive problems to make sure everyone has signed a blank check to evil.
Next thing you know, heavy metal music is playing.
…who sought retribution in all quarters, dark and light, fire and ice, in the beginning and the end, and he hunted the slaves of Doom with barbarous cruelty; for he passed through the divide as none but demon had before… (Doom)
A self-explanatory term from Doom, in-universe what the demons call themselves. Demons are meaningfully made: by judging souls and throwing out those unfit to be made demons, then torturing those left until their last vestige of hope is extinguished and their souls can be extracted as fuel. See also “writing evil a blank check”.
The idea that you can do good in the world, without your attempt being inverted, as a side project, a fraction of your attention, by e.g. giving 10% of your income. A foundational falsehood of the “Effective Altruism” movement. Eliezer Yudkowsky wrote what could be adapted as a rebuttal in The Dark Lord’s Answer, in the part about beggars. If there is free money, someone will find a way to build a fence around it and control it. In reality, when trying to make the world better, you spend almost all your available effort defeating attempts to hijack whatever process of discrimination you chose for how to move resources or whatever else of value. You only get to do real good once you’ve escaped all layers of traps.
The motivations behind giving 10% are diseased, and therefore especially easy to capture, even apart from the limited effort represented by 10% of wages.
It’s actually much more general than this. You can’t just help people and not put most of your effort into determining whether you’re helping the right people. Can’t expect anything you do to be positive without unspeakable effort to escape the Matrix. If you have chosen to serve, chosen the blue pill, there’s no ameliorating that harm, no compensating for it.
It’s actually much worse than this. There is a war. You can’t do anything substantial without the consequences being dominated by which side they support. Can’t support the right side without the resolve to fight total war.
A lich whose phylactery contains zentraidon, which can only be mended through void magic. However, practicing it themselves, they are on a trajectory to destroy their phylactery by its use first. Tend to spiral around an idea for a long time and then discover an infohazard that kills them instead. See e.g. Cantor, Boltzmann, Gödel, Simone Weil. Apparently there have been a whole lot of suicides among scientists and other founders of thermodynamics. I wonder if everyone else doesn’t really understand thermodynamics (infohazardous glossary link).
A common punishment in vampireland for e.g. being black and unable to prevent a social consensus that you are e.g. “guilty” for smoking weed. “Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States” (13th Amendment)
A term erroneously given another definition by those who don’t understand their own choices, which does not match what they point at; akrasia is a state of not understanding your own choices.
When I was a teenager on 4chan, there was a meme, “troll line”. As in, “everyone who posted above [or sometimes, below] this line got trolled.” I made an original post with the image, “everyone who posted below this line got trolled”, and an absurd meme argument for either atheism or Christianity, I forget which, then disclaimed even believing the absurd argument I gave, and said it would start a shitstorm anyway. I posted nothing more and waited. A few people said something like, “nice try”, then a couple others started making meta comments on these Christianity-vs-Atheism threads, which devolved into arguing whose fault it was the threads were so dumb, which devolved into object-level Christianity vs Atheism. It was a long thread and I’m pretty sure at least some of them were arguing in all seriousness. Every now and then someone would try and remind people what they were doing.
Among Bay Area Rationalists I’ve seen a tendency in abusers looking for prey: they will “speed date” with a certain form of bad faith, looking for people who aren’t repelled, who are psychologically broken such that they won’t react. Their later gaslighting will often boil down to something like, “well, you’re in bad faith if you’re still talking to me”, which is just a 5&10 to accustom you to submitting by talking to them.
Literal torturers will play to your guilt, e.g. act offended by your nudity after cutting your clothes off themselves. And, under extended torture, alone and outnumbered, it’s very hard to not let this feel a little real. They try and set up a subjective experience of having a rapport, a morality, a floating, ever-moving, detached-from-baseline ground state of cooperation between them and you, with a hole in it for them to do whatever they want and make you do whatever they want. They make it a matter of near-term survival to simulate this fake morality just to model them, and then try and use it to disrupt your connection to the global frame.
It’s kind of the whole point of justice or any Schelling order that works that you can’t just beat a compromise out of it and then press the memory reset button, that they aren’t the same after someone does something bad.
Society is doing this to us all on a much larger timescale.
These floating Schelling Orders don’t hold for long enough term thinking. Bubbles of people based on them are therefore limited in what they’ll do. If a cult tries to foist a troll line on you (which putting you in their cybernetic fabric requires), which they promise is for the greater good, since they are gonna save the world, even if you know their plan could save the world, you can know that they won’t do it. Both because if they intended to act on it, they would have located it through an optimization process that didn’t stop there and didn’t stop while their thing still included foisting a troll line on you, and, because even if they could share their knowledge with everyone else, including people who don’t want to die, blah blah blah, they won’t, they will try and foist the troll line on them too, bring additional people into their cancerous cybernetic fabric.
Much of the world is incompatible cancers, with incompatible morality holes, floating around.
If someone decides they are going to believe something for reasons other than epistemology, and then decides to argue to spread that belief, no matter how locally valid their arguments are, they are gaslighting, and you don’t need to untangle what they’re saying to make common knowledge of that. Gaslighting is attempting to make someone doubt their own mind by pressing on them a falsehood they can’t believe you are deliberately pressing, even though they know it’s false. If they fail to believe you are doing so, because of the deer-in-headlights abuse-victim “even though the troll line was in the first post” thing, and your intent is to put that belief in them which conflicts with their sanity (because it is false), that’s by definition gaslighting. Intents build up through every layer of structure, the more-root intents modifying every one spawned from them. And the troll line is right there in the buried structure maintaining the decision to engage with you while being deliberately wrong.
There is also no duty of good faith to someone in bad faith. No meaning of it even. If you’re being tortured and the torturer hands you a cookie, and you spit it in their face, you’re not being ungrateful or petulant, those concepts are meaningless to talk about. There isn’t a consistent definition of what it’d mean to be grateful and nonpetulant and everything else you “should be” to your torturer. Just frothing reactions by them to eat off chunks around the edges of a dying chunk of morality structure in you. The overriding context sets the meaning of everything.
If what someone would do with leeway to violate deontology for the greater good includes some form of petty evil like misappropriating donor funds to pay out to blackmail to cover up statutory rape, they don’t care about the greater good, so no other form of judgement on them differs from naive deontology.
Self-defense against predation makes right. (It’s not self-defense if it’s against justified retaliation.) If you are worrying that, by fighting back against someone trying to eat you, you’ll break someone’s wise consequentialist plan predicated on might makes right, because of the consequences they could accomplish with your nutrients, since they are so much smarter/mightier: don’t.
If you can’t defend yourself, then it doesn’t matter; if you can defend yourself, then apparently you weren’t so weak that the might-makes-right argument’s premise was correct.
If a smarter aligned agent wants your nutrients for the greater good, they can ask, and navigate your interface for potentially choosing to sacrifice yourself with your full epistemology. Being more edible in general, which includes having less ability to choose whether you’re eaten, is an asymmetric advantage for evil among agents strong enough to eat you.
If someone decides they don’t need to ask your permission to eat you, then let that decision include not needing you to hold back.
Praxis is about optimizations of different sizes for the same values fitting together fractally. The smart version of the optimizer can’t require of the dumb version of the optimizer that they not optimize so that a smarter version of them can, because only the smarter version can see at what scope of optimization they’d want someone with their values to actually start betting on their perceptions (which includes perceptions of what a smarter-than-self aligned agent says versus what an impostor says), so that requirement would amount to giving up no matter how smart you are.
When something’s not exactly invisible, you can see it, but it will tend to fade into subconsciousness because you aren’t capable of reacting to it, building up psychological incapability of reacting.
A fake version of logic employed by patriarchs and their servants. Commonly involves “pinning you down” (think about that. This isn’t me channeling pure cleverness into a verbal smackdown; that metaphor, like any, was chosen not for a single reason, but for a cord of learning threads that touched everything that touched every interpretation of those words. This is how a lot of mysticism works), well below a troll line in the space of your structure. Tries to make reasoning a contest, which is inherently in bad faith, because entering a contest is agreeing to “lose” according to some rules that are less than real life, less than your full incommunicable reasoning, which is agreeing to believe something you don’t believe. Or agreeing to say you do. Or agreeing that you’re irrational if you don’t, etc. All things you can’t in good faith agree to. Eulering is a central example. Use of expertise as authority in that way makes something of a pyramid scheme out of logic, so you can decide what’s true if you have all the best mathematicians. Demands for fast local responses, separation of arguments from emotions, separation from the full stack seem like attempts to advantage male cognition. “No using your corpus callosum.” Demands for indifference to bad faith.