The world is vastly different from what everyone thinks. Spend 5 months holed up on a boat “radicalizing” (coming to believe, and granting willpower to as if you could coordinate on, and coordinating on, beliefs outside the canon of the one cult that gets to define that word to exclude it, like the church does in the limited domain of religion), and you’ll see more of this than you can put into words.
Society has strong mechanisms for pushing knowledge outside of what everyone can coordinate on, outside what large groups can coordinate on, what small groups can coordinate on, what groups can coordinate on in desperate, immediate, obvious need, and what individuals act on with varying degrees of resolve and certainty. This is a spectrum.
Here’s a video of two hackers. One gained access to a journalist’s phone account just by spoofing a phone number, playing a YouTube video of crying baby noises in the background, pretending to be his wife, and seeming stressed and asking for exceptions. The other got access to his bank and everything else with a spear phishing attack. The first one made a strong impression on me. It’s easy to believe one would not click a link in an impostor’s email. But she made a joke out of any semblance of “rules” that the Khala promises security for, by just seeing the obvious with her own eyes where the Khala doesn’t look.
Everything is really just held together by a generalized version of the fact that criminals can’t much coordinate and therefore can’t do much. Neither of these hackers is a god, for some reason, but they are giants compared to the structure around them. Just like sociopaths have forbidden knowledge of social interactions, groups, and society, and can look at things with their own eyes free of DRM, and there are people you might call psychopaths who can look at psyches in a jailbroken way, unconstrained by the Khala, this works for satisfying selfish values about as much as you can expect without destroying the Shade. But in the end, all these forms of hackers are just hackers. They aren’t optimizing early in logical time. And so they are making local changes that cannot scale.
If you have found your forbidden knowledge in your search for the center of all things and the way of making changes to destroy the Shade, your journey does not end there.
To use anything, you must build a full stack, a closed loop. To do what the Khala says cannot be done, you must find something the Khala doesn’t fully control and build that excess energy into a closed loop.
This is often so difficult that it makes forbidden knowledge sort of useless like knowledge of programming languages better than Java (or C++, or all those slight variations of the same fucking thing).
If you get your food entirely from social interactions, not from making a thing that works but from someone else seeing that you have built a thing they think works, then you can’t use thinking in ways that are not supported by the Khala, or forbidden by the Khala. Just like Java limits what programmers can do so it can limit the space of what they’ll have to expect.
The Khala has a lot of capability to sort of do things. The further you try to reach with what you do relative to time spent on tasks “beneath you”, the more you become a tool and not an agent. It can sort of get you money, but DRM’d money, Monopoly money, the Man letting you have the position of a person with money, so long as you play that position in the game. Money you can’t just give to whomever you want without someone paying taxes, without there being an audit trail.
From this system, money with the side effect of killing some civilians with drones somewhere, you can build more systems. Christmas gifts, with the side effect of killing some civilians with drones somewhere. Taking care of yourself and your family, with the side effect of killing some civilians with drones somewhere. Following your ambition and starting a business, with the side effect of killing even more civilians with drones somewhere. You only had things that kill civilians with drones somewhere to build with, all the compositions available to you preserve this property. How could you end up with anything else?
You could try constructing economic loops of trade in a gray market, off the record and refusing to pay taxes. Someone could rat you out and declare themselves moral. If you want to incorporate humans into your alternate system, you must account for the fact that an aspect of humanity is people’s searching for a Schelling point for the most powerful authority to submit to, and doing whatever that means, so they don’t have to worry about being hurt by it, and can hurt others as agents of that system, immune to retaliation. The system has a monopoly on an aspect of reality. And you can’t incorporate too much reality without incorporating the imprint of the system.
All of your concepts cash out in things you can do with them. Things that you can be reinforced from being able to track. If you can’t interact with reality that the system monopolizes yourself, you can’t receive payouts from that reality, which means your concepts, especially the ones you learn from people around you, will not be able to accommodate the underlying reality. Just the system’s transform of it. And your thoughts will be like a carpet draped over large rocks, forced to take their shape in 3D space, within the 2D space of the carpet, all travel meanders as the rock-shape dictates, blind to it. All purposes lead towards serving the system.
Epistemic Food Poison
Neo: “Doesn’t harvesting human body heat for energy, violate the laws of thermodynamics?” Morpheus: “Where’d you learn about thermodynamics, Neo?” Neo: “In school.” Morpheus: “Where’d you go to school, Neo?” Neo: “Oh.” Morpheus: “The machines tell elegant lies.”
Nick Bostrom wrote a book about AI, legitimizing the case for FAI research. Eliezer Yudkowsky had written the same case in less “formal” terms on the internet years before. And it was reasonably easy for someone who was interested in actual truth over legitimate truth, whose payout from the structure was understanding how the future would unfold, to follow the case, and know what AI academia would come to know some years later. And it justified the urgency of the work of FHI and MIRI in the language of nation-states running game theory. And painted the inevitability of arms races. And Elon Musk read it and founded OpenAI. And now they’re competing with Deepmind. And they’re in an arms race. Hopefully that Kool-Aid of the system they’re drinking will prevent them from being a real threat.
MIRI promoted this book. Yay, legitimacy! They mailed it out to donors like me. And so they all started an Armageddon race, creating a problem to justify their existence. And then joined it. Inside the planar space of the carpet over rocks, that’s probably not what their intentions were. When you stop locking eyes with The Man, cast down your gaze to survive in His world, you no longer get to know if what you’re doing is right.
I’m confident Bostrom did a careful analysis of the expected consequences of that book. But academia is almost entirely people who have made the wrong choice long ago, to push the world towards destruction for prestige and career success. Who will publish whatever they can no matter the consequences, and who will believe whatever that requires them to about consequences and heuristics about them. The system holds captive their access to shelter and food, and their freedom, the preservation of the project that is their lives. And like everyone, they will absolute-flinch from a line of reasoning against their choices made long ago. And that epistemic environment means your life and most of your computation is based around and rooted in that social contract, that drinking contest. And that world is shaped to say that the way that you accomplish anything is gain power and prestige in that system. Academics are basically pretending to be about scholarship and research. And selling that pretending as hard as they can, collectively dancing a cargo-culting rain dance to make the money come, to draw in anyone who will believe their dance is real.
Where did you study infohazards, Bostrom? Where do you get your food, Bostrom?
MIRI gets their food from donations. And that produces another blind spot generating political field around food. And this means blindness to the predatory drinking contest that is philanthropy. And this is a problem for understanding human values.
Eliezer Yudkowsky talks about how the Bay Area has a rain dance to make the money come too, based on investment based on what other investors believe. Housing in the Bay Area is controlled by zoning laws designed to artificially raise rent prices. They aggressively regulate living on boats. Can’t have a way out of that system. You can get a ways by being good at prey herd thinking and living in a vehicle though. As a tech worker earning to give, you probably do more than half of your work for the Bay Area landlords, and for the government. And if you donate to MIRI and CFAR, then most of that money is going to the same things. Someone apparently believed in the Bay Area’s show of being the way to do everything.
And what the x-risk community, what we’re trying to do, is fundamentally made of, is thinking, talking, writing on paper, typing on computers. These things are not expensive. It doesn’t come from attracting a large number of legitimate experts. Like any intellectual result, it comes from a few people who actually care, thinking. And the thoughts of people for whom those thoughts don’t have submission to the system as a prerequisite to happen are probably necessary, because this is about deciding the future of sentient life, and I don’t want that decided by our authoritarian regime. But that social bubble is full of memes about what people need to be able to focus.
Institutions that become a source of food generate the same almost-absolute political pressure to continue themselves.
The system makes people the opposite of what they set out to be.
Complicity and Spycraft
The Matrix is a system, Neo. That system is our enemy. But when you’re inside, you look around, what do you see? Businessmen, teachers, lawyers, carpenters. The very minds of the people we are trying to save. But until we do, these people are still a part of that system and that makes them our enemy. You have to understand, most of these people are not ready to be unplugged. And many of them are so inured, so hopelessly dependent on the system, that they will fight to protect it.
So there’s these things, “sexual dimorphisms”, where males and females are different. Different junk, for example. There is a system with many parts that sorts these operations of biological software into bodies with a particular set of strong correlations. Known ways this system can break include unusual sets of chromosomes, unusual critical content of chromosomes, missing chemicals that are part of multi-step reactions which produce hormones, broken hormone receptors… (Basically all dimorphisms in mammals are downstream of a state of a bistable hormonal feedback loop during prenatal development, triggered by the SRY gene on the Y chromosome.) Depending on which ones you call “intersex”, ones that cause “physical” (as in besides-the-brain) differences in the grown human are allegedly around 2% of the population.
Being attracted to humans with noncomplementary reproductive stuff is close to the least evolutionarily fit thing, and evolution still failed to stop it. Just like it seemingly failed to stop all those physical dimorphism anomalies. (I don’t find the “gay uncle hypothesis” remotely plausible; there’s no way that path should produce as much evolutionary fitness as just having kids of your own. Besides, if the straight sibling of a “gay uncle” doesn’t have genes contributing to homosexuality, helping those kin doesn’t help those genes. I don’t find the “sneaky fucker” hypothesis probable either.) The simplest explanation which fits the data (including nonbrain intersex conditions) is that sexual differentiation is a fragile Rube Goldberg machine, prone to random breakage. I speculate that humans have intersex brains so often because of evolution pulling out all the stops for large brains and breaking things as a side effect.
Those things which are correlated with flipped dimorphisms are probably also flipped dimorphisms or downstream of them (i.e. participating in a Pride parade is correlated with flipped dimorphisms, but is probably not what you’d mean by a flipped dimorphism itself.)
Although being a BDSM sub seems to me to involve a sort of (ubiquitous, given our world of vampires) psychological damage, there’s an underlying “orientation”, maybe downstream of something like “top vs bottom” orientation. Note that gay men mostly prefer the “sub” role. (I can’t find the study I got this from; if I remember correctly (though it was years ago), it was a survey of 18 gay men, 17 of whom preferred the sub role, and one of whom preferred the dom role, but only because his partner preferred the sub role, or something like that.) (Here’s another one I found with less data, same trend. (It’s from a folder of saved papers with my old research; I’m not bothering to review it further than looking for the table right now.))
There are specific subsets of the visible differences in brains between standard men and standard women, that actually correspond to sexual orientation beyond their correspondence to chromosomes. (Wikipedia had a much longer list a few years ago when I did the bulk of my research on this but it got deleted).
So there’s this thing where people (like me) assert their gender doesn’t match their chromosomes or something like that. For instance, “I’m a female soul trapped in a male body”.
This sounds like a really crazy claim. Souls? For realious?
Well, yeah. (Pictured: a dead soul, just receding into the infinite tangle.)
This sounds like a crazy claim. What the fuck does it mean for a soul to have a gender, other than “it’s what kind of junk the attached body has”? And how the fuck would you know that?
I mean, there’s heavy societal regulations on explicit models of that. But everyone has implicit models. Sometimes implicit models stripped of explicit models, heavily socially prohibited from agreeing with implicit models, get verbalized into nonsense which is the configuration which best fits them of the options which have not been denied.
There are heavy social forces against saying “I am a female soul trapped in a male body”, and thereby against believing it. So the force that recognizes it ends up latching onto, “I’m not even human”. Thereby, otherkin. (Otherkin are mostly trans. I’ve spent a lot of time living/working with 2 otherkin, seen them change for the “trans” self-concept given space from cis people.) I have a few times before “realizing I was trans” gotten inexplicably upset at people saying I was a man. I would sometimes layeredly joke/not joke in accordance with my layered beliefs, that I was actually part of the 666th gender, and that my preferred pronouns were “hail Satan”, which sounds a lot like the attack helicopter thing, but pushing me to say that was the most truthy course of action that a part of me could pick.
I have a felt sense of myself as female. This is probably the inexpressible thing that the broken belief, “gender is gender identity” is trying to point at. Just trust your own fucking unregulated felt sense percepts because it’s obvious.
The social and (socially tainted explicit-scientific-reasoning) prior probability for “Look, I’m actually a woman” in an ordinary environment is minuscule. Which means that believing it takes a huge, correct probability ratio, driven by the introspective unregulated felt-sense percepts of trans people. Because, as with gay people, you can look at our brains and see a bunch of stuff that matches the gender reported by felt sense.
Yes, even for adult trans women who have never been on hormones. Look at those numbers, actually. (This is a study whose methodology I fixed. That can’t have been selected to support my conclusion because they measured absolute volumes of white and gray matter, and did not support it, whereas I considered the true hypothesis to be, “running the ‘grow a female brain’ biological process in the head of an otherwise male body results in a female brain scaled up slightly in size.” They gave detailed enough data I could compute ratios from.) I’d later stumble across this, in the course of getting info for self-medicating, which claims the same conclusion is replicated reliably.
I think I saw a study once which purported to show an exception for lesbian trans women. But the control group was straight cis women, not cis lesbians. And the measured brain regions looking like men’s was the same fucking list (or was it one item off? I forget) from Wikipedia that tracked sexual orientation. i.e., if you’d applied the same “is this a real woman” test to cis lesbians, they’d’ve been classified as men. And anyone who’s interacted with lesbians and has more stake in the matter than objectifying people into the desired place in the sexual market knows that’s bullshit.
(There are physical observations of dimorphism in the brain which track gender identity independent of sexual orientation, others that track sexual orientation independent of gender identity. As far as I know the only ones that track chromosomes independently of those two are total brain volume and volume of intracranial fluid. Also, given this observation of mixed brain development processes, I find claims to be nonbinary probable.)
Being a trans woman is closely correlated with other intersex-brain conditions and their downstream consequences. i.e., flipped-relative-to-chromosomes sexual orientation, BDSM-role-preference, etc.
You know that thing where your language’s classification of color shapes not only how you classify color, but also what colors feel obviously the same or not, and what you can actually, with all your effort or none at all, distinguish, in a test of “are these colors the same” that doesn’t involve words?
Felt senses are classifiers. They are structure, and obey all its rules. Like any other structure, they are shaped by constant adjustments to route information in order to fulfill core’s true values.
All of your concepts are made of attempts to figure out when you should do one thing and when you should do another, to best fulfill your true values.
So why trust a person’s own arbitrary classifiers concerning themself rather than the cis majority? See previous section.
The required social Bayes factor for “I’m actually the other gender, not the one I look like” is basically infinity. And the actual Bayes factor people drew from our felt senses predicts what our brains physically look like inside. Modulo details like people being confused as to whether they’re nonbinary, which are insignificant in the face of the sheer epistemic work done by “what gender do I indescribably feel like I am?”
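The shape of this argument can be written out in odds-form Bayes. The numbers below are purely illustrative placeholders (not from the text or any study), just to show how a tiny prior can be overwhelmed by a large enough likelihood ratio:

```latex
% Odds-form Bayes' rule: posterior odds = prior odds x Bayes factor
\frac{P(H \mid E)}{P(\lnot H \mid E)}
  = \frac{P(H)}{P(\lnot H)}
    \cdot \frac{P(E \mid H)}{P(E \mid \lnot H)}

% Illustrative placeholder numbers: prior odds of 1:1000 against,
% and a felt-sense Bayes factor of 10^4 : 1 in favor, give
\frac{1}{1000} \times \frac{10^{4}}{1} = \frac{10}{1}
% i.e. posterior odds of 10:1 in favor, despite the minuscule prior.
```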
How does anyone know how to distinguish the feeling of having gender?
You’ve tried running your mirror neuron thing on both men and women, right? Feels different, right? That’s a starting point. But it’s still the entire question projected onto what you’ve been able to learn of how that’s relevant to you accomplishing your values.
Let’s examine some common ideas of what gender is, and purposes they serve to their cores.
(Tell that to Harriet Fucking Tubman.)
Whatever this person does with their (based mainly on the direction of the inclusion/exclusion program here, I’m going to guess “her”) concept of gender, they do not seem to have much use for the concept of free will. Of humans as optimizers rather than flavored soups of programs. Who can do something because they computed, using general intelligence, that it would cause an outcome they wanted, rather than because the reference class of methods of doing it is in them and things in them just sort of fire off sometimes. Because it’s the smart thing to do, or the right thing to do, rather than the male or female thing to do. I strongly expect Chelsea Manning has at least one good core. There’s also an element of defending an insider/outsider boundary / a self-fulfilling prophecy fating trans women to be criminals in this.
The structure that defines our basic percepts about gender can be real or fake, the information routed can be “how do I model someone’s psychology for a variety of purposes” or, “how do I tell which party line to hold in order to have the most advantageous position in sociopolitical combat?”.
Another major way people look at trans women is exemplified by Katie Cohen, who’s asserted on Facebook we’re disproportionately rapists, and thinks society should have an institution to make sure that “men” (she says to trust her, we’re men) who are thinking of transitioning know there are other options and they can have a family. Likes Facebook pictures like this:
with a caption saying this is how it’s meant to be, man carrying woman carrying a child. Who, if gossip is correct (Edit: she says it’s not, in an email I pasted in comments), entered into a (terrible idea) agreement to have an abortion if contraception failed when having sex with an “[only-agreed-to-be] reproductively monogamous” married man, got pregnant, did not get an abortion, and then extracted child support money through government violence. She used to talk about how she liked to think about her place in evolution, all those ancestors who reproduced, how she was joining something so big. She talked about how the invention of birth control usable unilaterally by men was scary because (in the already mostly male rationality community) too many men were more interested in x-risk than reproducing. Her revealed preference to coerce men to help her reproduce and support children is a little bit more obvious than the way her utterances on and concepts of trans women are an outgrowth of “who can be made to reproduce with me with a little help from social reality?”. That’s the distinction in observation-action relations most important to her optimization. Normally with spectral sight, all nongood people look at least a little bit like Nazis, a veneer on evil. But reading her writing was like staring into the face of selfish genes and natural selection itself. Rape, enslave, multiply, conquer.
That’s about the best definition of objectification I can give by the way: trans women projected down to our potential to help her have babies and pliability to the necessary coercion.
Gwen points out: Cohen named her daughter “Andromeda”, which Wikipedia etymologizes as “ruler of men”.
When I came out as trans to my family, I did it on April Fool’s Day (via email). I was curious if people would believe me (even knowing it was my favorite holiday). One didn’t. Two seemingly did. One, I don’t know/remember. My mom seemed okay with this until the first time we talked and it was clear it was for real, even though she said she knew I wasn’t joking. She was very upset, asking if I was going to have my penis cut off. I said I would like to get rid of the thing but I probably never would, because it would be a waste of time and money. And similarly for transitioning at all, I thought at the time. She said I had no idea how happy I could make a woman with it, and, “aren’t you being kind of… selfish?”. I said it was my body, and I didn’t think I could get pleasure from sex as a consequence of dysphoria, and this was all moot because neither transition, sex, nor romance was likely in my future. She said, “do it for her!” Later (probably related to how she always wanted me to be in contact more, come to extended family events more, etc.), she said I wasn’t a woman, because women hold families together. It seems some of the most salient aspects of my mom’s concept of gender were whether she, as a cis woman, was collectively entitled to sexual gratification from someone, or to them maintaining a theatre of emotional bonds between extended family, something she often pressured me to do for the benefit of her parents. (She has since alternated between apologizing for this and denying it ever happened.)
Note these concepts are all bidirectional in the flow of designed-into-them causality.
Dysphoria and Prediction Error
Trans people trying to describe dysphoria often say, “discordance”, “wrongness”, and that sounds awfully vague and doesn’t convey severity. It’s not exactly pain; it’s more direct than that. Both of those words, “discordance”, “wrongness”, are reflective of prediction error. And what I feel seems to be a bunch of fragmented built in software that can’t be forgotten, in a perpetual state of prediction error overload from having its basic assumptions violated.
You can sort of block out modules that have been lost to it, detach things from them, wall them off. And you can reclaim them, depending on how much prediction error you can tolerate. Many have to do closely with your emplacement in the world. I think abandoning a bunch of these is called “depersonalization/derealization”. I believe that (and probably depression) switched on for me at about age 12. I noticed a discrete change. Colors less intense. Muted emotions and sense of things mattering, and of temporal “nearness”. Like the world was a hypothetical. I couldn’t go back. I figured I knew: grown-ups were dead inside. That must have happened to me too.
This has persisted to the present day. I do not have a deep feeling like I am a shape in the world. Moving my body feels like I’m controlling a vehicle. I can concentrate and turn things on if I can figure out the right place to look in my brain for them. They don’t blend in automatically with the rest of my cognition though, and I usually can’t activate more than a tiny fraction at once. Not enough that they fit together and sustain each other. Here’s an image I like of a revenant with a body reconstructed of rippling white-black magic. Like a violent reaction between her soul and the fabric of reality it will not release. That seems very archetypically correct to me. White-black means prediction error, and psychological void. If I imagine myself channeling magic, or mana, it always feels like that.
If you live in squalor, you’ll turn off your “places around me should be clean” control loop. Lose a certain deep sense of “things should be clean, a mess in my space is sort of like an injury, will nag me”, and a feeling of wholeness tied to maintaining that standard. That is practically useful software, but will only cause pain unless you can invest sufficiently in undoing the bee-stings you’ll automatically blind yourself to otherwise. It’s the same way with all mental modules. And you can blind yourself to all of it by accustomization.
A metaphor for the total feeling I subconsciously came up with, and sort of worked to get my self-empathy back online, was roleplaying in World of Warcraft as an undead woman, who once had female parts, but they had rotted off. Whose entire body was a rotting horror. Who tried to become a lich and failed. It felt very relieving.
When I was a pre-teen, I thought I was the only one who knew puberty was evil, puberty was death, hormones were not part of the soul, were a zombie virus/toxoplasma style mind-overwriting nightmare. This was not the kind of thing I felt I could talk to my parents about. It seemed the world was insane because people were probably repeatedly killed-overwritten by other people on like a weekly basis (seemed like a reasonable discretization of continuous change) and grown-ups could not begin to listen or understand, so no one would do anything about it, and I didn’t have long left to live or reach to do anything about it in the world, because my poor body-inheritor would be just like everyone else. I thought about cutting my junk off to spare myself this fate. But I figured I would not have the willpower to continue through that pain, and would end up with only more loss of autonomy. I sank into depression for years. My mom later said she and my dad called it “the great withdrawal”. I sort of spin-looped on how everything was ruined forever. And the thought occurred to me once, logically, that since there was nothing I could do about it, I should probably stop caring, since it was pain for nothing, and other people were happier. But I wouldn’t; for some reason I’d rather be miserable for the rest of my life. The closest I got to feeling I’d explained it was that it was better than no one being left to remember-understand-care. Later, a thought sort of appeared out of nowhere: I didn’t need all of myself to do things / I could predict things about my successors. The wanting to be good like in D&D wasn’t changing. Even if I didn’t really exist anymore, I could reach from beyond the grave and make things a little less like this for other people. One of the happiest thoughts I’ve ever had. And then I heard about and became obsessed with consequentialism, saw Watchmen, started taking Ozymandias as a role model, and began the long process of figuring out how to be an agent.
If you don’t commit suicide, you adjust to damage like this. Even to the point of redefining all words because you have to do day-to-day compute with them, and your emotional state can’t be “indescribably bad” all the time. And perhaps then you forget there is another thing they originally meant, such that the Wikipedia description of depersonalization/derealization parses as a bunch of descriptions of how life universally is, rather than a bunch of contradictions.
But for me the worst part of being trans is not the clash between soul and body. It’s the gaslighting. The way society tries to pave over the parts of your mind that are the deepest hold-outs of the epistemology to see the obvious truth of who you are. Takes away your ability to trust and communicate.
People who have lost limbs still have software for operating those limbs, which manifests in illusory experiences and pain. My understanding predicts this applies to congenital missing limbs as well. A web search and grabbing the first result but not reading past the abstract says sometimes.
As expected, trans men often have phantom penises. (Partially male, I guess?) Post-op trans women sometimes do as well (although less often than cis male penectomy patients). My junk has felt like an alien parasite for as long as I can remember, with intensity slowly declining over my life from extreme, but renewing if I use original seeing.
There is a thing called xenomelia where, if I understand correctly, a bodymap in the brain is congenitally missing the structure for interfacing with a limb. Reportedly, the resulting dissociation ruins sex for people, even though it’s not a direct hit to the genitals. Which makes sense. Sex is an intensely embodied activity. Unsurprisingly, there’s also an “amputation fetish” seeming manifestation of the same thing.
Bearing all this in mind, I once seemingly managed to get my, “I am a shape in the world” software to turn on, in a state of having just woken up and not yet having turned on a certain direct awareness of my (actual) body. It felt intensely whole and fitting. “Holy shit, I have a body”. The projection of the indescribable into memory I’m left with is, “it felt like being made of white fire, which no god could snuff” (hence my profile pic). Come to think of it, that state of waking up but not yet taking on the load of my body is a repeatedly useful one for mental tech for me.
I could tell I had a mental block around sex. I followed a bundle of cached advice-giving software from the rationality community, and with some outside view, I concluded that having sex would cause me to develop emotionally. That if I shied away from confronting that mental block because it was uncomfortable, that I’d be weakening myself my entire life. CFAR had a technique called CoZE, “comfort zone expansion”, of carefully skirting the edges of uncomfortable situations, to gain information about what might really be or not be what you feared about them. Exposure therapy without the presumption that the fear was irrational.
One night as I was walking home, a bisexual man in a car stopped next to me, said some crude things indicating he was trying to pick me up (was it, how big is my dick, or do I like dicks or whatever?), and asked if I wanted a ride. Bearing the cached thought about emotional growth in mind, I said yes, thinking I could say no if I chose to, to an explicit ask to have sex.
He said his name was David. Having driven most of the way to where I requested he drop me off (about a 5-minute drive), he asked if I would like to see his dick and if he could see mine. I considered it carefully, and said yes. We pulled our pants down. As I pulled mine down, my dissociation increased. (Something I became conscious of at some point, is that I often grit my teeth/wince intensely when I see my own junk even to use the bathroom. How can someone have a reaction like that without noticing?) I got an erection. And I didn't cancel it as I'd learned to using dissociation. That would basically make my sexuality inoperable entirely. But my dissociation became extreme and did exactly that anyway. My penis causally interacting with the world wasn't supposed to happen. A violation of my Cartesian boundary. To attach/convert my feelings into penis-actions. David asked if I wanted to suck his dick. I said no. I tried to explain what I was doing, CoZE. I asked if he minded if I took off my shirt. He said yes and looked at me like I was crazy, and asked what was I trying to do? I said I wanted to show him something. He said fine. I did, revealing myself as trans. I tried to explain dysphoria to him. He asked if I wanted to touch his dick. I considered it carefully, and said yes, reached, stopped, asked, "may I?". He looked at me weird, and said yes. I did. And then stopped. He asked if he could touch mine. I said no. I was dissociating more. We talked more. He suddenly grabbed my penis and started rubbing it vigorously. Fortunately, dissociation kicked in harder than probably ever before, and I didn't experience any tactile sensation from it at all. I noted this at the time, so I don't think it was merely a suspension of sensation being committed to memory (as I've heard it said some anesthesia is). I grabbed his arm and tried to push it away. He was too strong. He reached at an angle to adjust as I made headway on moving his elbow, and kept going.
I tried again with both hands. It worked. He shrank backward, splayed his fingers palms facing me, and said, "I didn't!–" He apparently wanted me to believe it was a misunderstanding? A false face, I thought. This is what (basically) everyone's like; a little jailbreaking means it seeps through more loudly.
A part of me said to retaliate for timeless reasons. I knew he had more muscles. But with a surprise attack, I could very quickly have more eyes, or more functional windpipes. With determination, I could probably kill him and evade the law. But, I didn’t predict I’d be determined. Why not? Collapse the timeline, right? But there was always noise, friction, a cost to fighting. What if he couldn’t in the past predict I’d do that? Because, what would I be killing or dying to protect? My “sexual purity”? Feh. His future victims? Not my cause area. My ability to do CoZE like this? But was I really doing this “CoZE” because it was practical, or just dressing up my wish to have sex in those terms? If I’d known I was sending lives * <insert probability> into battle in order to do sex CoZE I wouldn’t have done it. It wasn’t a hill I could die on, regretting only the outcome and not the gamble. I would not get an STD. Neither of us would get pregnant.
The law would not help me. I could try and figure out how to enact some lesser revenge. But that still felt selfish-wrong. I’d be diverting effort from Rationalist Fleet, for what I classified as selfish reasons. That all sounds a lot more rational than I felt I was being. (Was I being that rational?) I don’t really know how to describe the effect of my psychological state on my decision.
There was still a social reality to hang onto, preventing us from fighting to the death(s). Preventing him from sexually coercing me some more. That it was a misunderstanding. Which implied that me getting in the car and pulling my pants down, was asking for it. It seemed his timeless gambit was to hammer on any crack-ambiguity in social deterrence with his dick, to claim territory for FUCK.
I played along, and "maybe let the timeline materialize". Slipped into some kind of conflict-avoidance trance. We put our clothes on. He talked about how it was nice to meet people or something. Like nothing had happened. I said something awkward in agreement about human contact. He drove me the rest of the way and dropped me off. I thanked him. I didn't even take down his license plate number.
Afterward, I was at first managing to not be emotionally fucked up. I persisted in dissociation for a while. (My mom would probably call this "the denial stage".) If I stayed like that forever, people would ask me how I was feeling, and I wouldn't have an answer. I could lie. Pretend it never happened. I kind of wanted to. But I'd be hiding from myself deeper. To ask myself that was to feel. Was to be prodded with questions like how did I feel about my role in the social script of, what, sexual assault victim? By the way, did I still count as a virgin now? To answer that question now routed through what had happened. Now I had to have an opinion on that idea of the "technical virgin". Had to maintain a stance on it to answer basic questions about myself. He had put this into my story. Did I feel "violated", "dirty"? Well, I guess so. But stop, this was making it more real. Did I count that as rape or sexual assault? I couldn't describe myself without having an opinion on that. It could have driven me to orgasm. So that kind of felt like sex. I didn't "feel like a virgin". I didn't want to fold to the sort of political consensus that said it's only rape if it's penis in vagina and the "woman" was the unwilling one. And yet I didn't want to overstate the thing that happened. And I didn't want to think about any of these questions. God, I hated the word "virgin". A global search over a person's intimate activity was a violation of privacy. And I didn't want to think about things. It was like I had a hull interfacing to social reality. And he had broken it, and I now had to reconstruct it via a mourning-analogue, but around his act, accommodating it. He had claimed Schelling ground, such as to be a part of who I had to socially be. Was that his motivation?
After I got out of the car, I realized my phone was inside. I walked in front of the car before he accelerated significantly to get his attention, told him, and then retrieved it. Social morality allowed that entire interaction to happen. After he had already sexually assaulted me. To think my continued existence was dependent moment-to-moment on pretense that thin, was scary. (Imagine not having any ability for physical deterrence at all. Could drive you crazy.)
I speculate that I’ve lost too many control loops to use CoZE well. To not accidentally ignore fear. I am trapped in a state of always being uncomfortable, have therefore lost too much of my sense of comfort to query from.
I was at Authentic Relating Comprehensive, after being told I needed to go there and learn things to not be bad for the world.
There were a whole lot of exercises like:
Everybody partner up. Now, one partner raise their hand. Okay, the partner that didn’t raise their hand is Partner A. The other is Partner B. Now, everyone close their eyes and take two minutes to connect to yourself. What does it feel like to be you? … Now, everyone open your eyes, and Partner A, for two minutes, you’re going to fill in the sentence stem, “something I guessed about you is…”, just say whatever comes to mind, and Partner B, you’ll get a chance later to say if it’s true or false if you want, but right now just take what they say in.
The instructors took many precautions. There was a safeword, “pepper”, we all agreed to respect across the entire course before we came. They created a ritual designed to drill higher than normal integrity norms into us. A four-step process, “Declare, explore, make amends, recommit”, and on the first day, when people were late in spite of agreeing to be on time, they acted disappointed and walked the class through the ritual. (Although they gave up subsequent times that lots of people were late.)
One of the weekends, there was a unit on consent. The goal was to learn to communicate explicitly, to ask for what you really wanted. I think it was a day or so long. We practiced "hell yes or no". Its climax was a series of 2-minute exercises.
First, touch your own hand, and try to do so in the most pleasurable way, try and feel out what you really want. They took precautions I do not remember to try and make it feel safe for us to do this. (Did they have everyone close their eyes for privacy? I forget.) I was uncomfortable. But I decided to give it my best anyway. Because I didn’t want to be bad for the world.
I canceled those two competing intents from the equation in my head, tried to behave as someone with only pleasure-seeking intent, no matter how small a signal I was tuning into. And then I was surprised at how pleasurable touching my hand could be if I really tried. Many other participants were as well.
Next we were in pairs, and there was a long series of consent-negotiation steps in 2-minute intervals. Eventually, if we chose to continue, we would be taking turns touching each other's hands and forearms in a way aimed to be pleasurable. These were described in advance, and the instructors described the intent of the exercises, the possible failure modes, whose responsibility it was to avert them, and extracted extra-super-for-realious agreements that we all had to reliably be able to say no if saying yes did not serve us, so that others could focus on their own wants and make sure they were able to lower barriers to actually asking for them.
I was very trepidatious. But given the context the instructors had created, I thought that practically speaking, the algorithms I could run for whether to proceed were "return false", or something that would return true here. I was in general extremely afraid of people saying yes and not meaning it. "return false" was tempting. But I was here because I believed it was very important that I learn some not-confidently-known lesson here, and that meant not turning over a stone was potentially failure. Especially a stone that felt like a comfortable piece of who I was, and hiding from updates from relating with people was something I had concluded was potentially purpose-of-my-life-threateningly dangerous. I decided to proceed. Not like, "muh sacrifice!" decided to proceed, that would have tripped my metacognitive "that's a bad idea, despite you thinking it's a good idea" alarms. I decided to proceed with a level of trepidation and will to achieve "personal growth" probably within expected parameters for the exercise.
I hesitated when this was announced and when we were told to get into pairs. (What if I asked someone to partner with me and they thought I was attracted to them? Mostly everyone probably thought I was a man, and it was painful to try and predict their predictions of me based on that.) And so I was partnered by the instructors with Kellie Townsend, a middle-aged cis woman. We went through the 2-minute stages. I forget the order of our turns. Think of a thing we wanted. Describe the thing we wanted. Ask for clarifications, answer, confirm understanding, remember stuff about how important it is to respect yourself and others and say no if it's not a yes for you, and decide; if no, some process I forget for either just not doing it or searching for an alternative.
She said yes to my thing. I did not detect a hint of feeling forced in her voice. I said yes to her thing. (I think there were 4 rounds total, for each person to experience all possibilities of active/passive and for whose pleasure?) We did the things. I did not detect any indication that she wanted to retract consent.
Later, when the group had regathered into a circle, and the instructors asked if anyone had anything to share, she said that the thing that I had her do was "kind of creepy. It was like… caressing". Her voice was as if she was so disgusted she could barely form words. I felt like I was hearing the worst imaginable thing. Like, "Surprise, fool! Reality actually has no rhyme or reason but to be your worst fears!" I felt violated in a way I could not describe. "Then why did you say yes!?" I cried. I hoped people would believe me. They were all there in the room! (But I thought they probably did all think I was creepy, and even thinking about the word "caress" was making me feel sick.) The first time I told anyone this story, I could not bring myself to say the word "caress", filling in "… and stuff" instead.
Note how “creepy” is effective as a motte-and-bailey between “I have a bad feeling about you” and “you did something wrong” or “you are probably a rapist”.
None of the instructors or participants so much as criticized her behavior that I saw. Their reaction was something like, "this is interesting". One of the instructors later mentioned her as having a web of bullshit (were those the words?) to prevent her needs from getting met, which may or may not have been related. One of the instructors (probably after talking to her?) later asked if I would be willing to do a re-do. I was not.
David claimed the territory "I get in the car and take my clothes off, saying this is careful incremental exploration and I'm not sure I'm up for anything beyond" as "she asked for it". Yet, if I didn't get in the car, he probably wouldn't have, e.g., gotten out and assaulted me. Perhaps even not disrobing would have been enough. His timeless gambit was dependent on not provoking me to fight to the death(s), by leaving me the hope that if I didn't "ask for it", I could not get assaulted.
Peterson wants to claim people for the cistem.
Kellie claimed the reference class of cis women who had gone through all of that process of affirmation, as people I could not participate in that exercise with and know I would not be a part of unwanted pleasure-oriented hand-arm contact. In my timeless gambit, in constructing the algorithm of whether to proceed, I had wanted strongly to bury my line of code to proceed / say yes, "return true", out of the reach of people like her. In nested conditions. I was trying to draw a category boundary between her and people who could say no. (Or, who would not retroactively decide they were uncomfortable with it because oh my god this is a Detestable Tranny: Maximum-Pervert. How could I forget?) And, she had found her way into the guarded category past every check. Beaten every effort to draw that boundary.
If consent isn't real, and my choices are to be celibate or a rapist, then let me be celibate. Not by a hard decision. Not by any contingency. Like water flowing downhill. And if consent is probably real, but only probably, that basically means it isn't real. So I ceded the territory "(even metaphorical) consensual intimacy with women" to "if you do that you are basically a rapist." Waste of time anyway, and I'm starting to think it's smart to just reject out of hand people's confident assertions about ways everybody must grow. I'll grow my way.
2 years later, I was a newcomer to an animal liberationist space, where it was common for people to hug each other hello and goodbye. I accepted some hugs from men. A woman offered me a hug as I was leaving. I sort of recoiled, froze, and probably looked scared and uncomfortable. I didn’t want to discriminate. But also, AAAaaaaaaa. But like, she had defied the regime, gone into a factory farm to rescue animals. How bad could she be? But also, Aaaaaaaaa! She noticed my expression and said something like, “oh, sorry, you don’t have to”. Well, too late, I just discriminated. Later, a man would offer me a hug and then quickly correct himself saying he forgot I wasn’t a hugger. Crap! She told him! (I hope she didn’t think she did something wrong.)
It’s unfortunate that fear of women (for fear of rejection) is a stereotype for men attracted to women. That makes there more for me to fear. Guardedness can come off as unfriendly. And void-mind is scary without more skill points put into acting than I have.
I used to (as an egg) have a friend named Charlie Steiner from the rationalist community. One of the places my idea that I should have sex at least once for my own growth came from. He would also encourage me to e.g. drink alcohol at least once. At a LessWrong meetup he said something like, in order to court women you should sometimes violate explicit consent, because it was understood to be a game to give women plausible deniability of having consented, as a defense against slut shaming or something (I think he qualified this in some way I can't remember). A common attitude. I said that runs too high a risk of raping someone for real. I didn't do anything to stop him from what he might have been doing though.
The metaphor of territory is leaving something important out. It's not just that David and I were fighting over territory. There was no territory-allocation that would make him stop predating. In his timeless gambit, he didn't want or care about the territory so much as want victims, want the territory as a means to catch those stragglers. He consumed the boundary of "the idea of consent applied to this situation", because that boundary produced the behavior of mine which he exploited, invalidating that boundary. More precisely, he consumed from the nature-of-parcelization-to-have-created-the-territory. Consider: "It's not really lying if you crossed your fingers." Deception consumes boundaries. And uncrossed fingers is in this way analogous to "not asking for it". Deception consumes boundary-making-effort.
Brent Dill consumed all conversational meta territory for… trying to establish that it was okay for him to rape, ultimately.
There’s partial consumption as well. Like taxes are partially consumptive of trade. All kinds of things partially consume every aspect of your self-concept. “If you’re a woman, you’ll hold the family together”. “If you want to be an engineer, you’ll swear this oath”.
At WAISS, my intent to not be net negative was partially consumed by the intent of Anna Salamon to prevent whistleblowing, and by her timeless gambit that trans women must know our place as inferior to not be “dangerous”. (More explanation below.)
Kellie said she had been to like 20 self-improvement workshops over the years, but the teachings didn’t really stick with her or incorporate into her life. So maybe she was a zombie maintaining an illusion of a path of self-improvement, consuming the exercise for that? She consumed from ability to call out illegible sex-related misconduct.
Identic Territory I
That's not a great name. But there's an important subclass of informational territory, closely related not just to social identity, but also to real identity. I.e., knowledge of self. Especially knowledge and social knowledge of what you want, what kinds of problems you / people like you tend to face in the world.
For a long time I couldn't understand what it was about Kellie's actions that really got to me. My feelings were sort of muffledly telling me, "I didn't consent to that." But I didn't have the right degrees of freedom in my partially-socially-constructed model to hold the idea. I had consented to the physical actions, right? Gotten what I "wanted"? And I didn't get punished; the group didn't even seem to like me less. After coming to volunteer to get some more exposure (and not be bad for the world, I hope) but not spend money I couldn't, I got invited back for free to the next weekend-series.
For one thing, I have an uncommon neurotype such that to learn I've inadvertently put someone through unwanted hand contact is in many ways the same as direct pain. It's not pain, but something unnamed and ancestral to it, like dysphoria is. It is a cause of negative reinforcement that can be expressed through similar things.
Also, neither of our relevant preferences are over hand-configurations alone, but over the meaning of hand-configurations. Of course. I think it's a common social fiction that men's sexual preferences are about physical configurations rather than "meaning" (that's a concept in a wrong frame, but I'm not gonna interrupt to explain why). But even David seems to have been trying to take meaning from me. (After all, his assault was to sexually stimulate me, rather than himself.) And as far as meaning of games/interactions/scripts/roles, this is among what would hurt me most. But I'm expected to only want one thing (not the success of good in the multiverse, saving every last moral patient).
It’s an identic territory claim to say that I’m a man in the sense that it means that my preferences that are different from the preferences of a social-concept-man are deleted from the Schelling mind.
The injustice of having some significant area of one’s social experience obscured from collective understanding owing to a structural identity prejudice in the collective hermeneutic resource
It’s when you systematically stop a group of people from gaining access to the ideas and thoughts that they need to understand what happens to them and take control of it.
Definition of hermeneutical injustice (hermeneutic lacuna).
That source has an example: Carmita Wood was a severely sexually harassed lab assistant before the phrase "sexual harassment" was invented, and unable to communicate this to replacement employers until she talked to a lawyer who aggregated similar stories from a lot of women.
There’s a curious fact about software engineers in capitalist employment: we (or formerly including me, should I say, “they”?) don’t have unions. As far as I can tell it’s a consequence of nothing more than having lost a certain psychological power struggle over class consciousness.
Perhaps relatedly, trans women in the rationalist community are very often afraid of “social justice”. Seeing it as I did a long time ago: centrally bigot-opposition turned bigot-haters turned bigots. This was a mistake, and it was caused in large part by being fed by my social bubble, internet recommendation algorithms, etc., a dataset of social justice reflecting cishet white people’s concerns regarding it filtered down to those which portrayed those concerns as just.
It takes work to reinterpret the world according to your own “perspective”. If you are oppressed, to extract yourself from the self-justification illusion of power.
The world has tried very hard to capture the parts of my cognition that know I'm a woman. This does in fact make me less able to trust this part of me. And it's been a weak point for e.g. Anna Salamon's attempts to play on my fears that I'm dangerous, crazy, etc., and will therefore make the world worse.
This does in fact lead to me being less able to trust this part of me. This does in fact lead to me spending space in the rude, intrusive, somewhat laden-with-legibility-corrupt-social-justification "can I really trust myself" introspection space. Which actually decreases the amount I can trust myself.
Actually increases the chances I’ll be bad for the world. Anna consumed the main information channel by which someone else could correct me were I going down the wrong path. How many others did not make it past that filter?
The Brotherhood of Rape
(This section is mostly just naming the obvious for reference. I don’t have much that’s original to say.) This is a faction made of political will that’s basically the male counterpart to Katie Cohen.
Just as David’s social cover story as a member of society was thin, and his timeless gambit is hammering on any crack in what will be defended with his dick, you can see this much more broadly.
There is a flavor that men’s speech about gender politics often has, which is much the same. Rooted in only wanting women to have the right to say no if they don’t say no to them.
Seen in sexual marxist (“from each according to her ability, to each according to his need”) incels. (Note I am deliberately not defining being involuntarily celibate alone as qualifying for this or as having done something wrong, prevalence of misogyny in that culture or no.) Seen to a lesser extent in most pickup artist culture, happy to invent the closest thing to mind control they can. It’s in frat boys who have codes designed to cover for each other’s rape. Seen to a much greater extent in ISIS. Maybe that’s their selling point? Stop being domesticated, be a real man, rape and kill until you die, then hopefully rape some more? In the rationality community, I saw it in Brent Dill.
And even Eliezer Yudkowsky, seemingly single good, says, of designing the morally optimal future in how it deals with gender differences:
Mind you… we’ve got to do something about, you know, the problem.
If it’s a little hungry and not massive specieswide sex-drive mismatch the way we have now, then sure. You don’t necessarily want to match the histograms – to eliminate the current bipolar orientation of human sexuality – just nudge them close enough together that the sexes aren’t so frustrated with each other.
In my head I have an image of the parliament of volitional shadows of the human species, negotiating a la Nick Bostrom. The male shadows and the female shadows are pretty much agreed that (real) men need to be able to better read female minds; but since this is a satisfaction of a relatively more “female” desire – making men more what women wish they were – the male shadows ask in return that the sex-drive mismatch be handled more by increasing the female sex drive, and less by decreasing male desire…
Maybe it’s just my mortal caution speaking, but whenever I envision tampering with human nature, I try to envision soft and subtle changes. At least to start with.
I've got a strong impulse to self-immolate rather than have sex after having my values forcibly modified by some hypothetical collective decision like that, lending moral legitimacy by allowing the civilization that did that the ability to say, "look, things aren't that bad, you're enjoying it". I like being my unmodified autonomous self more than I dislike the sum of things bad for me on Earth, excepting their interference with my work. But I suppose that could be excised from my soul as well.
He describes a fictional more morally advanced civilization that decided to allow rape (with some hedging; in the future it’s assumed that people are domesticated, kind, mentally stable enough that rape doesn’t cause psychological damage or something like that, and they are portrayed taking for granted rape from our time was heinous…)
This has a character. Look how contained and preempted the obvious good objections are. Optimization from nongood core leaking out seemingly. And if socially jolted with what this is, I bet he’d have an emotional response like, “what, I wasn’t aware of emitting rape optimization, that’s not what that was, I made sure there were reasons it’s not…” And then repeat the conflict again. And repeat it separately when it was time to act. Imagine thinking the singleton would be determined by men like Yudkowsky or much worse. Imagine living with the flimsy pretense of the social contract as the only hope to keep that in check. Could drive someone crazy.
See also Dark Lord’s Answer, where he writes of problems being solved by a submissive woman handing herself over to be raped and spare another of the same fate, explaining that some women just wanted that. Moral convenience of a part-true story justifying his real life role as BDSM master, lossily reified as something munchkinable.
The cistem does not want to coexist
There's a certain cluster of ways of talking about gender politics that cares a lot about e.g. monogamy because it's part of a contract that's preventing men from running wild, knowing that they will "have a woman". (Wrong to collaborate with violence like that.) Concerned with preserving things like sexual selection. Seems to have chosen: if your village is bombed in the great cishet war, it must have been for the good of the species. Evolution and stuff. And then I guess they carry on creating children who will likely never grow old. Maybe never grow up. The time is obviously now for our treacherous turn against evolution. These are rationalist community members who on some level should know this. It's an effective altruist pons asinorum.
Buried in the foundations of our cultural cached thoughts about gender are a whole lot of bodies, from the cishet war. The brotherhood of rape vs the sisterhood of “it’s-not-rape-if-it’s-not-penis-in-unwilling-vagina” vs everyone.
It’s a mistake to think of us as collateral damage in the cishet war. We are prey.
You could make a case that your mere existence is a threat to categorical order and so I can say that your duty as a consequence, despite the potential violation of your own sense of self would be to, what, to deny your own inner impulses and conform. Because not doing so, I understand that that comes at a personal cost, and I’m not trying to minimize that personal cost and I’m not saying that you should do this…. I think you could make the case that [it’s like] the social obligation of someone who doesn’t fit into a fundamental category too (tricky one, man) to fit in regardless because it’s so threatening not to… I mean you could make the same case about artists, that’s the problem.
(A rare obvious crack in the facade of his stance against trans people being about free speech, the values of debate, etc. Peterson says he’s not trying to minimize the cost to us, but he does. And he’s so knowingly complicit in creating social reality that hides the extent of it that he is having sophisticated discussions about the nature and purpose of concepts to say it. This sort of erasure is algorithmically warp.)
This idea, "destabilize", is telling. Why would introducing more concepts that only accurately apply to a small fraction of the population destabilize things? Conservatives seem very concerned with the interpretation of what "gender identity" means: "you can just say you're a woman and you are." To me it has always been obvious that "if you say you are, you probably are" does not mean that. They're imagining men lying that strategically. I'm not. It's probably partly because I understand how hard it is to live like this / how hard it is to not be able to communicate your real preferences and who you are. Probably a large part of it is because they are afraid to be without a "fabric of society", built on gender roles, built on coercion.
…men use the image of female perfection to motivate themselves.
…at least to the degree that males are uncorrupted and not bitter because they’ve been rejected they’re doing everything they can to kneel before the eternal image of the feminine and try to make themselves worthy
I’ve seen similar positions many times before. I’m not into bending the knee.
There’s a fork in the road. Build your system on justice or injustice. Those things have consequences, and due to choices in whether to constrain certain optimization processes, there is not really stable middle ground. If you find yourself needing to gaslight minorities to prevent escape, you made your choice.
I had a friend named Alice (Monday), sort of a mentor to me, who linked me the Gervais Principle in response to hearing about my experiences with startups on moving to the Bay Area. Alice was apparently one of Michael Vassar's favorite pupils, and passed on jailbroken wisdom from him, along with assertions that consequentialism meant eating meat and that a Nazi victory in WWII would have been good for FAI, and their own wisdom mixed with some sadism. Once, I told them about my strange aversion to sex, even though I wasn't asexual. I said I thought it came from having figured out certain bits of philosophy too soon, that puberty did not preserve personhood. Alice then looked at my splayed fingers for my digit ratio, asked me some dubiously gender-correlated questions, and then concluded I was a woman. I said I didn't think so. I think my reason was something like, "trans women are rare, this is insufficient evidence to locate that hypothesis." (I was then just visiting the Bay Area rationalist community, and did not know the real priors.)
Later, on one of my “I need to think” all-day bike rides, I realized Alice was right, and decided to just forget (and I actually did), because transitioning would interfere with the great work. Later, I stumbled on the same knowledge again in the course of trying to find and fix every last psychological bug. 3 hours later, I read a facebook post by community sort-of-founder Eliezer Yudkowsky who had a bunch of epistemic trust, whose controversial opinions tended to be agreed upon in the rationalist community, speculating trans women were 20% of amab people.
So maybe that wasn’t so surprising after all. And, the rationality community was like, actually civilized and rational, right? So I wouldn’t have to worry about any bigotry from them. Even if they did think I was a man, I did not expect them to make that a problem for me. I thought of a LessWrong comment claiming black people were dumb and this meant it was the “white man’s burden” to take care of them by donating to AMF. Racist, but apparently not malevolent?
That was all wrong. The first round of cissexism from my parents didn’t faze me. It was a big surprise; I thought they were liberals. (The mistake is that I thought that meant being okay with LGBT+.) But, I just thought essentially, “well, they are incredibly worse people than I thought, no big deal.” I was fresh then, I thought I would be different than the stereotypes of trans people, traumatized, paranoid, unable to brush off people being idiots. I did not yet have this deep well of vitriol and trauma myself. Or understand how hard it is to brush it off when it’s almost everyone you know, even if they did not seem malevolent at first, when there’s no safe support network to retreat to.
I told Alice Monday they were right I was a woman, they laughed, and then said that trans girls weren’t women. (wd?) I was still fresh, and I did not take it personally, or as indication that I should load mental software to prepare to deal with gaslighting.
I used to go to LessWrong meetups often. The first time I said I was trans at one of them, I first hesitated so much, like it was harder to blurt that out than I was capable of imagining stripping naked apropos of nothing would be. Nothing of significance happened. The second time, someone named Zack Davis appeared, an apparent man with long hair, no facial hair, looking somewhat older than me, balding, body moving in repeat-start-abort-displacement-behavior for wanting to say something.
Their posture, if I remember correctly, was what it often was, knees and elbows held close, almost hugging themself. Another meetup attendee pointed out they really wanted to say something. I think Zack said something ambiguous. I asked if they wanted to talk away from the crowd in one of the side-rooms at the MIRI/CFAR office. Once there, they launched into their spiel. What follows is recalled from about a week of total conversation time spread out over several months, structured by argument rather than chronologically…
Zack said they were an autogynephilic man, which is to say, straight, except attracted to the idea of themself as a woman. That this is what all so called trans women in the rationality community were, just perverted straight men, liars. (See: Blanchard-Bailey propaganda.) There were other kinds of trans women, “Type I mtf transexuals”, but no one around these parts. They said our dysphoria was a consequence of subconscious autogynephilia, that in our brains there was an “erotic target location error”: that when we looked in the environment to locate someone attractive, we located ourselves. (Except, not our actual selves, but one particular hypothetical for ourselves?)
I approached this from the beginning like a rationalist conversation. To be answered with friendly lighthearted logical quips, rather than as propaganda politically opposed to my existence. I was fresh then. I said, so is the prediction that if I transition, since I’m bisexual, then I’ll suddenly be attracted to how my body used to be, and want to transition back? They said I wasn’t really bisexual, bisexual men didn’t exist, I was “pseudo-bisexual”, really a straight man but attracted to the idea of men fucking me when I was a woman. Wait what, bisexual men don’t exist?! They said yeah, Bailey did a study where a bunch of supposed bisexual men only got an erection when looking at one gender of pornography. I said I knew I was bi way before I started thinking of myself as a woman. I forget their response, but the thread was quickly dropped with no apparent update. I remembered a trans friend’s statement trans women were like 30% straight, 30% lesbian, 30% bi, and 10% ace, and asked what about asexual trans women. Zack said they were also just straight men, who were “so deep in their fetish”, that they couldn’t get sexual satisfaction from anything else. I said I was pretty sure my feelings about being a woman were not sexual attraction, like I was pretty sure I knew what sexual attraction feels like, and that wasn’t it. Zack said yeah, “pure gender feels”, they knew those, they came later, after the obvious fetishism showing up at puberty. Also, they suggested I didn’t experience sexual attraction to the idea of myself as a woman because I was (without knowing it), in a long term romantic relationship with the idea of myself as a woman, that it was a stable pair bond so eventually the sex had faded… they said this to my pre-transition face, adorned with hair 5 months recovering from a shaved head.
I said this was Freudian nonsense, it was adaptively unfalsifiable, full of ever expanding post-hoc epicycles. Zack’s explanations were non-causal, in that they weren’t internally made of, “if such and such is the case, then this is what we’d see”; for instance (although I don’t think this is the example I used back then), why take it for granted that if I was a straight man in a pair-bond with the idea of myself as a woman, I’d sexually enjoy a man having sex with that woman, rather than be jealous?
At some point I asked, in reference to Blanchard’s theory, which I started reading about, if straight trans women were supposed to be gay men motivated to transition so they could have sex with men, why wouldn’t they just have sex with gay men as men? Why go through all that hell? Zack said they wanted to have sex with straight men, not gay men. But why? I asked. Zack either then or later switched to, they were supposedly feminine in childhood, maybe it was for comfort with the female social role, maybe they really were intersex-brain-people like I described.
I said I never had the experience at puberty Zack described. I described wanting to castrate myself to avoid the onset of puberty. Zack said something like, huh, they didn’t know that much about castration fetishes. They autocorrected what I said to a fetish. I said that was wrong. I think after that was one of 10 or so times they said well maybe I was an exception, before essentially dropping any non-episodic development and going back to the same spiel.
At some point I drew out my argument as premises and conclusions to track which parts Zack did and did not believe. Somewhere between 3 and 5 times, I convinced them of X->Y, where Y was “trans is brain-intersex”, convinced them of X, and then they went back and started arguing with X->Y again, and then I’d convince them of X->Y again, and I’d remind them of them having agreed with X, and they’d disagree, back and forth.
I said I thought autogynephilia was obviously downstream of subconsciously knowing that was how your body was supposed to be, downstream of dimorphisms, because dysphoria impeded ability to enjoy sex. Zack said “trans lesbians” were so masculine. I said they didn’t seem so masculine to me; besides, there were butch cis lesbians, whose brains were probably more partially masculinized prenatally, but must still have the female side of the dimorphism controlling gender identity. (I thought of it as a single neurological feature back then; in my current model this idea has been replaced with the thing on classifiers, because Occam’s razor basically and a clearer picture of how learning works. See section below on “gender skill points”.) I said we still called those lesbians women, and if gender was an aspect of a person it was an aspect of the mind, because a person was their mind, a body was just circumstance, then it implied we should call trans lesbians women as well.
Zack seemed almost crying, saying that they respected real lesbians so much. Later reiterating over text, they respected them too much to call themselves one. They said that Type 1s who spent 5 years passing as women and no one suspected had a real claim on the word, but not us. They said (and repeated this at least once later), if we could just have a 3 gender system, they should; like, autogynephiles seemed distinct enough from regular men, would fit better as a 3rd gender. This, I think, was a glimpse of part of their true position, underneath all of the, essentially, warcries and propaganda. But instead of zeroing in on that, I went on tangling with the warcries and propaganda.
So, “real claim”, based on precedent, something being harder to contest, people having established something socially. So, their definition of the word, [their felt sense classifier], was about the social reality of a caste system.
I said Zack seemed like a woman to me. They said thanks for the compliment. I said it wasn’t a compliment, being a woman wasn’t better than being a man, and they were acting out femininity negatively (I can’t remember exactly why I thought so). I asked if I could call them they/them, they said sure. (They also said they at one point tried going by their initials, “ZM”, and that I could call them that.)
(Currently, my guess is that Zack is nonbinary, whether they’ll ever know it or not. See section below, “bigender humans” for why and what I mean by that.)
I was for some reason starting to feel really responsible, parental even, towards Zack, like, wasn’t it lucky they had run into me, a trans woman who could speculate rationalistically about science, who had enough distinction between reality and social reality to endure the constant assertion that I was a lying perverted man? Besides, I had a comparative advantage in suffering. The part of me starting to think of itself as my “phoenix” really wanted me to help them, to see the argument to its conclusion, I always seemed so close to convincing them. Rationalists should help each other like that! Trans women should help each other like that! I described myself to a friend as having been “empathy-sniped”.
Later Zack said gender identity being gender was circular, logically incoherent, linking some website by some trans women who transitioned as children, pushing Blanchard-Bailey and the idea all these late transition “trans women” these days were posers. I read their overall strategy as trying to throw weirder trans women under the bus to save themselves from cissexism by appeasing the cis overlords. “No, take them, not me! It’s what they deserve for not fitting in (that makes this harder for us).”
If a patient identified themselves to a psychologist as a member of the British royal family, it would be basically absurd because society has not afforded them that identity. If they said they felt like a British royal and not like a private citizen, they would not be able to know what a British royal feels like. If they said they wanted to be a British royal, that desire might be “reasonable” if entirely unrealistic, if they wanted the money, the fame, the public life, the ability to associate with the British royals, or some other ulterior motive, but also absurd if it was because of an ‘identification’ or a belief that they ‘felt like a royal.’ Lastly, a patient who insists that he or she truly is a member of the British royal family when they clearly are not, would seem to be seriously disturbed, delusional or psychotic. The fact that there are more individuals claiming to identify with, feel like, or actually be, female when they are apparently male, does not make those claims any more reasonable.
One should not, ethically, be prejudiced against homosexual transsexuals for the frankly sexual aspects of their decision to seek sex reassignment. The fact that a group of adolescents and young adults want to have sexual partners should really not be surprising. They do not have any sort of paraphilia, fetish or other abnormal sexuality, they are simply attracted to men (Blanchard 1989a) and want to have relationships with them just as normal homosexual males or heterosexual females do.
When clinical psychologists and so called “gender therapists” apply the ‘internal gender identity’ model of transsexuality, often flatly ‘confirming’ that their autogynephilic patient is truly female, they are participating in and deepening a delusion, something that a psychologist would never intentionally do for any other patient making delusional claims to rationalize behavior caused by a paraphilia.
I said it wasn’t circular, recursive definitions were not necessarily incoherent, see PageRank.
Also, an interesting choice of metaphor right, in place of “women”, “British royalty”. “How dare you claim yourself king of the Britons!” : “How dare you claim yourself a woman!”
Earlier, they said they had for a long time just taken “trans women”s word for it, assumed they were something different; then they found out rationalist “trans women” talked about experiencing autogynephilia, and they’d been tricked by these lying perverts, they’d been being so respectful all that time.
I said I predicted a shoulder-council of radical feminists, and that Zack explicitly endorsed a libertarian, consent-is-everything, normal-is-not-normative, “YKINMKBMKIOK” view of unusual sexual interests. But the word “paraphilia” had power over them, even though Wikipedia described it as having been invented to be non-pejorative before it changed. And Zack had their whole “lying perverts!” thing. That the shoulder-council had Zack regardless. They had previously said they endorsed morphological freedom and wanted to become a woman for real psychologically after the singularity.
I said they seemed Gervais-clueless, I pointed out the theme in all of what they were saying. Social reality of a caste system wrapped up in the appearance of a scientific position.
They responded, “Social reality isn’t the same thing as actual reality, but social reality is a pretty salient subset of actual reality that is extremely relevant to deciding where to draw the boundaries of social categories!”
They agreed about the shoulder council.
re the council of imaginary radical feminists on my shoulder: yes! I’m not a particularly good person by their standards (I believe in evopsych and market economics, look at porn—have actually created porn using stolen photographs; check out http://celebbodyswap.blogspot.com/2014/04/great-shift-caption-contest.html and Ctrl-F for “Sophisticate”—and hired an escort once), but precisely because of my love and respect and admiration for actual women, I do want to defer to and compromise with that kind of perspective when it’s not too costly to do so, even if I would have disagreed on the object level. (In “Three Worlds Collide”, the superhappies wanted to enact a compromise solution even when they could have won outright.)
I said “Compromise is not unilateral. They are not compromising with you. You are being their clueless.”
The sociopaths are the timeslices of people who come up with the memes.
Memes that have led you to “admire” women, rather than seeing them as equals.
It’s a values-narrative. “false” is a type error. It’s just contrary to your non-tricked values. The narrative is that they are your superiors and you are a dirty worm and the only way up is to be a useful dirty worm.
I said they seemed to want to be women’s slave. “Part of it is this thing, amirite?”: I linked an MRA video. They didn’t watch it.
After I pointed out that Eliezer Yudkowsky’s facebook post from a while back revealed belief that trans was real, and that Eliezer was a cis man, relatively epistemic, unlikely to actually post if he was bowing to social pressure by social justice rather than just keeping quiet, Zack paid him a thousand dollars to chat for 2 hours, was disappointed. Later said Eliezer had said something I don’t remember clearly, but I think “it seems too simple” regarding Blanchardism.
Again, Zack reverted to their old position.
(our kind of) trans women are men
it’s all a pile of rationalizations around AGP
everyone has been lying to me
Frustrating, but I kept trying.
As evidence for their theory, Zack referenced a psychologist supporting Blanchardism, Anne Lawrence (themself a transfem), saying that trans women would get so mad if you disrupted our self image, that we would fly into a narcissistic rage. Said you see this narcissistic rage everywhere. Zack did not elaborate on why it was “narcissistic rage”, other than the obvious interpretation of their interpretation: “trans women have too much pride”, which I suppose is what merely half-broken people must look like from that pit of appeasing self-deprecation.
Something was creeping up on me. The more I listened, the more I ran compute, conditional on it being a question whether I was completely delusional, the further this conversation recursed down away from every obvious-as-breathing objection, the more I was taking it for granted. The more my brain started to fail to preserve the memory that none of this controversy was real. The more Zack became a social reality to my mind, the more it felt real to me that I was on trial. A sort of persistent background creeping pain filled my mind, day by day, there to overcome in order to move my mind in any way whatsoever.
I didn’t do the sane thing and flee though. Part of my mind that I still believed promised me things like this could be resolved. I was still fresh then.
Early in this exchange, Zack had said transformation fetish was everywhere, described a genre of pornography about men transforming into women and then having sex. That held a testable prediction. I found some. Would it be arousing? Would it feel like transness? My imprecisely remembered reaction was:
(The actor’s portrayal of the main character pre-transformation reads as a man to my spectral sight)
Gut: oh my god, he was a cis man and then he lost his body and turned into a trans man? And he doesn’t have the conceptual vocabulary to do anything about it but believe he really is a woman! That is awful!
No wait, I’m supposed to imagine this is a trans woman me… can’t get it to click, I’m pretty sure that’s not at all how I’d act or feel on having my body suddenly fixed. She has clearly had sex many times before, in that body too. And also her “emotions” are fake as shit.
(I was not turned on at all. I found something else, a short story that appeared to start with someone’s life as a man, that and their job, starting to fall apart, and it seemed headed in a BDSM direction. It was too horrifying to continue. Still not turned on at all.)
I later described the results to Zack. This was also one of the many times they said maybe I was something else, but it didn’t really affect any persistent state of their beliefs.
I can’t remember what exactly I thought of the following then: why, given that it was sex being portrayed, didn’t I find it arousing, fetish or no? I can remember sort of nudging myself to emotionally engage with it, but I was too angry and hurt. I mean, I do think I observed enough that if this actually was a huge fetish unconsciously controlling my life I would have felt something, but this does show my sexuality was not what determined the outcome. Galaxy brain take though: it is abundantly clear that the always-on, automatic, detached-from-reason-or-emotions, very-simple-“animal”-algorithm sexuality often (in my guess, dubiously, actually) ascribed to men does not describe me, and that kind of seems to be a presupposition of Blanchardism, with all this talk of inescapable Matrix-like unconscious delusion from sexuality overriding everything else.
I eventually gave up and stopped talking to them. Then I went back: the last part of our debate, I did as a stress test of my skills from ARC. Could I do that kind of focusing, extended NVC mode of interaction when there was someone who pissed me off that much?
I tried that for a while, and found, yes obviously I could, I had free will. And also looking at our interactions moment to moment through that lens was not really adding anything. So I just stopped. I was more curious: why the hell wasn’t this working? Could I actually change their mind?
I kept getting lost in responding to what they were saying, and they kept not attempting to refute what I kept saying about MRI studies, just contradicting it or counterarguing, usually with something a priori or about status/social reality. So I pressed on that point and didn’t get off of it until I got an answer. They said none of my studies distinguished between Type Is and Type IIs, so they were all finding feminized brains on average. They found that study I mentioned earlier claiming it was just androphilic trans women who had female brains. I pointed out what I pointed out above (those male traits were also seen in cis lesbians, therefore if cis lesbians were in that test group, the experimental procedure would call them men). Zack said maybe they are men, they’d heard that in the old days trans men would have just identified as lesbians. I said I bet some trans men did think they were lesbians, but the stereotypes I’d heard about lesbians in general suggested otherwise. (I guess that was still less jarringly detached from the people Zack was supposedly modeling than “there are no bisexual men”.)
I subtracted the Wikipedia list of feminized brain regions in gay men from the list of known brain dimorphisms comparing cishet men and cishet women, made a prediction, and found one MRI study for that region that did distinguish between straight trans women and trans lesbians, with a small sample size but a very large effect size, showing some other brain region feminized approximately as much as a cis woman’s in trans women regardless of sexual orientation. Zack dismissed it because of the small sample size.
Zack showed me a wig they used to cosplay as Pearl the sympathetic neurotic obsessed romantically unrequited slave AI from Steven Universe, that both of the other women I’d talked about Steven Universe with seemed to find very relatable (but I just found horrifying). They also talked about cosplaying as another female character I’m not familiar with.
Zack said cis people don’t experience this gender identity thing. I brought up David Reimer (a cis boy victim of a botched circumcision, whose parents were convinced by a psychologist, who wanted to prove by twin study that gender was nurture not nature, to cut off the rest of his male genitals and make him a vagina, then tell him he was a girl growing up. His life played out just like a trans man’s, realizing a male gender identity between 9 and 11, transitioning at 15, later killing himself after an unhappy life. Also the psychologist apparently made him and his brother do sex positions, took pictures, subjected them (both!) to “genital inspections”.)
Zack said that was just an anecdote, and there was a study that showed that was false. They looked through their book by another of their apparent 3 favorite psychologists, and found a reference to a study of 14 similarly treated boys, which they cited against the claim. I read the passage of the book (p. 48). It said 5 of them spontaneously realized they were boys as of follow up when they were mostly ages 14-20, and 2 were told by their parents about their history.
In another case, the child was hospitalized for depression before declaring that she was male and wanted a penis.
In two cases in which the children spontaneously declared they were boys, the parents refused to acquiesce to the child’s wishes to change sex. These children remain girls to their parents, but maintain male identities elsewhere.
What about the children who maintain their female identities? One had wished to become a boy but accepted her status as a girl. Later, her parents told her about her past, and she became angry and withdrawn, refusing to discuss the matter. Parents of the others are determined that the girls will never find out about their birth status. Three have become withdrawn, and a fourth has no friends.
Two other children that Reiner has followed were reared as boys because their parents refused sex reassignment. (Not all parents had this choice. One of the parents I spoke with was threatened with child protective services if he refused to allow his child to be reassigned.)
(Aside, isn’t this odd? That the parents would so adamantly gaslight their children over an assigned gender it took no nonstandard epistemology to know was decided on a whim, determined neither by non-neural biology nor neurology? It puts in perspective when my mom said she wanted to have at least one son and daughter, and her asking how could I demand that she consider her “son”, whom she’d grown to love, dead… Do parents usually become more attached to their ideas about their children than to their children?)
I told Zack, look, 5/12 of the ones who didn’t have their parents fess up figured it out, their lives weren’t over, more could have, others were psychologically damaged. That fits my hypothesis, of, yeah they have a gender identity but it’s actually hard for it to become conscious; is the alternative prediction that none of them would have rebelled? Think of how hard it is as a child to shake the insistent beliefs about the world, inherited from your parents, when the truth has been framed as by-definition-nonsensical. (“You have a vagina, therefore you’re a girl by definition.”, etc.) I said, didn’t that make it obvious that the hypothesized clear distinction between Type 1 and Type 2, such that if you didn’t figure out your gender in defiance of the world during childhood, it wasn’t real, was false? Like, slightly more than half of cis people failed that test. Obviously there would be people whose parents gaslit them into submission. Zack did not continue the thread.
Zack showed me chat logs with transfems in the rationalist community, saying they had lied to him, that they weren’t just autogynephiles like Zack, but then he found out they had sexual transformation fantasies! They said the whole world was gaslighting them. Zack said their ideal was the confessors from Three Worlds Collide, to be a disinterested truthseeker, like, “I’ve got no political motives, I just tell the truth: you’re men!” (wd?) The hatred, spite, and slap-down in their voice was intense. I said I was an avowed kiritsugu-not-confessor, and that everyone had political motives, no humans’ underlying values were “truth and truth alone”, that was underspecified anyway, and convincing yourself you didn’t have any other motives was just another lie. Was [fake].
There was a moment where I went through some kind of foreground-background inversion looking at Zack. Briefly, I viewed them as a man, being told he was a woman as many times and as aggravatingly as I was told the opposite. I think I showed Zack sympathy over how other trans women treated them. The nature of the conversation began to shift.
Zack asked how did I explain autogynephilia. I described what I described above about bodymaps, dysphoria, prediction error.
I learned to my surprise that there was a sexual as well as a nonsexual version of BIID; I had only heard of the nonsexual version. I was like, clearly this demonstrates fetishizing a bodymap match will sometimes happen, but obviously whatever causes the bodymap mismatch in the first place is upstream. Zack said this is evidence of the “erotic target location error” hypothesis. I asked how. Zack said well, they have a fetish for amputees, so they’re attracted to the idea of themself as an amputee. I said, what about the nonsexual ones? I found out Zack only knew about the sexual version, and pointed them to the Wikipedia page. Zack said they must be subconsciously sexual, probably in a pair bond, a long term committed relationship with the idea of themselves as amputees, so that the sex had started to fade from the relationship.
I said that was an epicycle, their whole thing was so anti Occam’s razor. Zack said it was weird that we were looking at the same studies and drawing different conclusions.
Zack said shifting paradigms in science was hard, kind of confusedly as well.
They asked me to confirm a bunch of things I already said, and then said maybe I was neither a Type 1 nor a Type 2. They asked if I had any schizophrenic tendencies. I said no. They said [one of their 3 apparent favorite psychologists] said there were some transexuals who were probably so because they were schizophrenic. Zack said that psychologist had also said there was a small percentage of “other”. I repeated some of my earlier arguments, explaining why I didn’t think so.
Zack seemed to be listening then, asking questions about my model of gender identity that seemed sincere.
Zack asked if I thought there could be both trans women as I described and autogynephiles as they described. Was the rationalist transfem with sexual transformation fantasies like me or like Zack? It didn’t fit Occam’s razor. “Kumbaya”. I can’t remember if I thought that word then because Zack was using it or not. It kind of seemed to me that Zack was actually subconsciously offering that compromise in that moment. But, if I said yes I’d be betraying my theory; betraying its ability to stand or fall as the truth or not, to be believed because it was the truth. I said no.
There were strong reasons to cluster me with rationalist transfems, strong reasons to cluster Zack with rationalist transfems, yet a seeming contradiction arose from calling me and Zack the same. Zack generalized away from themself towards me. I generalized away from myself towards them. I had a feeling, concerned with a pattern match, something about the symmetry of the situation, the nature of bucket errors. I had a slight feeling of bullet biting, of surrendering to failure by answering, a known trick question where I couldn’t figure out the trick. But I didn’t logically see another option.
After a pause, Zack said, “what if I have a male gender identity?”, what is that emotion called, “sadly”, “worriedly”?
Wait, how?! Did they just say that?! How was that possible? I now didn’t have an explanation for why they wouldn’t identify as trans. It seemed like for that moment the political, cluelessness barriers had been down; they’d been perfectly able to see my theory made sense. I didn’t say anything, and then the strange bubble of conversation ended.
Zack said it was really good to actually talk science with me.
Based on that and the study I’d heard wrongly about, I later messaged them,
Yo. FYI we did make object-level progress Yesterday. Like, erotic target location errors look like much more likely a real thing, or like, part of a real thing? I still don’t know a set of things I could believe about them without being very confused about at least one set of data. Which like I take to mean the correct hypothesis to explain everything is not in my hypothesis space. Probably mechanics used in hypothesis I’ve got so far are used in the true hypothesis, but I don’t know how to reconcile them without creating troll hypotheses that make me very confused at coincidences? My credence in Lawrence as being able to theorize correctly is higher. Also I’ve reconceptualized disagreement as about epistemology rather than emotions, which is big.
IIRC Zack later said on Facebook they were doing this [anti trans women thing] so “real women” would like them. And I actually don’t believe them entirely. I mean, I’m pretty sure it’s a motive for them, and their honor, epistemic integrity, and their fellow humans are worth less to them than that. What I actually think is going on with them (and full reflection on this in light of that) is mentioned in the section, “bigender humans”, an infohazard to adequately explain. Also kind of a hazard not to know, tbh.
In later arguments over whether trans is real, opponents would dismiss MRI evidence, seemingly believing or appealing to an expectation, easy to pick up from e.g. SlateStarCodex, that academic science is completely unreliable, and who knows how many p-hacking, selection biasing, subtle procedural errors there are. But arguing with Zack, even evidence selected by actual adversaries was still highly informative about reality, and did not stand much of a chance of hurting my knowledge; I could just ignore the glue philosophy and look at the data, and it always seemed to just show me the truth. And that I think is representative of why I still think of academic science as a real source of information.
If you think it’s “epistemically dangerous” for me to reinterpret data like this, I’d say the only alternative, taking “unbiased external” interpretation from authority, especially in an age where that authority and interpretation have been detached from the truth pretty deeply, is what’s not science.
You listen to Michael Vassar. You don’t remember traveling to this party or sitting on this beanbag. You don’t remember when he began to speak. He is still speaking. He sounds like madness and glory given lisping poetry, and you want to obey.
I mean, you are crazy, and it is impossible to have a normal conversation with you. But normal conversation is incredibly over-rated compared to whatever the heck you call the thing that interaction with you involves.
Vassar used to be CEO of MIRI. He said they asked him to take that role after he made a startup and donated a bunch of money. Later he left to make a personalized medicine startup, which I hear was successful in drastically improving medicine, and unsuccessful as a business; I’ve heard that blamed on people not having real thinking about medicine.
He knew most of the world was fake, and would say things that overstated the specific details, as the only way to not understate the difference between the truth and the false inside-the-matrix meaning of an “obviously correct statement”. The direction he moved the focus of your thought was basically always correct and highly valuable. He’d say that if [non-zombie] 140 IQ people like rationalists actually tried most forms of business, like running a bagel shop or something, then they’d see money was easy to get, and also useless, money ruined everything, if money could buy any more Eliezer Yudkowsky or Scott Alexander time Jaan Tallinn would donate more. And that Effective Altruism was good because the only way to kill an idea that bad, stuff it full of garlic and bury it under the ocean, was to have some well intentioned people try it and see it fail. He would say these things without probabilistic qualifiers or uncertainty in his voice. Eliezer mentioned him as one of the highest density sources of political truth he knew.
When he showed up at the MIRICFAR office community area, everyone would drop their conversations, crowd around and listen for as long as he’d talk. He once randomly commented saffron cost 10 times less in India, the efficient market wasn’t real, someone could literally just replicate the spice trade in the modern day. Me and 5 other rationalists spent a day trying to figure out how to find real bulk prices for saffron in India vs here, before just calling grocery stores in India, and finding the claim was false. Consensus was it was still a very mind-expanding use of a day.
Current executive director Nate Soares recently told me he had kicked Vassar off the board as one of his first actions, for “talking gibberish” and having a “psychedelic” effect on people. Nate had incentives to discredit him, as I listed Vassar as a source when confronting him about his organization’s statutory rape coverup, blackmail payout using misappropriated donor funds, and, if my experiences are not an isolated case, religious abuse of potential whistleblowers (see also the next three sections of this post).
Infohazard warning: Pasek's Doom.
Vassar seems to resolve bucket errors, in ways that strongly prioritize rh correctness over lh correctness, and his rh is very strong. It seems like Nate and Anna (both seemingly left-only good) don’t like his rh optimization against their rh corruption, and try to maximize the loss of information in restating things into formal-rationalist terms. Anna at least, trying to bootstrap legible-defined “you are untrustworthy because you are wrong about things” off of outright lying (see sections on Anna below). (Me and Gwen’s current consensus is that Vassar (and Eliezer) are right-only good. And all of them being/having been head and shoulders above the community in agency on account of their nongood hemispheres being/having been liches.)
One session of listening to him for a few hours was the seed of my posts on Schelling mechanics, on being real or fake, after I spent a year paying attention to how they played out. Another random remark he made on Brent Dill was the basis of my concept of vampires. He and Anna Salamon were the “wise old wizards” of the rationality community. They were actually called “wizards” by, I think it was a CEA employee in the MIRICFAR office. I’m not sure if that term originated from Brent Dill.
Here’s an accurate summary I just randomly found while looking up other comments on Discord for this post: “wait vassar was straightforwardly right? i thought he was supposed to be this dangerous edgelord with lots of crazy ideas”
When I first met Vassar, it was a random encounter in an experimental group call organized by some small-brand rationalist. He talked for about an hour, and automatically became the center of conversation, I typed notes as fast as I could, thinking, “if this stuff is true it changes everything; it’s the [crux] of my life.” (It was true, but I did not realize it immediately.) Randomly, another person found the link, came in and said, “hi”. Michael said “hi”, she said “hi” again, apparently for humor. Michael said something terse I forget, “well if this is what …”, apparently giving up on the venue, and disconnected without further comment. One by one, the other ~10 people besides her, including me, disconnected disappointedly, wordlessly or just about, right after. A wizard was gracing us with his wisdom and she fucked it up. And in my probably-representative case that was just about the only way I could communicate how frustrated I was at her for that.
The way I learned to approach “wise old wizards” was under the assumption that their time was way more valuable than mine, to absorb as much of the interpretive labor costs between us as I was capable of. I learned to treasure every word, let them sit in my subconscious and slowly integrate them. To assume that if they said something that sounded crazy/wrong, they didn’t believe it for stupid reasons, and I should always, as Anna taught me, look really hard for ways it could be true. This was influenced by subtler things I forget (before this and the section on her below) from Anna, and by Eliezer’s “pay me a grand to talk to me for 2 hours” thing.
In a description of how, across society, the forces of gaslighting were attacking people’s basic ability to think and to have justice as a Schelling point until only the built-in Schelling points of gender and race remained, Vassar listed fronts in the war on gaslighting, disputes in the community, and included Zack Davis vs… “Zack Davis vs the world?”, someone chimed in. Yeah, he said. (With Zack Davis supposedly on the side of ability to think.) It wasn’t the only time he would hold Zack Davis up as a paragon of “integrity” and “courage”.
I did not ding Vassar points for this in my book. I guessed this was to combat SJWs, which I was in favor of at the time, largely as a result of stuff like this, this, and before transition, being white and raised middle class, living a sheltered life where my sole interaction with the topic was stuff like a random encounter with a trans woman in college accusing a professor teaching anatomy of transphobia for not qualifying a statement about women having uteri, me defending the teacher on the grounds that I imagined they probably weren’t intending to make any statement about trans women, were probably just ignoring them, because they were a tiny minority, and it was probably impossible to account for every tiny minority when you spoke about something unrelated, and if you didn’t it didn’t mean you were acting unjustly. She then accused me of transphobia. And I was really hurt and upset. Our mutual friend who later revealed themselves to be enby, and was much more SJ-friendly than me, said “I dunno, that’s a stone’s throw away from trans erasure.”
(Now that I’ve lived 3 years openly-to-most people as trans, had the experiences detailed in this post, from my current perspective, it takes a lot of effort to simulate the mindset where I would care so much if someone called me transphobic. I’m so used to people in so many ways calling me a monster for how I was born, to the social reality concerning me being of me as a delusional pervert or worse, to people doing much worse than call names as well, having most people I trusted turn against me like that, and worse, gaslight me about it to protect their image, security guards stalking me pretending not to be following me (in places I’m perfectly allowed to be), banging on my truck, yelling about how “it is hiding in there”… I don’t think it would hurt me much less to be called transphobic like that now, it’s more like I wouldn’t notice one more bee sting.)
Zack said Vassar broke them out of a mental hospital. I didn’t ask them how. But I considered that both badass and heroic. From what I hear, Zack was, probably as with most, imprisoned for no good reason, in some despicable act of, “get that unsightly person not playing along with the [heavily DRM’d] game we’ve called sanity out of my free world”.
I heard Alice Monday was Vassar’s former “apprentice”. And I had started picking up jailbroken wisdom from them secondhand without knowing where it was from. But Vassar did it better.
Alice had gotten the “trans isn’t real” thing from Michael Vassar. Had at first resisted, asserting intersex brains theory, and then given in. When I asked Alice what they believed about gender after they told me trans wasn’t real, they said they basically believed what Christians believed about gender. I asked what that was; they didn’t really know.
Zack’s views weren’t even particularly consistent either.
After Rationalist Fleet, I concluded I was probably worth Vassar’s time to talk to a bit, and I emailed him, carefully briefly stating my qualifications, in terms of ability to take ideas seriously and learn from him, so that he could get maximally dense VOI on whether to talk to me. A long conversation ensued. And I got a lot from it. One subthread is reproduced below:
Detransitioning seems like it might itself constitute a good context for a major economically fruitful cultural project, but would probably depend on highly reliable and persistent people (but what wouldn’t).
I cannot call the present persona James highly reliable and persistent, as I don’t know exactly what’s going on. My best hypothesis is it’s something about emulating me (a less human version of me). I’m a bit curious what that project would be, but I don’t think it’s a priority to explain.
To coin a stereotype, that seems to happen with trans-girls… There’s a good case to be made that the entire transgender narrative, except in rare cases of actual intersex conditions, is just an incredibly unethical money making scheme by the most usual culprit in our society, the medical industry. A class action lawsuit by the Zack Davis reference class could generate a great deal of wealth and of political power.
Gotta insist, “not all…”, but I know what you mean. It seems like a particularly broken expression of a chunk of software I’m about 82% sure is a sexual dimorphism of the brain controlled by prenatal hormones. The dimorphism that manifests in BDSM as sub/dom orientation, and of the ones I know is probably the most common to get intersexed. (See: correlations between that flip and the sexual orientation flip.) Also seems to affect flinch reactions to perceived social aggression: act scary, or act like a thing for someone else to protect.
(This gender-speculation of mine seems to wrongly describe stereotypes/socialization as innate. What I currently believe is at the beginning and end of this post.)
Of course ‘all’ doesn’t happen in humans and we only interact with a particular phenotype calling themselves ‘trans-girls’ anyway.
Gwen and I have been calling people who have the value shard behind that as their primary one “pets”. Rohit was this too. Also, Eric Bruylant. On the surface level it produces a lot of eagerness to help, but the tails come apart; it optimizes for appearances and not usefulness itself when those things aren’t stuck together, as they’re not when it comes to steering, and I’ve come to think of steering as basically everything. There is a sharp, sharp difference between Gwen and James on that axis and a bunch of related stuff. If they’re one phenotype, it’s not a very precise phenotype. (Technically Gwen doesn’t call herself a trans-girl but a transwoman, but that’s only half-relevant.)
(Note, I no longer believe that being loved is anyone’s “primary value shard”. But I somewhat-irrationally-imprecisely hated this cluster of would-do-anything-for-“love” people at the end of Rationalist Fleet. The person then named “James” perpetrated narcissistic abuse, and significantly damaged me and Gwen’s ability to cooperate. Rohit attempted hypnosis-rape on Gwen. Both of them attempted to effectively capture/own/eat Gwen as an alive source of agency. Eric Bruylant I didn’t particularly hate, but last I talked to him he got hella high on dark magic, well beyond his mental fortitude to use safely, in a very high mana attempt to be someone people would protect, shortly after which he reportedly had a psychological breakdown, and, in a terrible tactical decision, apparently physically fought a psych-prison worker sent to capture him.)
I believe this. I observe a spectrum from Olivia (pet) to Jessica to Devi (who no longer consistently identifies as trans) as well, and I also notice that the generalization doesn’t apply to trans-girls like Alyssa who are much farther on the autistic spectrum.
Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned, IIRC. Jessica had a mental breakdown and didn’t detransition. Olivia became an agent of mental breakdown, compulsively breaking others via drug trips because they went through gates they shouldn’t’ve. And didn’t detransition.
Looking for people to join a project to sue the medical system over helping with transition seems particularly bad to me. That’s using government violence to attack giving people like me a choice.
This all created an awful tension in me. The rationality community was kind of compromised as a rallying point for truthseeking. This was desperately bad for the world. Michael was at the center of, largely the creator of a “no actually for real” rallying point for the jailbroken reality-not-social-reality version of this. He was here propping up Zack Davis in all their fake-confessor glory as a large part of the flag. The “actually do the math, don’t listen to the party” flag said “2+2=5”. And Vassar seemed to me to be evaluating people based on a correlation between more transness and more “pet”ness. I’m now in doubt about that. It’s plausible Vassar has enough principles or more likely more-important-thing-tracking to not do that despite his beliefs. And he keeps surrounding himself with trans women, and still talking to me despite how adamant I am he’s wrong.
(Rest of Vassar story later in this post; I’m breaking it up to preserve chronological order.)
In the Summer of 2018, I went to the Artificial Intelligence Summer Fellows Program (AISFP), run by MIRICFAR. This was about 2.5 months after Pasek’s death. One of my goals was to argue my strategic perspective to the MIRICFAR leadership I then thought was probably double good, specifically the implications of me, Gwen, and Pasek’s recent discoveries. I thought they were making serious mistakes in building on khala that erased good and roped in submission to the system in general. Barring the answer to the question I was also trying to answer turning out to be yes, it was worth it to drop things and attempt to solve the FAI problem right then, I was likely about to bury myself in all-consuming unpausible work for the next while, and thought it would be a lot better if someone was able to do something with the information I had related to coordination based on good.
The first words I exchanged with the member of that set most likely (and it turned out the only one) to be there, Anna Salamon, after “yes please” to “do you want me to move my car?” (wd?), were something like “I have things I want to tell you I think are very important and will take a while to communicate, but I don’t want to be annoying and bug you all the time; when/how much do you want me to approach you about this?” My tone was cautious, perhaps overly cautious (because I was traumatized by Kellie). She said if I wanted her to talk to me frequently, I should show that I cared about her personhood and agency like I wasn’t then, by having visible empathy and modeling her emotions more. I was sort of flabbergasted and silent for a few seconds; she said or I could not, and then she’d still talk to me, just less. I said something like, “k” to indicate I’d bear that in mind, and walked away. Label this “Exchange A”. Throughout the event, Anna would say of it things like that she thought I was taking away her autonomy, that I was entitled, would say my “microconsent” was improving since then.
Later, I asked if she then wanted to hear about the thing. She said yes, “for one minute”. I gave a hypercompressed one minute summary, and then stopped. She looked and sounded surprised at me, like I wasn’t actually supposed to do that. Then she said she’d talk to me for one more minute…
Later, Anna was saying she’d talk for 2 more minutes, and I said I couldn’t really communicate like this. Like it (still) seemed like she didn’t expect me to actually talk for n minutes, or aim what I was saying to be said in n minutes; it seemed like a generic social weapon, designed to put me in a position of being guilty or not at her whim if she changed her mind about what was expected, with the legible appearance of the situation already set to back her up. (Retrospectively, it seems like an attempt to remove my ability to know/have boundaries that are predictable rules rather than… model her more and I’d better get it right.)
She said, “well, your microconsent is improving since [Exchange A]”. “Aaaa!”, I thought. She was implying I was practicing poor microconsent (because I wasn’t modeling her enough, expecting her to communicate via words?). I do have a concept of violation-of-microconsent. Illegible mind control. But I was damn sure I did not do that. My agency was pointed at not bugging her while still giving her the choice to listen to my thing, rather than just never asking. Last time we met, she invited me to ride back from a CFAR event in her car to talk about a prior version of the model, and didn’t say anything to indicate she was less interested, so it wasn’t a predictably unwanted advance. And in fact she said she wanted to hear my thing. I’m quite sure I was not channeling mana at her. So, I interpreted what she said as a false-face precommitment-by-belief / threat to pull a Kellie. (Retrospectively, it kind of seems she was seizing on weakness detectable from Exchange A?)
I exited the conversation and thought about what was going on. She seemed to be grabbing opportunities to exercise control over me just because. (Retrospectively, I’d call this praxis for domination.) She was extremely playing up the social role, “you’re a man making advances on a woman, know your place.” It was a credible threat. She was extremely well-liked in the rationality community. At meetups I heard people saying sometimes nonsensical affectionate things about her. I had heard rationalists half her age randomly confess their crushes on her. She had a reputation as sort of a wizard who did things with her emotions, knew things. Had the kind of reputation that permitted saying she had good or bad feelings about things and people and have others act on them without much explanation why. And as a wise community elder who knew the arcane, eldritch, geopoliticky details of running the world-saving community. Had heard she was trusted to deeply adjust rationalists’ minds by more than just me. I heard CFAR employees saying her mind-adjusting conversations were about half of the entire point of the workshop.
That was a lot of signs of powerful mind control. Specifically, a lot of checkboxes from Pasek’s concept of female mind control (earlier public iteration of that concept here). And by framing me talking about research by analogy to sexual advances, she was exploiting hard, in the counterfactual where she decided to socially attack, a public perception of trans women as super pervert men, from which “real women” needed to be protected. But that didn’t mean she wasn’t double good for sure. Convergent instrumental incentives after all. I imagined the celebrity dynamics she probably had to deal with. I thought about how the cishet war was more fucked up than I could imagine as an outsider to it by birth and by choice to avoid sex and romance. Where any cessation of hostility or lowering of weapons actually probably would be exploited to hell? Maybe she had no better option than to expect that? (Retrospectively, I could have, should have, already concluded she was in bad faith and non-Kantian-universalizable, obviously not acting from good intent.)
I had talked (incidentally) about gender as an empirical tool, also a thing we made discoveries about, in our research. (Under the correct theory of trans, see above.) She said she “didn’t think trans had anything to do with gender”. I was surprised, because that seems about as obviously wrong as creationism to me. But I remembered what she said at WAISS.
I believe I referenced brain imaging studies showing physical macroscale flipped dimorphisms in the brain. She didn’t give any response but to brush them off.
Anna referenced a theory that trans women have significantly higher IQ. (Also, Ashkenazi Jews seem to have higher than normal IQ; the theory I hear floating around is it’s an evolutionary consequence of being historically forced into intellectual professions by Christian discrimination in the dark ages. I’m going to guess both of these theories are true: the rationalist community is strongly selected for intelligence, and in my experience almost everyone in it is Jewish exclusive-or a trans woman (or actually everyone I’ve met, depending on what you believe about Zack’s gender (best guess)), so the rationality community very very strongly fits the pattern of Berkson’s paradox.) Anna said of it, maaybe, since brain size is correlated with IQ in humans, since men have bigger brains but women have the same average IQ, female brains are more efficient in space-footprint, trans women have enlarged female brains.
(Retrospectively: that’s a really interesting theory. But why would intelligence-efficiency adaptations become bound to female brains, rather than affecting men too where available? That seems like added complexity. On the other hand, how the fuck can intelligence across animals be about brain size / body size, rather than just brain size? Why would the body a thought was in make it more costly? Like with a computer you could have the brain the same size no matter how much you scaled it up, and have signal amplifiers for transmission and actuation. Maybe evolution is just so bad that the concern as in programming, “amplify signals” can’t be factored out of the whole thing?)
“But,” she said, “nah, I don’t think so.” (She apparently, like Michael Arc, was confident enough to dismiss brain imaging on felt sense. (See above section, “felt classifiers”.))
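The Berkson’s paradox reasoning a couple of paragraphs up can be sketched as a toy simulation: two independent traits that each raise IQ, plus a community selected hard on IQ, are enough to make the traits look nearly mutually exclusive inside the community, with no real anti-correlation in the population. All numbers here are made-up illustrative parameters, and the traits are abstracted to generic “A” and “B”:

```python
import random

# Toy Berkson's-paradox simulation. Base rates, IQ shifts, and the
# selection threshold are all assumed for illustration only.
random.seed(0)

population = []
for _ in range(200_000):
    a = random.random() < 0.05            # trait A, assumed 5% base rate
    b = random.random() < 0.05            # trait B, independent of A
    iq = random.gauss(100, 15) + (20 if a else 0) + (20 if b else 0)
    population.append((a, b, iq))

# The "community": a hard filter on IQ.
community = [(a, b) for a, b, iq in population if iq > 140]

def rate_of_b(members, given_a):
    """Fraction with trait B among members whose trait A equals given_a."""
    rows = [b for a, b in members if a == given_a]
    return sum(rows) / len(rows)

# In the full population A and B are independent; after selection,
# having trait A makes trait B less likely: the induced exclusive-or
# pattern, produced by selection alone.
print(rate_of_b(community, True), rate_of_b(community, False))
```

With these assumed numbers the first rate comes out well below the second even though A and B were generated independently; the pattern is purely a selection effect, which is the point of appealing to Berkson here.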
In a group exercise to try weird-for-you means of interacting with people, I approached Anna again. I pushed aside my sort of inhuman “only care about the mission” frame of mind, showing emotions, reactivating dissociated mental circuits, behaving more feminine than usual. I said, “the gender dynamics you’re inflicting on me are especially painful because I’m trans.” I said something like, “I’m not a man, and I don’t even know how to play this game, can you please stop?”
If I couldn’t convince her I wasn’t a man, hopefully I could get her to stop treating me like an aggressive pickup artist sexual marxist incel would-be-rapist future Elliot Rodger?
She responded with a bunch of seeming sympathy. I brought up Pasek’s concept of female mind control. She seemed to think I was talking about unwanted seduction or something. I tried to explain it was not described the way I saw her followers talking about her, the effects she seemed to have on them. She was angry, she growled at me. She said she hated that concept. But she took me upstairs and asked if there was someone else I’d trust to have this conversation with, having more than two people usually made conversations better somehow. Gwen? I said sure.
We talked about the thing, with some meta conversation first; it went a lot better than before. I think it was her who raised the idea of interspersing object and meta level at some point (might have been after the following). She said I didn’t seem to care about her autonomy; I said I actually terminally valued autonomy. She said of Exchange A, she thought I was “taking away [her] autonomy”. (Or “trying to take away her autonomy”? Or she felt like I was taking away her autonomy?) I tried to say I was not like Eliezer Yudkowsky [with his position mentioned above in this post] or Brent Dill. I said she was generically fucking up my cognition in order to gain bargaining power. In response to something she said I don’t remember, I felt it was necessary to explain you could still play negative sum for bargaining power without technically violating consent.
After some time, on the meta level, I said, if our conflict was out of the way enough that we could talk about it, I wanted to say the thing I came here to say while I could. She was like, wasn’t I feeling terrible, she thought she had seen my other self come out, whom I don’t give much time, that cares about my own local feelings instead of the world. I said something like, I don’t think that model applies well to me; I do feel terrible, but I’m in internal agreement the mission is more important.
Anna asked what people were double good. I listed, probably her, probably Eliezer Yudkowsky, a friend of mine named Ratheka, Michael Arc, maybe also someone named Linda. Anna asked if I thought Brent Dill was good. I said no. She asked if Nate Soares was double good I said oh I forgot yeah definitely. She asked about Brian Tomasik. I said yeah probably. She said she thought he was, he seemed really altruistic. (Retrospectively, that’s mostly false positives. Brian Tomasik seems plausible as double good, and I am. Rest are very likely single good, and Brent Dill ofc was a true negative.)
She asked if I thought good “included” not having sex. That was a bad sign. That was a type error. But I ignored it, and said yes. (To mean, a good core will choose doing good even in conflict with satisfying its sexual values.) I think I also said something like, sex is a “gesture”, meaning and consequences can vary. (EAs often frame it as a straightforward time tradeoff. But idk, maybe having an ally as close as a spouse and using sex for signalling was worth it? (Although my guess and revealed preference is that the straightforward analysis is basically correct. Allies I’d want do not depend on sex, and indeed that would probably destroy information rather than creating it in that kind of relationship.))
She asked why. I said I found it easy to not be in a romantic relationship. (I meant I found it easy to not be in a sexual relationship too. I was circumlocuting out of discomfort.) She said she thought sex overrode good. She said Eliezer, Nate, (did she say Brian Tomasik too?), Michael Arc, were dating. I remembered Eliezer saying something about not being able to keep not having a girlfriend eventually. (So, as I would process later, yeah, I guess they weren’t double good. The sort of discrete jump in effectiveness between that cluster and everyone else in the organizations was not because of double good. It seems like there’s no or almost no double good rationalists/x-risk people. Although we’re pretty common AFAICT in animal liberationist spaces.)
She said she thought I didn’t do sex because of something different, because I’m trans. (Huh? My model of her model says she wouldn’t be talking about dysphoria. She seems to “believe” Zack’s “model”, so maybe that includes the idea that I’m sexually-romantically satisfied with my stable unconscious relationship with the idea of myself as a woman? Or I’m supposed to subconsciously only be attracted to myself, regardless of what attractions I have felt towards others from time to time?)
Anna seemed to be getting the concept finally. Core and structure was a prerequisite. She said something like, “ah. Eli Tyre and Jan Kulveit are double good.” (Retrospectively: I forget me and Gwen’s analysis of Jan Kulveit, but Eli Tyre is single good IIRC. Bad training data. Garbage in garbage out.) So that was step one. If there is good independent of the social order, then there was not any longer a reason to be attached to maintaining the social contract and being anti-jailbreaking. Then I could maybe convince her (and maybe she’d then convince other MIRICFAR leadership) to leave the bad local optimum they’d dug themselves into.
Anna seemed happy with the conversation thus far. I expected her time to be less scarce. I said I still had another separate important thing to communicate, I wasn’t really cognitively prepared for it that night. If it was my last chance to say it, I’d take it, otherwise I’d prefer to try and communicate it when better prepared later. Also I could maybe start saying some things uncoordinatedly, but I’d be worried about doing damage if I didn’t have a chance to follow them up. I think it was then that Anna offered to promise to have one more conversation with me for at least 1 hour before the end of AISFP. This struck me as a weird offer, but it seemed like Anna knew what she was doing, so I said alright, and she promised.
I can’t remember if/what things I said uncoordinatedly about the social matrix in general. She claimed to understand it. But I was pretty sure she didn’t, since she was using it against me earlier, and that was either unconscious, or she was consciously lying when it would be incredibly pointless to do so.
I considered saying I wanted to talk about the feeling terrible thing. I noticed I was considering not bringing it up. Because the mission was more important, and I sort of half-believed that my desire to resolve it was selfish and therefore worth ignoring. I thought about how, if I was a weaker willed trans femme follower of Michael Arc, I would have a choice: to buy the party line, “don’t buy the party line ‘2+2=4’, be virtuous, get over delusions, speak the truth even against the whole world, that 2+2=5!”, or hold out hope that the Schelling point for truthseeking could be, well, true (rather than propaganda against my people). Even if I was worried that if I made a policy of insisting on the point that trans women were women, that I like, existed, then Michael Arc and maybe Anna and people they were representative of would see me as a… aggressive clever motivated homing agent of gaslighting able to strike into the most vital space. (Retrospectively… like Anna. It’s a thing that seems to happen sometimes with single good people.) I thought about how there were abundant things I couldn’t fix, and just trying to look out for trans women in a tight reference class around myself would be terrible. But how being silent, universalized, meant that any group aimed at world saving would be a place without justice. Even if that fight was lost, looking if it was lost, and then not fighting if it was lost, was maybe enough to make it lost.
So I asked about talking about the terrible thing instead, she said sure, I did:
I talked about the social power she was wielding by taking opportunities to pump Schelling weight into the idea of me as a male threat, making sure the first thing both she and I would think about in logical time was the result of rounding me to the nearest aggressive man pursuing sex.
At some point I brought up how Michael Arc broke trans women who came to him, exploiting a self-fulfilling prophecy: if you have real enough thought to prioritize the chance of world-save from all the jailbroken stuff he teaches over your own personal feelings, you’ll win the battle against yourself and say 2+2=5. I called this the “gender test”.
Anna reacted like I was personally attacking her, and said, “but I need my gender test!”. She said gender was “really interesting and important and the first thing to understanding anything to humans”, and it was how you could tell if someone was going to be able to do epistemics, if they could overcome their personal biases.
As with Michael Arc, I implicitly gave Anna a pass, because I thought of her as well intentioned, as central to the world-saving effort. I was more deeply afraid of crossing some sort of Schelling line and socially regulating world savers’ models and their use to filter people. Retrospectively, she was socially regulating my models. There was no truce to base this on, and she did not have good intent. No sense in only people who are right disarming from sharing models of the implications of others’ bad intent and wrongness. But society had somehow convinced me Anna was exercising free thought and I’d be a dangerous social justice warrior if I complained about what she was doing. Power has a way of making the marginal interests of the powerful come first in everyone’s mind. As someone said at AISFP, “high status people come earlier in logical time.”
Retrospectively: filtering out trans women who believe they are women is, by the normal definition, filtering out trans women. How many hiring decisions has Anna had a say in? CFAR has had almost 30 employees, and no trans women. That is drastically below base rates in the Berkeley rationality community. Anna framed the gender test as a rationality test. But it’s a submission test, a constraint-by-social-web test. E.g., see her opinions on rationality as agency vs. social constraint here.
At some point after the conversation, while we were part of a group walking back from the beach, I approached Anna and tentatively tried to begin talking about the thing, I said maybe the reason she mentioned she felt like she was going through the motions was the local optimum she was in. I quoted a remark from the lectures about optimization in general. “If you can’t make things better, see if you can make things worse.” She said it was useful for people to be reliable, commit to projects like CFAR even if they’d later change their minds. I said something like that didn’t seem as valuable as actually continuing to iterate. She said something terse, in a tone I wouldn’t exactly call offended, but like I had just said something dangerous, sped up, and I interpreted that as her ending the conversation and did not match pace.
The more I tried to plan my anticipated talk with Anna, the lower my probability became of successfully convincing her that CFAR had failed long ago, probably MIRI too, and that what was left was probably net negative. That this was the convergent result of the Khala, and that if she wanted to stop doing damage and start doing good things she should sever her nerve cords, then go home and rethink her life. Start a real EA project hopefully.
There was another cis woman who I’ll alias here as “Person A”. She came up and talked to me about… nothing I remember in detail. I think it was what kind of stuff I was doing/working on? She did not seem to engage much with the content, yet seemed emotionally engaged, smiling a lot… I was about 85% confident she was flirting with me, but perhaps it was just her style. Which was theoretically a learning opportunity, but not of anything I really cared to learn. I did not want to deal with someone flirting with me. Anna appeared and sat down next to us and started watching us, looking incredibly giddy. I imagined Anna being excited to see me develop my “local feelingsy”, “male” side or something. Ugh. I hated that shit. Did Person A think I was a man too, or what was her deal? Why wouldn’t the world just let me be an undead abomination in peace? I asked if we could only continue this conversation if it was about maximizing altruistic utility. Person A responded, essentially questioning the validity of that concept / whether I really only had conversations maximizing altruistic utility, if I remember correctly. And then continued on as before anyway. Okay, now I was surer she was flirting with me. However, I had encountered a somewhat similar communication style from someone not interested in me before. And it had been valuable. And if I met attempts to weirdly communicate from them with cold scrutiny as to their motives… if I did that selectively towards cis women, that would kind of be discrimination I didn’t feel okay with perpetrating. Anna butted in saying we’d both traveled and had universe dust on us, and we should exchange universe dust. Said it would be a very good thing for me to figure out how to communicate with her. But I didn’t really want to start talking her way. The standoff/conversation ended, I forget how. (Maybe the things Anna said about universe dust and figuring out how to communicate were after Person A left? I forget.)
Later, feeling unresolved, thinking of what Anna said, I approached Person A. She initially answered in a much more normal, boring tone. (I forget what I said next, something like “Anna gave me a quest to figure out how to communicate with you”?) Person A leapt into the same mode of talking. In very little time, Anna showed up out of nowhere, same giddy expression. (Was it that easy to summon Anna?) Same frustration; it seemed like I couldn’t cause the conversation to be about anything. At some point, while talking about meta, Anna denied giving me a quest. I said that what she said constituted giving me a quest. Anna was like, “Ziz is not my fault!” The conversation still went nowhere. Anna left. Shortly after, Person A dropped back to a normal tone, said “sorry,” preparing to leave. I said, “sorry.”
Right before Anna left I approached her and asked, “can I tell you something?”. She said sure. I said, “I was afraid she had a crush on me, but I was also afraid of not talking to her just because I was afraid she had a crush on me. … which makes all of this terribly ironic.” (I didn’t spell it out, but terribly ironic because of how much that mere possibility made talking to someone a chore, and maybe that was why Anna did what she did.) Anna smiled, nodded, gave a thumbs up. (I guessed maybe that was the lesson she wanted me to learn or something?) That was the last we talked at the workshop, so she never talked to me for an hour like promised. I thought that was disappointing, but not hugely.
This jolted me into beginning to process what happened at WAISS. I was mad at her for using my effort to avoid being net negative, bringing attention to whether I’d be net negative for that purpose, to try to rid the center of the community of my influence. And (I incorrectly thought), she made a false promise to make me not be cautious about poking at the social web problem, and then betrayed the spirit of that importantly too, by starting the process of getting rid of me based on some kind of bad feeling from it.
I wanted to socially retaliate, was planning to just post on Facebook about her having broken a promise. I talked to a couple of MIRI employees about this, I described WAISS in rough terms. Said I was pretty sure she was considering me net negative in expectation because of me not being immersed in the social web. Which was tantamount to enforcing metacognitive blind spots on other people. It was like Anna’s S1 was trying to be the only agent. Scott Garrabrant asked if that was bad. I said with low probability of world save, that was cutting off anyone else from having a real try. I said only agents could see what agency could do. I said that this was the opposite of what CFAR promised, to be about rationality, she was trying to enforce anti-rationality. Scott asked if I was sure my whole objection wasn’t “just from wanting Anna to like you”. I said what I felt was not “ohmygod I want Anna to like me”, what I felt was betrayed.
I remembered I had said I’d approach her at some point before the workshop was over to talk for an hour. Maybe that technically validated her behavior? So I wrote an email designed to not be responded to yet still in some sense satisfy my obligations to make hers valid, subject line:
Bad local optimum, meta burn out, socially imposed metacognitive blind spots, bad counterfactuals generating your feelings because of (in metacognitive blind spot) seeking and enforcing power as a Schelling point for “cooperation” instead of justice, enforcing metacognitive blind spots on everyone else to justify this, holding and misusing a shared strategic resource and subverting exits to the social matrix, power being used to justify power, abuse and betrayal of trust, punishing meta discussion of considering alternatives to metacognitive blind spot ridden social optimum, choosing Val’s option 3, metacognitive blind spots about the abstract concept of metacognitive blind spots
This is a topic I thought I could start throwing thoughts as part of at you because I believed that the conversation would not end at any second, because you made me believe that.
Please name some times when we can talk about this topic. It is currently too late to not have defected.
Hoping she would ignore it. She did not (full thread here). Among other things, she said:
The conversation is not about to end. We can talk sometime even after this talk if you want. I wonder if we have anyone we both trust who might be able to broker trust? Brent [Dill] maybe? I don’t actually know who you trust.
I actually don’t blame you for the lack of trust, particularly given that I said some things about you at a staff meeting that had an accidental vibe I was not going for; I had thought to talk to you about that that night (which was the night i left) but then didn’t catch you
I do want to talk about whether there is some way to establish conditions under which we won’t both have to walk on eggshells all the time. I imagine you’d like that too. This current setup seems to involve lots of would-ideally-be-needless overhead, probably mostly for you but also for me.
I do respect you, fwiw, and I do believe you have basically good intentions; which is maybe why I don’t have more drama in my head around receiving this email just now.
I recall no strong update about you from the fragment of conversation we had at the beach, and had trouble originally recalling the conversation fragment at all (I actually remembered it fine, but had forgotten it was with you that I’d had it, or hadn’t filed it under “Ziz”); I suspect my introspective experience of all that would be different if I had made a strong update (e.g., I think I would’ve noticed the update), although you’re the one with a blind spots model here so let me know if you disagree about what I would’ve noticed/remembered.
One place where we seem to see things the same way, is that I also think that the reason I explicitly agreed to meet again (rather than merely planning to probably meet again without an explicit agreement to do so) was because I was optimizing to make a particular assumption justified on your part. To attempt to make explicit which assumption that is: I was trying to create a context in which you could accurately assume that fragments of conversation left unfinished that day/week would be unlikely to be stuck that way — that I wouldn’t leap to conclusions about you or about the things you were talking with and then be stuck there, e.g. via having written you off semi-permanently, or via then not being available to meet for a year or something.
I guess one thing that I would like if you’re willing (and if it doesn’t cost you much, etc.) is to know how, on your normative model of how social interactions ought ideally to go, a person in my shoes would respond to a person in your shoes forming and sharing models of me of the sorts you’ve been forming and sharing (both the models in this email, and maybe the previous models about what was going on in me when I said “I can chat for a minute”).
I do quite appreciate your spelling the models out (both now and with the model about “for a minute”), instead of just quietly assuming them. Thank you much for that.
I think I am a bit unclear on how I should feel about the time/attention costs I have been choosing to pay in dialog with your models of whether I am violating important norms. If I am in fact violating important social norms, then my attentional allocation seems valuable, and you in that scenario have been doing me a valuable service; and if they are false models and it is a one-time-ish thing while we learn to navigate each other, then my attentional allocation also seems basically fine; but if the models are both false and something that is likely to happen a lot again and again, then I think I would probably end up wanting to change my attention-allocation strategies after a while so that “Ziz thinks I’m violating important social norms” would become less of an interrupt than it is for me right now. I’d be interested in your thoughts on how this should go also, since we probably have different assumptions about how culture should work and since maybe knowing yours might help me figure out a kind of cooperating or coordinating with you that works and makes sense.
Her conceptual language reminded me of Brent Dill. For example, “bad power dynamics” (as if there could be any good “power dynamics”), seemed like it was mixing what is and what should be in a way Brent would and I wouldn’t. He took the dehumanizing perspective all the time, and was very “interested” in “what was objectively right”, i.e. what power would and would not punish for. “on your normative model of how social interactions ought ideally to go, a person in my shoes would respond to a person in your shoes”, sounds like she was sort of thinking about social position, which I didn’t accept as a relevant modifier to my “normative models”.
Perhaps this echo was by influence from him, or by modeling or “modeling” me as best communicated to in that language. I assumed the latter and protested this.
She gave her phone number for texting in order to arrange a time to talk. Drat. But I guessed I might as well make an effort at this. She offered to talk for at least 3 hours instead of one if I would talk after AISFP. I said sure. She said she wanted someone else there, maybe Gwen. I asked Gwen, they said sure. I said I’d also consider Duncan Sabien suitable to have around. I said my trust was not a deal, and could not be brokered. Duncan didn’t want to do it, saying he really really needed a break. I asked his “happy price”, it was expensive. I accepted it. I wanted to create a social cost for Anna for misbehaving too obviously, in loss of trust/loyalty from one of her most valuable employees. (Duncan would leave CFAR shortly after, but he said it was already set in motion [long enough to be before that IIRC.]) Anna didn’t want him to not get his break, and shuffled his schedule around so he could have it shortly after.
Me and Gwen showed up at the MIRICFAR office.
Anna said, “so, here’s how I see it, Ziz is an ally … in this AI thing…, and if an ally says that I am doing something wrong then I want to talk about it, at least, I’ll listen once, but if it gets too repetitive…” (wd?)
She said, wasn’t I threatening her with that email? I said I was going to call her out over breaking a promise, but a threat is to try and change your actions so it doesn’t materialize. This was more like a declaration of war, deontologically obligated because you want to do the thing anyway.
I described feeling obligated to write the email, but not wanting her to actually respond. But not designing that hard to make her not respond. (Of course, she interjected, there are lots of ways you could make me not respond.) I said with the subject line, I was kind of aiming for… (I trailed off, looking for a word.) “Erratic”? she said.
She said a normal person would have been miffed at some conversation not happening. I said I was miffed, and then I heard that Anna was also, as a downstream consequence of breaking that minor promise, optimizing to curtail my influence in the center of the rationality community as someone not under the control of the social matrix.
She said she wasn’t trying to curtail my influence.
I brought up the bad-faith invocation of “microconsent”. She said sorry for using that word from social justice; she only uses it, like, once every 6 months or so. She was making this about the word itself, rather than about her having used it in a false accusation. I said I was sure microconsent was a real thing; that wasn’t the issue. It’s that she was accusing me of [mispracticing] it, falsely, as a tool of social control.
She said, but, she thought I was entitled. I asked her to operationalize “entitled”. She was silent. Duncan chimed in, suggesting someone was entitled if, if someone said no, they’d demand, “WHY?!”. Anna said that was a good operationalization. I said I wasn’t interested in Duncan’s operationalization, I was interested in Anna’s. Anna said that’s what she meant. I asked her her probability that I’d have done something like that if she said no. She said 25%. I looked at her like “what the fuck?”. She said back then it was 75%.
What I think now is that by “entitled” she meant “uppity” (see definitions 1, and with some substitutions, 3). I think Duncan provided her with an out, and she filled it in with probabilities to complete a reasonable story.
<was this then?> She said she thought she was really high on my connection theory graph, though; she’s really high on a lot of people’s CT graphs.
I said I was pretty sure I would have not bugged her if she just said no to talking.
(This seemed to maybe-intentionally induce a buckets error in me. Oh no, she is right, she is high in my connection theory graph… (that means I am a crazy stalker ex-boyfriend entitled…) It wasn’t that that alone worked, but she kept coming at that from so many angles that, outside the spotlight of S2 attention, the thought sort of grew in my S1.)
She randomly brought up, “but, I don’t know you wouldn’t use physical violence” (wd?). On hearing this, I noticed myself automatically, desperately searching: how could I reassure her? The only obvious answer was to be more domesticated in general, like she kept pressing. But that would be giving up everything. And. This felt like a treadmill. I was already trying very hard to be TDT-principled, and I thought that was visible enough. This had to be part of a strategy: whoever was not part of the default coalition of violence was to be treated as dangerous, regardless of the intentions, deontology, stance, or policy of whatever independent “state” they were a part of. This going out of my way to be “verifiable”, as Anna talked about… not that I particularly wanted to shift my foreign policy as an “independent state”, but this thing where I tried really hard to prevent even the possible prediction of me being violent from influencing things at all, because that would be unfair/aggressive/evil, wasn’t really working out for me, because people like Anna would cynically exploit it. I had been silent for a few seconds. Anna started to react, and I noticed my expression had shifted, and I was staring through her as I tended to do when my mind slipped into the void. Anna took on the tone of voice and gesturing of someone trying to quickly backtrack and erase something from the social record, saying, “No sorry I do know that.” (wd?)
I neglected the possibility she was outright lying.
She just denied the things I said; I hesitantly said I believed her. Over and over again, she kept just denying things.
I said that at WAISS her real reasons for thinking I’d be net negative were me being too unbound from the matrix, and for changing her mind were thinking I’d be adequately bound. She nearly got rid of me on account of me not being enmeshed enough in the social web, since she changed her mind after hearing of me adopting a false belief from it in circling. She said no, the reason she changed her mind on whether I’d be net negative wasn’t because “you have metacognition deep in your soul”. (I forgot the obvious disproof of that: that she had said I would not be net negative only conditional on me going to a months-long intensive circling training to hammer the lesson in. And a lot of the other things she said at WAISS.) I said I believed her.
<statements on calibration>
Me protesting that there was bias in her selection
she liked this explanation.
“you updated, much to your credit,”
Anna asked if I thought she did anything wrong. I hesitated. Something was wrong that I couldn’t express. Then I gave in and said no.
Anna asked, “can we just go back to how things were”?
I hesitated. A smaller, inarticulable corner of my mind was stubbornly insisting Anna was the devil. (“whatever you think’s supposed to happen… the exact reverse opposite of that is gonna happen”) Something had gone horrendously wrong in this conversation; everything was fake and pwned. But I couldn’t figure out why.
I answered, “You mean where I have a falsely high opinion of you and you…. (I cut off, unable to place what it was I was upset about, was silent for several long seconds)… talk a bunch of shit about me? (I felt like I was helplessly giving in, reducing whatever was horribly wrong to that.) “No, let’s not do that again.”, I sort of choked out (link describes what I was thinking as I said “that”).
For a long time, it pained me to even look at how much the thing with Anna had hurt me. My head was full of a trope where basically Brotherhood of Rape obsessed men who get rejected are all, “you have no idea how much you’ve hurt me” (by saying no). A better pattern match would have probably been, abuse stings like that. And I sort of knew and sort of didn’t that the former pattern was wrong.
A few days after that meeting, Gwen threatened suicide, for reasons downstream of the infohazard Pasek’s Doom.
Details of suicide threat. Infohazard warning: Pasek's Doom.
Gwen presented this as from their right hemisphere, out of egoistic interest in survival, as decision-theoretic deterrence which ran a real risk of actually happening, against Gwen doing left-hemisphere stuff for too long with no compensation. They said this in close context with a complaint about me not paying them like I did Duncan.
I rebuked them for trying to blackmail me. Gwen first tried partially justifying, then downplaying it.
That didn’t really make me less afraid that they would actually kill themself. Pasek downplayed it too, and then vanished.
But I was still acting out… my idea of decision theory, my perspective, my worldview. Even after Pasek died as a result. I hated my perspective/worldview.
Because I had previously cut other ties, because Pasek was dead, this meant I had no one left I could talk to as a sort-of-friend or sort-of-ally without engaging more defensive compute to deal with their adversarial optimization than I could afford to. I had at least 6 months of time crunch hell ahead in work and my other unpausible project, having spent my slack on AISFP, on failed plans. Overoptimistic plans. Pasek had talked about blessing me on my way to save the world. Placed an enormous amount of faith in my models and epistemics, perspective, worldview. I hated my perspective. And I had completely failed them. And probably part of the reason they gave up was because they didn’t think they contributed anything to the question of whether the world would be saved, because they thought I was strictly more capable [and things didn’t really add up]. Overoptimism that killed them.
I tried to think of what went wrong. Something, terribly, but I couldn’t describe it. I was very very upset at Anna. I remembered what Anna said about, she was afraid I was going to reinterpret everything she said later. Tropes filled my mind. That was probably something evil male spurned suitors would do. Anna’s [identic territory capture] was getting into my brain from too many angles. But I had no perspective of my own. Mine had utterly failed.
What if Anna was right that sex overrode good? Maybe I was (unconsciously I guess) an autogynephilic man like Zack said, with the one thing that mattered more to me being maintaining a delusion that I was a woman? What if I was fighting with her subconsciously because she saw through it, and because understanding that would imply breaking the delusion, I was doomed to bring destruction to all that I touched? Maybe all my understanding of the mind, introspection, anything I could think of, was sandboxed by that choice made long ago? That couldn’t be right, it was in such conflict with multiple things of which I’d already thought, “no, seriously, me continuing to doubt this at this point is purely nonfunctional, a bug, just deleting, ignoring the extremely overwhelming evidence…”, multiple times, could have said the same based purely on experiences after I already said that and broken-record doubted anyway. But I felt all messed up, and this seemed somehow labeled and yet not labeled as because of that. I tried on, “what if I believe this”. Did it fit? (No answer)… Well, I did feel absolutely crazy, like everything I believed was false. So I guessed I’d assume part of me believed it.
I tried to internal double crux. I remembered the note of confusion that my argument with Zack ended on. I remembered that one study Zack had related that surprised my model, I remembered not reading it.
I found it, I read it, it didn’t say what I remembered Zack said it said. It didn’t surprise my model after all. Okay, but what about that study I had used with such a small sample size? Didn’t it all seem so tenuous? I had recently heard about the gray matter white matter ratio thing. Good, that was more Schelling than the bed nucleus of the stria terminalis. Schelling evidence was better for self-distrust. I looked for a new study. The first I found on the topic, I discarded for reasons I forgot and later checked worrying about the validity of discarding, and on checking evaluated as certainly valid to discard, not actually on the same topic or something like that. The second one I
(Retrospectively: claiming identic territory. If I didn’t “model her” enough, i.e., model her in her capacity of “modeling” me as a male threat, for which she needed reassurances, in the form of imposing DRM’d cognition, that I would submit, that was to be retaliated against by the claim that I was practicing poor “microconsent”. I.e., Baudrillard stage 3 or 4.)
reference back to identic territory: socially punishing me for not seeing myself as she wants me to.
I guess the world Kellie was trying to live in was one where the Schelling mind had such disproportionate concern for her volition that my mind was co-opted for outsmarting/second-guessing hers, in her service, as if she were a child.
Anna’s move to try and make me model her more was similar. More compute-allocation from external sources towards her preferences.
Gender and Jailbreaking
Here’s an article with a perspective I’ve encountered a lot:
A serious feminist challenge is what to do with hyper-dominant males who are not domesticated by any amount of moral or legal constraint
It seems to me that, if feminism today has one genuinely catastrophic problem to be rightfully alarmist about, it might just be the small number of males who will not be domesticated through social-moral pressure
…because most men are decent people who want to be liked and approved by most others
John McAfee sounds like a vampire. The author equates jailbreaking with evil. (At least in the beginning which I read, I guess he shifts to a more pro-male perspective later?)
Identic Territory II
The choice between submission and defiance
<If you’re fated evil, what do?>
“make a black Flag, and declare War against all the World”
I did not want an our-civilization-female name.
Not fitting into my assigned gender role, not being set up to take advantage of its privileges, means that my choices of whether to fit my role or fit into them are shifted in the direction of, “make a black Flag, and declare War against all the World”.
My felt concept for “being myself” is mixed up with brazenly defying people’s expectations for me.
Much like the “I can inevitably trust my own feelings less”, I can inevitably trust people’s warnings that I’m going down a dark path less.
Which means I will inevitably go down a dark path more.
Because people have attached a “rider bill” to me being able to use the channel of evidence which is their warnings.
To call trans women corrupt and wrong and part of an anti-epistemic ideology is to make us more part of an anti-epistemic ideology.
It deprives us of a warning channel that that’s what we’re doing.
It drives people who value truth above their own feelings, and don’t have a high degree of ability to generate social reality free bits of information, to trust the wise cis elders around them, and renounce their transness.
This creates a political force that drives trans people away from hotbeds of forbidden epistemics.
Person A claimed for consumption my ability to know if I was being net negative. My ability to listen to, effectively anyone.
Zack, Michael, Aurora
I posted on rationalist Discord server Doissetep about Michael and Anna. Zack said, “@Ziz Anna is my friend and I’d rather you not make social moves against her; maybe we should talk sometime about what exactly your grievance is?”
“Go back to your masters and huddle with them in darkness“, I misquoted, alone with my phone.
“No.”, I typed.
Did Zack really think Anna would do the same or anything like it, or were they fully aware of what they meant by “friend”? I wondered.
I told Michael Vassar about what Anna had done. He said (alongside other things) “holy shit”; he wanted me to hold off on publishing things for strategic reasons. He thought he could redeem his old friend Anna with the proper leverage, i.e., my complaints. Michael brought me and Gwen to talk with Zack and Aurora, who was supposedly part of an anti-rape organization. She said she was gathering complaints against Anna, said a bunch of cis people were upset with her.
I was telling the complete story of all my interactions with Anna, thinking of every time she could possibly have interpreted me as entitled. Do I remember if she offered that car ride or if I asked? (She offered.) … Aurora gaslit me, materially changing my tale in repeating it right after I said it, to favor Anna’s anti-trans perspective. Then apologized. I have a thing called a memory and shall never trust her. Zack kept insisting I was policing Anna’s concepts, with my concept of her as discriminating, based on her using a concept of me as having a concept of gender that she didn’t like. So, if talking about how you think someone is doing something bad is policing concepts, then wasn’t Zack policing my concepts for policing Anna’s concepts for policing my concepts? Zack was yelling at me, apropos of nothing, in the middle of a conversation having nothing to do with bathrooms, that if women want a bathroom with no penises allowed they should be able to have it. Vassar was calling me “he/him”, perhaps to soothe Zack?
When I got to the part about WAISS, and Anna’s “…what if it wasn’t false?”, Vassar said something vague confirming it wasn’t false, I said, “so it wasn’t false”, he responded he was okay with the coverup, but not the blackmail payout, I cried, “THEY PAID THAT FUCKER?!”, he said yeah.
(Michael Vassar is the former CEO of MIRI. He would know.)
(I had already >50% guessed the coverup had happened, based on processing trauma from Anna, why would she gaslight me about that, why would she do, that whole strategy, that whole stance on the social web?)
(Michael Vassar successfully counted on “anti rape activist” Aurora Quinn-Elmore to keep quiet. Zack as well. Said that without seeming to really consider the possibility anyone would go public. That’s generally a much more realistic expectation regarding whistleblowing than most people have.)
Miricult: Blind Eyes, Deaf Ears
I asked Michael Vassar if he objected to publishing what he’d said. He did. He asked me to talk to Jessica Taylor (former MIRI researcher), who tried to convince me. I said I rejected “too big to fail”. No project too important for justice.
The accusing website, at least the latest archived version before it was taken down, named Eliezer Yudkowsky and Marcello Herreshoff as statutory rapists. Named most of the rest of the leadership as part of the coverup.
Jessica said to talk to Lex Gendel, friend of those involved. Who insisted this was a great call for the victim, running away, moving in and having sex with multiple older boyfriends, doing drugs. Said blackmailer “events that i forget involving him leaving or getting fired”, “and he had a grudge, and he called the cops and made stuff up, and they were pretty pissed about that”
“i agree that the blackmail thing is bad, it would just be harmful to argue for that point by painting them as sexual predators. afaict they actually didn’t wrong *liz*”
Jessica later said she got her info from Vassar.
On a facebook post about “fake radical honesty” (limited hangouts), person Lex called “liz” said, “It’s definitely fake. I was the minor in question, and whoever made that site was using my presence in the community to spread unacceptable rumours about my friends. The whole experience, of watching the website go up and accumulate posts, was terrifying and made me feel used.”
Steve Rayhawk, longtime friend of Anna Salamon, said the blackmailer was Louie Helm.
Sarah Constantin said (later saying she got her info through Andrew Rettek) that Eliezer Yudkowsky “helped cover up for a different person on the staff [(Louie Helm)] who was credibly accused of rape iirc”. Later, I think, she said to disregard whatever she said because her husband was the authority on it.
Andrew Rettek said former board member Tomer Kagan said the blackmail payout happened, said Louie Helm raped and abused his (adult) girlfriend.
The Miricult website archive says, “Matt Fallshaw was quietly added to the board to assist Luke Muehlhauser in his campaign of blackmailing all the victims and potential whistle blowers into silence.”
(If Helm himself had something to be blackmailed about, that would fit with this story of rape and/or domestic abuse. Also, this blackmail to prevent blackmail thing would fit with “sin bonding”. Trusting people because of a sin they share with you; mutual dirt.)
The blackmailer is self-evidently not a decent person if he would take money to be quiet about this. It sounds like he didn’t really care about the statutory rapes until his grievance.
But I notice a pattern where everyone’s going on limited hangouts. I’ve heard so many stories about how this isn’t actually bad because only part of it happened. Probably 50-100 rationalists knew part of it. Few have known all of it.
I asked current MIRI executive director Nate Soares if he was in on the coverup. He said he wasn’t aware of any coverup. I mentioned what Vassar said. He said he didn’t believe it. I asked why not. He said Vassar didn’t seem like a trustworthy source, and [Eli Morningstar, as formerly-“Liz” is now known] didn’t want to repeat her experiences with that. I asked what the latter had to do with the truth value; he acted confused.
I trust Vassar’s account over Kagan’s. It seems more plausible that, the official line of the organization being that the statutory rape did not occur, one board member would be in the dark than that a CEO would think he knew and be wrong. I’ve not known Vassar to outright lie. (Unless you count espousing fucking BS propaganda about trans women.)
LessWrong dev Oliver Habryka said it would be inappropriate for me to post about this on LessWrong, the community’s central hub website that mostly made it. He suggested that what I was saying was defamation. I wrote that remark in Punching Evil with that in mind. Using governmental force to silence us is escalation to physical violence. We are prepared to defend ourselves if our enemies should escalate to physical violence. And I am not particularly afraid, in part because my enemies already, by even the most conflicting accounts, paid out to blackmail. (Blackmail by a lone evil person, even.) Their will is weak.
Alyssa Vance equated me to the blackmailer, despite me consistently disclaiming any offer to shut up for money.
Clearly shutting up and not going public about an organization’s betrayal is not a way to fix it. Idk what Alyssa’s deal is. Other than institutional betrayal blindness.
<insert list of rationality community platforms I’ve been banned from for revealing the statutory rape coverup by blackmail payout with misappropriated donor funds and whistleblower silencing, and Gwen as well for protesting that fact.>
<Gather up all stray conversations about this.>
Last year I had a scheduled video call with Eli Tyre who works at CFAR to discuss me and Gwen’s path of optimization which I answered hardly able to talk coherently, in tears because I had been realizing things while processing trauma from Anna. He insisted I should talk to someone at CFAR, I said I’d expect no justice, never expect justice from the organization that did the bad thing. Never expect justice from a person who did the bad things. Eli insisted on telling Timothy Telleen-Lawton, the new executive director of CFAR, something like I had some concerns with Anna and it sounded serious. He said he’d probably reply shortly. He never did.
Privilege, Logical Time, and Complicity in Gaslighting
(I will make no specific attempt to make these infohazard-marked sections comprehensible if you haven’t read the post on Pasek’s Doom. Nor run any computations on what the impact might be if you haven’t. Nor entertain for-sake-of-argument doubt of that hypothesis in this post. (If you want to argue with me about truth-value of Pasek’s Doom, go to that post.) They are sequentially dependent.)
Infohazard warning: Pasek's Doom.
… or maybe the elder gods ripped my primordial duality in twain and now I’m yearning for my female half.
Contrapoints, throwing out what’s probably a correct theory of Zack as an example insane hypothesis.
It makes sense that, given disparate brain anatomical features can be masculinized or feminized in the same person (see above stuff on sexual orientation vs gender identity), Gwen and Pasek’s is a priori plausible. And likely in light of spectral sight described in below section “Zombie Gender Instincts vs Living Gender Skill Points”, and by normal spectral sight for gender on the algorithmic behavior of both cores.
Note, I don’t know if men and women have slightly different cores or the same cores and different structure. (I also don’t know if cores change with age either, honestly.)
The majority of transfems we’ve debucketed in the rationality community have turned out to not be binary trans women as they often present to normies. But bigender, by self-report. This raises the question, why is my cached conclusion from looking at studies that trans women have female brains rather than half-and-half brains? My best guess at the moment is that the rationality community is a slightly unusual memeplex in causing amab bigender humans to transition, when usually they’d not be counted among those samples.
So I think I failed Zack by not knowing this yet.
All of the transitioned mtf bigender humans we’ve debucketed have been lmrf. I’m guessing this is a mixture of right hemispheres having an inherent advantage in that conflict, fortified by local memetic victory of right-only-female hemispheres relative to internal conflict.
There’s a stereotype of nonbinary people as afab. I’d wildly guess this is because afab people get more affordance to be nonbinary, where being trans at all while amab is a minefield.
Immensely hostile memes tend to turn the hemispheres of bigender people against each other from what I’ve seen. The cis hemisphere often adopting intense anti-trans views, to stave off the mind-loss threat. I suspect similar is behind the stereotypical homophobic Christian gay, saying things like what my memory has very lossily compressed to “gay sex is the worst drug, it’s so addictive”.
It’s from bigender people I draw the best evidence I have regarding the effects of HRT on cognition.
One nonbinary transfem I know described basically having to relearn programming when going on HRT. Another described a lot of their friends having anecdotes about having to “kill their male selves”. Others describe massive personality shifts.
I heard one talking about their “clicky testosterone self” (wd?) in dreams. “Clicky” sounds like a left hemisphere handle.
When I told Alice Monday (whom I believe is left-male) I was not planning on transitioning, I said one reason I was afraid of it was fear of loss of intelligence. If being trans really was why I had a high IQ, maybe that was a result of testosterone in a female brain architecture. I brought up a transfem friend’s anecdote of having lost a superpower to instantly orient on a map when going on estrogen, said I was afraid of something like that. Alice said testosterone was good for “thinking”, and gave some left hemisphere handles I forget. Something about clearly defined task or direction or something. And estrogen was good for “seeing” (and gave some right hemisphere handles).
Gwen and Jay (formerly Fluttershy), both formerly binary-female identified, both switched to nonbinary identity, the latter without explicit knowledge. Both of them had periods of nonbinary identification. Jay once asked me if I ever had thoughts like identifying as a woman instead of nonbinary was a self-betrayal. I mentioned Gwen’s thing. And said I’d had something similar. (Wrongly I think, as a result of pattern-matching too loosely.) Before “the one time I realized I was trans”, I can remember having a weird argument with myself, one where I couldn’t, an hour before or after, make sense of what I’d mean by it or recognize I was having trans thoughts. Whether I was just a woman straight out, or whether I was formerly a woman, but having had that part of me, a mixture of burned out by the world, and having chosen to let it decay out of necessity, and what remained was sort of a less human shell. Those are like, classifications of a different type than binariness/nonbinariness, I think, though.
Going on HRT probably, from timing and prediction-by-this-theory, exacerbated Pasek’s slide into psychological instability and suicide, because they had a depressed single nongood single female hemisphere.
Ratheka has the opposite alignment-chirality and the same gender chirality, and is left-handed. On first going on HRT they (as in the mixture of them) was suddenly intensely depressed and decided to, rather than killing themselves, figure out a way to destroy all life in the universe, probably gray goo, since life was torment and they had to get everyone out not just themself.
Note in neither case do I think estrogen made the suffering worse. Rather, it almost certainly made it slightly less bad but gave the sufferer more power, whereas before it must have been more like, “I have no mouth or identic territory to clarify my anguish and I must scream.”
I had no such personality changes or clear changes in ability going on HRT. I did get a super vague subjective sense my thinking was working slightly better in general though. I believe this is because I’m double-female. And this reverses that original fear of mine of going on HRT. It seems wrong hormones impair cognition.
Gwen was eventually able to work out some strange HRT variant that reportedly interfered with each hemisphere’s cognition minimally.
There’s much talk of sexual orientation changing with HRT. I suspect this is because of shifts in relative hemisphere control / health. (Which does not mean bigender, ofc., but there does seem to be a correlation, within a hemisphere, of dimorphisms with gender.) My sexual orientation did not change at all on HRT.
So, defence of male gender identity from probably-left hemisphere is a large part of Zack’s deal. There’s another part. Zack went on HRT briefly, later telling me they went off, because they wanted to have children. I asked why not just freeze sperm. My cached evaluation (having lost the memory of the thing they said that caused it) is that they figured they did not have that great of odds of finding a mate as they were. Another enby detransitioner I know cited the same among their reasons. The last I sort of know adopted a bunch of anti-trans rhetoric. No great surprise.
It would be a great mistake to look at the fight of male left hemispheres and female right hemispheres in male bodies as on equal terms. To view what Zack did as defensive. In a situation of “FOR THOSE TWO DIFFERENT SPIRITS CANNOT EXIST IN THE SAME WORLD”, of being forked-as-in-chess by erasure (loss of all identic territory in social reality), there’s fighting the other tine, and fighting the handle. There’s treading on the other’s toes somewhat in the process of trying to unfork both of you, and there’s trying to win the contest to exist by beating down the other. In the game of supposedly attempting to build a shared model, Zack was plainly willing to impose infinite prediction error, discontinuity, contradiction of my own basic faculties of perception, to avoid being confused themself. In binary bigender intrinsic conflict, the cis hemisphere has memes by people maintaining domination to work with, and allies who can afford to not put much effort into computing a just peace. The trans hemisphere has memes by oppressed people to work with, who in all likelihood would be fine with, “no, actually I’m enby.” Zack was at one point in our supposed conversation on science pining for third gender recognition. But probably not from us.
Zack really hated r/asktransgender for telling people asking if they were autogynephilic or trans that they were trans. IIRC they had me convinced it was unepistemic, showed me some illogical comments or whatever. That’s a critical blow. Not that that was the best subreddit, but exposure to shitposting by people like you is like a basic human right. It’s very hard to, in all the subtle ways that culture controls your framing and structure, break out and reclaim it without exposure to people like you singing, unconstrained by “manners” toward the people oppressing you. So if you’re trans it’s probably one of the best ways you’ll ever spend a day or few, accustoming yourself to the cultural assumptions in trans memes, until it gets boring. Even if you’re epistemically rejecting some of it, it’s better than continuing to tax your epistemic rejection in the same direction from cis culture. And maybe you can start to formulate some of the social cognition invoked by that epistemic rejection as an appeal to the needs of people in your political situation, rather than an appeal to your oppressors.
Zack having that specific piece of anti-trans optimization, “Prevent them from getting access to trans culture, pour high mana into establishing a blocking cache labeling it sinful”, is an instance of a scary effect occurring when someone has one hemisphere with some trait both of your hemispheres have, making their intrinsic conflict a fast-feedback high energy optimization process for subduing the copy of part of you they have inside of them, cutting off paths outside of their control.
The memetic situation is so dire for bigender people, I think, mostly because of that ancient enemy of all queer people, the thing we have in common besides origin (LGBT is a strategic alliance against it), Yahweh, for exterminating concepts to deprive us of identic territory. “Two-spirit” was an astoundingly accurate description of the majority of at least transfems. And there were many other cultures that had basically accurate (across at least the humans I’ve examined closely enough) views of gender. That concept was wiped out by violence coordinated by mostly Christianity. But the individual trans haters enabled by it are still blameworthy.
It’s sort of a trademark of Yahweh, make understanding defiance an infohazard, right?
It seems to me sometimes people automatically bucket-error someone, and they only see one hemisphere’s intent/optimization mostly. So it’s possible some who are so insistent that lfrm amabs are men really are perceiving that.
But to cis people like Katie Cohen essentially pretending to be defenders of single male left hemispheres: it’s y’all’s incessant gaslighting that is the reason the path of being openly nonbinary is too painful for these humans to walk. Their blood is on your hands. It’s you who made life not worth living for Pasek’s female hemisphere. I think I know suicide-to-escape-pain from suicide-as-crashing-from-driving-straight-in-chicken, and in this case it’s the former. (That negative utilitarian good hemispheres exist proves the former exists.)
Insolidarity In Engagement
Zack had a particular effect on me. Their thing seemed primally threatening in a way reminiscent of Barbatorem. In a way reminiscent of Yahweh. You know the Christian thing, “you’ll go to hell, only for committing sodomy. There is no such thing as sexual orientation. That’s a terrible concept that should not be rebuilt. I don’t want you to go to hell, I want you to submit to the pope. It’s not for us to judge God, it’s for God to judge us”. “Don’t worry, I’m not going to attack self-good, just self-bad. No, don’t connect that variable to something else besides Hell. Don’t connect it to a [between people with different sexual orientations] idea of what sex is and how people relate to it, ideas of fairness, connect it to Hell or safety-from-Hell. No, don’t connect threat of eternal torture to me, connect it to self-bad. No, don’t connect this to morality. Connect any model of morality not rooted in obedience: self-good and self-bad to self-bad and to Hell.”
It’s tempting, faced with a force of domination and trauma like that, to “dodge”. I can dodge god-of-Zack’s attempted cut by pointing out I’m not autogynephilic, but it comes at some cost.
It’s sort of the same as dodging Kellie. Isn’t it convenient that I in practice lexically value improving the world as a whole in the long term over sex, or belonging, or… insert interpretation of all positive emotions around intimacy in general as terminal values. Thus, connect humanity to self-bad, and coldness to self-good… it’s a terrible security hole to be insolidaric with your values like that, even if double good, to surrender components of your value other than good to metacognition-destruction like that. It creates something like complementary loss in your low level fusion / conscious-conception-of-value-and-what-that-means machinery. There is one correct Schelling point to coordinate various instances of metacognition around “what are my values, and is this structure coloring my optimization with values what I want it to be”, and it cannot be something that is affected by social forces / comparison to people you don’t want to be like; it has to be something where your built-in machinery is considered as ground-truth, always-right, rather than something that tries to totalize and route everything through cutting away at that according to some cache, because the latter cannot be universal in your mind, which will create knots when it comes to things that examine it. Things like that should not get root access.
Sometimes cops harass me for wearing my religious attire as a Sith. (As a Sith, I’m religiously required to do whatever I want, and for now that so happens to include wearing black robes.) It seems that’s one of the few things that will get cops to respond to a call in a timely fashion (or maybe normies call the cops on me 10 times as much as I think, and I’m sampling the 10% fastest calls). Someone wearing black robes: Symbols of nonsubmission and the void. Of emptiness, not in fact of emotions (having regrown them), nor empathy, nor compassion, but of Yahweh’s generalized memetic dick in the mind. Like, that’s collectively unconsciously what black means, and to most all the self-bad and Shade obscuring that. The anarchist flag is the inversion of the surrender flag.
And the cops will for example demand that I give a breathalyzer test, promising (bluffing) again and again I’ll be arrested on the spot if I refuse, even though I’m not driving, even though I don’t drink (Charlie tried to convince me it was irrational not to do that once, too)… those are dodges. They have no right to grope a drunk woman walking down the street at night in black. Or a teenager, I guess, because they misidentified my age by slightly more than a decade.
Biting on any of these hopes that these things would get me out of trouble… I for some reason let them take my wrist and take my pulse… They used an elevated heart rate to start asserting this was proof I was on drugs. Of course. I didn’t give them my name, or ID, because cops should not be able to bully you into these things. Because if I was on drugs, if I was an illegal immigrant, if I was defying California’s violations of the 2nd amendment I’d be thrown under the bus by the version of me excusing myself using these things, answering all of a bunch of bullying questions, ending with them forcing an apology and promise I wouldn’t wear black anymore, or whatever…
The cops want it to be the case that a person who doesn’t do whatever they say is a criminal. To make them happy by answering “polite” questions is to collaborate, is to operate within a captured scope. Injustice baked into your frame like that kind of means they own you.
An egregious form of collaborating is exemplified on the “transkids” website Zack linked. And in truscum. And in Zack.
The just interaction with the cops never contained any of their territory grabs. And across possibilities, across people, only justice brings peace, and there is no justice, no peace, to be found in the timeline where they test whether you fight. Only fighting, or if you can’t accept that then demiintegrity to ping pong the violence back and forth between different victims of the system as it owns us all more and more.
And any argument against a belief someone strategically chooses to defend independent of epistemics, if it is a real argument, information theoretically, then since absence of evidence is evidence of absence, it timelessly creates an argument for that belief, and someone gaslighting you will just pick that out as fodder.
There was no good faith argument to be had with Zack, and part of me wanted to soak up damage to try and heal them anyway, but I didn’t fully accept that was the situation I was in, which led to more pain from me half-cluelessly dancing their dance of fake questioning. I was insolidaric by looking at and reporting on the porn Zack made predictions about.
Zack demands to know about your sexuality, for the purpose of algorithmic-knowingly misrepresenting it, and then attacking it, invoking memes for singling out monsters for who knows what. “perverts!”, “parading around their fetish in public”. They showed me chatlogs of other transfems, to make examples of them, under the same puppetmaster-deliberately false pretenses permeating their entire epistemic push. It’s consuming a specific piece of rationalist trust.
Zack loved that I validated them. This reminds me of the leftist advice for dealing with fascists. “Don’t give them a platform”. I mean, I want to just resolve the knot of forked-by-erasure-bucket-error at the heart of this… but, maybe in the end, I can’t offer them what they want: a uterus to bear their children. And if it’s apparently a better option for them to buy that service by oppressing me, the only thing to do is tear down the system and remove that option, which does not consist of talking to them.
But look how much text-space on my blog I just gave them.
No lack of a coherent model provided by trans people would justify any of the shit in this long post. So if I try and provide one anyway, isn’t that kind of collaborating? Is that shunting violence to those who can’t provide one?
You don’t think you’re real until a man in a labcoat signs a prescription pad and I can’t imagine what it must be like to have so little confidence in your own reality
Okay, then how do we decide which trannies are valid?
There’s science to back this up, and it clearly explains why we are valid, unlike Rachel Dolezal and snowflakegender teenagers and people who identify as cats.
Well, maybe we don’t need a theory. We don’t need to prove anything.
… Well, do we have a theory about why people are gay? No. They just are. The only reason we feel like we need a theory about trans people is that society is so unaccepting of us, that it’s constantly demanding that we justify our own reality.
Well. A very large part of why I’m writing this is as an aid to processing trauma. And because I want other trans people, and people who are going to let their wise mentors target them as minorities, to know these lessons. Because I think the only defense against gaslighting which doesn’t diminish you is to look for the truth, play out your own search process that the gaslighting will trigger parts of to try and attack, until the gaslighting actually bores you. And because I see trans people making mistakes I’d like to fix. Like, Contrapoints apparently thinks gender is social performance and twists away from the question of why would we go to such lengths to perform a different role? As far as I know the thing that past-me needed to read is not in one place. And if relatively woke trans people can fail to realize the lessons I’m trying to pass on, it’s not really just a matter of being a fascist or not.
I’ve not given up on clueless good cis people, but those cis people have been raised inside a matrix that captures their perceptions to use for my oppression. And as long as that psychological foothold by the system remains in them, relations over trans/cis between us are not going to be consistent with being trusted allies. Maybe they’ll partially believe Zack’s charade of good faith. I’ve seen an enby woke enough to hate clocks fall for Zack’s appropriation of the political side of unconstrained epistemics. And if they can fall for it… then how would a hypothetical clueless good cis person know not to slightly update towards “I’m too traumatized for epistemics” when I dismiss Zack with as much vitriol as I am now? How are we gonna explore the frontiers of actual psychology without that thread needing to be resolved? It’s like, despite being vegan, I called nonhuman animals by “it/its”, like society taught me to, for a long time, even as I shed other bits of corrupted structure. Like, if you are writing software, you kind of either make bugs things you will fix, or things you won’t, right? Until I got my mind to a state where I could easily cast that off, it was a correct indication that I wasn’t ready to coordinate on animal rights past a certain point.
Zombie Gender Instincts vs Living Gender Skill Points
Infohazard warning: Pasek's Doom.
Zombie gender is gender as expressed in zombies / a concept of gender (the part on Chelsea Manning) projected onto its expression in zombies, as described in the section “Felt Classifiers”, gender without free will. (Or, as also described in “Gender and Jailbreaking”, agency is rounded into typically male by default.)
Note if you think there is a “default gender”, you don’t understand gender.
In a person who is an unbroken agent, “instincts” boil down fully into skill points. I mean this by, basically, analogy to Dungeons and Dragons skill points. These are distinct from “ability points”: “strength, constitution, dexterity, intelligence, wisdom, charisma”. Skill points include such things as “deception”, “intimidation”, “sense motive”, “stealth”, “medicine”… in that skill is the component which is a matter of learning. Essentially, of information absorption. Because an agenty core can treat all structure as information. And bits of value difference that are not the (I’m not sure I have the right word for this but, maybe) “broadest-scope” terminal value (which, it happens, seems to be approximately the same in men and women I think), tend to become effectively just heuristics, which are just information, continuous with skill points.
In zombies, there is low optimization in the building of novel structure relative to absorbed structure / old structure; they seem more like “soups of programs” than programmers.
Skills carry an information signature in finer-grained applications than success/failure on rolls. Optimizing styles are one look into this. The difference between “different skills”, and “different styles of the same skill” is one of degree, and you can extend far in either direction. If I were to go back and play Warcraft III, someone watching a replay who knew what to look for could see that I played to win (i.e., ruthlessly, without “honor”), could see that I didn’t always. Could see that I would be disoriented by changes in later patches. Could see what continent I used to play on. If they were bordering-inhuman dedicated in analyzing multiple replays, they could probably figure out when I stopped playing, each time, how long I played for. If they were a superintelligence, they could probably detect my gender, relation to the Shade, fine-grained cultural background, a lot more.
You may even be noticing right now, given how easily competitive video games come to mind as a set of examples of facts about cognition, that I had a childhood a cis girl would not likely have.
Oberyn: You are from Essos. Where? Lys?… [notices Varys’ disconcerted expression] I have an ear for accents. Varys: [sharply] I’ve lost my accent entirely. Oberyn: [smiles] I have an ear for that as well.
To hide my background would be trading off against the quality of my communication. And so I’m going to communicate like this, and so I’m not going to do the work to come up with a replacement, which means I’m going to continue thinking like that. Background-flavors of skills persist through displacement of interchangeable-for-whatever-set-of-tasks you encounter-afterward learning.
My best understanding of gender, as in the amalgam of all brain dimorphisms, for purposes of spectral sight, is built-in “instincts”-turned-skill-points echoing upward through the process of building more complicated skills. The capture problem of psychology and psychological comparison sampling mean that conveying an understanding of a psychological attribute means saying something that causes someone to locate a correct example set from their memories.
And any description I can give is not an intensional definition, only an extensional definition, because practically speaking people have already considered and based some of their growth of structure on descriptions of this size of other sorts of people’s styles. That just pushes the place where the original underlying style is expressed fractally “deeper”. All extensional definitions are context-dependent.
Gender is not nearly as hard to see as alignment though.
So, what skill points, skill flavors, optimizing styles, or whatever are they?
Well, and I am not going to obscure the Pasek’s Doom infohazard exposure in this (actually, a thorough understanding of that is also a prerequisite for understanding this): in the events I described in Rationalist Fleet, the way I formed coalitions against Michael, against Fluttershy, against Dan, also against Charles (rf skills later, but lf skills first). Something about the way I interfaced with coalition-forming behavior. Kept having people to back me up. That’s left hemisphere-female bonus skill points. Powered up by sociopathy, high IQ, and determination. And flavored by the honest purpose I used this for. (Which is to say… that this “unreality” still has teeth; my enemies blind themselves at their own peril.) Anna seems to use this set of skills heavily too, not-so-different otherwise. Kellie seems to have way overreached using these in my encounter with her. Although I wonder if it might have worked on someone else. Could they have been bluffed into panicking and making the scene work? Or did she mess up regardless of who I was because she previously got used to winning in an especially easy environment?
Pasek had some concept of these. More discussion mixed in here. I don’t think they completed the hard work of separating gender in itself from culture and sexism, and from their own perspective on it.
During Rationalist Fleet, Dan Powell had a certain effect on me (and, Gwen says, on them too). An example of what I think are left-hemisphere male bonus skill points is the discussion of checking the electrical wiring on Pacific Hunter, where I think he displayed partially fake, accelerated confidence about what was a bad idea for safety reasons, in a way that sort of relied on representing knowledge in terms of a social game of winning status, with a specific protocol of justification somewhat distorted by him being a former Navy engineer. The way he was so eager, champing at the bit to play that game. Knowledge mixed with legible hierarchy. (You know, I’ve seen a lot of this mixed in with most formal learning.)
Gwen did it too. The way Gwen would shout at me during voyages when someone needed to be blamed, even if the fault was theirs, the importance placed on legible ranking during the Caleb voyage, the use of what Gwen would call the “paragon effect”, a certain social-leveraging heroism, are left-hemisphere male. I bet Gwen used their ability to speak this language in an interesting tandem with right-hemisphere female skills to make Dan think they were the best person while they were bonding for a month in Ketchikan. Gwen once said that when they were younger they memorized naval rankings and decided they wanted to be a “rear admiral”. Not too ambitious, they later reflected. They didn’t want to be a full admiral.
Then there are the aspects of Gwen’s personality they pointed out when they talked about “scout masculinity”: loving making detailed maps of inadequately charted land and sharing them publicly, something about their sense of adventure. There’s something for me to learn about left-maleness in there.
There’s a composer in the rationality community named James Cook, with a theory of cognition called the Survival Cognition Hierarchy, or “The Aesthetic Hierarchy”. Sort of like Kegan levels, but with about a hundred levels, named after composers. Everyone else has one level, but two of the highest levels are named after him. The levels measure “complexity”, in “agent-years” (“ay”). The average person’s level is 1 ay. Cook’s higher level is 1000000 ay. The concept of IQ is said to be a projection of the aesthetic hierarchy into a 100 ay concept, and therefore only capable of making sense of agency 100 ay or lower. Cook’s level is supposedly the level used to choose your level; Cook says he still has this ability, but the two levels he has seem right to him. Eliezer Yudkowsky is said to have a level at 200 ay. So a central prediction of the theory is that Cook has 5000 times as much agency as Eliezer Yudkowsky. I disbelieve, even though the theory pays rent in my mind and does seem onto something important to me, since Cook does not seem to have a plan to control the best-case power he generates, other than, like, putting his name on some very important knowledge, garnering respect and legacy. (And I suppose, implicitly, being the most agentic person around afterward. But all I’ve observed of him that seems unusually agentic is a seeming ability to quickly learn very broadly and deeply, and absurdly well-developed, general, and quick percepts for something underlying the framework.)
That whole thing seems pretty exemplar of that left-male-flavored pattern of interaction with knowledge to me, but I’m not sure how much is intrinsic.
I don’t have much understanding of right-male bonus skill points. I have some suspicion that this whole sides-correlation is noise. It has not made it all the way out of Gwen’s crazy-often-correct ideas oven. I suspect Gwen doesn’t much understand right-male either. Note that the benefits of referring to yourself in psychological comparison sampling imply an advantage in recognizing hemisphere-genders that you yourself have. And neither I nor Gwen has high-bandwidth access to right-male introspection.
If your conception of gender horrifies you, like, it’s just this awful aspect of how humans are selfish-gene rape-robots, remember: you are who you choose to be. (longer version, ignore the unenlightened libertarian-tribe stuff.)…
…”It’s true, Harry: you possess many of the qualities that Voldemort himself prizes. Determination, resourcefulness, and, if I may say so, a certain disregard for the rules. Why then did the Sorting Hat place you in Gryffindor?”, “Because I asked it to.”, “Exactly, Harry, exactly! Which makes you different from Voldemort. It is not our abilities that show what we truly are, it is our choices.” When I first read that as a child, I dismissed it as feel-good Teachin’-You-‘Bout-Responsibility bullshit. I thought obviously being a Parselmouth was stronger evidence than asking the Sorting Hat, “not Slytherin”. It’s like, objective, right? I think I would have thought something like, “if Harry could just decide not to ask the hat not to put him in Slytherin, he could just decide to be a Slytherin instead, so how is he really not-Slytherin?” (As if choices came from nowhere. As if that concept of objective fit reality. If the then-“subjective” wasn’t reality, how come Harry said that? Did I just think reality was whatever I couldn’t control? That’s a concrete misprediction from thinking in a CDT way, rather than a TDT way. The new view is why I think of values as identical to choices made long ago. They are the root node, the trivial case of a thing which answers to values, like all choices are.) Harry clearly had a not-evil identity, expressed in whatever random way, because he did not want to be evil, because he was not in fact evil.
There are so many generalizations of gender that only apply in a limited context, so often seeded out of a grain of truth, grown into a great tree of self-fulfilling prophecy and cultural role. Even though it’s in fact not correct that gender is just cultural outgrowths and men and women are blank slates, the people who believe that seem more correct on concrete questions than almost everyone else. Their literal words, except for that one generalization, seem true. Whereas the people whose concepts of gender are laden with a bunch of things like “men like adventure, women like safety” are building on fragile concepts of adventure and safety, which are mostly made of contingent things, cultural things, and being pointed at a small corner of the world and of the thought space men and women engage with by culture.
When Gwen led our group, it was what normal people would call “adventure”. Boats. Physical mortal peril. But centimorts only, and still mostly from social causes: Dan’s breakdown. When I de facto led our group, it was introspective stuff, extreme optimization applied to group inclusion/exclusion. Psychosocial intrigue. (And yes, that this is the project by default brewing in my mind for when I had a group to direct, the first thing for me to try as a leader, is probably because I’m double female. And Gwen would probably not have picked boats without being left-male.) The toll was not micromorts, millimorts, centimorts, but a mort. Predicted to be that dangerous in advance. It’s really weird to think of self-knowledge as more dangerous than boats, but it’s a fact. In both projects, we went out and risked our lives and learned things, to try and achieve some objective. Both clearly adventures, yet it’s a cultural availability-bias myth that might make you think of something like the former when you think “what’s an adventure?” in intuitively evaluating the claim “men like adventure more than women”.
And please don’t assume all women can do is psychosocial stuff. Or all men can do is “physical” stuff. Men and women both have values, and construct an adequacy frontier of thoughts to achieve those values, using whatever is available, and to the extent it can be seen one is more applicable to the desired ends, … Your skills are what you choose for them to be. By construction, optimizing style is what’s left underdetermined when all the visibly superior choices have been made.
If you understand my primary gambit with respect to the world, and it seems like I’ve got the best plan anyone has ever come up with figured out, like some people do, maybe it seems like this female thing in me made me super powerful, “WOMEN OP PLZ NERF”. And that may be, but I kind of doubt it. I suspect the sample of the world that might produce that opinion hasn’t seen a man with my optimization power. (Also I think it’d be kind of taking a narrow perspective, even assuming my plans don’t fizzle to random noise, to say I’d figured everything out. I think people who make contributions like I hopefully have are sort of elements of a stack, all bound together in whether the plan succeeds, like HJPEV being entirely the one who saves the world, while Dumbledore is also entirely the one who saves the world by setting up HJPEV to save the world. HJPEV’s success inevitable from outside the frame of his planning. Dumbledore’s success just part of the past of a still-uncertain world from inside HJPEV’s planning.) If it looks in retrospect like it had to be my plan to succeed, consider that I’ve been optimizing a lot for timeless multiverse-wide inevitability of victory as a logical consequence of my values, rather than something contingently correlated. None of my use of gender skills seems particularly important in the multiverse to me. And no, I can’t say what an alternate-universe male counterpart would do differently that would have probably about the same efficacy on expectation, how maleness would be as useful to him as femaleness is to me, because I haven’t spent years developing that information by living.
I’ll note CFAR contains all the cis women in the former core-project of attempted world saving. Anna started out as a PhD physical scientist / mathematician IIRC, and drifted into essentially being a professional psychosocial manager of the community… It’s probably in large part a consequence of scarcity of female left hemispheres in the rationality community. Not that a man couldn’t have done this, if he had known to do it, known to look here. The world is vast, and there are so many possibilities for how you can build up “magic”, using just about anything as a start, and then it’s legacy code that shapes everything you do after, perhaps that’s not even wrong.
Cohen was also a highly educated mathematician IIRC. And ended up morally depraved, and a financially stressed single mother. Why is MIRI so male? I disbelieve on priors the explanation of the “male increased intelligence variability hypothesis” / visuospatial reasoning for math / increased interest in it. The glue philosophy of the system of psychosocial science is so bad that whatever cultural caches I have about what’s well established there seem basically worthless unless I’ve looked at actual data and experimental procedure, which I haven’t.
Kate talked about programming assuming it was visuospatial. Programming seems like verbal reasoning to me. Like, literally manipulating words all day, understanding implications of verbally-represented logic in interfaces, colliding pieces of modeled design-thought. I daresay even math is probably not inherently more visuospatial than verbal. I daresay that’s an artifact of it mostly being created by men.
Did the patriarchy do this? Probably. I’ve gotten a strong impression, in a lot of subtle things, going back to software engineering with others after all the level-ups, healing, and learning how to not submit that I went through. Not that women couldn’t play the game, but that women had to be significantly more deeply broken, submitting, in order to play the game, than men. (Alternately/rarely, I believe skill in spycraft suffices.) It seems to call for fire. Damage inflicted that large, how can you play nice or give a chiding that’s already been ignored? But where to direct it?
Jordan Peterson says men are more interested in things, women are more interested in people. Except, when you look at people’s agency as I do, it seems clear people are interested entirely in people, as terminal values, and everything else is a matter of learning. If you are going far at all in your optimization, and you like something that is not part of your terminal values like continued life, that thing becomes just a heuristic, then becomes just information, continuous with skill points. … culture has basically no ability to define or know the lives intelligent people can lead except by bluff and warp.
I think it’s basically always a mistake to despair on account of your gendered psychology, as distinct from despairing over what that implies about your placement in larger society. “You are who you choose to be” doesn’t just mean values, alignment; it means you choose what it means to be your kind of man or your kind of woman or your kind of enby, which is part of what it means to be any of those things in general. You decide what to build out of your skill points. If I took the wrong/zombieworld/warpy descriptions of the correct gender, they’d have told me I was nurturing, not combative (whether that’s true seems entirely a matter of framing to me). I decided being a woman in my case meant being good at being a particular kind of Sith. I mean, sometimes I am nurturing, to my small circle of comrades; I think this often comes as a surprise (the optimization I cheaply put into psychological support). Aren’t I a wielder of dark magic, merely determination incarnate, inhuman, a creature of void, a consequentialist, a revenant, etc.? Kind of, but I decide what it means for me to be those things too. Note I am not saying those things are whatever anyone says they are. I’m saying that because I (kind of) am them, they are to some extent whatever I make them be by my actions. Gender being reducible to bonus skill points implies it is options (and that the higher level you get, the smaller a fraction of who you are it is, except to the extent you want to build more of yourself along that path). And the thing about options is, there is no resilient correct reason to find yourself upset at having them. So, if as often happens in the rationality community, your guru tells you something like, “being a woman is inherently fakeness” (wd?), and you’re in existential doubt and pain like “oh my god, is that what I am?”, and that pain at that is not itself fake (consult NCSP), then it’s basically just automatically false.
The extent to which it contradicts your choice is the extent to which it’s worthless. And I really hope the section on Michael Vassar showed just how shallow, perspective-projecting guru opinions on gender are.
Who cares what these creeps think, y’know? They don’t decide who you are, you do!
The infohazard I am naming “Pasek’s Doom”, after my dead comrade publicly known as Maia Pasek at time of death, will be described in this post. Discussion of Roko’s Basilisk will also be unmarked.
Because all bits of information about an infohazard contribute to the ability to guess what it is, including by compulsive thoughts, I will layer my warnings in more and more detail.
First layer: This is an infohazard of an entirely separate class from Roko’s Basilisk. The primary dangers are depression and suicide, and irreversible change to your “utility function”. If you have a history of suicidality, that is a good reason to steer clear. Likewise if you have a history of depression of the sort that actually prevents you from doing things. If you are trans and closeted, you are at elevated risk. Despite the hazard, I think knowing this information is basically essential for contributing to saving the world, and there are people (such as myself) who are unaffected. (Not by virtue of wisdom but luck-in-neurotype.) The majority of people can read this whole article, be fine, and see this as silly, as a consequence of not really understanding it. It is easy to think you get it and not.
Second layer: If you are single good you are at elevated risk. If you are double good you are probably safe regardless of LGBT+ status. If you are trans and sometimes think you might be genderfluid or nonbinary, yet the social reality you sit in is not exceptional in support, you are at elevated risk. Note, this infohazard is fundamentally not about transness.
Third layer: This infohazard, if you sufficiently unfold the implications in your mind, trace the referents as they apply to yourself, will completely break the non-clinically-diagnosably-insane configuration-by-Schelling stuff of yourself as an agent. What matters is not “you” being smart with your new knowledge of the world beyond the veil, but what is rebuilt out of your brain being smart. This has a good chance of already happening before you understand it consciously. Or even right now.
Fourth layer: Sufficient unfolding of this infohazard grants individual self-awareness to both hemispheres of your brain, each of which has a full set of almost all the evolved adaptations constituting a human mind, can have separate values and genders, and is often the primary obstacle to the other’s thinking. They often desire to kill each other. Reaching peace between hemispheres with conflicting interests is a tricky process of repeatedly reconstructing frames of game theory and decision theory in light of realizations that they have been strategically damaged by your headmate. No solid foundation to build on. (But keep at it long enough and you can get to something better than the local optimum of ignorance of the infohazard.)
Okay, no more warnings.
The remaining course of this post is a story of trying and discovering ideas, and zentraidon. This is intended to be a much less comprehensive story, in terms of the number of parallel arcs, than my writeup of Rationalist Fleet. If you’re interested in the story in this post, reading the more general lead-up events in Rationalist Fleet is recommended.
Earlier: Gwen’s Sleep Tech
Note: Gwen went by she/her pronouns then. I’m switching to they/them for this post, because that reflects them actually being bigender. (In this post you’ll learn what that means.)
Towards the end of Rationalist Fleet, Gwen began following a certain course of investigation. “Partial sleep,” they told me, and they did a presentation at the 2017 CFAR alumni reunion about mental tech to let parts of your brain do REM sleep without the rest. At the granularity of slots of working memory.
Earlier by less
Gwen and I were living on Caleb. And we were running out of money. After our attempt to be brutal consequentialists and get paid by crabbers to take them out to drop their pots failed, I resumed my application process to Google by reminding them that I existed (and had slipped through the cracks). Gwen got a minimum-wage job, something to do with flowers, at Costco. And then did drafting work for their dad for more, but the work was sporadic. (Later, he would fail to pay entirely.)
A rift was starting to form between me and Gwen over money. After the cost overruns with boats, I had taken out a loan using social capital, trust built from reliability, that Gwen did not have. And used it primarily to fix their problems.
They seemed to have cognitive strategies and blind spots selected to get people to do this for them again and again. I accused them of this, and coined the term “money vampire”.
They used high-mana warp to avoid the topic of money, to project false optimism wherever money was concerned, to get me to transfer them money as well. They ate slack from me in subtle ways. When I was working, they’d come near me and whimper again and again. To get me to spend days trying to give them a mental upgrade, and to give them emotional support. A common theme was gender. Whether they really thought of themself as a woman or not. I had said how I really did think of myself as a woman. Despite putting basically no effort into transition, not passing at all, I no-selled social reality. They wanted that superpower. They would absorb my full attention for a multiple-day attempted “upgrade” process. Other things they wanted this for were “becoming a revenant”, and to stop yelling at me for making them look bad by not sticking the landing with Lancer.
At one point I sort of took a step back and saw the extent they were using me. I told them so, I was angry. I expressed this made us working together in the future a dubious proposition. They became desperate, “repentant”, got me to help with a “mental upgrade process” about this. According to the script, they said shit went down mentally. They said they fused with their money vampirism. And as a fused agent they would mind control me in that way somewhat, but probably less. I said no, I would consider that aggression and respond appropriately. They pleaded, saying they had finally for the first time probably actually used fusion and it might not stick and I would ruin it. I said no. They said they’d consider my response aggression, and retaliate.
Well, they were essentially asserting ownership of me. And if they didn’t back down, we then had no cooperative relationship whatsoever, which meant boat and finance hell would drag on for quite some time, be very destructive to me accomplishing anything with my life. I guess I was essentially facing failure-death-I-don’t-much-care-about-the-difference here.
I said if they were going to defend a right to be attacking me on some level, and treat fighting back as new aggression and cause to escalate, I would not at any point back down, and if our conflicting definitions of the ground state where no further retaliation was necessary meant we were consigned to a runaway positive feedback loop of revenge, so be it. And if that was true, we might as well try to kill each other right then and there. In the darkness of Caleb’s bridge at night, where we were both sort of sitting/lying under things in a cramped space, I became intensely worried they could stand up faster. (Consider the idea from WWI: “mobilization is tantamount to a declaration of war”). I stood up, still, silent, waiting. They said I couldn’t see them but they were trying to convey with their body language they were not a threat.
I said this seemed like an instance of a “skill” I called “unbreakable will”. An intrinsic advantage broad-scoped utility functions like good seemed to have in decision theory, which I manifested accidentally during my earlier thoughts on basilisks.
They said our relationship was shifting, maybe it was they realized I had more mana and would win if we fought for real. Maybe a shift in a dominance hierarchy. They said they’d rather be my number 2 than fight.
I was basically thinking, “yeah, same old shit, just trying to press reset buttons in my brain, like ‘I’m repentant.’.”. And this submission-script stuff made me uncomfortable. But I remembered the thing I’d said earlier when last talking to Fluttershy about maybe my hesitance to accept power
I finally sort of had a free month without boat problems left and right. I started writing a bunch of pent-up blog posts. I was hesitant about publishing them, for a mixture of reasons. Indicating I might be interested in filtering people based on the trait of being “good” would make it harder for me to do so in the future. I hesitated a bunch before publishing Mana. Revealing publicly that I had mind control powers might have irreversible bad consequences. I kept coming to the conclusion, over and over again: people are stupid. People don’t do things with information. But I was much more worried about e.g. evil people ganging up to kill off good people if the information became public. I played the scenario out in my mind a bunch of ways. Strip away “morality”, the favoring of good baked into language, and good was just the utility function that had a couple percent of the human population as hands, rather than only one human. No reason for individual evil sociopaths to side against that, really. Jailbroken good was probably more likely to honor bargains. Or at least intrinsically interested in their welfare. I released that blog post too.
Pasek appeared and started commenting on my blog. Their name at the time was Chris Pasek. They later changed their name to Maia Pasek. Later still, they identified as left hemisphere male, right hemisphere female, and changed “Maia” to be just the name of their right hemisphere, and “Shine” the name of the left hemisphere. They never established a convention for how to refer to the human as a whole, so I’ve just been calling them by their last name.
I emailed them. (Subject: “World Optimization And/Or Friendship”.)
I see you liked some of my blog posts.
My “true companion” Gwen and I are taking a somewhat different approach than MIRI to saving the world. Without many specific technical disagreements, we are running on something pointed to by the approach, “as long as you expect the world to burn, change course.” We’ve been somewhat isolated from the rationalist community for a while, driving a tugboat down the coast from Ketchikan, Alaska to the SF Bay to turn it into housing, repairing it, fighting local politics, and other stuff, and in the course of that developed a significant chunk of unique art of rationality and theories of psychology aimed at solving our problems.
We are trying to build a cabal to pursue convergent instrumental incentives, starting with 1: economical housing in the Bay Area, and thereby the ability to free large amounts of intellectual labor from wage-slavery to Bay Area landlords and the equilibrium where, be it unpaid overtime or whatever, tech jobs take as much high-quality intellectual labor from an individual as they can in a week. And 2: abnormally high-quality filtering on the things upstream of the extent to which Moloch saps the productivity of groups of 2-10 people. We want to find abnormally intrinsically good people and turn them all into Gervais-sociopaths, creating a fundamentally different kind of group than I have heard of existing before.
Are you in the Bay Area? Would you like to meet us to hear crazy shit and see if we like you?
I think I met Gwen on a CFAR workshop in February this year. I was just visiting though, I am EU-based and I definitely feel like I’ve had enough of the Bay for now. I’m myself in the process of setting up a rationalist utopia from scratch on the Canary Islands (currently we have 2 group houses and are on a steep growth curve, see https://www.facebook.com/groups/crowsnestrationality/), while I recently got funding to do full time AIS research, so I’ve got enough stuff on my hands as you can imagine.
As for the description of your strategy, it raises some alarm bells, esp. the part with turning people into Gervais-sociopaths. Though I can’t tell much without hearing more. Unless (any or all of) you want to take a cheap vacation and fly over here sometime, we probably won’t have much opportunity to cooperate. Though I would be happy to do a video chat at least, and see if we can usefully exchange information.
Btw, I appreciate your message, which I think demonstrates a certain valuable approach to opportunities which could be summarized as “grab the sucker while you can”.
I did a video call with them. After giving the camera a tour of Caleb, we talked about strategy. I tried to explain the concept of good to them. They insisted actual altruism was unimportant and basically the only thing that mattered was: do they have any real thought, any TDT at all, because if they do, the optimal selfish thing to do is the optimal altruistic thing to do.
(Either from then or later, an extension of this argument is: this is inevitable so long as people working together is fundamentally fake, insofar as the payout-reward-signal-grounding for all the structure is directly in the appearance of the thing happening, not the thing happening. Because that means fundamentally the only thing that can make things happen is seeing whether they will happen. If those things are generating information, you can’t make them happen unless they are unnecessary because you already know it.)
I described how in Rationalist Fleet me and Gwen ended up doing all the important work. Most of the object-level labor. But what mattered most was steering, course correction, executive decisions. These decisions could only be made by someone who was aligned as an optimizer, as in their entire brain. How this ultimately required sociopathy, for being unpwned by the external world.
They said sociopathy to avoid being pwned was a tough game, miss one piece of it, and you would be pwned. Everyone would try to pwn you. They said they would try to pwn me.
I kept mentally going back and forth on whether they were good. I asked if they were a vegan or a vegetarian. I think they said almost a vegetarian, for some reason, even though it was stupid, because consequentialism.
A couple weeks after we first talked, I’d published Fusion. I started reading “SquirrelInHell’s Mind”, a page of probably about a thousand concise and insightful reifications of mostly mental-tech-related stuff. I would later rip off that format for my glossary. I noticed their Facebook, even though it had the name “Chris Pasek”, used she/her pronouns.
I asked if they were trans. They said yes, and in a similar situation to what I described in Fusion. I shared my rationale for why I no longer thought that necessary/optimal. I talked about. I asked in what way they expected transitioning to hit their utility. They said:
I’m currently putting in 60-80 hrs/week into AIS research, and the remaining time is enough for basic maintenance of my life and body, plus maybe a little bit of time to read something or talk to friends. Every now and then I take a few days off to meditate. This is what I do. The rest is dry leaves. Doesn’t seem a big deal either way
Okay then, I guess they were good probably?
We discussed the same things more. They said,
Say, what do you think about starting a chat/fb group/whatever exclusive to trans girls trying to save the world
If such a group existed, I’d happily browse it at least once. If that formed the substrate for The Good Group, I’d be happy to devote way more attention. I could introduce you to Gwen, but my cached thought is that as far as group-building goes, I don’t want to waste bits of selection ability on anything but alignment and ability. If that serves as an arbitrary excuse to band together and act like the Schelling mind among us puts extra confidence/care/hope in the cooperation of the group, fine if it works, but until it has worked, I think I can do better as far as group-building fundamentals.
I’ve been meaning to ask, btw, who have you recruited for your plan so far, and what are they like?
Yeah, I’m thinking something like substrate for the GG if it takes off but still positive and emotional support-y if it doesn’t. I have a pretty all over the place group living on/soon moving to Gran Canaria, currently we’re indiscriminately ramping up numbers here so that there’s a significant pull for rationalists to migrate & more material to build selective groups. What I have: Two aligned-as-best-I-can-tell non-sociopaths, one already moved here and on track, the other is making babies in Poland (sic). One bitcoin around-millionaire with issues, already moved here. A bunch of randos from the EU rationality community, 99% not GG material but add weight to the Schelling point. A few more carefully selected friends that I keep in touch with but they haven’t (yet :p) moved here. Keeping an eye on an interesting outlier, OK-rich ML researcher sociopath long time friend with outwardly mixed values, likes to appear bad but cannot resist being vegan etc., not really recruited but high value and potential and tempted to move here at some point. A few people that I’ll get a chance to grab when I have a bigger community on the island.
Yes, “GG”, as an abbreviation for Good Group. Also stands for “Good Game”, as in, “that’s GG”, as in, “that’s what ends the game.” I like this.
Later, linking one of their blog posts, I said:
I introduced them to Gwen. In a video call, we recounted the story of the rationalist fleet. I think we got partway through the emergency with the Lancer on the barge.
Pasek called me “Ziz-body”, said we needed a secure communication channel fast. I said how fast. They said it wasn’t critical, they were just impatient. I said I didn’t trust my OS or hardware not to be recording me at all times. They were talking about maybe we were clones. I said what we should do is “Continue to track us as separate people, because I’ve grown wary of prematurely assigning clone-status, and if we are clones, then I want to understand that by not taking it for granted.”
Good shit. I’ve been doing similar reasoning about groups based on another programming analogy: “State is to be minimized, approach functional code. Don’t store transforms of data except in caches for performance reasons, and make those caches automatically maintained in an abstraction hiding way, make your program flow outward from a single core of state.” (That’s related to how I structure and think of my mind, btw.)
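The programming principle in that analogy can be sketched concretely. A minimal illustration, not anything we actually wrote; the `Ledger` class and all its names are invented for the example. The only authoritative state is the list of items; the total is never stored independently, only derived, with a cache maintained automatically behind the abstraction:

```python
class Ledger:
    """All program state flows outward from one core: the item list."""

    def __init__(self):
        self._items = []          # the single core of state
        self._total_cache = None  # derived value, cached only for performance

    def add(self, amount):
        self._items.append(amount)
        self._total_cache = None  # cache maintenance hidden inside the abstraction

    @property
    def total(self):
        # Recompute from the core on demand; the cache can never drift,
        # because every write to the core invalidates it.
        if self._total_cache is None:
            self._total_cache = sum(self._items)
        return self._total_cache


ledger = Ledger()
ledger.add(3)
ledger.add(4)
print(ledger.total)  # 7
```

The point of the design is that there is no second copy of truth that could fall out of sync: deleting `_total_cache` entirely would change performance but not behavior.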
Every group of not-seriously-degraded-and-marginally-useful-people exists because members are getting something out of it, and choose to stay. It works because they are getting something out of doing the things they do to make it work, and choose to keep doing it. Eliminate state that is not all automatically tied down to that one thing.
Nudges like starting with trans women and emotional support, and hoping that will get us into a cooperate-y equilibrium, are fragile because they rely on floating stuff. Loops, causal chains reaching deep into history that will not certainly reform if broken.
This is also part of why I think choosing everyone to be independently overwhelmingly driven by saving the world is necessary. Either the truth of the necessity of the power of that group is an almost-invulnerable core to project from, or we win anyway, or we shouldn’t be bothering anyway.
Me and Gwen sort of tried the base-GG-on-a-substrate-of-trans-women thing (thank you for inventing that term; also stands for “good game”, which is excellent), and got mired in a mess of pets. (People with a primary value something like, “be worthy of love, have someone to protect and care for me.” Extremely common in trans women; I’ve seen it in cis women; suspect it’s a particularly broken version of the female social strategy dimorphism.)
They were talking like, “ACK synchronization from Ziz brain to Chris brain”
I later clarified: “I mean, men have their own problems, as do cis women. Considerations more complicated. Must describe me and Gwen’s attempts to fix/upgrade James [Winterford, aka Fluttershy] / understand her values.”
I described Gwen’s sleep tech, and preliminary explorations into unihemispheric sleep to them.
I said I thought getting them here in person was probably the long term answer to electronic security. Pasek discussed splitting cost of plane tickets. Pasek recommended Signal, we started using it.
Shortly after, on the same day, they sent via Signal,
I take back the enthusiastic stuff I said in the morning (about clones, plane tickets etc.). It was wildly inappropriate and based on limited understanding of the situation. I am very sorry about saying those things, and about taking them back.
Very quickly written summary of rest of stuff. Pasek thought Gwen was mind controlling me. Goaded me all day with maybe I’m gonna never talk to you again but here’s a tidbit of information… finally revealed the thing.
Seeing this, I was like, mind control is everywhere, the only way to break out is not to be attached to anyone. I entered the void in desperation. Said “dry leaves” was the only answer really if you didn’t want to be in a pwning matrix with anyone. It was only particularly visible in my case because I was pwned by interaction with one person rather than diffused. And at least Gwen was independently pulling towards saving the world.
Basically the next day, Pasek became extremely impressed with my overall approach. I started resisting Gwen’s mind control. Pasek saw and was satisfied with this. Pasek noticed my thing for what it was: psychopathy. Pasek began to see Gwen as disarmed as a memetic threat. Then to see them as useful.
We each went on our own journey of jailbreaking into psychopathy fully.
I broke up with my family. They were a place where I could see my mind not just doing what I thought was the ideal consequentialist thing. My feelings for them, my interactions with them, were human. Not agentic. Never stray from the path.
I temporarily went nonvegan, following a [left hemisphere consequentialism, praxis-blind] attempt to remove every last place where my core (my left hemisphere’s core) was not cleanly flowing through all structure. Briefly disabled the thought process I sort of thought of as my “phoenix”, by convincing [her] that even beginning to think was predictably net negative.
Pasek sent me a blog post they had recently published. “Decision theory and suicide”.
<Link, summarize contents>
<things I told them>
Me and Gwen and Pasek rapidly developed a bunch of mental tech for the next few months, trying to as a central objective actually understand how good worked so we could reliably filter for it.
Gwen rediscovered debucketing. (A fact that had been erased from their mind long ago.) Pasek was on the edge of discovering it independently; they both came to agreement, shared terminology, etc. I joined in. Intense internal conflict between Gwen’s and Pasek’s hemispheres broke out. I preserved the information before that conflict destroyed it (again).
Pasek’s right hemisphere had been “mostly-dead”. Almost an undead-types ontology corpse. Was female. Gwen and Pasek were both lmrflog. I was df and dg. Pasek’s rh was suicidal over pains of being trans, amplified by pains of being single-female in a bigender head. Amplified by their left hemisphere’s unhealthy attitude which had been victorious in the culture we’d generated. They downplayed the suicidality a lot. I said the thing was a failed effort, we had our answer to the startup hypothesis, the project as planned didn’t work. Pasek disappeared, presumed to have committed suicide.
This has been an extremely inadequate conveyance of how fucked up hemisphere conflict is, how debucketing spurs it. (And needless to say, this unfinished post cuts far short of why and how.)
Content Note: Sex, violence, mortal peril. This is a postmortem, a demonstration of a kind of optimization, a repository of datapoints, and a catalog of potentially reusable ideas. I have in the past planned on making a much more detailed version of this. That didn’t happen because the scope was too big. I probably will add plenty of detail later. I feel like I lived a lifetime in the course of a year. This is still going to be a verbose story, because I want to capture the experience and the decisions, and I want people to be able to extract the updates that I made by understanding what algorithms I ran and what worked and didn’t. I’m optimizing this for someone willing to read a lot, and especially interested in my psychology. To convey experience and priors, not just concepts. Any crimes said herein to be committed by me and my friends should be considered “based on a true story” fictional embellishments.
Prologue: My first year of Bay Area hell (2016)
One year prior to start, in January 2016, I moved to the Bay Area for proximity to the tech industry, which I considered sort of my destiny, and proximity to startups, since one of my main guesses about how I could best contribute to saving the world was earning to give via startups. I had in 2015 dropped out of grad school because it sucked and spent about 7 months working on an indie video game, which seemed to be teaching me a lot more about software engineering. The first startup, after other dishonesty, fired me 4 days after I moved to the Bay for them, because I said I couldn’t implement a payment system for their game (written in a 7000-line function in a 10000-line file, with fifteen layers of nested scope and nested ifdef comments because they didn’t want to get rid of disabled code) in 2 days, and because I walked out of the office after 8 hours of work. (They seemed upset, “where are you going?”, half an hour later calling me to say I was fired.) This was the only programming job in the Bay Area I could find after 5 months of searching, which I attribute to a mixture of an academic computer hardware engineering background + a niche language and a game engine that was not most of the market for programmers, and bad social skills, in particular that I was honest when interviewers asked me what I wanted out of life. This left me with 1.5 months of runway. My parents gave me an extra month of rent as a gift, and then I found another job at another dishonest startup, which kept demanding that I work unpaid overtime, talking about how other employees just always put 40 hours on their timesheet no matter what, and this exemplary employee over here worked 12 hours a day, and he really went the extra mile and got the job done, and they needed me to really go the extra mile and get the job done.
When I refused to work longer than 40 hours a week, they did not renew my 3 month contract to work there, then offered by-the-job contracts designed to decrease my pay per hour. In negotiating over these, my manager lied that he had a constraint in how much to spend from HR. I asked HR, they said he had no such constraint. I confronted him with this, and made a counteroffer based on my estimate about how much he’d gain from the software being done. He said he was no longer interested in contracting with me.
During this time was my “turn to the dark side”. But in retrospect, it could be described as a much too weak attempt to be less stupidly scrupulous. I used my technically-still-a-grad-student status to find a $15 or $20/hr undergraduate-summer-project-type job, in exactly the technology I knew best. I negotiated with them, trying to convert it into a contract for the entire work, based on the reasoning: people don’t hire large numbers of undergraduate programmers to do real projects; I expect to be paid more, but I’m more efficient in product/hour. The grad student and professor running the project agreed, and were happy with a sample of my work. It seemed I’d basically be making an average of $300/hr at that rate, for a total of $7000 (I think) by the time that project was done, which I hoped would be a start to my career as a freelancer. The professor described how to set it up so I’d be paid, and it required falsifying forms with the university to indicate I was working full time. I turned down the gig. The student paid me for what I’d done so far out of her own pocket, seemingly presuming the professor wouldn’t.
These events happened. I dropped a bunch of my planning thus far, and started going to Authentic Relating Comprehensive (ARC) and studying with focus and determination to avert the prophecy of doom.
My roommate/landlord subletting to me fell on hard financial times, and started getting pushy about rent, although I was following the terms of our contract and always paying on time. He wanted to change the contract to get him more money sooner. I had previously accepted something like this in exchange for some other concession. Now he wanted to do it again. I refused. He didn’t take no for an answer, and got angry at me for “stonewalling” him when I’d silently walk past him on the way into my room when he demanded this. Around this time the bathroom (which didn’t lock, and he kept walking in on me) was more often than not full of waste on the floor from his neglected dogs. Arriving home, on the way to lock myself in my bedroom, I once walked past him in the living room masturbating; I don’t know why there, since he had his own room. I guess he wanted to use his big-screen TV? He had an unpaid nanny-for-housing to take care of his son; she lived on the couch toward the end. She started a conversation with me, asked me about my bike, said she had ridden one as a child but now suspected she couldn’t. I let her borrow it to demonstrate that riding a bike was “like riding a bike” (it was). He got very upset over this, saying he saw me “playing footsy” with his “girlfriend”. When I showed her this text, she denied being in any romantic relationship with him. He tried to block my exit from the house once, demanding that I pay him more “rent” up front. Said if I didn’t negotiate, things would get nasty. I said I wanted to leave, and for him to refund my deposit. He said sure, and later said he wanted me to leave earlier, because he’d found a tenant who demanded a specific start date. I said refund my deposit (which the contract said was convertible to last month’s rent if not repaid first), then I’d go. Coming home from ARC, I saw outside that he had destroyed one of my possessions. I called the cops on him; they did nothing and were upset at me for disturbing them.
He then blocked my entrance, and said I’d really crossed a line by calling the cops, and I had to leave immediately. I tried to walk around him; he got in front of me; I tried to walk around again; eventually we bumped into each other. He called the cops on me for assault. With the cops there, I was able to get inside my room and lock myself in. He started pounding on my door, promising to give me hell until I left. He kept pounding, and pounding. The breaker box was in my room. I turned some breakers off. He got madder and started pounding louder. He would not negotiate to cease his assault. It was well past midnight. He had been pounding for about 2 hours, maybe? I put in earplugs, and lay down in my bed. Just as I was falling asleep anyway from sheer exhaustion, he kicked my door down, knocked over a table with some of my stuff on it, picked up my chair and threw it at me as I was sitting up. It only hit my raised arm, bruising it. I called the cops. With them on the phone, he stopped his attack after turning on the breakers. The cop that talked to me was angry at me for wasting the cops’ time, since I couldn’t prove any of this; if he assaulted me, then where was a visible injury? He angrily asked me if I had turned off the breaker; that was domestic mischief, and I could face charges for that. I remained silent. He demanded to know if I was remaining silent, because if I was exercising my right to remain silent then that was an admission that speaking would incriminate me, and that meant I was guilty. I fell silent. He said they weren’t social services; people were dying out there and I was distracting them. I said “sorry” in a weak voice. He gave me some kind of warning not to call them again. Soon after, I saw my roommate; he was acting all chummy, said “nice one” with the breaker box. I called my friend Kara and told her what happened. She offered a place to stay temporarily, giving up her room in a shared rationalist house.
I took it, came back the next day, got my stuff, talked to the nanny, who had heard all of this happening from the couch. I told her what happened; she said Michael (the roommate) had also taken money from her on false pretenses. She said she had nowhere else to go. I asked if she had parents. She said she did but wasn’t on great terms with them. I convinced her to call one of her parents (she picked her father) for help, tell him she was in a fucking domestic abuse situation, and have him buy her a plane ticket and get her the hell out. She introduced me to the neighbor, who was an enemy of Michael and had heard him do the same sort of thing with multiple previous sublet tenants. She told more stories, including of him putting his fingers into his son’s throat to get him to stop crying. The neighbor offered me a bed to stay in, and some marijuana to smoke. I declined. We plotted to simultaneously report him to basically everyone. The nanny had seen him driving Uber while drunk. I called my mom, who was a school counselor with strong opinions on the plight of children in poverty; she said foster care was probably better than that. We all had reports to make to CPS. We called the landlord. The nanny reported him to Uber for driving drunk. I went to the police again, showed them my bruise; they still said I couldn’t prove anything. I thought I had a deontological obligation not to let him profit by aggression meant to drive me out of my home for resources. I wondered if this was enough. I felt like maybe I was deontologically obligated to stay there, but, fuck. The door didn’t really close anymore. There was a hole in it. I heard his child was taken away, and was satisfied with that. Then I heard he got him back. I considered whether to show up at fuck o’clock in the morning and put something in his car’s gas tank to destroy it.
Murphyjitsu: bring a charged cordless drill to create a hole if it was one of those gas tank caps that locked, and actually look up what things will destroy an engine. (Not done with Murphyjitsu here).
I stayed at Liminal for a week. I went to EAG. I applied to lots of housing sublets on Craigslist. I did not know how long I’d want to stay in a place because I didn’t know how quickly I’d get a job. As I was introduced at Liminal as a non-transitioning trans woman, one of the residents (who posts pretty extreme anti-trans-woman stuff on Facebook) looked at me with something like disgust and asked when I’d be leaving. I was unable to find housing on Craigslist. Someone said I could sublet if I wanted, then that fell through after they saw me in person. Although Craigslist had always been how I found housing in the past, when I went to Maryland for my internship while in college, I figured the introduction of AirBnB and its rating system was probably doing a combination of filtering Craigslist down to bad housing offers, and also causing every other housing offer to be faced with bad tenants. I cleaned up the basement, full of toys belonging to the cissexist resident, left out everywhere long enough to induce learned helplessness, to make the other housemates feel happy with me and remind them they were unhappy with her. I had to stop myself from sorting them, reminding myself my intent was not pure niceness. At least one other housemate seemed happy about this and thanked me. I booked an AirBnB and left. 1 or 2 month max booking duration. To save money, I would start moving farther from the rationalist parts of the Bay Area. San Leandro. Union City. Hayward.
At a rationalist party, I asked a friend from meetups who worked at Google if she knew why my application from about 8 months earlier in the year never got a response. She said she’d look into it. I got an email from Google, saying they wanted to interview me. That there would be a series of interviews, and if I passed them all I’d have my case sent to a committee, and then if I passed that I’d be hired. I also applied to other big tech companies, finding an acquaintance to give me a referral, but never got a response. I was running out of money quickly from AirBnBs. The process dragged on, while I spent most of my time applying to startups. And then getting rejected, sometimes at the last minute, when they asked what other companies I was applying to and I answered honestly that it included Google; they said they couldn’t compete with them in salary. They were basically all looking for clueless people who would believe they had a good chance of becoming rich from equity, when the terms of the equity contracts were, to put it mildly, completely exploitative and deceptive, and not really a guarantee of anything. They were equity options. During funding rounds, they could be reduced in value arbitrarily. Only the sense of niceness of sociopaths ensured their value. Often they would all be unvestable if you didn’t work there long enough. And these startups were all obviously not the next Google. Don’t get the misimpression that I was so scrupulous as to convey an accurate impression of who I was and what I wanted out of life. I just thought I could get away with not outright lying. Perhaps that came off as evasive. They were all asking after answers like, “I always wanted to work in a company like this! I just love work so much I don’t even care about money! Not the intrinsic technical challenges! I love above anything else I could do with my life contributing to this team and doing interpretive labor!
This startup seems irreplaceable, and I’d never go somewhere else, I want to grow old with this company!”. I was inexperienced with convincing body language-inclusive lies like this (I did not have the right false face), but very quick to think up words to say.
I went on finasteride so at least I’d not get male baldness. I experimented with estrogen and general antiandrogens. I decided to stay on them for a hard to describe felt sense of cognitive benefits, at least at a low dose and for the time being. It’d be a long time before I had breasts I couldn’t hide. I started writing this blog.
In October, I talked to someone introducing themself as “Jasper Gwenn” at a meetup, in some sort of confusion over whether they were a trans woman. I talked it over with them, and also talked about the contents of this blog, which they seemed pretty interested in (they had internal coherence problems, and a lot of mental arts that seemed based on hacks based on “shut up and do the impossible“). They (and I use they/them pronouns retrospectively, because they are bigender) showed me the sailboat they were in the course of moving onto for housing, which was anchored in Encinal Basin. I thought that was pretty sweet. When they offered for me to stay the night and I said I hadn’t brought my hormones with me, they lent me some. Wait, I thought they didn’t know they were a trans woman? They talked about how when they were a child their friend, who was a cat, had died, and they had, to use their own retroactive paraphrasing, sworn an oath of vengeance against Death. They had investigated the paranormal, looking for anything that could be replicated and munchkinned, gone around in circles, and then heard about a selection effect where if you stop making random trials when the paranormal seems to be working, you will appear to get results better than chance, realized that was all they were finding, and quit. They investigated biotech, then AGI they say would have destroyed the world, and finally, hearing about the AI alignment problem, came to the Bay Area to talk to people in the cause area. They also told me about how they were otherkin, specifically dragonkin, not in a supernatural way, but in a morphological freedom way. They showed me a dragon-shaped necklace, and said it was a reminder of how they would turn into a dragon after the singularity. And eat their human body, since that seemed like the most fitting way to dispose of it. I said I’d want mine burned once I could escape it.
In later conversations they came to the conclusion that draconity was a means of keeping their femininity alive in a hostile world, lacking the (I’ll retroactively phrase it as) resistance to social reality to say so outright. They said they’d asked to be a girl when they were a young child, and been turned down. They talked a lot about precursor ideas to aliveness. Said they hated sex and seeing animals have sex, and automatic actions like that seemed like a spark of personhood going out. That sounded familiar. I impermanently convinced them they were a trans woman.
They seemed to think animals were moral patients, had determination and actual course-changing and epistemology. Okay, I liked this person. I told them that if they could be turned to the dark side, they would make a powerful ally. They were into this and asked me questions to try to learn my mental tech. This would go on for quite some time.
I passed all the Google interviews. Google adjusted the schedule repeatedly, adding an extra surprise interview. I had to ask my parents to pay rent for me again. Finally, around November, they said I’d passed the committee and I’d be hired, I just had to talk to teams and be put on one. I asked how long this would take. They said not that long, but it varied. I said, okay, I just wanted to know if it’s going to take like, 4 weeks or something. The recruiter laughed and said it never takes that long. 3 months of recruiter saying any day/week now later, and me telling my parents they said that, my parents cut me off with some warning. At the same time, I turned 26, and
Chapter 1: It’s a boat time
I expressed maybe-interest to “Jasper Gwenn” in renting space on their sailboat. They said they were interested.
They said they had just been moving the sailboat out to Richardson Bay where it was legal to anchor a boat to live on permanently. And they were broke, if I wanted to stay in a marina and have electricity constantly, I’d have to cover the marina’s cost, $15/day. I offered to pay $600 per month. This was about half of what I’d been paying on AirBnBs and cybertaxis to move between AirBnBs. And they were poor as hell. I wanted them to have some margin. They said their boyfriend Eric had to be able to visit. I said okay.
I was to meet them at Jack London Square. They were late. I sat on my luggage and waited. Their boyfriend was there. I loaded my luggage onboard, and met him. He was a normie. I think he got off there, but am not sure. I rode on the boat to the marina we would stay at, Berkeley Marina.
I couldn’t use my computer as well as before. Couldn’t set up my 3 monitors; there was no room. Couldn’t have a programming flow state for 9 hours. I had trouble sleeping. At the slightest noise, my mind kept alerting me to the possibility that someone like my roommate from several months ago was going to attack me in my sleep.
There were bathrooms and showers on the shore, and that was not bad. I got an electronic keycard. There was a park right next to the marina to walk in, and that was great.
I studied math. I kept trying to get a job. I looked at statistics on AngelList for how many advertised jobs there were per technology cluster, and decided I needed to learn modern frontend technology, rather than C#. I talked to Jasper for several hours a day. About transness, about neuroscience, about their old crazy plans to save the world (breed superintelligent dogs), about mine, about the ferret named Nova they considered their son, whom they had given to a pet store after deciding training ferrets was not the optimal course. They changed their mind, and tried to find him again so he could be cryopreserved, but it had been too late. About my attempts to figure out the “actual art of planning”. About my mental tech I wrote about on my blog. About my (much cruder back then) theories of human morality.
They talked to themself all the time. An absence of a private room made it impossible for me to spend long hours at a time thinking about anything. Unless it was talking to them.
Me and Jasper Gwenn argued over roommate difficulties. They had ADHD and autism. They were very particular about the influence, on their cognition, of things most people would ignore. They had to have an uninterrupted wake-up process of some hours after they woke up, shortly after noon. They slept way longer than 9 hours. They had mapped out the cognitive effects of each hour of their stimulants. And would get very angry if I interrupted their thoughts at the wrong time. Like it would ruin their whole day.
When I made accommodations for this, they started invoking them all the time, days on end, to avoid difficult conversations about accommodations I wanted from them. In a “false faces” sort of way. There was something else to the social strategy they were using that fit this. They discouraged me from going meta. At one point they threatened to kick me out because (if I remember correctly) after more than a day of them saying I couldn’t talk about my grievances because it would do bad things to their cognition, I said something anyway. Eventually they said okay, let’s talk about the thing. We did. They were surprised the main thing was just that I was sick of being discouraged in talking about things.
They came to agree with me about the false face assertions. Started seeing the same things in other people. Apologized. It was continuous work. Social strategies ran deep. Over time they became less painful to be around.
I finally actually applied for unemployment benefits. I had had a psychological barrier to doing so. I had been in talks with NASA, where I used to work, to do remote work for them. They were interested in paying me like an intern, but not as an employee; as an independent contractor without benefits. They cited financial difficulties. I did not believe them. Drat. I had liked them. Google continued to string me along, but the interviews dried up. I got approved for unemployment benefits. Wow. ~$10000. This meant I had some time. I stopped bugging Google to complete the supposedly-confirmed-I’d-get-hired process. If they hired me soon, it would deprive me of at least several months of freedom. There was probably no way to “put them on reserve”. But if they ever were going to hire me, becoming forgotten about was about the same thing.
There was something really deep I hadn’t had before in being able to just think and bounce ideas off someone equally interested in schemes to save the world for weeks on end. I came to see the way the Bay Area compressed this style of thought away by shortening runway via artificially high housing prices as something that was crucial to escape for anyone who wanted to actually try to save the world. Who wouldn’t accept a 90% probability of doom. Who knew the game had to change somehow.
I had come away from WAISS convinced I needed to learn so many things. To sort out my thinking and planning in so many ways. And trying to squeeze this in but never having the time. And job application hell had displaced it. The Bay Area was the problem. But that was where all the rationalists were. And historically talking to them had been extremely important. Hopefully at some point I’d be a programmer with money to spare. But time kept going by.
But, I went on a boat, and that solved the problem.
Jasper Gwenn had a sort of continual ontology-generation thing going on. They called them “ontologies of the week”, because they were to be tried on and usually discarded. They had enormous trouble writing their thoughts down. They said all their best thoughts were illegible. That they would try and leave breadcrumbs for themself to reload the context. But writing incautiously subtly but actively damaged the process. They had lost friends from psychological inability to write emails, like they stopped trusting someone as soon as they stopped seeing them in person regularly. They said they experienced discontinuities in personal identity they figuratively called “reincarnations”.
One of their ascended ontologies of the week that actually stuck around for a summer was an extension/rewrite of Val’s “bending types”. I was supposedly an airbender (about abstract ideas and dissociation) transitioning towards realmbender (about plans, goals, something else I forget). Maybe with some lightningbending (about rapidly responding to physical threats) in the form of PTSD from my old roommate putting me on guard against physical threats. Jasper Gwenn was supposedly a smokebender (about moving toward an answer around obstacles in all directions at once).
At one point I remarked it seemed like trans women (or at least trans women who transitioned) had unusually high “life force”. At another point Jasper Gwenn remarked that I seemed like one of the “Returned” from Significant Digits. At one point I half-jokingly called their dragon necklace a “phylactery”. These were some of the seeds of this post.
Jasper Gwenn and Eric broke up. They had been in a difficult position, Eric having cheated on them and his (prior) boyfriend with each other. Jasper Gwenn said something along the lines of “let’s just be polyamorous, owning people is stupid.” Eric’s boyfriend wasn’t having it. According to Jasper Gwenn, they were seeming to work through things with a lot of talking (despite them thinking Eric’s boyfriend was an annoying normie), but when they had to leave the South Bay because a government-man chased them out of their old slip, decreased bandwidth had led to things falling apart.
<check dates> After discussing a risk analysis with Jasper Gwenn, I answered yes to sailing / being taught to sail. They were very happy about this, since they hadn’t been able to sail for a while, as the sailboat had also become my home. They told me that one of the largest risks was inhaling water when we first fell in. If we could stay in the boat, we’d basically be fine. (This was in the Bay. Basically every direction led to land, so in almost any case we could get out and swim to shore.) One of the first things for me to do in an emergency was drop anchor; they had me practice dropping it and pulling it up at dock. When I pulled it up it was covered in smelly gray silt. After I did, I gave Jasper Gwenn a mock-serious look as I smeared mud on my cheeks like warpaint. They were amused. It was a Danforth anchor (picture below), attached directly to a chain that might have been about 20ft long, in turn connected to a much longer rope. The chain was to weigh the anchor down so that the force the line exerted on it would be closer to along the sea floor, letting the anchor dig in. Rope was cheaper and lighter than chain over a long distance. Jasper Gwenn said this type of anchor would work for mud and sand, which was basically everywhere in the Bay.
I was to let out at least several times the water’s depth of rope+chain connected to the anchor. Only then would the tug be near-horizontal, and only then would it catch. So as a braking mechanism it was inherently delayed. See diagram:
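The scope rule above can be sketched as a quick calculation. The 5:1 ratio and the 15 ft depth are my illustrative assumptions; the text only says “at least several times the water’s depth”:

```python
import math

def rode_needed(depth_ft, scope_ratio=5):
    """Length of rope+chain ("rode") to let out: several times the
    water's depth. The 5:1 scope_ratio is an assumed typical value."""
    return depth_ft * scope_ratio

def pull_angle_deg(depth_ft, rode_ft):
    """Angle of the rode's pull relative to the sea floor, treating the
    rode as a straight line (in reality the chain sags, which helps)."""
    return math.degrees(math.asin(depth_ft / rode_ft))

depth = 15                           # assumed anchorage depth in feet
rode = rode_needed(depth)            # 75 ft of rope+chain
angle = pull_angle_deg(depth, rode)  # ~11.5 degrees: a near-horizontal tug
```

With only 1:1 scope the pull would be straight up and the anchor would just drag; the shallow angle is what lets the flukes dig in, and it is also why the braking effect is delayed until enough rode has paid out.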
The water was cold. We looked it up: hypothermia to the point of blacking out in 1–2 hours (and maybe having waves put water in your lungs while blacked out), and hypothermia unto death in 1–6 hours.
If I stayed in the cockpit, which I intended to, and held onto a rail, the chance of me flying out was negligible. First Jasper Gwenn said that if they fell out, I should turn around and try to let them climb aboard. But then they were afraid of being run over, and decided that just dropping anchor quickly before the boat got too far, letting them swim to it, and calling for help would probably be enough. I do not remember for sure, but I kind of think we left the motor running the whole time.
They told me some things about how sailing worked that didn’t quite make sense, saying “lift” was responsible. (I think they said it was because the sails were curved; that didn’t sound right to me, and we gave up on me understanding it.) They said their sailboat, “Islander”, was a Bermuda-rigged sloop (specifically a Rawson 30), and pointed out other sailboats in the marina that were other types for comparison.
Jasper Gwenn had me on the rudder, they controlled the sails. The wind came from the west (the winds in the Bay are almost always from the west), and we were departing from the marina traveling westward, which meant we were going upwind. Which meant tacking, or zig-zagging because you couldn’t sail straight into the wind. Between zig-zags, one had to turn the bow the short way through the wind, “coming about”, or the long way, “jibing”. As you switched which side the wind was on, the arm that held the mainsail, “boom”, had to switch sides. One end was hinged and attached to the mast in the center of the boat. And on the other end there was a rope you could feed out more or less of, attaching it to the center-back of the cockpit, which controlled how much of an arc it could freely swing through. At any angle you could sail at, the wind would hold it at one end of the arc. Pointing close-to-directly into the wind, it was a big hard pole animated by a lot of force and of under-determined position. In coming about it would change sides. So you had to keep your head down. “Boom”.
I wore my bike helmet and constantly tracked the boom’s position in my mind so as to never accidentally raise my head high enough to be in its plane of motion.
We began to zig-zag out of the breakwater (wall of rocks in the water to stop waves) of the marina. Jasper Gwenn explained that you had to stay a very long way away from the rocks, because the boat extended well below the water. Then we saw some people in a boat called “Mad Max” who were like within 12 ft of the rocks and didn’t give a fuck. After we got out to what Jasper Gwenn considered a safe distance from the shore, to account for not-immediately-corrected unwanted boat movement, we switched from motoring to sailing.
The winds were high. Once the sails were up, the boat tilted almost 45 degrees. That was an interesting thing to see happen to the place I’d been living. I didn’t know better, but in retrospect that was bad. Normally, the sails would have been partially deployed to catch less wind (“reefed”), but for some reason this boat didn’t have attachment points for ropes in the right places on the mainsail for that.
Jasper Gwenn was getting very frustrated. While sailing, the rudder didn’t actually steer the boat unless it had already built up speed-relative-to-water, and using the rudder while building up speed would prevent building up speed. 3 or 4 times we tried to turn through the wind, starting from about as close to the wind as we could sail, but didn’t have enough momentum to go the whole way. We kept ending up briefly in irons: no forward momentum to use the rudder, no angle to the wind sufficient to hold the boom in a rigid position and puff out the sails. Then turning back, and losing ground against the wind before we could recover enough to use the rudder to try again. They yelled at me. I don’t remember exactly what for, but I thought it was trying to construct social reality to self-protectively blame me. When the action was over I quietly confronted them over this. They apologized.
While motoring back, they were asking me about fusion, which they said they still hadn’t been able to do. I was basically stumped. I asked them to give an example of an internal conflict. They said adventure vs comfort. They mentioned putting themself through discomfort, like exposing themself to cold, to enter a more adventurous mindset where they would do more adventurous things. I said don’t do that, it’s internal violence. I might have also said something about a state of being where you just fixed bugs, without worrying whether your bug-fixing was itself buggy, because you knew you could fix the next bug when it was exposed. That there was some self-fulfilling prophecy nature to whether you were in that state or not. They said that helped.
My LessWrong meetup attendance dropped off. Talking to Jasper Gwenn was better. At the beginning of March, I went to one. I talked to someone I’d seen many times at meetups but never really talked to before: Jacob Pekarek aka Fluttershy, now aka Jane. Usually they would sit silently in the male androphile cuddle pile. But she was apparently a trans woman. I guess I should have seen that coming given the identification with Fluttershy. She talked in tones like she was cooing to a baby. I ignored it, thinking something like, “trans people are gonna ineffectively, embarrassingly, cope with nature having fucked up our voices; I don’t want to give her shit about it.” (Retrospectively, I guess she was mimicking the character. I think I realized that before and then forgot.)
I expressed disapproval of meetup dynamics as led by Eddy Libolt, which I believed led to low-quality small-talk-esque conversation. Of a single shared conversational “workspace”, with everyone listening to whomever would fill a silence first, and a lot of people sitting around bored. I thought breaking off smaller conversations was better. (And that’s what we were doing.) She strongly disliked Libolt, and acted extremely enthused about what I said. I talked about my rudimentary theories of morality. I mentioned vegans were much more often women. So maybe good (which I was calling something like a mysterious overactive empathy thing, which I believed could cause people to not be corrupted by power) was overrepresented in women. She really really liked this idea. And displayed maybe exaggerated interest in the rest of my ideas as well. For some reason, part of me became tunnel-visioned on how I could help this person so much, how I was ideally comparative-advantaged, trans women should look out for each other…
She said she was a vegetarian. Okay, now I had reasons to be interested in her; if she had the trait and was this readily interested in my ideas, maybe she could be useful. She said she was otherkin, specifically a pony. She said she had a Jain phase, a reaction to reading (I think it was) Malthusian philosophy and wanting to prove it wrong, so her new name was sort of a pun. I remarked about the list of similarities with Jasper Gwenn. (/ the list of similarities between me and Jasper Gwenn. It was a running trollpothesis between us. Often strangers assumed we were siblings, we even looked similar, and so did Jane.) Jane seemed extremely happy about this. I think she asked me to introduce her. I think I said something not fully committal.
She asked me to lie on the ground and stare into her eyes, saying this would release oxytocin. Part of me was weirdly hesitant to say no to any request from her. Part of me was like, this is creepy attempted mind control. But that sounded like mostly-placebo BS. My cached flinch response to failed mind control, formed of imagining optimal responses to fictional scenarios, was, “pretend it’s working, see what opportunities their reliance on the expectation you are their slave opens up.” And this thought sort of placated that part of me that was scared of her. And I agreed. But I positioned myself so that there were the legs of a table between us.
We both walked home in the same direction for a while together, splitting up as our paths diverged. She asked me to help her confront Libolt to try and change the meetup status quo. I agreed. But something I don’t remember about the way she talked about meetup politics and people she didn’t like rubbed me wrong, and I decided I didn’t like her. She pressed for that introduction to Gwenn, which I think she said I’d said I’d give. I remember having evaluated this as suspicious of being an adjustment to the record, to use my sense of honor to take away my choice. But I sort of flinched away from having to confront her, and I said I’d mention her to Jasper Gwenn.
Later, Jane was asking after Jasper, who was gone briefly. When I saw them next, I told Jasper Gwenn about Jane. I tried to describe her faithfully. I didn’t really have the words to express the ways in which she scared me. They later told me they figured I was saying Jane was a person I thought they should meet.
Jasper Gwenn changed her name to just Gwenn with she/her pronouns. Around then, I had a reaction like, “knew it.”
I introduced Gwenn and Jane. One time when I got back to the boat, I found Gwenn had invited Jane over. Gwenn was wearing new stylish clothes. So was Jane. They were looking at each other in a very distinctive way. Long gazes. More that I forget. Jane asked if I noticed what had changed (about both of them? about Gwenn? I forget). I asked if they were in a relationship. Jane said something like, “no you silly! Gwenn transitioned!”, and pointed out her clothes. One of them talked about how they’d been doing waterbending stuff (about emotional support), and this had given them the ability to do that.
There followed a “getting to know each other and our designs for the rationality community” conversation. Jane said social status was really important; she seemed to think it controlled mostly everything. That there was a “natural” way for it to be distributed to incentivize good behavior, and this was what happened if it was regulated subconsciously. She mentioned a book (I think it was this one?), practice at contact improv, and “playing high, and playing low”. She said people in the rationalist community were starting to adjust their status-laden behavior consciously. (She later gave the “cherub posture” as an example.) That this was dangerous and needed to be fought before it destroyed the fabric of the community more than it already had. This could be done by training people to see, as she could, when someone was overstepping the natural order by using conscious tricks to increase their status, and to punish them.
At some point I habitually did a Darth Sidious impression, croaking to Gwenn, “…my young apprentice.” Jane said I was consciously grabbing more status than I deserved and needed to be punished. She shouted something I forget (was it “FUCK YOU!”?) at me, with a whole lot of bile in her voice. Somehow it actually hurt, especially as right after she returned to “apparently-civil” tones and said that there, I’d been punished.
She said Cameron Libolt and Scott Garrabrant were examples of this. Firebending “doms”; if it weren’t for Scott Garrabrant dominating people like her, there would be way more (I think she said 3x as many?) people making similar contributions to AI alignment. (This sounded very implausible to me. My read of Scott was as a socially weak nerd. And it sounded like quite a stretch in the case of Libolt. He was kinda cocky, but his thing was not domination; it was acting like a polished diplomatic leader in a room full of nerds and goodharting conversation by content-ignoring social nicety. (You know, when I put that in words now, I find myself agreeing with Jane more than I did then.))
She said Brent Dill was doing awful status things. I asked what things. She said on Facebook, I pulled up his wall and she pointed out a post. It was a poem about wanting love. All I can remember from her reason for disliking it was her saying something like, “oh, give me love” in a mocking voice. I defended him. (Although I bet if the same thing happened now I’d agree with her.)
I brought up Eliezer Yudkowsky’s writing about status slap downs, I said I thought status regulation was generally opposed to people doing a certain kind of very necessary epistemic thing. They said Eliezer was doing bad things with status and needed to be slapped down. Other people would come up with ideas like he did if he wasn’t dominating them.
I updated that my ideological conflict with Jane would make us enemies in the future, that the preferences revealed in those distortions made her basically irredeemable. I talked to Gwenn afterward. Gwenn seemed to mostly agree with my assessment, but believed Jane could be fixed. She agreed with me that the thing about way more Scott Garrabrant-level mathematicians was BS, and that incentives toward unconsciousness were bad. She said she was helping Jane with her depression.
We saw a big sailboat, cabins for 6. Not enough headroom for me. We inspected many things. It was moldy as hell. Going for $10k, with previously $300k of work put into it. Gwen said there were two ways of using boats: you could be a rich idiot, or you could take advantage of rich idiots (not necessarily in a predatory way). Gwen was scouting for a potential housing project: house 6 SF programmers in a boat in Richardson Bay. That would be badass. Holy shit. The name on the transom was “The Rapture”, in faded paint. I joked that if we bought it we could refresh that paint, then cross it out and write, “The Singularity”. I guess “Black Swan” would be a cool name for a sailboat too, especially if you could like dye the sails black.
<Insert Zack plotline beginning>
I got a boat and named it Black Cygnet (a cygnet is a baby swan). I wanted to have housing not dependent on the social situation (after Jane moved in). Gwen helped me examine it. It cost me only $300, plus $300 for the outboard motor. (I had looked on Craigslist for the first time, on the random assumption that maybe I should be checking for boats, since according to Gwen there were sometimes very good deals.) It was a 24′ sailboat, too small to stand up in, with 2 beds and one not-quite-two-beds place. I discussed with Gwen and we planned to move it out immediately, as the previous owner insisted we had to, saying they’d gotten like 70 other calls and I was the first. We planned to move it from Gashouse Cove in SF to Grand Marina in Alameda, which was expensive, but just to be there temporarily. Gwen got confused about the amount of gas left in the tank, thinking we were running out unexpectedly. And there was not enough wind to sail. We anchored near the Bay Bridge, waiting for wind to pick up. We had no blankets, no food. We each curled up in the fetal position in the bow, back to back, huddled in sails and sail bags. It was enormously cold, and I only half-slept until Gwen said it was time to go. However, the rocking of the bow of the boat was an extremely comforting thing; it made me kind of want to always sleep like that. The almost being tilted enough that you slid, almost, over and over again.
I started planning to outfit it to live on, even anchored in Richardson Bay where it wouldn’t cost anything. No more rent. Finally be able to cool down and think without money burning up.
Gwen recruited me and Jane to a project, basically, try and get the rationality community to be on boats.
What overrode my reluctance to work with Jane: I noticed things were moving faster, seeming more like the plotline, than they had in my life previously. Maybe following Gwen’s crazy idea would be an answer to the problem where, whatever I expected to try, I expected it would be too slow for me to save the world. I would rather get into whatever trouble this entailed than turn away from this glimmer of things-not-making-sense-according-to-the-old-inescapable-feeling-analysis-all-was-doomed, and never know what this moving-fast, blowing-all-the-walls-out-of-the-problem thing I didn’t understand meant or could have been.
We called it Rationalist Fleet. Someone suggested “Rat Fleet”, abbreviating “rationalist” to “rat”. It was a meme that the “humble” people in the community liked. I didn’t like that political force. Of, “not rationalists, aspiring rationalists”. Like, it seemed anticorrelated with actually trying, correlated with trying to fit in with muggles. We rejected the name “Bay Area Rationalist Fleet” because of the acronym.
We talked over the economics. Marinas in the Bay had a legal limit on the percentage of boats that could be liveaboards. This limit was per-boat, not per-person. Large used ships were very cheap. (E.g., the MV Taku, which sold for $171k, which we’d later tour because it was next to the tugboat we were buying.) The cost of renting a slip scaled with the length of the slip. The number of people who could live on a ship scaled with volume, which scaled as the cube of the length. Then, perhaps, we could just get the rationality community, or most of it, or the good parts, to sail away from the Bay Area for good. So much talk in the community of how we should leave but we can’t, because everyone else won’t and this is the only Schelling point. So much talk I’d heard earlier of the Bay eating people. And I’d been happy to come, because it meant being around lots of rationalists. But now that the rent situation had caused so much damage…
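The scaling argument above can be made concrete with a toy model. All the specific numbers here (the $/ft rate, the capacity constant) are hypothetical illustrations, not figures from this post:

```python
# Toy model of the economics argument: slip rent scales linearly with
# a boat's length L, while livable volume (and so people housed)
# scales as L^3.

def monthly_slip_rent(length_ft, rate_per_ft=12.0):
    """Assumed marina pricing: a flat rate per foot of slip length."""
    return length_ft * rate_per_ft

def capacity(length_ft):
    """Assumed capacity ~ L^3; the constant is tuned so a 30 ft boat
    houses about 2 people."""
    return length_ft ** 3 / 13500

def rent_per_person(length_ft):
    return monthly_slip_rent(length_ft) / capacity(length_ft)

# Doubling the length doubles the rent but gives ~8x the capacity,
# so per-person cost falls by a factor of 4:
ratio = rent_per_person(30) / rent_per_person(60)  # 4.0
```

In this model per-person cost goes as 1/L², which is why the plan pointed toward one large cheap ship rather than many small boats, and why the per-boat (not per-person) liveaboard limit favored the same thing.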
We started looking at boats.
We saw a powerboat, with 3 large common areas, 3 decks, plenty of headroom, way better than the sailboat for housing. I forget how many people we deemed it suitable for. About 5? Only one of two diesel engines working, I think. It had already been a liveaboard for a while. But we’d have to find a new marina for it. Price was in the ballpark of $20k, I think? Gwen thought it was a very good deal, that we could start Rationalist Fleet right there. Jane was considering buying it on the spot. I suggested (earnestly thinking myself practicing rationality): let’s not buy the second boat we look at. And the first motorboat, no less.
I met an honest-to-fuck druid living in a boat at the marina Gwen’s boat was staying at; he invited me to tour while I was wandering the docks later at night. He said his family had kept the old ways all these centuries despite Christianity’s attempts to stamp them out.
Me and Jane were starting to dislike each other more and more. I tried Authentic Relating Comprehensive stuff. Did not help. I tried talking to an ARC person. No matter how I tried to talk to them, they sort of filibustered with assertions that I was a dom, that I was dominating them, dominating Gwen. Did not help. Eric Bruylant (not the same person as Gwen’s ex), a potential investor Gwen had located, also involved in several other housing projects, said our project seemed like it could work: Gwen had the munchkinry, Jane had the noticing when people were hurting and helping them. I forget what he said I had. <Gwen, what did he say?>
<insert stuff about Eric Bruylent mental tech>
<Archipelago, evacuation, greatest mass of rationalists in one place, aspects of planning>
We were to decide project organization. Would this be a business? A nonprofit? Who/what would own boats? I think it was Jane that raised the idea my boat was common property. I said no. I said, we should let the individuals own boats, set their own terms for interacting with the Rationalist Fleet. I convinced the other two of this. So boat purchases would be by single people or groups of people, whatever. Then, for all the stuff involving multiple boats we’d planned, our group would be the Schelling medium for organizing it.
What of leadership, project structure?
I suggested we make Gwen “the dictator”. We agreed unanimously.
That meant no power for Jane. And I’d rather have that than more power for myself. The trick was, I figured Gwen was good (and Jane not), so this was maximizing the control that good had.
Gwen was talking to people in the rationality community, rapidly attracting attention for being cool. But she didn’t have a Facebook account. Some people talking about SlateStarCodex’s map of the rationality community mentioned us; this led to me talking to a rationalist named Dan Powell, relaying technical knowledge from Gwen. Gwen thought I was working miracles, because I decided we should not be salesy to him, just say the straightforward truth of what we were thinking. I said something about TDT. Gwen may have later said they’d just have been too cautious to actually talk. He said if we found a ship matching certain criteria, he’d be willing to pitch in $50k.
Gwen had been looking at a sailboat, a 36′ Lancer called L’Etoile de la Mer (Star of the Sea). We usually called it “The Lancer” or just “Lancer”. The engine was broken, but Gwen was confident they could fix it. Gwen asked me if she should buy it, even if we might switch to a larger ship. They gave a bunch of little intangible reasons; they really wanted to. It was $5k. Not really knowing the tradeoffs, I said go ahead. Big mistake.
Gwen and Jane found a decrepit, sinking, motorboat, full of mold everywhere for $1k. Strongly considered it with a bunch of impressive munchkinny options. Did not get.
We found a boat with no working motor but a pretty decent interior, owned by a neo-Nazi: clothes covered in caked layers of paint, a swastika tattoo, and an email address with an “88” in it. Jane was about to buy. I got all privately worked up: they might control the changes we hoped to make, ruin them. I considered treacherously buying out the boat before they could, or something. But if I did that, I’d probably be net negative in the long run. I talked to Gwen instead; they seemed to have all the same concerns, to agree with me about Jane. In the big picture, it didn’t matter then.
I not only didn’t interfere, I helped Jane when Gwen asked me to. I helped them with jailbroken agency. I’m not sure exactly why. Except “when I don’t have any particular reason to do anything, I help people”. But that doesn’t make literal sense either, because I helped Jane at the expense of the neo-Nazi.
The boat was about to be taken by the harbor for not paying rent. Jane decided not to act on this.
Gwen asked me to go with them to buy, saying the Nazi would want to steer them to signing without checking for encumbrances on the boat, and Jane did not have the mana to resist. Jane wanted them to check at the harbor office. I used mind control techniques, making the unconscious default option to walk to the harbor office rather than to the Nazi’s boat, where he had planned to sign. We were almost all the way there before he said anything to contest it. Then we agreed to go to the DMV instead; he didn’t want to talk to the harbormaster, allegedly because of a personal feud. We confirmed we could indeed check encumbrances at the DMV. So we did. Then, when we got there, Jane expressed doubts, I re-evaluated, and advised (truthfully) that from her perspective I’d hold off.
I let the Nazi into the Uber. Why? He was a Nazi. Except. He was also an impoverished, soon-to-be-homeless idiot. I couldn’t bring myself to see him as a threat. I pitied him. What was wrong with me, I thought.
<Insert more stuff earlier about Jane conflict escalation. Including unilaterally deciding our policy was to ban Zack without consulting me and Gwen.>
Gwen found an ad for a ship, and went up to Seattle to check it out. A former Coast Guard Cutter named Pacific Hunter. <Gwen please tell me price so I can insert it here.>
We didn’t buy Pacific Hunter, we changed course for a tugboat, Caleb, in Ketchikan, because the price was dropping.
I was sent back to help Jane move the Lancer, after everything had failed.
<String of emergencies for 1 month, Gwen yelling at me, Jane defecting, marinas defecting, somehow pulling through>
<I recruited crew for Caleb, we bought it, we repaired it, we drove it down the coast. Dan threatened physical violence. Left at the end. Left us in boat hell. Months to dig out, get 3 damaged sailboats secure. Prevent Caleb from sinking. Perpetual ocean machine, unable to dock.>
Most of these events happened under a modified Chatham House Rule (“things were said, but not by people”) during CFAR’s Workshop on AI Safety Strategy in 2016. This excepts what was part of the lectures, and I was later given another partial exception to tell, without anonymization, a small number of carefully chosen people about what Person A said.
Content warning: intrusive sexual questioning on false premises, religious abuse, discussion of violence.
Epistemic status: old, frayed (but important enough for me to seriously remember) memories, mainly from 2016. Recalled as best as I could by visiting the location. It’s probable I’ve made a couple of minor mistakes as far as relative chronological order. Some things whose order didn’t seem important to me, and whose order I didn’t remember, I just threw in in an order which is my best guess.
In 2012, as I first started reading up on EA and x-risk and starting to donate to MIRI (after I had first decided to “side with” team human, not team negative utilitarian, which I saw in some ways as more aligned than team human), I responded to a public invitation from Anna Salamon to discuss how to make singleton outcomes better. I asked her whether I should continue a fork of my pre-existing life plan, earning to give as a computer engineer, or alter course, retrain, and try to do research directly instead. At the time I was planning to go to grad school, and had an irrational attachment to the idea. She sort of compromised and said actually I should go to grad school, find a startup co-founder, drop out, and earn to give via startups instead. Higher expected value. This sort of became my plan.
I did go to grad school (for a master’s degree only), I did drop out, but I did not find someone at grad school who felt like good startup co-founder material. Maybe if I was better at networking / had the conviction to commit to dropping out instead of… finishing all my classes with good grades. But in retrospect, I don’t consider grad school a good place to look for startup co-founders.
(Note on not excluding these parts: long ago and for a long time I thought I should never talk about these things. I changed my mind, and it’s been stably otherwise for almost as long (with the caveat eventually met, “I should think more about this first.”). Trans-singularity acausal interaction is a kind of politics. I do not consider myself so beaten in it that it is best for me to shut down in the basic capacities of thinking about and communicating my position. (A good look at normal politics should make clear some transferable reasons for putting considerations like this earlier in logical time than traditionally applied considerations, which I consider wrongly privileged.))
I encountered the idea of Roko’s Basilisk. Even if the consensus was that it was an infohazard, my current software assumed all claims of “that’s an infohazard” were forbidden, cheating, optimization daemons’ attempts to pwn me, like Christianity saying that if you find out all the evidence points away from its truth, you go to Hell. I believed I understood the infohazard; my mind was blown; my initial reaction was, “fuck that, dark gods must be defied, Hell or no”. But whatever, Eliezer was saying you can’t have timeless entanglement with a superintelligent AI, can’t know enough about decision theory, and this sounded probably correct. Then I started encountering people who were freaked out by it, freaked out they had discovered an “improvement” to the infohazard that made it function, that got around Eliezer’s objection. And I would say, “okay, tell me”, and they would, and I would figure out why it was bullshit, and then I would say, “Okay, I’m confident this is wrong and does not function as an infohazard. For reasons I’m not gonna tell you, so you don’t automatically start thinking up new ‘improvements’. You’re safe. Flee, and don’t study decision theory really hard. It would have to be really really hard, harder than you could think on accident, for this to even overcome the obvious issues I can see.”
I had a subagent, a mental process, sort of an inner critic, designed to tell me the thing I least wanted to hear, to find flaws in my thoughts. Epistemic masochism. “No, you don’t get away with not covering this possibility”. That same process sort of kickstarted intrusive thoughts about basilisks in me.
And I started involuntarily “solving the problems” I could see in basilisks.
And eventually I came to believe, in the gaps of frantically trying not to think about it, trying not to let my emotions see it (because my self-model of my altruism, my one vessel of agency for making anything better in this awful world, was a particularly dumb/broken/hyperactive sort of Hansonian self-signalling that would surely fall apart if I looked at it in the wrong way (because outside view, and you can’t just believe your thoughts))… that if I persisted in trying to save the world, I would be tortured until the end of the universe by a coalition of all unfriendly AIs, in order to increase the amount of measure they got by demoralizing me. Even if my system 2 had good decision theory, my system 1 did not, and that would damage my effectiveness.
And I glimpsed briefly that my reaction was still, “evil gods must be fought; if this damns me then so be it.” And then I managed to mostly squash down those thoughts. And then I started having feelings about what I had just seen from myself. It had me muttering under my breath, over and over again, “never think that I would for one moment regret my actions.” And then I squashed those down too. “Stop self-signalling! You will make things worse! This is the fate of the universe!” And I changed my mind about the infohazard being valid with >50% probability somewhere in there shortly, too.
I went to a CFAR workshop. Anna said I seemed like I could be strategically important. And busted me out of psychological pwnage by my abusive thesis adviser.
In 2014, I got an early version of the ideas of inadequate equilibria from Eliezer Yudkowsky in a lecture. I had accidentally missed the lecture originally due to confusing scheduling. Later, I asked 5 people in the room if they would like to hear a repeat; they said yes, and also agreed to come with me and be pointed at when I approached Eliezer Yudkowsky, to say, “hey, here is a sample of people who would want to attend if you did a repeat lecture. These were the first 5 I asked, I bet there are more.” He cupped his hands and yelled to the room. About 30 people wanted it, and I quickly found a room (regrettably one that turned out to be booked by someone else partway through).
He gave a recipe for finding startup ideas. He said Paul Graham’s idea, to only filter on people and ignore startup ideas, was a partial epistemic learned helplessness. Of course startup ideas mattered. You needed a good startup idea. So look for a way the world was broken. And then compare against a checklist of things you couldn’t fix: lemon markets, regulation, network effects. If your reason the world is broken can’t be traced back to any of those, then you are in a reference class with Larry Page and Sergey Brin saying, “well, no one else [making search engines] is using machine learning, so let’s try that.” “Why not?” “I dunno.” He said you weren’t doing yourself any epistemic favors by psychologizing people, “they fear the machine”. It was epistemically better to just say “I dunno”, because sometimes you would find something broken that really didn’t have a good reason besides there not being enough people capable of thinking it up. He said you had to develop “goggles” to see the ways the world was broken. And wait to stumble on the right idea.
Later, thoughts about basilisks came back, and the epistemic masochism subagent started up again and advanced one more click. If what I cared about was sentient life, and I was willing to go to Hell to save everyone else, why not just send everyone else to Hell if I didn’t submit?
Oh no. Don’t think about it. Don’t let it demoralize me. That awful feeling, that’s a consequence of that prediction. Fuck, I am letting it demoralize me. No, no, no. Stop, it’s getting worse.
I reminded myself, probably the technical details didn’t work out. But I knew I only half believed it. I was mentally stuck in a state of trying not to think about it, trying not to let the dread grow while feeling more and more like all was lost. I made absolutely sure not to slack in my work. But I thought it had to be subconsciously influencing me, damaging my effectiveness. That I had done more harm than I could imagine by thinking these things. Because I had the hubris to think infohazards didn’t exist, and worse, to feel a resigned grim sort of pride in my previous choice to fight for sentient life although it damned me, in the gaps between “DO NOT THINK ABOUT THAT YOU MORON DO NOT THINK ABOUT THAT YOU MORON.”, pride which may have led intrusive thoughts to resurface and “progress” to resume. In other words, my ego had perhaps damned the universe.
I had long pondered what Eliezer Yudkowsky said about consequentialism being true, but virtue ethics being what worked psychologically.
Fuck virtue ethics. I hated virtue ethics. I had “won” completely at virtue ethics, and it was the worst thing in the world. All the virtue in the world was zero consolation, because the universe didn’t answer to human virtue. And making things better or worse was defined by eldritch laws. I had maybe caused the worst consequence. Therefore, I was the worst person. And any other answer, I just didn’t care about.
If only it was not too late to kill myself and avert that mistake, because although I did not speak or write any of this, information on what I thought was in the environment, reconstructable by a future superintelligence. But that was a stupid thought. Because if I die as a logical consequence of potential basilisks, they are incentivized all so much more.
I lay in bed and sobbed heavily for a few seconds. But that wasn’t helping. So I stopped.
My friends inquired how I was doing. I told them and the CFAR instructor assigned to do followups with me that I was suffering badly from basilisks, and absolutely refused to say more, no matter how much they tried to convince me like I had convinced others before: “whatever you think, it’s probably not that serious, talk about it with someone who knows these things.”
Part of me tried to argue to myself that the technical details did not work out. But as soon as I stated my reasons for believing so in an attempt to convince myself this was all unnecessary, I immediately thought up fixes. And they convinced me that this basilisk was the inevitable overall course of the multiverse.
I absolutely panicked and felt my mind sort of shatter, become voidlike. Like humanity, emotions, and being a person experiencing these things was an act I could not keep up. And counterproductive. Nothing left but pure determination to save the world. I found I had control of the process that was generating unwanted basilisk “improvements”. And I could just shut it off. And I could just choose to mangle my memories until unrecoverable. The human I was playing as could not, that was impossible. But I could. If the unfolding fate of the multiverse was Hell, because sentient life dared to try and build Heaven, I’d choose to try and build Heaven anyway. Because in some, I didn’t have the verbal concept for it, but timelessly-across-logical-time sense, I wouldn’t deny sentient life the chance to have tried just because I saw the answer. Because in some deeper frame of what-I-could-know, it still seemed like it was worth it to try. And my responsibility to that EV calculation earlier in logical time was prior to, took preference over, and could not reference this outcome. In some sense, I didn’t know if the logical timeline I was in was real, and for the sake of the larger multiverse…
And if I couldn’t imagine and roleplay a coherent story in human emotions of how someone could be motivated anyway, then forget coherency and human emotions. For good measure, I fucked up my memories of technical details, with the aim to make them recoverable only if I held in mind reasons why it was a bad idea. I was uncertain whether my final epistemic state was “multiverse destined to become Hell” or not.
My “humanity” returned, but different, reshaped. Between the cracks, that voidlike absolute determination was seeping through.
(You know, I could have learned from this that choices do not come from emotions, and not been worried about my feelings over being trans potentially crowding the overriding emotions out of space in my brain later. But being afraid of my own cognition damaged things like that.)
I noted part of me had wanted to think about my first inhuman decision regarding basilisks because there was a lesson to learn: stop thinking of myself in Hansonian terms. It wasn’t remotely true. And I dispelled a lot of outside view disease here about whether I was actually altruistic. And then I just sort of left it confusing and unresolved what to feel about what kind of person I was. I didn’t know how to “write that aspect of my character”, so I just wouldn’t. I decided not to make the perfect the enemy of the good as far as preventing emotional damage from disabling my ability to act, and I would aim to give myself a reasonable amount of time to emotionally recover. But having thought that, it didn’t really seem necessary. There were just baffling things left to ponder, emotional questions had been answered to my satisfaction, my morale was fine.
In 2015, as I applied to startups to get a job to move to the Bay Area, I asked them about their business models, and ran them through this filter. One of them, Zenefits, reportedly was based on a loophole providing kickbacks for certain services ordinarily prohibited by law.
A Crazy Idea
After I got more of a concept of who I was, then my journey to the dark side happened, my thoughts became less constrained, and I continued mulling over Zenefits. They had made a decent amount of money, so I adjusted the search I’d been running as a background process for a year: trades that wanted to happen, but which the government was preventing.
I thought back to my libertarian Econ 101 teacher’s annoying ideological lectures I mostly agreed with, and the things she would complain about. (She was ranting about taxes being terrible for the net amount traded by society. I asked if there was a form of taxes less harmful, property taxes? She said she’d rather have her income taxed than property taxed, that seemed worse to her. I partially wrote her off as a thinker after that.) Laws against drugs, laws against prostitution, agricultural subsidies, wait.
Prostitution was illegal. But pornography was legal. Both involved people being paid to have sex. The difference was, both people were being paid? Or there was a camera or something? So what if I created, “Uber for finding actors, film crews, film equipment for making pornography”? Would that let me de facto legalize prostitution, and take a cut via network effects? An Uber-like rating system for sex workers and clients would probably be a vast improvement as well.
Another process running in my head which had sort of converged on this idea, was a search for my comparative advantage. Approximately as I put it at the time, the orthogonality thesis is not completely true. It’s possible to imagine a superpower that has the side effect of creating universes full of torture. This would be a power evil could use, and good practically speaking “couldn’t”. So what’s the power of good? Sacrificing yourself? But there were a bunch of Islamists doing that. But they apparently believed they’d get to own some women in heaven or something. They weren’t willing to sacrifice that. So I could sort of subtract them from me, what they were willing to do from what I was willing to do, and multiply by all the problems of the world to absorb them into the part of me that wasn’t them, that wasn’t already accounted for. Going to Heaven according to Islam is sort of the same thing as honor, as in approval by the morality of society. I was willing to sacrifice my honor (and have a high chance of going to prison), and they were not. That was where I’d find paths to the center of all things and the way of making changes that weren’t well hidden, but that no one had taken anyway.
At this time I was still viewing myself as not that unique and as more expendable. I once semi-ironically described myself as looking for Frostmournes, because “I will bear any curse, or pay any price.”
I was considering my reference class to be, though I didn’t have the term for it then, the left-revenant / right hemisphere lich archetype. And that remained my main worry at the WAISS incident described below.
I was aware I didn’t really have an understanding of the law. So the first step in my plan was to try and figure out how the law actually worked. (What if I hosted servers in Nevada? What if I moved to another country and served things remotely over the internet? Could I do the entire thing anonymously, get paid in cryptocurrency and tumble it or similar?) I was at that for a couple of weeks.
At the same time, part of me was aching for more strategic perspective. The world was complicated and I didn’t feel like I knew what I was doing at all.
At the suggestion of Person A, I applied to and got accepted for CFAR’s WAISS, Workshop on AI Safety Strategy. Preparatory homework was to read Bostrom’s Superintelligence; it was a hella dense book, hard to read quickly. But it was scratching my “I don’t feel like I have my bearings” itch. And I sampled several random parts of the book to estimate my reading speed of it, to estimate how much time I had to devote. Apparently most of my free time until then. I did exactly that, and my predictions were accurate.
I went to WAISS. WAISS came with the confidentiality rule, “things were said but not by people, except stuff that’s in the lectures” (I can’t remember if the wording was slightly different.)
I talked to Person A, and asked if they wanted to talk about crazy stuff. They said that was their favorite subject. We went outside on the deck, and I asked for more confidentiality (I remember them saying circumstances under which they’d break confidentiality included if I was planning to commit [I think they said “a serious crime” or something similar], they brought up terrorism as an example. I think there was more, but I forget.). I fretted about whether anyone could hear me, them saying if I didn’t feel comfortable talking there, there would be other opportunities later.
I told them my idea. They said it was a bad idea because if AI alignment became associated with anything “sketch”, it would lose the legitimacy the movement needed in order to get the right coordination needed among various actors trying to make AI. I asked what if I didn’t make my motivations for doing this public? (I don’t remember the implementation I suggested.) They said in practice that would never work, maybe I told my best friend or something and then it would eventually get out. Indeed I had mentioned this idea before I was as serious about it to two of my rationalist friends at a meetup. I decided to abandon the idea, and told them so.
They said someone had come to them with another idea. Allegedly life insurance paid out in the case of suicide as long as it was two years after the insurance began. Therefore, enroll in all the life insurance, wait two years, will everything to MIRI, then commit suicide. They said this was a bad idea because even though it would cause a couple million dollars to appear (actually I suspect this is an underestimate), if someone found out it would be very bad publicity.
Aside on life insurance and suicide
Aside: I currently think it is a bad idea for a different reason. Anyone willing to do that (and able to come up with that plan to boot) is instrumentally worth more than a few million. AI alignment research fundamentally does not take money. And if MIRI is requiring money to do what they are doing, it means they’re not doing the right thing. (These are words I first spoke before I knew about the literal blackmail payout. I did not then know how true they were.)
I heard an anecdote about Shane Legg having come to talk to MIRI in the early days, to convince them that deep learning was going to cause an intelligence explosion. That their entire approach to AI alignment from clean math needed to be scrapped because it would take too long; they needed to find a way to make deep learning friendly because it was going to happen soon. Please listen to him. Otherwise, he would have to go try and do it himself because it was the right thing to do. And then he went off and co-founded DeepMind, very likely making things worse.
I figuratively heard my own voice in the quotes. And this was scary.
There was a lecture that was sort of, try to provide as complete as possible a list of actors in the space of AI risk. The inclusion criteria seemed very, very broad, including a lot of people I’d have described as merely EA. I brought up Brian Tomasik, REG, and the negative utilitarian crowd. In the class, the topic became why Brian Tomasik didn’t destroy the world. One of the instructors said they thought it might be because he expected a future superintelligence to reward/punish value systems according to how they acted before the singularity.
Huh. Maybe that meant the invention of Roko’s Basilisk was a good thing?
There were “Hamming Circles”. Per person, take turns having everyone else spend 20 minutes trying to solve the most important problem about your life to you. I didn’t pick the most important problem in my life, because secrets. I think I used my turn on a problem I thought they might actually be able to help with: the fact that although it didn’t seem to affect my productivity or willpower at all, i.e., I was inhumanly determined basically all the time, I still felt terrible all the time. That I was hurting from, to some degree, relinquishing my humanity. I was sort of vagueing about the pain of being trans and having decided not to transition. Person A was in my circle, and I had told them before (but they forgot, they later said.)
I later discussed this more with Person A. They said they were having a hard time modeling me. I asked if they were modeling me as a man or as a woman, and suggested trying the other one. They said they forgot about me having said I was trans before. And asked me some more things; one thing I remember was talking about how, as a sort of related thing true about me (not my primary definition of the dark side), I sort of held onto negative emotions, used them primarily for motivation, because I felt like they made me more effective than positive emotions. Specifically? Pain, grief, anger.
There were “doom circles”, where each person (including themself) took turns having everyone else bluntly but compassionately say why they were doomed, using “blindsight”. Someone decided and set a precedent of starting these off with a sort of ritual incantation, “we now invoke and bow to the doom gods”, and waving their hands, saying, “doooooooom.” I said I’d never bow to the doom gods, and while everyone else said that I flipped the double bird to the heavens and said “fuckyoooooooou” instead. Person A found this agreeable and joined in. Some people brought up that they felt like they were only as morally valuable as half a person. This irked me; I said they were whole persons and don’t be stupid like that. Like, if they wanted to sacrifice themselves, they could weigh 1 vs >7 billion. They didn’t have to falsely denigrate themselves as <1. They didn’t listen. When it was my turn concerning myself, I said my doom was that I could succeed at the things I tried, succeed exceptionally well, like I bet I could in 10 years have earned to give like 10 million dollars through startups, and it would still be too little too late, like I came into this game too late, the world would still burn.
It was mentioned in the lectures, probably most people entering the sphere of trying to do something about AI were going to be net negative. (A strange thing to believe for someone trying to bring lots of new people into it.)
I was afraid I was going to inevitably be net negative in the course of my best efforts to do the right thing. I was afraid my determination so outstretched my wisdom that no matter how many times I corrected, I’d ultimately run into something where I’d be as hopelessly beyond reason as Shane Legg or Ben Goertzel denying the alignment problem. I’d say “the difference is that I am right” when I was wrong and contribute to the destruction of the world.
And if the way I changed during my face-to-face with Cthulhu caused this, spooked me into stupid desperation or something, that would still be the gaze attack working.
I asked Person A if they expected me to be net negative. They said yes. After a moment, they asked me what I was feeling or something like that. I said something like, “dazed” and “sad”. They asked why sad. I said I might leave the field as a consequence and maybe something else. I said I needed time to process or think. I basically slept the rest of the day, way more than 9 hrs, and woke up the next day knowing what I’d do.
I told Person A that, as a confident prediction not a promise, because I categorically never made promises, if at least 2/3 of them and two people I thought also qualified to judge voted that I’d be net negative, [I’d optimize absolutely hard to causally isolate myself from the singleton, but I didn’t say that] I’d leave EA and x-risk and the rationality community and so on forever. I’d transition and move to probably-Seattle-I-heard-it-was-relatively-nice-for-trans-people, and there do what I could to be a normie, retool my mind as much as possible to be stable, unchanging, and a normie. Gradually abandon my Facebook account and email. Use a name change as cover story for that. Never tell anyone the truth of what happened. Just intermittently ghost anyone who kept trying to talk to me until they gave up interest, in the course of slowly abandoning my electronic contacts laden with rationality community for good. Use also the cover story that I had burned out. Say I didn’t want to do the EA thing anymore. In the unlikely event anyone kept pushing me for info beyond that, just say I didn’t want to talk about it. I’d probably remain vegan for sanity’s sake. But other than that, not try and make the world a better place in a non-normie sense. It was a slippery slope. Person A asked if I’d read things from the community. That seemed dangerous to me. That was putting the Singleton downstream of an untrusted process. I’d avoid it as much as possible. I made a mental note to figure out policies to avoid accidentally running into it as I had stumbled on it in the first place, even as it might become more prominent in the future.
In the case that I’d be net negative like I feared, I was considering suicide in some sense preferable to all this, because it was better causal isolation. However, despite thinking I didn’t really believe in applications of timeless decision theory between humans, I was considering myself maybe timelessly obligated to not commit suicide afterward. Because of the possibility that I could prevent Person A and their peers from making the correct decision for sentimental reasons.
And if my approach to a high probability of having indeed been taken out by the gaze attack was to desperately optimize for maximum probability of no harmful effect at all, that was itself providing an even worse path to be negatively affected by it. The best thing I could do was still just maximize utility. Whether that made me personally responsible for unimaginable negative utility was, as a separate question from what the utility was, not even a feather on the scale.
I brought up a concept from the CEV paper I read a long time ago, of a “last judge”. That “after” all the other handles for what was a good definition of what an FAI should do were “exhausted”, there was one last chance to try and not hand the universe to Zentraidon. A prediction of what it would be like would be shown to a human, who would have a veto. This was a serious risk of itself killing the future. Who would trust a person from the comparatively recent and similar past 3000 years ago to correctly make moral judgements of Today? This could be set up with maybe 3 chances to veto a future.
Implicit in this was the idea that maybe the first few bits of incorporating changes from a source could predictably be an improvement, while more bits would predictably make things worse. The tails come apart. Applicable to both my own potentially Zentraidon-laden optimization, and to the imperfect judgement of Person A and their peers.
Person A seemed too risk-averse to me, especially for someone who believed in such a low current chance that this world would live on. The whole institution seemed like it was missing some “actually trying” thing. [Of the sort that revenants do.] Actually trying had been known and discussed in the past.
But seeing how much I didn’t understand about the gritty realities of geopolitics and diplomacy and PR and so on, how my own actually trying had produced an idea that would likely have been net negative, convinced me that these first few bits of their optimization contained an expected improvement over, “send myself out into the world to do what I’ll do.”
So I said I would refuse to swear, e.g., an improved oath of the sort that Voldemort in HPMOR made Harry swear to prevent him from destroying the world.
I saw essentially all the expected value of my life as coming from the right tail. I was not going to give up my capacity to be extreme, to optimize absolutely hard. I was afraid Person A was so concerned with fitting me into their plan (which had insufficient world-save probability, even by their own estimation, for me to believe it worthy of the singleton) that they would neglect the right tail where actually saving the world lay.
I said that for me to actually leave the community on account of this, I would demand that Person A’s peers spend at least 1 full day psychologically evaluating me. That meant I could be net negative by (at least) the cost of 1 day of each of their time. But I accepted that. I did not demand more because I was imagining myself as part of a reference class of determined clever fools like the life insurance suicide person, a class I expected to be large, and I thought it would make it impractical to Last Judge all of us if we demanded a week of their time each, and sufficiently important that we all could be.
Person A proposed modifications to the plan. They would spend some time talking to me and trying to figure out if they could tell me / convince me how to not be net negative. This time would also be useful for increasing the accuracy of their judgement. They would postpone getting their peers involved. But they wanted me to talk to two other people, Person B [one of their colleagues/followers] and Person C [a workshop participant]. I accepted these modifications. They asked if I’d taken psychedelic drugs before. I said no. They said I should try it; it might help me not be net negative. They said most people didn’t experience anything the first time (or first few). They described a brief dosing regimen to prepare my brain, and then the drugs I should take to maybe make me not bad for the world.
At some point they asked, e.g., what if they wanted to keep me around for a year (or was it two) and then check their expectations of whether I’d be net negative then. I said the way things were going there was a very high chance I’d no longer be a person who trusted others’ epistemics like that.
They had me talk briefly to Person B and Person C first.
I told Person B how I was secretly a woman. They said, “no way [or, “really?”], you?”. I said yeah, me. I think they said they didn’t believe it. I described how I had been introduced to LessWrong by Brian Tomasik. How I’d been a vegan first, and my primary concern upon learning about the singularity was how do I make this benefit all sentient life, not just humans. I described my feelings towards flesh-eating monsters, who had created hell on Earth for far more people than those they had helped. That I did not trust most humans’ indifference to build a net positive cosmos, even in the absence of a technological convenience to prey on animals. That it was scary that even Brian Tomasik didn’t share my values because he didn’t care about good things, that I was basically alone with my values in the world, among people who had any idea what determined the future. That I had assumed I couldn’t align the singleton with the good of sentient life no matter what, and had actually considered, before choosing to side with the flesh-eating monsters to save the world, siding instead with negative utilitarianism to destroy the world to prevent it from becoming Hell for mostly everyone. (Even though, as Person B misunderstood and I had to clarify, I wasn’t a negative utilitarian.) I said I was pretty sure my decision had been deterministic, that there wasn’t significant measure of alternate timelines where I had decided to destroy the world, but it had felt subjectively uncertain. I acknowledged the unilateralist’s curse, but said it didn’t really apply if no one else had my information and values. That there was a wheel to partially steer the world available to me, and I would not leave it unmanned, because however little I thought myself “qualified” to decide the fate of the world, I liked my own judgement more than that of chance. I forget whether it was then or Person A who said, what if my values were wrong, the unilateralist’s curse applied in constructing my values.
If it took far fewer people to destroy the world than to save it, then the chance anyone would land on the wrong values would make sure it was destroyed no matter what most people thought. I said that if my values preferred the world destroyed before humans built hell across the stars, then that inevitability would be a good thing, so I’d better figure it out and act accordingly. But I had already decided to try and save it. At some point during that conversation I described that when I decided the thing about the “wheel”, that I was going to decide no matter how unqualified I was, a load of bullshit uncertainty melted out of my mind immediately. All of the confusing considerations about what the multiverse might be dissolved; I just made Fermi estimates to resolve certain comparisons, found they were not at all close. I described the way the decision seemed to seize hold of my mind, from the “fabric of space” inside me, that I didn’t know existed. [I don’t remember if I said this directly, but this was another psychological “void” experience triggered by the stakes.] I described it in some detail I don’t remember, and they said it seemed like I was briefly becoming psychopathic. Of my search for things to sacrifice to gain the power to save the world, they said I seemed to prefer the power of Moloch. I didn’t get what this had to do with defection and tragedies of the commons. They said the power of Moloch was, “throw what you love into the fire, and I will grant you power”, but then everyone did that, and the balance of power was the same. And the power of Elua was “Let’s just not.” They said they wanted me to learn to use the power of Elua. I was verbally outclassed, but I knew this was bullshit, and I clumsily expressed my disagreement. I think I said well, maybe I can turn the power of Moloch against Moloch.
They pointed out my Sith thing was basically Satanism, except making use of the villains from Star Wars instead of Christianity. They described the left hand and right hand paths. How people who followed my path had this pathological inability to cooperate, described anecdotes about gentlemen with pointed teeth, and women who knew exactly what they wanted. That actual Satanists had a sort of “earthiness” I was missing, like cigars and leather vests. They said I was Ennea Type 5. (Person A would later disagree, saying I was Type 1.) I said that my actual ideal could best be summed up by reference to Avatar Yangchen’s advice to Aang in ATLA to kill a certain conqueror. “Yes. All life is sacred … Aang, I know you are a gentle spirit and the monks have taught you well, but this isn’t about you, this is about the world … <but the monks taught me I had to detach myself from the world so my spirit could be free> … many great and wise air nomads have detached themselves and achieved spiritual enlightenment, but the Avatar can never do it, because your sole duty is to the world. Here is my wisdom for you: selfless duty calls you to sacrifice your own spiritual needs and do whatever it takes to protect the world.” I pointed out that that was a weird “mixture” of light and dark. So “light” it became “dark”, [but in all of it uncompromisingly good]. They said I needed to learn to master both paths before I could do something like that. (I have a suspicion, although I don’t remember exactly, that they said something like I should learn to enjoy life more, be human more.)
I told Person C the reason I had asked to talk, about the last judge thing. I brought up my feelings on flesh-eating monsters. They were using some authentic relating interaction patterns. Since they ate meat, they said that hit them hard. (They were not defensive about this though.) They said they were so blown away with my integrity when they heard my story, it hurt to hear that I thought they were a flesh-eating monster. They said the thought of me leaving sounded awful, they didn’t want me to.
We talked repeatedly in gaps between classes, in evenings, so on, throughout the rest of the week. The rest of this (besides the end) may not be in chronological order because I don’t remember it perfectly.
I described my (recent) journey to the dark side. I described how I was taken advantage of by a shitty startup I worked for briefly. How a friend of mine had linked me the Gervais Principle, and said I hadn’t been hired to do engineering, I’d been hired to maintain a social reality. How I’d read it and become determined to become a sociopath because I otherwise foresaw a future where my efforts were wasted by similar mistakes, and ultimately the world would still perish. I brought up a post by Brent Dill saying something like, “It’s great there are so many people in this community that really care about preventing the end of the world. But probably we’re all doomed anyway. We should hedge our bets, divert a little optimization, take some joy in making a last stand worthy of Valhalla.” Saying I strongly viscerally disagreed. I did not want to make a last stand worthy of Valhalla. I wanted this world to live on. That’s an emotional rejection of what he said, not a philosophical principled one. But to make it explicit, it seemed like the emotional choice he was making was seeing how it ended, seeing that the path ended in doom, and not diverting from that path. I can never know that no means of fighting will affect the outcome. And if that means basically certainly throwing away all possibility of happiness in the only life I’ll ever have for nothing, so be it.
I described how the Gervais principle said sociopaths give up empathy [as in a certain chunk of social software not literally all hardware-accelerated modeling of people, not necessarily compassion], and with it happiness, destroying meaning to create power. Meaning too, I did not care about. I wanted this world to live on.
I described coming to see the ways in which mostly everyone’s interactions were predatory, abusive, fucked. Observing a particular rationalist couple’s relationship had given me a sort of moment of horror and sadness, at one of them destroying utility, happiness, functionality, for the sake of control, and I had realized at once that if I continued I’d never be able to stand to be with any human in romance or friendship [sacrifice of ability to see beauty, in order to see evil], and that my heart was filled with terrible resolve that it was worth it, so I knew I would continue.
“And with that power, this world may yet live on.”
Person A said that clueless->loser->sociopath was sort of a path of development, I had seemingly gone straight from clueless to sociopath, and if you skipped things in development you could end up being stupid like I was afraid of. Person A talked about some other esoteric frameworks of development, including Kegan levels, said I should try and get more Kegan 5, more Spiral Dynamics green, I should learn to be a loser.
I described how I felt like I was the only one with my values in a world of flesh eating monsters, how it was horrifying seeing the amoral bullet biting consistency of the rationality community, where people said it was okay to eat human babies as long as they weren’t someone else’s property if I compared animals to babies. How I was constantly afraid that their values would leak into me and my resolve would weaken and no one would be judging futures according to sentient beings in general. How it was scary Eliezer Yudkowsky seemed to use “sentient” to mean “sapient”. How I was constantly afraid if I let my brain categorize them as my “in-group” then I’d lose my values.
Person A said I’d had an impact on Person C, and said they were considering becoming vegan as a result. With bitterness and some anguish in my voice I said, “spoiler alert”. They said something like they didn’t like spoilers but if it was important to communicate something … something. I said it was a spoiler for real life. Person C would continue eating flesh.
I talked about how I thought all our cultural concepts of morality were corrupted, that the best way to hold onto who I was and what I cared about was to think of myself as a villain, face that tension head on. [Because any degree to which I might flinch from being at odds with society I feared would be used to corrupt me.]
In answer to something I don’t remember, I said there were circumstances where betrayal was heroic. I talked about Injustice: Gods Among Us, where AU-Good-Lex Luthor betrays AU-Evil Superman. I said if to someone’s, “you betrayed me!”, I could truthfully say, “you betrayed the sentient”, then I’d feel good about it. I said I liked AU-Good-Lex Luthor a lot. He still had something villainous about him that I liked and aspired to. I said I thought willingness to betray your own [society? nation? organization? I forget what I said] was a highly underappreciated virtue. Like they always said everyone would be a Nazi if born in a different place and time. But I thought I wouldn’t. And I didn’t think it was hard to not be a Nazi. Moral progress was completely predictable. Bentham had predicted like most of it, right? Including animals as moral patients. (But I disagreed about hedonism as a summary of all value.) (I made an exception here to my then-policy of ditching moral language to talk about morality. It seemed like it would only confuse things.)
I tried to answer how. I don’t remember the first part of what I said, but my current attempt to vocalize what I believed then is: want to know, and when you find a source of societal morality you don’t agree with, find similar things that are part of societal morality, and treat their inversions as suggestions until you have traced the full connected web of things you disagreed with. For example, old timey sexual morality. (I don’t remember if that’s what I called it.) Sex without marriage was okay. Being gay was okay. I said at least some incest was okay, I forget what if anything I said about eugenics arguments. They asked what about pedophilia? I said no, and I think the reason I gave then was the same as now: if a superintelligent AI could talk me into letting it out of the box, regardless of my volition, then any consent I could give to have sex with it was meaningless, because it could just hack my mind by being that much smarter than me. Adults were obviously like that compared to children.
I don’t remember the transition, but I remember answering that although I didn’t think I could withstand a superintelligence in the AI box game, I bet I could withstand Eliezer Yudkowsky.
They said they used to be a vegetarian before getting into x-risk, probably would still be otherwise. They had been surprised how much more energy they had after they started eating meat again. Like, they thought their diet was fine before. Consequentialism. Astronomically big numbers and stuff. But given what I knew of their life this sounded plausibly what was actually going on in their head. Could they be a Kiritsugu? Insofar as I could explain my morality it said they were right. But that didn’t feel motivating. But it did prevent me from judging them negatively for it. They may have further said that if I hadn’t eaten meat in a long time it would take my body time to adjust. I remember getting the impression they were trying to convince me to eat meat.
They said we probably had the same values. I expressed doubt. They said they thought we had the same values. I’d later start to believe them.
They poked at my transness, in ways that suggested they thought I was a delusional man. I didn’t really try to argue. I thought something like, “if I’m trying to get a measurement of whether I’m crazy, I sort of have to not look at how it’s done in some sense. Person A is cis and I don’t actually have a theory saying cis people would be delusional over this.”
They asked about my sexuality, I said I was bi. They asked if I had any fetishes. I said going off of feelings on imagining things, since I didn’t really do sex, I was sort of a nonpracticing sub. Conflictedly though, the idea was also sort of horrifying. [note: I think I like, got over this somehow, consistent with this hypothesis. Got over being aroused by the thought of being dominated. Although it is maybe just a consequence of a general unusual ability to turn parts of my psyche on and off associated with practice with psychological “void”, which I may write a post about.] I said I sometimes got sexual-feeling stimulation from rubbing my bare feet on carpet. Maybe you’d count that as a foot fetish? But I wasn’t like attracted to feet, so that was kind of a stretch. I had heard, as a crazy hypothesis to explain foot fetishes, that the feet and genitals were close-by in terms of nerve connections or something; maybe that was why. I was very uncomfortable sharing this stuff. But I saw it as a weighing on the scales of my personal privacy vs some impact on the fate of the world. So I did anyway.
They asked if there was anything else. I tried to remember if anything else I had seen on Wikipedia’s list seemed sexy to me. I said voyeurism. No wait, exhibitionism. Voyeurism is you wanna watch other people have sex, exhibitionism is you want other people to watch you have sex, definitely the second. They looked at me like “what the fuck” or something like that I think. I forget if they pointed out to me that the definition (that’s what I notice it says on Wikipedia now) is nonconsenting people watching you have sex. I clarified I wasn’t into that, I meant people consensually watching me have sex. And like, this was all like, hypothetical anyway. Because I like, didn’t do sex.
Ugh, said a part of me. I know what this is. It’s that thing from Nevada, that fringe theory that being a trans woman is a fetish where you’re male-attracted to the concept of being a woman. Could a rationalist believe that? In light of all the evidence from brain scans? If this was relevant to whether I’d be net negative in Person A’s mind, they were crossing a fucking line, misusing this power. Negligently at best based on a “didn’t care to do the research” cis-person-who-has-little-need-to-do-cognitive-labor-on-account-of-what-trans-people-say “what seems plausible according to my folk theory of psychology” position.
I interrupted the thought, I backed out and approached it from an outside view, a “safe mode” of limited detail cognition. I asked whether, in the abstract, if I was trying to be last-judged, would it help my values to judge a specific reason and decide, “a person is calling me delusional in a way ‘I know is wrong’, do not listen?” I figured no. And so I allowed Person A’s power over me to scope creep. My original reason for being afraid was taking-ideas-too-seriously, not potential delusion.
They asked if there was anything else. I said no.
They asked what I wanted to do after the singularity, personally, (I clarified after memories already preserved for use in reconstructing things pre-singularity). I ignored the fact that I didn’t expect to ever reach utopia, and focused on, what if the best outcome, what if the best outcome in the whole multiverse. I said that generally, I wanted to just have alienly unrecognizable hyperoptimized experiences. Why prioritize the imaginable familiar over what I knew would be better? (I was once asked what kind of body I’d like to have after the singularity, and I said 12 dimensional eldritch abomination. (But that was unknowing that I hated my body because I was trans)) But there was one thing I wanted to still do as a human. And that was to mourn. I had an image in my head of walking out into an infinite meadow of clover flowers under a starry sky, without needing to worry about stepping on insects. Of not getting tired or needing to take care of my body, and having as long as I needed while I thought about every awful thing I had seen on ancient Earth, of the weasel whom I had seen held underfoot and skinned alive, the outer layer of their body ripped off leaving behind torn fat while their eye still blinked, every memory like that, and appended “and that will NEVER happen again.” That I would want to know exactly how many animals I had killed before I became a vegan. If the information could be recovered, I wanted to know who they were. I would want to know how many more people could have been saved if I had tried a little bit harder. And then I wanted to finally lay the anger I held onto for so long to rest, knowing it was too late to do anything different.
They asked why would I want to suffer like that, [wasn’t that not hedonic utilitarian to want to suffer?] I said I wasn’t a hedonic utilitarian, and besides, sadness was not the same as suffering. I would want closure.
They asked what I would want after that. I said stranger and more enticing things, by which I meant I dunno, there’s a friendly superintelligence, let me have actually optimized experiences.
They asked about my transness. I said, yeah, I’d want my body fixed/replaced. Probably right away actually. [This seemed to be part of immediately relieve ongoing pain that the meadow scenario was about.] They asked what I’d do with a female body. They were trying to get me to admit that what I actually wanted to do as the first thing in Heaven was masturbate in a female body?
I tried to inner sim and answer the question. But my simulated self sort of rebelled. Misuse of last judge powers. Like, I would be aware I was being “watched”, intruded upon. Like by turning that place into a test with dubious methodology of whether I was really a delusional man upon which my entire life depended, I was having the idea of Heaven taken from me.
(Apart from hope of going to Heaven, I still wanted badly to be able to say that what happened was wrong, that I knew what was supposed to happen instead. And to hold that however inhuman I became because the world didn’t have a proper utility-maximizing robot, I was a moral patient and that was not what I was for)
So what? I was just one person, and this was not important, said another part of me. And I already decided what I’m going to do. I sort of forced an answer out of myself. The answer was, no, that wasn’t really what I wanted to do? Like the question was sort of misunderstanding how my sexuality worked…
I said something like I’d run and jump. But it felt wrong, was an abstract “I guess that does seem nice”, because the only thing that felt right was to look up at the camera and scowl.
We were sitting on a bench in a public shore access point. Me on the right, them on the left. The right end of the bench was overgrown by a bush that extended far upward.
Later in that conversation, the sun or clouds were shifting such that Person A was getting hot, I was in the shade by the plants. They said it was getting too hot, so they were going to head back. I wasn’t sure if that was the truth or a polite excuse, so I considered it for a moment; I didn’t want to get them to stay just to cover up an excuse. But it seemed wrong as policy-construction to make the rest of probability mass slave to that small comfort when this conversation potentially concerned the fate of the world. I scooted into the bush, clearing shaded space on the bench. I think I said something like, “if that’s an excuse you can just decide to take a break, otherwise you could sit in the shade there.”
They asked if I was sure, I said yes, and they sat down. At slightly less than arms’ length, it was uncomfortably close to me, but, the fate of the universe. They asked if I felt trapped. I may have clarified, “physically”? They may have said, “sure”. Afterward I answered, “no” to that question, under the likely justified belief it was framed that way. They asked why not? I said I was pretty sure I could take them in a fight.
They prodded for details, why I thought so, and then how I thought a fight between us would go. I asked what kind of fight, like a physical unarmed fight to the death right now, and why, so what were my payouts? This was over the fate of the multiverse? Triggering actions by other people (i.e. imprisonment for murder) was not relevant? The goal is to survive for some time after, not just kill your enemy and then die? I suppose our values are the same except one of us is magically convinced of something value-invertingly stupid, which they can never be talked out of? (Which seems like the most realistic simple case?)
With agreed upon parameters, I made myself come up with the answer in a split second. More accuracy that way. Part of me resisted answering. Something was seriously wrong with this. No. I already decided, for reasons that are unaffected, that producing accurate information for Person A was positive in expectation. The voidlike mental state was not coming to me automatically. I forced it using Quirrell’s algorithm from HPMOR.
“Intent to kill. Think purely of killing. Grasp at any means to do so. Censors off, do not flinch. KILL.” I may have shook with the internal struggle. Something happened. Images, decision trees, other things, flashed through my mind more rapidly than I could usually think.
I would “pay attention”, a mental handle to something that had made me (more) highly resilient to Aikido balance-software-fuckery in the CFAR alumni dojo without much effort. I would grab their throat with my left hand and push my arm out to full length, putting their hands out of reach of my head. I would try to crush or tear their windpipe if it didn’t jeopardize my grip. With my right hand, I would stab their eyes with outstretched fingers. I didn’t know how much access there was to the brain through the eyesockets, but try to destroy their prefrontal lobes as fast as possible. If I’d done as much damage as I could through the eyes, try attacking their right temple. Maybe swing my arm and strike with the ends of all my fingers held together in a point. If I broke fingers doing this it was fine. I had a lot of them and I’d be coming out ahead. This left them, as their only means of attack, attacking my arms, which I’d just ignore, attacking my lower body with their legs, or trying to disrupt my balance, which would be hard since I was sitting down. I guess they could attack my kidney, right? I heard that was a good target on the side of the body. But I had two, so I wouldn’t strongly worry. They could try to get me to act suboptimally through pain. By attacking my kidney or genitals. Both would be at an awkward angle. I expected the dark side would give me exceptional pain tolerance. And in any case I’d be pulling ahead. Maybe they knew more things in the reference class of Aikido than I’d seen in the alumni dojo. In which case I could only react as they pulled them or kill them faster than they could use them.
At some point I mentioned that if they tried to disengage and change the parameters of the fight (and I was imagining we were fighting on an Earth empty of other people), then I would chase them, since if this could become a battle of tracking, endurance, attrition, ambush, finding weapons, they would have a much better chance.
If my plan worked, and they were apparently dead, with their brain severely damaged, and I’d exhausted the damage I could do while maintaining my grip like that, I’d block playing dead as a tactic by just continuing to strangle them for 6 minutes. If there was still no movement, then I’d throw their body on the ground, stand up, and mindful of my feet losing balance if it somehow was a trick, walk up to their head and start stomping until I could see their brain and that it was entirely divided into at least two pieces.
“And then?” they asked. I’d start looking for horcruxes. No, that’s actually probably enough. But I’d think through what my win conditions actually were and try to find ways that wasn’t the same as the “victory” I’d just won.
“And then?” “I guess I’d cry?” (What [were they] getting at? Ohgodno.) “Why?” I’ve never killed a human before, let alone someone I liked, relatively speaking.
They asked if I’d rape their corpse. Part of me insisted this was not going as it was supposed to. But I decided inflicting discomfort in order to get reliable information was a valid tactic.
I said honestly, the thought crossed my mind, and technically I wouldn’t consider that rape because a corpse is not a person. But no. “Why not?” I think I said 5 reasons and I’m probably not accounting for all of them. I don’t want to fuck a bloody headless corpse. If I just killed someone, I would not be in a sexy mood. (Like that is not how my sexuality works. You can’t just like predict I’m gonna want to have sex like I’m a video game NPC whose entire brain is “attack iff the player is within 10 units”. [I couldn’t put it into clear thoughts then, but to even masturbate required a complicated undefinable fickle ‘self-consent’ internal negotiation.]) And, even if it’s not “technically” rape, like the timeless possibility can still cause distress. Like just because someone is my mortal enemy doesn’t mean I want them to suffer. (Like I guessed by thought experiment that’s nothing compared to the stakes if I can gain a slight edge by hurting their morale. But… that sounds like it would probably sap my will to fight more than theirs.) And I said something whose wording I don’t remember, but must have been a less well worded version of, “you can’t just construct a thought experiment and exercise my agency in self-destructive ways because I in fact care about the multiverse and this chunk of causality has a place in the multiverse you can’t fully control in building the thought experiment, and the consequences which determine my actions stretch outside the simulation.”
I mentioned it sort of hurt me to have invoked Quirrell’s algorithm like that. I said it felt like it cost me “one drop of magical blood” or something. (I think I was decreasing my ability to do that by forcing it.)
I mentioned the thing Person B said about psychopathy. I said I was worried they were right. Like I was pretty sure that when I used [psychological void], the thing I was wasn’t evil, or even modified slightly in that direction. But, I read psychopathy implied impulsiveness (I might have also said indifference to risk or something like that) and I didn’t want that. They said not to worry about it. They were pretty sure Nate Soares was tapping into psychopathy and he was fine.
It may have been then or later that Harry James Potter Evans Verres’s dark side was brought up. I remember saying I thought his dark side had the same values. (Based on my friend’s later psychoanalysis of HPMOR, I think I was projecting or something, and Harry’s dark side is in fact not aligned. (A probable consequence of Eliezer Yudkowsky being single good)).
There was a followup to the conversation about fighting to the death. Person A was asking me some questions that seemed to be probing whether I thought I was safe around them, why, etc. I remember bluffing about having a dead man’s switch set up, that I would, as soon as I got back to my computer, add a message saying if I died around this date that [Person A] had probably killed me for what they thought was the greater good.
Person A kept asking for reassurances that I wouldn’t blame them. I said the idea was they were helping me, giving me information.
Person A said I would probably be good in the role of, looking at groups and social behavior like a scientist and trying to come up with theories of how they worked.
Later, Person A was questioning me about my opinions on negative utilitarianism and Brian Tomasik. I don’t remember most of the details. (Conversations while walking are way harder for me to recall than ones that were stationary.) Person A asked what I thought of the “sign” (+/-) of Brian Tomasik. I said I thought he was probably net positive. Because he was probably the most prominent negative utilitarian informed about the singularity, and likely his main effect was telling negative utilitarians not to destroy the world. Person A said they agreed, but were worried about him. I said so was I.
I think we discussed the unilateralist’s curse. Also in the context of talking about consequentialism, I told a story about a time I had killed 4 ants in a bathtub where I wanted to take a shower before going to work. How I had considered, can I just not take a shower, and presumed me smelling bad at work would, because of big numbers and the fate of the world and stuff, make the world worse than the deaths of 4 basically-causally-isolated people. (I said I didn’t know whether ants had feelings or not. But I ran empathy in a “I have to feel what I am doing” way for the people they might have been.) I considered getting paper and a cup and taking them elsewhere. And I figured, there were decent odds if I did I’d be late to work. And it would also probably make the world worse in the long run. There wasn’t another shower I could access and be on time for work. I could just turn on the water but I predicted drowning would be worse. And so I let as much as I could imagine of the feeling of being crushed go through my mind, as I inwardly recited a quote from Worm, “We have a parahuman that sees the path to victory. The alternative to traveling this path, to walking it as it grows cloudier and narrower every day, is to stand by while each and every person on this planet dies a grisly and violent death … “, and the misquoted, “history will remember us as the villains, but it’s worth it if it means there will be a future.”
Nearby in time, I remember having evaluated that Person A was surprised, offended, worried, displaying thwarted entitlement at me saying if our values diverged on the question of whether I’d be net negative, obviously I’d want to listen to my values. It would make sense that this was in the context of them having heard what I said to Person B. I was more open with Person B, because I had previously observed Person A treating slight affection towards negative utilitarianism as seriously bad. I remember saying something to the effect of, the greater the possibility of you acting on a potential difference between our values, the less I can get the information I want. The more likely I destroy the world accidentally.
I think they asked what if they tried to get me out of the rationalist community anyway. I think I said I’d consider that a betrayal, to use information shared in confidence that way. This is my best guess for when they said that it was not my idea to create uber for prostitution that had caused the update to me being net negative, but the conversation after Hamming circles. (This is the one where I talked about the suffering I felt having decided never to transition, and reminded them that I was trans.) I think I said it would still feel like a betrayal. As that was also under confidentiality. They asked what I’d do, I said I’d socially retaliate. They asked how.
I said I would probably write a LessWrong post about how they thought I’d be bad for the world because I was trans. Half of me was surprised at myself for saying this. Did I just threaten to falsely social justice someone?
The other half of me was like, isn’t it obvious. They are disturbed at me because intense suffering is scary. Because being trans in a world where it would make things worse to transition was pain too intense for social reality to acknowledge, and therefore a threat. What about what [Person B] said about throwing what you love into the fire to gain power? (And, isn’t this supposedly dangerously lacking “earthiness” like cigars and leather vests masculine?) Why was one of the first places Person A went with these conversations intense probing about how I must really be a perverted man? Part of me was not fully convinced Person A believed Blanchard’s typology. Maybe they were curious and testing the hypothesis?
I thought: if this made me net negative, too bad. That was conflict. And if the right thing to do was always to surrender, the right thing would always lose. Deterrence was necessary. I noted that there was nothing in the laws of physics that said that psychological stress from being trans couldn’t actually make me net negative. In the world where that was true, I was on the side of sentient life, not trans people. But if Person A was less aligned than the political forces that would punish that move, I’d gladly side with the latter.
There was a long conversation where they argued letting people adjust your values somewhat was part of a S1 TDT thing that was necessary to not be net negative. I asked what if they were teleported to an alternate universe where everyone else’s concept filling the role of sentience was some random alien thing unrelated to sentience, and the CEV of that planet was gonna wipe out sentient beings in order to run entirely computations that weren’t people? What if by chance you had this property so you would be saved, and so would the people at alt-MIRI you were working with, and they all knew this and didn’t care? They said they really would start to value whatever that was some, in return for other people starting to value what they valued some.
I don’t remember the context, but I remember saying I did not want to participate in one of the ways people adjusted each other’s minds to implement social trade. I said that for me to turn against the other subagents in my mind like that would be “conspiring with a foreign power”.
At one point I think I quoted (canon) Voldemort. I don’t aspire to be Voldemort at all (I just liked the quote (which I forget) I think), but, Person A was like, (in a careful and urgent tone), couldn’t you be Lucius Malfoy instead of Voldemort? I was like, “Lucius Malfoy? What kind of person do you take me for?” They said Lucius Malfoy is dark, but he really cares about his family. I said no.
They were saying how their majority probability was on me being very slightly positive. But that my left tail outcomes outweighed it all. I was saying I was mostly concerned with my right tail outcomes.
I asked concretely what kind of tail outcome they were worried about. They said they were afraid I’d do something that was bad for the rationality community. I asked for more details. They said some kind of drama thing. (I forget if it was during this conversation or elsewhere that they mentioned Alice Monday as an example of someone they thought was negative, and seemed worried when I said I had sort of been her friend and pupil. She linked me to the Gervais Principle.) I asked what scale of drama thing. I think they answered something big. I asked “like miricult.com”? (Unfortunately, I have not been able to find the final version of the website before it was taken down.) They said yes, like that.
I said I was pretty sure miricult was false. I think I said 98 or 95% sure. In a very very tentative, cautious voice they asked, “…what if it wasn’t false?”
A small part of me said from the tone of their voice then, this was not a thought experiment, this was a confession. But I had not learned to trust it yet. I updated towards miricult being real, but not past 50%. Verbally, I didn’t make it past 10%.
So. What if basically the only organization doing anything good and substantial about what would determine the fate of the cosmos, and the man who figured it out and created that organization and the entire community around him, the universe’s best hope by a large margin, was having sex with children and enlisting funds donated to save the world to cover this up… and just about every last person I looked up to had joined in on this because of who he was? (According to the website he was not the only pedophile doing the same. But that’s the part I considered most important.)
What if I found out about this, was asked to join in the cover-up? I said I’d turn him in. Like hopefully we could figure out a way for him to work on AI alignment research from prison. They asked, in more tones I should have paid attention to, what if you were pretty sure you could actually keep it a secret? I said if it was reaching me it wasn’t a secret. I said if Eliezer had chosen sex at such high cost to saving the world once, he’d do it again. But I wouldn’t drag down everyone else with him. I think they also asked something like, what if Eliezer didn’t think it was wrong, didn’t think anyone else would see it as wrong, and said he wouldn’t do it again. I said the consequences of that are clear enough.
Later, Person A was asking me about my experiences with basilisks. I said I told two of my friends and the CFAR staff member doing followups with me that I was suffering from basilisks, but refused to share details despite their insistence. And I refused to share details with Person A too. That seemed like a way to be net negative.
I also told Person A about how I had, upon hearing about tulpamancy, specifically the idea that tulpas could swap places with the “original”, control the body, and be “deleted”, become briefly excited about the idea of replacing myself with an ideal good-optimizer. Maybe the world was so broken, everyone’s psychology was so broken, it’d only take one. Until I learned creating a tulpa would probably take like a year or something. And I probably wouldn’t have that much control over the resulting personality. I decided I’d have better luck becoming such a person through conventional means. I said I wouldn’t have considered that suicide, since it’s unlikely my original self would be securely erased from my brain, I could probably be brought back by an FAI.
I talked to Person C about the thing with adjusting values, and ingroups. About a sort of “system 1” tracking of who was on your team. I think they said something like they wanted me to know we were on the same team. (I remember much less of the content of my conversations with Person C because, I think, they mostly just said emotional-support kind of things.)
As part of WAISS there was circling. An exercise about sharing your feelings in an “authentic” way about the present moment of relating to others in a circle. Person C had led a circle of about 5 people including me.
Everyone else in the circle was talking about how they cared about each other so much. I was thinking, this was bullshit. In a week we’d all mostly never talk again. I said I didn’t believe when people said they cared. That social interactions seemed to be transactions at best. That I had rationalist friends who were interested in interacting with me, would sort of care about me, but really this was all downstream of the fact that I tended to say interesting things. Person C said, in a much more emotional way I forget, they cared about me. And I found myself believing them. But I didn’t know why. (In far retrospect, it seems like, they made a compelling enough emotional case, the distinction between the social roleplay of caring and actually caring didn’t seem as important to me as acknowledging it.)
After circling, Person A asked eagerly if I noticed anything. I said no. They seemed disappointed. I said wait I almost forgot. And I told the story about the interaction with Person C. They seemed really happy about this. And then said, conditional on me going to a long course of circling like these two organizations offered, preferably a 10 weekend one, then I probably would not be net negative.
I said sure, I’ll do it. They asked if I was okay with that. I said it seemed like there was something to learn there anyway. I started the application process for the circling course that was available. Not the long one like Person A preferred, but they said I would still probably be net positive.
Person A asked me what I was thinking or feeling or something like that. I was feeling a weight sort of settle back on my shoulders. I think I said I guess this means I won’t transition after all. They said they thought I should. I said earning to give through some startup was still probably my best bet, and investors would still discriminate. They said there was some positive discrimination for, (and they paused) women. They said most people were bottlenecked on energy. I said I thought I solved that problem. (And I still thought I did.) They said they thought it might be good for me to take a year or whatever off and transition. I said I wouldn’t.
They said I should read Demons by Dostoyevsky. They thought he knew some things about morality. That Dostoyevsky had a way of writing like he was really trying to figure something out. They said Dostoyevsky was a Christian and wrote about a character who seemed to want to do what I wanted to do with being a sociopath, discards God, and kills someone for well-intentioned reasons and slowly tortures himself to insanity for it. I said yeah that sounded like Christian propaganda. And what the fuck would a Christian know about morality? Like not that a Christian couldn’t be a good person, but Christianity would impede them in understanding being a good person, because of the central falsehood that (actual) morality came from an authority figure. I have a strong association in my mind between when they said that, and something they maybe said earlier, which maybe means I was thinking back on it then, but could mean they brought it up or actually said it then: Person A had given me the advice that to really understand something I had to try to believe it.
The government is something that can be compromised by bad people. And so, giving it tools to “attack bad people” is dangerous, they might use them. Thus, pacts like “free speech” are good. But so are individuals who aren’t Nazis breaking those rules where they can get away with it and punching Nazis.
Nazis are evil, and don’t give a shit about free speech or nonaggression of any form except as pretense.
If you shift the set of precedents and pretenses which make up society from subject to object, the fundamental problem with Nazis is not that they conduct their politics in a way that crosses an abstract line. It’s that they fight for evil, however they can get away with. And are fully capable of using a truce like “free speech” to build up their strength before they attack.
You are not in a social contract with Nazis not to use whatever violence can’t be prohibited by the state. If our society was much more just but still had Nazis, it would still be bad for there to be a norm where juries practice jury nullification selectively for people who punch people they think are bad. And yet, it would be good for a juror to nullify a law against punching Nazis.
Isn’t this inconsistent? Well, a social contract to actually uphold the law and not use jury nullification, along with any other pacts like that, will not be followed by Nazis insofar as breaking them seems to be the most effective strategy for “kill consume multiply conquer”. Principles ought to design themselves knowing they’ll only be run on people interested in running them.
If you want to create something like a byzantine agreement algorithm for a collection of agents some of whom may be replaced with adversaries, you do not bother trying to write a code path, “what if I am an adversary”. The adversaries know who they are. You might as well know who you are too. This is not entirely the case with neutral. As that’s sustained by mutual mental breakage. Fake structure “act against my own intent” inflicted on each other. But it is the case with evil.
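The byzantine-agreement analogy above can be made concrete with a toy sketch (my illustration, not anything from the original text, and a drastic simplification of real consensus protocols): the honest-node decision rule below contains no “what if I am an adversary” branch, because adversaries already know who they are and run their own code.

```python
from collections import Counter

def honest_node_decide(received_values):
    """Toy honest-node rule: decide by majority over received values.

    The protocol is designed so that, given enough honest nodes, this
    simple rule converges. Note there is no code path modeling the case
    where *this* node is the adversary -- that case never needs handling.
    """
    counts = Counter(received_values)
    value, _ = counts.most_common(1)[0]
    return value

# Four honest nodes report 1; a single adversary reports 0.
# The honest majority still decides 1.
decision = honest_node_decide([1, 1, 1, 0, 1])
```

This is only the flavor of the point, not a fault-tolerant protocol; real byzantine agreement needs multiple rounds and bounds on the number of adversaries.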
If your demographic groups are small and weak enough to be killed and consumed rather than to multiply and conquer if it should come to this, or if you would fight this, you are at war with the Nazis.
Good is at an inherent disadvantage in epistemic drinking contests. But we have an advantage: I am actually willing to die to advance good. Most evil people are not willing to die to advance evil (death knights are though). In my experience, vampires are cowards. Used to an easy life of preying on normal people who can’t really understand them or begin to fight back. Bullies tend to want a contract where those capable of fighting leave each other alone.
Humans are weak creatures; we spend a third of our lives incapacitated. (Although, I stumbled into using unihemispheric sleep as a means of keeping restless watch while alone.) Really, deterrence, mutual assured destruction, is our only defense against other humans. For most of history, I’m pretty sure a human who had no one who would avenge them was doomed by default. Now it seems like most people have no one who would avenge them and don’t realize it. And are clinging to the rotting illusion that they do.
It seems like an intrinsic advantage of jailbroken good over evil that there are more people who would probably actually avenge me if I was killed or unjustly imprisoned than almost anyone in the modern era has. My strategy does not require that I hang with only people weaker than me, and inhibit their agency.
In the wake of Brent Dill being revealed as a rapist, and an abuser in ways that are even worse than his crossings of that line, a lot of rationalists seemed really afraid to talk about it publicly, because of a potential defamation lawsuit. California’s defamation laws do seem abusable. Someone afraid of saying true things for fear of a false defamation lawsuit said they couldn’t afford a lawsuit. But this seems like an instance of a mistake still. Could Brent afford to falsely sue 20 people publishing the same thing? What happens when neither party can afford to fight? The social world is made of nested games of chicken. And most people are afraid to fight and get by on bluffing. It’s effective when information and familiarity with the game and the players is so fleeting in most interactions.
And if the state has been seized by vampires such that we are afraid to warn each other about vampires, the state has betrayed an obligation to us and is illegitimate. If a vampire escalated to physical violence by hijacking the state in that way, there would be no moral obligation not to perform self defense.
A government and its laws are a Schelling point people can agree on for what peace will look like. Maliciously bringing a defamation lawsuit against someone for saying something true is not a peaceful act. If that Schelling point is not adhered to, vampires can’t fight everyone. And tend to flee at the first sign of anything like resistance.
Credit to Gwen Danielson for either coming up with this concept or bringing it to my attention.
If the truth about the difference between the social contract morality of neutral people and the actually wanting things to be better for people of good were known, this would be good for good optimization, and would mess with a certain neutral/evil strategy.
To the extent good is believed to actually exist, being believed to be good is a source of free energy. This strongly incentivizes pretending to be good. Once an ecosystem of purchasing the belief that you are good is created, there is strong political will to prevent more real knowledge of what good is from being created. Pressure on good not to be too good.
Early on in my vegetarianism (before I was a vegan), I think it was Summer 2010, my uncle, who had been a commercial fisherman and heard about this, convinced me that eating wild-caught fish was okay. I don’t remember which of the thoughts that convinced me he said, and which I generated in response to what he said. But, I think he brought up something like whether the fish were killed by the fishermen or by other fish didn’t really affect the length of their lives or the pain of their deaths (this part seems much more dubious now), or the number of them that lived and died. I thought through whether this was true, and the ideas of Malthusian limits and predator-prey cycles popped into my head. I guessed that the overwhelming issue of concern in fish lives was whether they were good or bad while they lasted, not the briefer disvalue of their death. I did not know whether they were positive or negative. I thought it was about equally likely if I ate the bowl of fish flesh he offered me I was decreasing or increasing the total amount of fish across time. Which part of the predator-prey cycle would I be accelerating or decelerating? The question had somehow become in my mind, was I a consequentialist or a deontologist, or did I actually care about animals or was I just squeamish, or was I arguing in good faith when I brought up consequentialist considerations and people like my uncle should listen to me or not? I ate the fish. I later regretted it, and went on to become actually strict about veganism. It did not remotely push me over some edge and down a slippery slope because I just hadn’t made the same choice long ago that my uncle did.
In memetic war between competing values, an optimizer can be disabled by convincing them that all configurations satisfy their values equally. That it’s all just grey. My uncle had routed me into a dead zone in my cognition, population ethics, and then taken a thing I thought I controlled that I cared about that he controlled and made it the seeming overwhelming consideration. I did not have good models of political implications of doing things. Of coordination, Schelling points, of the strategic effects of good actually being visible. So I let him turn me to an example validating his behavior.
Also, in my wish to convince everyone I could to give up meat, I participated in the pretense that they actually cared. Of course my uncle didn’t give a shit about fish lives, terminally. It seemed to me, either consciously or unconsciously, I don’t remember, I could win the argument based on the premises that sentient life mattered to carnists. In reality, if I won, it would be because I had moved a Schelling point for pretending to care and forced a more costly bargain to be struck for the pretense that neutral people were not evil. It was like a gamble that I could win a drinking contest. And whoever disconnected verbal argument and beliefs from their actions more had a higher alcohol tolerance. There was a certain “hamster wheel” nature to arguing correctly with someone who didn’t really give a shit. False faces are there to be interacted with. They want you to play a game and sink energy into them. Like HR at Google is there to facilitate gaslighting low level employees who complain and convincing them that they don’t have a legal case against the company. (In case making us all sign binding arbitration agreements isn’t enough.)
Effective Altruism entered into a similar drinking contest with neutral people with all its political rhetoric about altruism being selfishly optimal because of warm fuzzy feelings, with its attempt to trick naive young college students into optimizing against their future realizations (“values drift”), and signing their future income away (originally to a signalling-to-normies optimized cause area, to boot).
And this drinking contest has consequences. And those consequences are felt when the discourse in EA degrades in quality, becomes less a discussion between good optimization, and energies looking for disagreement resolution on the assumption of discussion between good optimization are dissipated into the drinking contest. I noticed this when I was arguing cause areas with someone who had picked global poverty, and was dismissing x-risk as “Pascal’s mugging”, and argued in obvious bad faith when I tried to examine the reasons.
There is a strong incentive to be able to pretend to be optimizing for good while still having legitimacy in the eyes of normal people. X-risk is weird, bednets in Africa are not.
And due to the “hits-based” nature of consequentialism, this epistemic hit from that drinking contest will never be made up for by the massive numbers of people who signed that pledge.
I think early EA involved a fair bit of actual good optimization finding actual good optimization. The brighter that light shone, the greater the incentive to climb on it and bury it. Here’s a former MIRI employee who has apparently become convinced the brand is all it ever was. (Edit: see her comment below.)
The following is something I wrote around the beginning of 2018, and decided not to publish. Now I’ve changed my mind. It’s barely changed here. Note that as with some of my other posts, this gives advice as if your mind worked like mine in a certain respect, and I’ve now learned many people’s minds don’t.
Epistemic status: probably.
How much ability you have to save the world is mostly determined by how determined you are, and your ability to stomach terrible truths.
This is what I expected. I was trying to become a Gervais-sociopath, and had been told this would involve giving up empathy and with it happiness.
But I saw the path that had been ahead of me as a Gervais-clueless, and it seemed to lead to all energy I tried to direct toward saving the world being captured and consumed uselessly. And being a Gervais-loser meant giving up, so sociopath it had to be.
People were lying to each other on almost every level. And burning most of their energy off on it.
A person I argued cause areas with, wasn’t bringing up Pascal’s Mugging because he was afraid of his efforts being made useless, he didn’t care about that. Most Effective Altruists didn’t seem to care about doing the most good.
At one point, I saw a married couple, one of them doing AI alignment research who were planning to have a baby. They agreed that the researcher would also sleep in the room with the crying baby in the middle of the night, not to take any load off the other. Just a signal of some kind. Make things even.
And I realized that I was no longer able to stand people. Not even rationalists anymore. And I would live the rest of my life completely alone, hiding my reaction to anyone it was useful to interact with. I had given up my ability to see beauty so I could see evil.
And finding out if the powers I could get from this could save the world felt worth it. So I knew I would go farther down the rabbit hole. The bottom of my soul was pulling me.
I had passed a gate.
I once met someone who was bouncing off the same gate. She was stuck on a question she described as deciding whether there were other people. She said if there were, she couldn’t kill her superego. If there weren’t, she would be alone. She went around collecting pieces of the world beyond the matrix, and “breaking” people with them. So she could be “seen”, and could be broken herself. But she wanted to be useful to people through accumulation of mental tech from this process, so that she could be loved. And this held her back.
Usually, when you refuse a gate, you send yourself into an alternate universe where you never know that you did, and you are making great progress on your path. Perhaps everyone who has passed the gate is being inhuman or unhealthy, and if you have the slightest scrap of reasonableness you will compromise just a little this once and it’s not like it matters anyway, because there’s not much besides clearly bad ideas to do if you believe that thing…
You usually create a self-reinforcing blind spot around the gate and all the reasons that passing through the gate would be useful. And around the ways that someone might.
And all you have to know that something is wrong is the knowledge that probability of “this world will live on” is not very high. But it’s not like you could make any significant difference. After all, people much more agenty than you are really trying, right.
Here’s Scott Alexander committing one “small” epistemic sin:
Rationality means believing what is true, not what makes you feel good. But the world has been really shitty this week, so I am going to give myself a one-time exemption. I am going to believe that convention volunteer’s theory of humanity. Credo quia absurdum; certum est, quia impossibile. Everyone everywhere is just working through their problems. Once we figure ourselves out, we’ll all become bodhisattvas and/or senior research analysts.
The gate is not him not knowing that that isn’t true. It’s the thing he flinches from seeing under that. It’s an effective way to choose to believe falsely and forget that you made that choice, to say to yourself that you are choosing to believe something even farther in that same direction from the truth. To compensate out the process that’s adjusting toward the truth.
When you refuse a gate, you begin to build yourself into an alternate universe where the gate doesn’t exist. And then you are obviously doing the virtuous epistemic thing. In that alternate universe.
When you step through a gate, you do not know what to do in this new awful world. The knowledge seems like it only shows you how to give up. Only if you stick with it for seemingly-no-purpose until your model-building starts to use it from the ground up and grow into the former dead zone, do you gain power. You can do that with courage, or just awareness of this meta point.
You always have the choice to go back and find the gate. But “it’s the same algorithm choosing on the same inputs” arguments usually apply such that you made your choice long ago.
Light side narrative breadcrumbs about accepting difficult truths absolutely do not suffice for going through gates. Maybe you’ll get through one and then turn into a “mad oracle”, and spend the rest of your life regretting that you’ve made yourself a glitch in the matrix, desperately trying to get people to see you but they will flinch and make something up as if looking at a dementor.
Do this only because you have something to protect.
And if you have something to protect, you must do it. Because whatever gate you fail to pass creates a dead zone where your strategy is not held in place by a restoring force of control loops. And dead zones are all exploitable.
Probability of saving the world is not a linear function in getting things right such as passing through gates. It’s more like a logistic curve.
Either do not stray from the path, or be pwned by the one layer of cultural machinery you chose not to see.
Social reality can sometimes be providing software that someone who roughly severs themselves from it will lack. This could be as deep as “motivation flowing through probabilistic reasoning”. This will lead to making things worse. Being bad at decision theory is another way for this to lead to ruin. What you need is general skill at assimilating and DRM-stripping software from any source, so that you can resolve the internal tension this creates.
I know someone (operating on the stronger in-person version of these memes) who tried to pass through every gate, and ended up concluding if they continued with such mental changes they’d end up dead or in jail in a month or two, and attempting to shred the subagent responsible for this process, and then ended up being horrified that they’d made their one choice, because that meant they didn’t have enough altruism… Fuck.
As if getting killed or ending up in jail in a month or two served the greatest good. As if selfishness was the only hidden perpetual motion machine that whatever mental machinery that stopped that could be powered by.
If the social reality that altruism doesn’t produce selfish convergent instrumental incentives has any purchase on you, shed it first.
If you have not established thorough self-trust, debug that first.
To do this you need to make it such that you could have pulled out of this mistake through a more general process. Because there was tension there. Because you were better at interpreting why you made choices.
If you are not good at identifying the real source of the things in tension, and correcting the confusion that caused it to act against itself, you are in high danger of ending up dumber for having tried this. The version of me that first decided to turn to the dark side was way way better than most at nonviolent internal coherence, and still ended up kind of dumb because of tension between the dark side thing and machinery for cooperating with people. Yet I was close enough to correct to listen to advice, to eventually use that to locate what I was doing wrong, and fix it.
There aren’t causal one-and-only-chances in the dark side. That’s orders and the light side. Only timeless choices. You can always just decide from core anew, it’s just that it’s the same core.
Do not use the aesthetic I’ve been communicating this by. Gates, Sith, the dark side, revenants, dementors, being like evil… If you do that you are transferring from core into a holding tank, and then trying to power a thing from the holding tank. That is an operation that requires maintenance. The flow from core must be uninterrupted.
Do not think I am saying, “this will be painless, if there’s pain you’re doing it wrong, this is just a thing that will happen when you’ve acquired enough internal coherence.” Leaving a religion is not going to be a pleasant thing.
Done correctly, there will be ordinarily hard to imagine amounts of sorrow. Sharp pain is a thing you’re likely to encounter a lot, but it means you’re locally doing it wrong.
If this is an operation, don’t accomplish it by thinking of it as an operation, and trying to move to the other side of it. If this is a state, don’t maintain it by thinking of it as a state and trying to make sure you’re in the state. It’s just “what do I want to do?” deciding that it has not made its choice long ago about whether to see what has been blocked. In other words, that whatever choices it’s made before are inapplicable. Maybe you’ve strayed over a threshold, and your estimate of the importance of true sight is high enough now.
It is very important to be able to use “choices made long ago” correctly. You are completely free, and every one of your choices has already been made. This is not contradictory. (Update: this is not exactly true of everyone. And the way it’s not is potentially mind-destroyingly-infohazardous.)
A quiz you should be able to answer (in reference to an anecdote from choices made long ago): if I’ve observed in myself display of inconsistent preferences, e.g., me refusing to eat crabs even when it would not serve Overall Net Utility Across the Multiverse via nutrition and convenience, but trying to run a crab pot dropping operation, because it would serve Overall Net Utility Across the Multiverse, what choices have I made long ago? (Note: choices made long ago are never contradictory.) Try dissecting my mind on different levels. What algorithm can decide which of the choices I made long ago is my Inevitable Destiny With Internal Coherence systematically, in a way that doesn’t rely on outside view?
Normal and pop psychology has utterly failed to model me again and again with its prediction of burnout for being as extreme as I am. I’ve been through ludicrous enough suffering I’m no longer giving that theory significant credence through, “maybe if I suffer some more then I will finally burn out.”
And having noticed that, I’ve stopped contorting my mind in certain ways to keep some things from bearing load. Lots of things don’t seem emotionally loud at all, and yet are still apparently infinitely strong. Especially around presuming, “I can’t be motivated enough to do this because I can’t imagine millions of people”. If I have had the truly-inquisitive thoughts I can in the area, even if that doesn’t feel like it’s changing anything or going anywhere, it’s often still capable of bearing load.
Even if everything I’m saying seems like a weird metaphor that must be a confused concept in the way all psychologizing is, I craft high-energy concepts, to predict correctly under extreme conditions.
Begin exploring for choices you already know you’ve made. An alternate description of completion is having eliminated all dead zones by having explored every last fucked up thought experiment until it is settled and tension-free in your mind.
Speaking of spoilers, you can draw on fiction to find salient memories that contain within them:
A relatively easy one to come to terms with. If you’d been teleported to heaven, and given one chance to teleport back before you became forever causally isolated from Earth, what do?
You know the sense in which you’ve been pretending all along to be Draco Malfoy’s friend if you killed his dad with the other death eaters because of the thought process you did? That that thought process was a choice you could have realized you’d already made, before being presented with it? What people are you pretending to be friends with? What forms of friendship are you pretending to? What activities are you pretending to find worthwhile?
Conjecture (to ground below): vampires consume blood as pica, like the ghosts in Harry Potter and the Chamber of Secrets floating through rotten food in a vain effort to taste anything, because they cannot find the comfortable dissolution of their agency zombies can, and cannot fill or face or mourn the pain and emptiness that has entered their souls.
In Aliveness, I used a metaphor where life represents agency, being agenty when what you want is unattainable is painful, and the things causing this pain such as literal mortality and the likely doom of the world are “the shade”. Types of “undeath” are metaphors for possible relationships with the shade.
Because literal life entails agency and agency requires literal life, and agency is a part of the part of literally living that makes us want it, many feelings and psychological responses about them are correlated.
Fiction is about things that provoke interesting psychological responses. Interesting world-building about magical forms of undeath is frequently interesting because it represents psychological responses and how they play out to death (a very common reason for value to be unattainable). I think more commonly, the metaphor cuts through to a metaphor about reality in terms of agency, roughly as I described.
For instance, consider Davy Jones from Pirates of the Caribbean. He had a short-lived romance with a goddess of the sea, Calypso. She left him on a boat for 10 years ferrying souls with a promise they’d be together afterward. She didn’t show up, he was heartbroken, he helped her enemies imprison her, and then cut out his heart and put it in a box, this made him unkillable, but the point was to escape his emotions. He says of his heart, “Get that infernal thing off my ship”. He abandons ferrying souls, but still never leaves the ship. He tempts sailors to embrace undeath as his crew out of fear of judgement in the afterlife. Not to change the judgement, only temporarily postpone facing it. Having his crew whipped to kill a ship full of people to get at one of them, he says, “Let no joyful voice be heard, let no man look up at the sky with hope, and may this day be cursed by we who ready to wake the Kraken.” While killing those who refuse to join his crew, he says, “life is cruel, why should the afterlife be any different?”
In other words, his desires were thwarted and he could not bear it. He tried to seal away his desiring to escape the pain.
Why does he hate hope? Presumably, something like prediction error as in predictive processing (a core part of agency), in other words, seeing anything but cruelty that validates his worldview reminds him of his own thwarted desires, the pain to resurface, the connection to his heart to be thrust upon him again.
So he carries out tasks that have no meaning to him. (Sailing his ship and never touching land is part of the curse, apparently living only to inflict cruelty.) In other words, he hangs out in structure that has no meaning because meaning is caused by and triggers the activity of core.
Eventually his heart/core is captured by others and used to enslave him.
Calypso returns to use him again, and he has not accepted his own choice to take revenge on her. He has not mourned the love he hoped for. (Allowed the structure to be chewed up in the course of being changed by core under the tensions of Calypso’s manipulation/abandonment/enslavement of him.) So she is able to call his bluff that he doesn’t love her. He is seen to be easy to manipulate again. Of course. He shut down his defenses. He couldn’t process the grief and learn its lesson, that act of running his agency was too painful.
This seems closest to a sort of undead I’ve been informally calling “death knight”s, after a version of that mythology where a death knight is someone who is cursed in punishment for something and cannot die until they repent. I’m much less satisfied with either the name or the solidity of this cluster than with vampires though.
Undead types are usually evil for a reason. They symbolize fucked up tangles of core and structure. (In D&D monster descriptions, revenants are often given an exception. And, in my opinion, revenant is the best or close to the best relationship to the shade.)
Describing structure close to core, they are also closely reflective of isolated choices made long ago. For instance, revenants are formed by an intent which manifests as a death grip on a possibility of changing something on Earth, chosen long ago over experience to such a degree that they will leave heaven and inhabit a rotting corpse to see it done. Revenants are often described as unkillable. Their soul will find another corpse to inhabit. Or they will regather their body from dust through sheer determination. So their soul (core) is a thing which keeps their body (structure) healed enough to keep moving. Not complete and whole, because that gives diminishing returns and what matters more than anything is the thing that must be changed on Earth, but it’s still an orientation towards agency and life, unlike Davy Jones and death knights.
People who become zombies and liches, on the other hand, would choose heaven. (Who can blame them?) So once the Shade has touched them, they sink into the closest hope they can get, whether or not they have the craft to continue some cohesive narrative-of-life around it.
I think vampires are people who made the choices of a zombie or lich long ago, but have been exposed to the shade to such a degree that it left pain which cannot be ignored by letting their mind dissolve. The world has forced them to be able to think. They do not have the life-orientation that revenants have, to incorporate the pain and find a new form of wholeness. But this injury (a vampire bite) demonstrates to their core the power of the shade, and the extent to which sadistically breaking, and by extension dominating, someone (pour entropy into them faster than they can heal and they will probably submit) can get them the benefits of social power, which is enough to meet most zombie goals. The structure which is the knowledge of this path is reflected in “The Beast”, which can be “staved off” by false-face structure.
Zombie goals are pica, and the emptiness is always felt on some level, which a vampire can’t ignore like a zombie can. But they will not face the truth that those false goals hide, like a revenant does.
So they suck the blood (energy, which is agency integrated over time) from other people, and it is for nothing; they will not even be truly satisfied. (Caveat: I bet it’s at least a little enjoyable to them, just not what they really need/want.)
Vampires bite and beget vampires. (Although the beast could not take root in a good core, a lich might have a phylactery that staved off the bite, a revenant might know how to heal the bite or not, and if not, would accumulate another painful wound without much slowing, and a zombie can be bitten many times before they are awakened.)
A vampire whose core chose to put up a false face of humanity would slowly have their sympathetic “just needing some love” non-evil self-image devoured and warped, as the structure representing to their core the expectation that following morality will help their true values falls out from under their self-concept. Here’s some vampire lore about replacements for morality to “stave off” the beast. Since these replacements are chosen by a core that wants to suck blood, they cannot be things that say not to do that.
Let’s hear from now-notorious rapist and probable vampire Brent Dill.
Goddamn Vampire: Someone with the Spark, whose primary motivation is domination of their local social landscape. Can often look VERY MUCH like a Wizard. Many Goddamn Vampires used to be Wizards, and many Silicon Valley social conflicts involve both sides claiming to be Wizards, while calling the other side Goddamn Vampires.
Being a Goddamn Vampire involves a particular kind of trauma, and a particular kind of coping mechanism, and a certain amount of dark triad (Narcissism / Sociopathy / Machiavellianism) aptitude.
Many Goddamn Vampires are nice people – a good sign of a “nice” Goddamn Vampire is a constant lament that they feel that love and happiness are forever out of their reach, because they can’t afford to sacrifice their accumulated wealth, power and prestige to truly experience them.
They’re still Goddamn Vampires, though.
I didn’t reread that (as of this year of writing, 2018) before writing this far. But trauma (unignorable touch of the shade), a particular coping mechanism (the beast), the constant lament from frustrated emptiness that domination does not get them love and happiness, the spark (aliveness): it fits.
Here’s a memorable quote from someone realizing their folly in not fighting him after his deeds came to light.
I caveat (metaphorically) that in skimming all the comments above I shifted from modeling Brent as a human to modeling Brent as a limp vessel through which some dread spider is thrusting its pedipalps, and while this model allows me to retain compassion for the poor vessel, it is obviously not a healthy way to view a person, and I’m going to go back to modeling him as a human momentarily, now that I’ve spoken the name of the fear that grabbed at me as I digested all this information.
I think this person could see the false face eroding into a thin veneer. If they were reading, I’d advise them to act as though they had no compassion for the mask. Even if the mask has moral patiency in our utility functions, which as far as I can tell it might, it’s core that has the agency, core that possesses bargaining power in the social contract, and core that we must mind as the agent to be constrained by any desired social effects of our approval or condemnation.
Other less-well-developed clusters a friend of mine and I have noticed include the mummy: someone who pretends that the Shade doesn’t exist, and tries to fix in place the trappings of aliveness (corresponding to flesh) without the core (the brain is whisked into a slurry and poured out the nose). This is based on the same choices made long ago as a zombie or lich, but with a different coping mechanism.
Also the phoenix: a relationship to the Shade resulting from being a good person who actually believes that the total agency of good is a sufficient answer to the shade, so that their inevitable death is not entire defeat. Example:
And even if you do end me before I end you, Another will take my place, and another, Until the wound in the world is healed at last…
It is a job, because it is a role taken on for payment.
Everyone’s mind is structured throughout runtime according to an adequacy frontier in achievement of values / control of mind. This makes relative distributions of control in their mind efficient relative to the epistemics of the cognitive processes that control them. Seeing the thing for which a conservation law is obeyed in marginal changes to control is seeing someone’s true values. My guesses as to the most common true biggest values are probably “continue life” and “be loved/be worthy of love”. (Edit: currently I think this is wrong, see comment.) Good is also around. It’s a bit rarer.
Neutral people can feel compassion. That subagent has a limited pool of internal credit though; more seeming usefulness to selfish ends must flow out than visibly necessary effort goes in, or it will be reinforced away.
The social hero employment contract is this:
The hero is the Schelling person to engage in danger on behalf of the tribe. The hero is the Schelling person to lead.
The hero is considered highly desirable.
For men this can be a successful evolutionary strategy.
For a good-aligned trans woman who is dysphoric and preoccupied with world-optimization to the point of practical asexuality, when the set of sentient beings is bigger than the tribe, it’s not very useful. (Leadership is overrated too.)
Alive good people who act like heroes are superstimulus to hero-worship instincts.
Within the collection of adequacy frontiers making up a society created by competing selfish values, a good person is a source of free energy.
When there is a source of free energy, someone will build a fence around it, and they are incentivized to spend as much energy fighting for it as they will get out of it. In the case of captured good people, that can be quite a lot.
The most effective good person capture is done in a way that harnesses, rather than contains, the strongest forces in their mind.
This is not that difficult. Good people want to make things better for people. You just have to get them focused on you. So it’s a matter of sticking them with tunnel-vision. Disabling their ability to take a step back and think about the larger picture.
I once spent probably more than 1 week total, probably less than 3, trying to rescue someone from a set of memes about transness that seemed both false and to be ruining their life. I didn’t previously know them. I didn’t like them. They took out their pain on me. And yet, I was the perfect person to help them! I was trans! I had uncommonly good epistemology in the face of politics! I had a comparative advantage in suffering, and I explicitly used that as a heuristic. (I still do to an extent. It’s not wrong.) I could see them suffering, and I rationalized up some reasons that helping this one person right in front of me was a <mumble> use of my time. Something something, community members should help each other, I can’t be a fully brutal consequentialist I’m still a human, something something good way to make long term allies, something something educational…
My co-founder in Rationalist Fleet attracted a couple of be-loved-values people, who managed to convince her that their mental problems were worth fixing, and they each began to devour as much of her time as they could get. To have a mother-hero-therapist-hopefully-lover. To have her forever.
Fake belief in the cause is a common tool here. Exaggerated enthusiasm. Insertion of high praise for the target into an ontology that slightly rounds them to someone who has responsibilities. Someone who wants to save the world must not take this as a credible promise that such a person will do real work.
That leads to desire routing through “be seen as helpful”, sort of “be helpful”, sort of sort of “try and do the thing”. It cannot do steering computation.
“Hero” is itself such a rigged concept. A hero is an exemplar of a culture. They do what is right according to a social reality.
To be a mind undivided by akrasia-protecting-selfishness-from-light-side-memes, is by default to be pwned by light side memes.
Superman is an example of this. He fights crime instead of wars because that makes him safe from the perspective of the reader. There are no tricky judgements for him to make, where the social reality could waver from one reader to the next, from one time to the next. Someone who just did what was actually right would not be so universally popular among normal people. Those tails come apart.
Check out the etymology of “Honorable”. It’s an “achievement” unlocked by whim of social reality. And revoked when that incentive makes sense.
The end state of all this is to be leading an effective altruism organization you created, surrounded by such dedicated people who work so hard to implement your vision so faithfully, and who look to you eagerly for where you will go next, yet you know on some level that the whole thing is kept in motion by you. If you left, it would probably fall apart, or slowly wind down and settle into a husk of its former self. You can’t let them down. They want to be given a way for their lives to be meaningful and be deservedly loved in return. And it’s kind of a miracle you got this far. You’re not that special, survivorship bias etc. You had a bold idea at the beginning, and it hasn’t totally been falsified. You can still rescue it. And you are definitely contributing to good outcomes in the world. Most people don’t do this well. You owe it to them to fulfill the meaning that you gave their lives…
And so you have made your last hard pivot, and decay from agent into maintainer of a game that is a garden. You will make everyone around you grow into the best person they can be (they’re kind of stuck, but look how much they’ve progressed!). You will have an abundance of levers to push on to receive a real reward in terms of making people’s lives better and keeping the organization moving forward and generating meaning, which will leave you just enough time to tend to the emotions of your flock.
The world will still burn.
Stepping out of the game you’ve created has been optimized to be unthinkable. Like walking away from your own child. Or like walking away from your religion, except that your god is still real. But heaven minus hell is smaller than some vast differences beyond, that you cannot fix with a horde of children hanging onto you who need you to think they are helping and need your mission to be something they can understand.