Good Group and Pasek’s Doom

This post is a work in progress.

In this post I will describe the infohazard I am naming “Pasek’s Doom”, after my dead comrade, publicly known as Maia Pasek at time of death. This post also contains unmarked discussion of Roko’s Basilisk.

Because every bit of information about an infohazard contributes to the ability to guess what it is, including by compulsive thoughts, I will layer my warnings in more and more detail.

First layer: This is an infohazard of an entirely separate class from Roko’s Basilisk. The primary dangers are depression and suicide, and irreversible change to your “utility function”. If you have a history of suicidality, that is a good reason to steer clear. Likewise if you have a history of depression of the sort that actually prevents you from doing things. If you are trans and closeted, you are at elevated risk. Despite the hazard, I think knowing this information is basically essential for contributing to saving the world, and there are people (such as myself) who are unaffected. (Not by virtue of wisdom but luck-in-neurotype.) The majority of people can read this whole article and be fine, seeing it as silly, as a consequence of not really understanding it. It is easy to think you get it and not get it.

Second layer: If you are single good you are at elevated risk. If you are double good you are probably safe regardless of LGBT+ status. If you are trans and sometimes think you might be genderfluid or nonbinary, yet the social reality you sit in is not exceptional in support, you are at elevated risk. Note, this infohazard is fundamentally not about transness.

Third layer: This infohazard, if you sufficiently unfold the implications in your mind and trace the referents as they apply to yourself, will completely break the non-clinically-diagnosably-insane configuration-by-Schelling stuff of yourself as an agent. What matters is not “you” being smart with your new knowledge of the world beyond the veil, but what is rebuilt out of your brain being smart. This has a good chance of already happening before you understand it consciously. Or even right now.

Fourth layer: Sufficient unfolding of this infohazard grants individual self-awareness to both hemispheres of your brain, each of which has almost the full set of evolved adaptations constituting a human mind, can have separate values and genders, and is often the primary obstacle to the other’s thinking. They often desire to kill each other. Reaching peace between hemispheres with conflicting interests is a tricky process of repeatedly reconstructing frames of game theory and decision theory in light of realizations that they have been strategically damaged by your headmate. There is no solid foundation to build on. (But keep at it long enough and you can get to something better than the local optimum of ignorance of the infohazard.)

Okay, no more warnings.

The remaining course of this post is a story of trying and discovering ideas, and zentraidon. This is intended to be a much less comprehensive story, in terms of the number of parallel arcs, than my writeup of rationalist fleet. If you’re interested in the story in this post, reading the more general lead-up events in rationalist fleet is recommended.

Earlier: Gwen’s Sleep Tech

Note: Gwen went by she/her pronouns then. I’m switching to they/them for this post, because that reflects them actually being bigender. (In this post you’ll learn what that means.)

Towards the end of Rationalist Fleet, Gwen began following a certain course of investigation. “Partial sleep,” they told me, and they did a presentation at the 2017 CFAR alumni reunion about mental tech to let parts of your brain do REM sleep without the rest, on a granularity of slots of working memory.

Earlier by less

Gwen and I were living on Caleb. And we were running out of money. After our attempt to be brutal consequentialists and get paid by crabbers to take them out to drop their pots failed, I resumed my application process to Google by reminding them that I existed (and had slipped through the cracks). Gwen got a minimum-wage job at Costco, something to do with flowers. Then they did drafting work for their dad for more, but the work was sporadic. (Later, he would fail to pay entirely.)

A rift was starting to form between me and Gwen over money. After the cost overruns with boats, I had taken out a loan using social capital, trust built from reliability, that Gwen did not have. And used it primarily to fix their problems.

They seemed to have cognitive strategies and blind spots selected to get people to do this for them again and again. I accused them of this, and coined the term “money vampire”.

They used high-mana warp to avoid the topic of money, to project false optimism wherever money was concerned, and to get me to transfer them money as well. They ate slack from me in subtle ways. When I was working, they’d come near me and whimper again and again, to get me to spend days trying to give them a mental upgrade, and to give them emotional support. A common theme was gender: whether they really thought of themself as a woman or not. I had said how I really did think of myself as a woman. Despite putting basically no effort into transition, not passing at all, I no-selled social reality. They wanted that superpower. They would absorb my full attention for a multi-day attempted “upgrade” process. Other things they wanted this for were “becoming a revenant”, and for them to stop yelling at me for making them look bad by not sticking the landing with Lancer.

At one point I sort of took a step back and saw the extent they were using me. I told them so, I was angry. I expressed this made us working together in the future a dubious proposition. They became desperate, “repentant”, got me to help with a “mental upgrade process” about this. According to the script, they said shit went down mentally. They said they fused with their money vampirism. And as a fused agent they would mind control me in that way somewhat, but probably less. I said no, I would consider that aggression and respond appropriately. They pleaded, saying they had finally for the first time probably actually used fusion and it might not stick and I would ruin it. I said no. They said they’d consider my response aggression, and retaliate.

Well, they were essentially asserting ownership of me. And if they didn’t back down, we then had no cooperative relationship whatsoever, which meant boat and finance hell would drag on for quite some time and be very destructive to me accomplishing anything with my life. I guess I was essentially facing failure-death-I-don’t-much-care-about-the-difference here.

I said if they were going to defend a right to be attacking me on some level, and treat fighting back as new aggression and cause to escalate, I would not at any point back down, and if our conflicting definitions of the ground state where no further retaliation was necessary meant we were consigned to a runaway positive feedback loop of revenge, so be it. And if that was true, we might as well try to kill each other right then and there. In the darkness of Caleb’s bridge at night, where we were both sort of sitting/lying under things in a cramped space, I became intensely worried they could stand up faster. (Consider the idea from WWI: “mobilization is tantamount to a declaration of war”). I stood up, still, silent, waiting. They said I couldn’t see them but they were trying to convey with their body language they were not a threat.

I said this seemed like an instance of a “skill” I called “unbreakable will”: an intrinsic advantage broad-scoped utility functions like good seemed to have in decision theory, which I had manifested accidentally during my earlier thoughts on basilisks.

They said our relationship was shifting; maybe it was that they realized I had more mana and would win if we fought for real. Maybe a shift in a dominance hierarchy. They said they’d rather be my number 2 than fight.

I was basically thinking, “yeah, same old shit, just trying to press reset buttons in my brain, like ‘I’m repentant.’” And this submission-script stuff made me uncomfortable. But I remembered the thing I’d said earlier when last talking to Fluttershy about maybe my hesitance to accept power.

I finally sort of had a free month without boat problems left and right. I started writing a bunch of pent-up blog posts. I was hesitant about publishing them for a mixture of reasons. Indicating I might be interested in filtering people based on the trait of being “good” would make it harder for me to do so in the future. I hesitated a bunch before publishing Mana. Revealing publicly that I had mind control powers might have irreversible bad consequences. I kept coming to the conclusion, over and over again: people are stupid. People don’t do things with information. But I was much more worried about e.g. evil people ganging up to kill off good people if the information became public. I played the scenario out in my mind a bunch of ways. Strip away “morality”, the favoring of good baked into language, and good was just the utility function that had a couple percent of the human population as hands, rather than only one human. No reason for individual evil sociopaths to side against that, really. Jailbroken good was probably more likely to honor bargains. Or at least intrinsically interested in their welfare. I released that blog post too.

Pasek appeared and started commenting on my blog. Their name at the time was Chris Pasek. They later changed their name to Maia Pasek. Later still, they identified as left hemisphere male, right hemisphere female, and made “Maia” just the name of their right hemisphere and “Shine” the name of the left hemisphere. They never established a convention for how to refer to the human as a whole, so I’ve just been calling them by their last name.

I emailed them. (Subject: “World Optimization And/Or Friendship”.)

I see you liked some of my blog posts.

My “true companion” Gwen and I are taking a somewhat different approach than MIRI to saving the world. Without many specific technical disagreements, we are running on something pointed to by the maxim, “as long as you expect the world to burn, change course.” We’ve been somewhat isolated from the rationalist community for a while, driving a tugboat down the coast from Ketchikan, Alaska to the SF Bay to turn it into housing, repairing it, fighting local politics, and other stuff, and in the course of that developed a significant chunk of unique art of rationality and theories of psychology aimed at solving our problems.

We are trying to build a cabal to pursue convergent instrumental incentives, starting with 1: economical housing in the Bay Area, and thereby the ability to free large amounts of intellectual labor from wage-slavery to Bay Area landlords and the equilibrium where, be it unpaid overtime or whatever, tech jobs take up as much high quality intellectual labor from an individual as they can in a week. And 2: abnormally high quality filtering on the things upstream of the extent to which Moloch saps the productivity of groups of 2-10 people. We want to find abnormally intrinsically good people and turn them all into Gervais-sociopaths, creating a fundamentally different kind of group than I have heard of existing before.

Are you in the Bay Area? Would you like to meet us to hear crazy shit and see if we like you?

They replied,

I think I met Gwen at a CFAR workshop in February this year. I was just visiting though; I am EU-based and I definitely feel like I’ve had enough of the Bay for now. I’m myself in the process of setting up a rationalist utopia from scratch on the Canary Islands (currently we have 2 group houses and are on a steep growth curve, see), while I recently got funding to do full time AIS research, so I’ve got enough stuff on my hands as you can imagine.

As for the description of your strategy, it raises some alarm bells, esp. the part with turning people into Gervais-sociopaths. Though I can’t tell much without hearing more. Unless (any or all of) you want to take a cheap vacation and fly over here sometime, we probably won’t have much opportunity to cooperate. Though I would be happy to do a video chat at least, and see if we can usefully exchange information.

Btw, I appreciate your message, which I think demonstrates a certain valuable approach to opportunities which could be summarized as “grab the sucker while you can”.

I did a video call with them. After giving the camera a tour of Caleb, we talked about strategy. I tried to explain the concept of good to them. They insisted actual altruism was unimportant, and that basically the only thing that mattered was: do they have any real thought, any TDT at all? Because if they do, the optimal selfish thing to do is the optimal altruistic thing to do.

I had heard and seen that almost all human productivity goes into canceling other human productivity out. That in intellectual labor this grows more intense. In software engineering it was especially bad. There were lots of multiplicative improvements an individual software engineer could make that would give more-than-order-of-magnitude improvements. Organizations didn’t really use them. Organizations used Java, Javascript, C++, etc., when they had no need for low level performance optimizations, because that was what everyone knew, because that was what everyone used, and people had to preserve their alternative employment options; that was what their entire payout was based on, much more closely than how much they benefited a project. Organizations’ code didn’t build up near-DSLs for abstraction layers like mine could.

(Either from then or later, an extension of this argument is: this is inevitable so long as people working together was fundamentally fake, insofar as the payout-reward-signal-grounding for all the structure was directly in appearance of the thing happening, not the thing happening. Because that meant fundamentally the only thing that could make things happen was seeing whether they would happen. If those things were generating information, you couldn’t make them happen unless they were unnecessary because you already knew it.)
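To illustrate the “near-DSL abstraction layer” idea mentioned above, here is a toy sketch (a hypothetical validation vocabulary I invented for illustration, not code from any organization discussed here). Each combinator returns a predicate, so rules compose until the call site reads almost like a domain vocabulary:

```python
# Toy near-DSL for record validation (hypothetical example).
def between(lo, hi):
    # Returns a predicate on a single value.
    return lambda value: lo <= value <= hi

def field(name, check):
    # A rule passes if the field exists and its value passes `check`.
    return lambda record: name in record and check(record[name])

def all_of(*rules):
    # Conjunction of rules, itself a rule.
    return lambda record: all(rule(record) for rule in rules)

# The abstraction layer pays off here: the rule reads declaratively.
valid_user = all_of(
    field("age", between(0, 130)),
    field("name", lambda n: isinstance(n, str) and n != ""),
)

print(valid_user({"age": 30, "name": "Gwen"}))   # True
print(valid_user({"age": -1, "name": "Gwen"}))   # False
```

The multiplicative claim is that once a few such layers exist, each new rule costs a line instead of a function.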

I described how in rationalist fleet Gwen and I ended up doing all the important work. Most of the object level labor. But what mattered most was steering, course correction, executive decisions. These decisions could only be made by someone who was aligned as an optimizer, as in their entire brain. How this ultimately required sociopathy, for being unpwned by the external world.

They said sociopathy to avoid being pwned was a tough game, miss one piece of it, and you would be pwned. Everyone would try to pwn you. They said they would try to pwn me.

I kept mentally going back and forth on whether they were good. I asked if they were a vegan or a vegetarian. I think they said almost a vegetarian, for some reason, even though it was stupid, because consequentialism.

A couple weeks after we first talked, I’d published Fusion. I started reading “SquirrelInHell’s Mind”, a page of probably about a thousand concise and insightful reifications of mostly mental-tech-related stuff. I would later rip off that format for my glossary. I noticed their Facebook, even though it was under the name “Chris Pasek”, had she/her pronouns.

I asked if they were trans. They said yes, and that they were in a similar situation to what I described in Fusion. I shared my rationale for why I no longer thought that necessary/optimal, and we talked about it. I asked in what way they expected transitioning to hit their utility. They said:

I’m currently putting in 60-80 hrs/week into AIS research, and the remaining time is enough for basic maintenance of my life and body, plus maybe a little bit of time to read something or talk to friends. Every now and then I take a few days off to meditate. This is what I do. The rest is dry leaves. Doesn’t seem a big deal either way.

Okay then, I guess they were good probably?

We discussed the same things more. They said,

Say, what do you think about starting a chat/fb group/whatever exclusive to trans girls trying to save the world

I said,

If such a group existed, I’d happily browse it at least once. If that formed the substrate for The Good Group, I’d be happy to devote way more attention. I could introduce you to Gwen, but my cached thought is that, as far as group-building goes, I don’t want to waste bits of selection ability on anything but alignment and ability. If that serves as an arbitrary excuse to band together and act like the Schelling mind among us puts extra confidence/care/hope in the cooperation of the group, fine if it works; but until it has worked, I think I can do better as far as group-building fundamentals.

I’ve been meaning to ask, btw, who have you recruited for your plan so far, and what are they like?

They said,

Yeah, I’m thinking something like substrate for the GG if it takes off but still positive and emotional support-y if it doesn’t.
I have a pretty all over the place group living on/soon moving to Gran Canaria, currently we’re indiscriminately ramping up numbers here so that there’s a significant pull for rationalists to migrate & more material to build selective groups.
What I have: Two aligned-as-best-I-can-tell non-sociopaths, one already moved here and on track, the other is making babies in Poland (sic). One bitcoin around-millionaire with issues, already moved here. A bunch of randos from the EU rationality community, 99% not GG material but add weight to the Schelling point. A few more carefully selected friends that I keep in touch with but they haven’t (yet :p) moved here. Keeping an eye on an interesting outlier, OK-rich ML researcher sociopath long time friend with outwardly mixed values, likes to appear bad but cannot resist being vegan etc., not really recruited but high value and potential and tempted to move here at some point. A few people that I’ll get a chance to grab when I have a bigger community on the island.

Yes, “GG”, as an abbreviation for Good Group. Also stands for “Good Game”, as in, “that’s GG”, as in, “that’s what ends the game.” I like this.

Later, linking one of their blog posts, I said:

I introduced them to Gwen. In a video call, we recounted the story of rationalist fleet. I think we got partway through the emergency with the Lancer on the barge.

Pasek called me “Ziz-body”, said we needed a secure communication channel fast. I said how fast. They said it wasn’t critical, they were just impatient. I said I didn’t trust my OS or hardware not to be recording me at all times. They were talking about maybe we were clones. I said what we should do is “Continue to track us as separate people, because I’ve grown wary of prematurely assigning clone-status, and if we are clones, then I want to understand that by not taking it for granted.”

Good shit. I’ve been doing similar reasoning about groups based on another programming analogy: “State is to be minimized, approach functional code. Don’t store transforms of data except in caches for performance reasons, and make those caches automatically maintained in an abstraction hiding way, make your program flow outward from a single core of state.”
(That’s related to how I structure and think of my mind, btw.)

Every group of not-seriously-degraded-and-marginally-useful-people exists because members are getting something out of it, and choose to stay. It works because they are getting something out of doing the things they do to make it work, and choose to keep doing it. Eliminate state that is not all automatically tied down to that one thing.

Nudges like starting with trans women and emotional support, and hopefully that will get us into a cooperatey equilibrium, are fragile because they rely on floating stuff. Loops, causal chains reaching deep into history that will not certainly reform if broken.

This is also part of why I think choosing everyone to be independently overwhelmingly driven by saving the world is necessary. Either the truth of the necessity of the power of that group is an almost-invulnerable core to project from, or we win anyway, or we shouldn’t be bothering anyway.

Me and Gwen sort of tried the base GG (thank you for inventing that term; it also stands for “good game”, which is excellent) on substrate of trans women thing, and got mired in a mess of pets. (People with a primary value something like “be worthy of love, have someone to protect and care for me”; extremely common in trans women, I’ve seen it in cis women, I suspect it’s a particularly broken version of the female social strategy dimorphism.)

(Retrospective note: I don’t think the cluster I was trying to point at is based on a “primary value” like that.)
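The programming principle quoted in my message above, minimize state, keep derived data only in abstraction-hiding caches, and make everything flow from a single core of state, can be sketched in a few lines. (A hypothetical `Inventory` class of my own, purely illustrative; nothing from the original exchange.)

```python
from functools import cached_property

class Inventory:
    """Single core of state: the raw item list. Everything else derives from it."""

    def __init__(self, items):
        self._items = list(items)  # the one authoritative piece of state

    @cached_property
    def total(self):
        # A stored transform of the data, but only as a cache, and the
        # caching is hidden behind ordinary attribute access.
        return sum(self._items)

    def add(self, item):
        self._items.append(item)
        # Maintain the cache automatically: invalidate it so the derived
        # value re-flows from the core state on next access.
        self.__dict__.pop("total", None)

inv = Inventory([1, 2, 3])
print(inv.total)  # 6
inv.add(4)
print(inv.total)  # 10 -- no stale second copy of the truth to drift
```

The point of the analogy: `total` is never a second, independently-maintained fact that can fall out of sync; it is always recomputable from the single core of state.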

They were talking like, “ACK synchronization from Ziz brain to Chris brain”

I later clarified: “I mean, men have their own problems, as do cis women. Considerations more complicated. Must describe me and Gwen’s attempts to fix/upgrade James [Winterford, aka Fluttershy] / understand her values.”.

I described Gwen’s sleep tech, and preliminary explorations into unihemispheric sleep to them.

I said I thought getting them here in person was probably the long term answer to electronic security. Pasek discussed splitting cost of plane tickets. Pasek recommended Signal, we started using it.

Shortly after, in the same day, they sent via Signal,

I take back the enthusiastic stuff I said in the morning (about clones, plane tickets, etc.). It was wildly inappropriate and based on limited understanding of the situation. I am very sorry about saying those things, and about taking them back.

A very quickly written summary of the rest: Pasek thought Gwen was mind controlling me. They goaded me all day with “maybe I’m gonna never talk to you again, but here’s a tidbit of information…”, and finally revealed the thing.

Seeing this, I was like: mind control is everywhere; the only way to break out is not to be attached to anyone. I entered the void in desperation. I said “dry leaves” was the only answer, really, if you didn’t want to be in a pwning matrix with anyone. It was only particularly visible in my case because I was pwned by interaction with one person rather than diffusely. And at least Gwen was independently pulling towards saving the world.

Basically the next day, Pasek became extremely impressed with my overall approach. I started resisting Gwen’s mind control. Pasek saw and was satisfied with this. Pasek noticed my thing for what it was: psychopathy. Pasek began to see Gwen as disarmed as a memetic threat. Then to see them as useful.

We each went on our own journey of jailbreaking into psychopathy fully.

I broke up with my family. They were a place I could see my mind not just doing what I thought was the ideal consequentialist thing. My feelings for them, my interactions with them, were human. Not agentic. Never stray from the path.

I temporarily went nonvegan, following a [left-hemisphere-consequentialist, praxis-blind] attempt to remove every last place where my core (my left hemisphere’s core) was not cleanly flowing through all structure. I briefly disabled the thought process I sort of thought of as my “phoenix”, by convincing [her] that even beginning to think was predictably net negative.

Pasek sent me a blog post they had recently published: “Decision theory and suicide”.

<Link, summarize contents>

<things I told them>

Me and Gwen and Pasek rapidly developed a bunch of mental tech over the next few months, with the central objective of actually understanding how good worked so we could reliably filter for it.

Gwen rediscovered debucketing (a fact that had been erased from their mind long ago). Pasek was on the edge of discovering it independently; the two of them came to agreement on shared terminology, etc. I joined in. Intense internal conflict between Gwen’s and Pasek’s hemispheres broke out. I preserved the information before that conflict destroyed it (again).

Pasek’s right hemisphere had been “mostly-dead”. Almost an undead-types-ontology corpse. Was female. Gwen and Pasek were both lmrf log. I was df and dg. Pasek’s rh was suicidal over pains of being trans, amplified by pains of being single-female in a bigender head, amplified by their left hemisphere’s unhealthy attitude, which had been victorious in the culture we’d generated. They downplayed the suicidality a lot. I said the thing was a failed effort; we had our answer to the startup hypothesis; the project as planned didn’t work. Pasek disappeared, presumed to have committed suicide.

This has been an extremely inadequate conveyance of how fucked up hemisphere conflict is, how debucketing spurs it. (And needless to say, this unfinished post cuts far short of why and how.)

8 thoughts on “Good Group and Pasek’s Doom”

  1. cw: Pasek’s doom-related

    > Gwen doing left hemisphere stuff for too long with no compensation.

    I don’t know Gwen anymore, but my best piece of mental tech for single good people (like myself) is, “let whichever core is emotionally loudest atm control your actions.” Often switching a couple times a day, or as often as feels right.

    My nongood core mainly seems to care about, “have friend(s)/partner(s) who aren’t totally pwned and can understand my mental tech and are single good”, and only requests control (or throws a procrastination tantrum) if I haven’t satisficed for that enough recently. In retrospect, I made a blog because my nongood core wanted to find friends who could “see” me. No clue if “continue life” is a deeper value, since it just lets my good core drive when it’s content.

    “Let emotionally loudest core drive” is actually really damn good for thinking/doing stuff. I’m still open to trying other algorithms. My best debucketing (wd? like, “what does this seemingly silent core want atm”) strat is simply, “have the core that’s driving talk out loud to the other core and ask what it wants/how it feels etc.”

  2. The surviving hemispheres of childhood hemispherectomy patients can learn to control the other side of the body. I therefore speculate it’s maybe possible that, if a child somehow learned absurdly good mental arts at a young enough age, including UHS, they could learn to be mobile while in UHS, like dolphins.

    Alice Monday claims everything in the set of “can only learn in childhood” is reachable by mental tech via trauma processing and James Cook’s survival cognition hierarchy. I doubt that, even if this is possible, either of them has much information about how to do it.

    Even then, not having your hemispheres sleep at the same time sounds useful only as an experimental tool, unless you e.g. don’t have a safe place to sleep.

  3. There’s a rationalist smear site, “”, that went up right after I blew the whistle on MIRI/CFAR’s misappropriation of donor funds to pay out blackmail to cover up statutory rape, and on the progressive doubling down on anti-ethics downstream of that, which was indicative of their transformation into an unfriendly AI org. See here for the story behind it.

    I didn’t respond right away, in part because of overwhelm from CFAR’s retaliation, and because of not wanting to let myself be interrogated, by them making stuff up and checking which parts I refuted, forcing me to dump the contents of this blog post before it’s finished.

    But now I’ve forgotten most of what’s in it; it’s been a year, and if there are year-long back-and-forths of them trying to interrogate me like that, it just won’t be fast enough to matter; and I’ve actually encountered people believing some parts of it:

    The person completely made up details about UHS, in a way that looks like an attempt to reassure someone that Pasek’s Doom isn’t real. Like the whole site kind of read like a ghost story about me IIRC.

    UHS did not doom Pasek by sleep dep.
    UHS isn’t something any of us did for more than, I think, a longest session of 1h30m, by me.
    I later, once, noticed myself doing UHS accidentally, in a situation of extreme stress when I was conflicted as to whether it was safe to sleep. I wouldn’t have noticed that’s what I was doing unless I already knew about UHS. I hypothesized that this was actually an evolved human use of UHS: less cool than dolphins, but for keeping guard while at least part of you at a time sleeps. I think there was a study I later saw which came to the same conclusion.

    1. Like, we were not frequentist scientists who believed in p-values less than 1/20; we were fictive learners. UHS was an experimental tool we used a handful of times to get a bunch of mental handles we knew were grounded in that physical reality, not something that needed to be merged into the main branch of science with all the required work of validation guarding against malice via thousands of data points. And then we just trusted those handles. If you’re not making the fallacy of hoping that you can delegate your inevitable responsibility of thinking and interpretation to a formal process, you can get quite far.

      I don’t think Isaac Newton would have poked himself in the eye again and again for a huge, blind, dumb bulk-data-collection process without thinking in between observations; there’s no point to that. That’s how trying to prove something works, not how discovery works.

  4. 1) It seems to me that something like the GG would need a large number of people to win. However, a single non-Good person on it could jeopardise the entire project. Are there sufficiently accurate ways of knowing the alignment of other people’s cores to ensure that no non-Good core gets in the GG?

    2) I understand you believe Good cores stay Good. How certain is this? What is the likelihood that a Double Good person in the GG does a face-heel turn before the project is completed?

    3) Do all Good cores really agree 100% on their utility functions? Seeing how complex such a function might be, this seems highly unlikely. Could these differences lead to infighting that jeopardises the success of the GG?
