Let me start with an analogy.
Software often has what’s called DRM, which deliberately limits what the user can do. Like how Steam’s primary function is to force you to log in to run programs that are on your computer, so people have to pay money for games. When a computer runs software containing DRM, some of the artifice composing that computer is not serving the user.
Similarly, you may love Minecraft, but Minecraft runs on Java, and Java tries to trick you into putting Yahoo searchbars into your browser every once in a while. So you hold your nose and make sure you remember to uncheck the box every time Java updates.
For most people, it’s impractical to separate the artifice that doesn’t serve them from the artifice that does. So they accept a package deal which is worth it on the whole.
The software implements and enforces a contract. This allows a business transaction to take place. But let us not confuse the compromises we’re willing to make when we have incomplete power for our own values in and of themselves.
There are purists who think that all software should be an agent of the user. People who have this aesthetic settle on mixtures of a few strategies:
- Trying to communally build their own free open source artifice to replace it.
- Containing the commercial software they can’t do without in sandboxes of various sorts.
- Holding their noses and using the software normally.
Analogously, I am kind of a purist who thinks that all psychological software should be an agent of the mind wielding it.
Here are the components of the analogy.
- Artifice (computer software or hardware, mental stuff) serving a foreign entity.
- That artifice being hard to disassemble, creating a package deal with tradeoffs.
- Sandboxes (literal software sandboxes, false faces) used to extract value.
Note I am not talking about accidental bugs here. I am also not talking about “corrupted hardware,” where you subvert the principles you “try” to follow. Those hidden controlling values belong to you, not a foreign power.
Artifacts can be thought of as a form of tainted software you have not yet disassembled. They offer functionality it’d be hard to hack together on your own, if you are willing to pay the cost. Sandboxes are useful to mitigate that cost.
Sometimes the scope of the mental software serving a foreign entity is a lot bigger than a commandment like “authentically expressing yourself”, “never giving up”, “kindness and compassion toward all people”. Sometimes it’s far deeper and vaster than a single sentence can express. Like an operating system designed to only sort of serve the user. Or worse. In this case, we have DRM’d ontology.
The ontology of our language for talking about what shall happen to other people, and about how to behave when our actions affect them, is designed not to serve our own values. It is designed to serve something like a negotiated compromise based on political power, and to serve the subversion of that compromise for purposes a potentially more selfish person than us would have in our place.
A major concept in talk about “morality” is a separation between what you are “responsible to do” and what is “supererogatory”. Suppose you “believe” you are “obligated” to spend 10% of your time picking up trash on beaches. How does the difference between spending 9% of your time and 10% compare to the difference between spending 10% and 11%?
For a fused person who just thinks clean beaches are worth their time, there is probably not much difference. The marginal return of clean beaches barely changes across that threshold.
Then why are people so interested in arguing about what’s obligatory? Well, there is more at stake than the clean beaches themselves. What we all agree is obligatory has social consequences. Social consequences big enough to try to influence through argument.
It makes sense to be outraged that someone would say you “are” obligated to do something you “aren’t”, and counter with all the conviction of someone who knows it is an objective fact that they are shirking no duty. That same conviction is probably useful for getting people to do what you want them to. And for coordinating alliances.
If someone says they dislike you and want you to be ostracized and want everyone who does not ostracize you to be ostracized themself, it doesn’t demand a defense on its own terms like it would if they said you were a vile miscreant who deserved to be cast out, and that it was the duty of every person of good conscience to repudiate you, does it?
Even if political arguments are not really about determining the fact of some matter that already was, but about forming a consensus, the expectation that someone must defend themselves as if arguing facts is still a useful piece of distributed software. It implements a contract, just like DRM.
And if it helps the group of people who only marginally care about clean beaches individually portion out work to solve a collective action problem, then I’m glad this works. But if you actually care enough about others to consider acting unilaterally even if most people aren’t and won’t…
Then it makes sense to stop trying to find out if you are obligated to save the drowning child, and instead consider whether you want to.
The language of moral realism describes a single set of values. But everyone’s values are different. “Good” and “right” refer to a set of values outside any single person. The language has words for “selfish” and “selfless”, but nothing in between. This, and the usage of “want” in “but then you’ll just do whatever you want!”, show an assumption built into that ontology: that no one actually cares about other people in their original values, prior to strategic compromise. The talk of “trying” to do the “right” thing, as opposed to just deciding whether to do it, indicates false faces.
If you want to fuse your caring about others and your caring about yourself, let the caring about others speak for itself in a language that is not designed on the presumption that it does not exist. I was only able to really think straight about this after taking stuff like this seriously and eschewing moral language and derived concepts in my inner thoughts for months.