
On e/acc

I’ve been grateful to the couple of people who have been prompting me for my thoughts on e/acc as a movement and Beff Jezos as a character. I’m not sure why it matters to you what I think, but I know my thoughts have a tendency to occupy much of my waking attention, so it might benefit me to spill them onto the page to make room for new ones.
As I spend time with the e/acc movement I walk away with basically one conclusion: it’s fine.
I have some disagreements with them, but overall I’m not sure we could ask for a better adversary to the AI safety movement.
And since you know I’m interested in and working on AI safety, you might think I mean this in a mean-spirited way, as if he’s not very competent. But no, I genuinely think that Beff (as I’ll continue to call him, in protest of his doxxing) is thoughtful, knowledgeable, has good points, and is building something cool.
On Twitter, this doesn’t come across at all. I would try to pull up some examples for you, but opening social media is dangerous. I’ll come back to it when I have donned my attentional hazmat suit.
Beff agrees that he’s intentionally engineered his account to maximize “memetic fitness”. Honestly, one of my main learnings about the algorithm is that, yeah, whether you mean to or not, either your message becomes a caricature of itself or you don’t get much reach. I’ve had this happen to me, too. So I’m not going to ding him too much for the Twitter messaging; I’m going to blame the incentive landscape induced by the platform (which is self-serving of me, because I’m working on systems that don’t have that sort of effect).
Instead, I’ll try to localize my criticisms to the parts of his message I’ve heard him deliver in speech, as on the MoZ podcast or on Lex Fridman. At this point I’ve spent a lot of time with his thoughts, and I’ll say first and foremost that they’re excellent; I’m grateful he exists and I hope he succeeds in his mission. (Note: as a founder I’m contractually obligated to wish him well, because building things that work is really fucking hard even when you’re supported and cheered on.)
Broadly, the thesis that Beff’s e/acc movement serves is “AI isn’t dangerous”, which is then wrapped in a bunch of physics babble, technology optimism, anti-regulation libertarianism, arguments about regulatory capture, and images that contain no plants, all buoyed by the now-vogue anti-wokeness wave.
For me, everything is a distraction other than AI safety and regulatory capture. It’s not clear that AI is naturally safe by constitution, and it’s obvious that a monopoly on it would be wonderful for anyone who could acquire such a thing.
To be honest, I’m deeply concerned about both. And thus I’m grateful that the AI safety movement has an excellent adversary in Beff and friends. There is a narrow critical path from here to a beautiful future; on one side of it is catastrophe and on the other is totalitarianism.
Never in the history of technology has there been such a powerful tool for the control of populations. Religion, newspapers, television, and social media all look on in envy at AI and its potential. This is the supporting argument Beff uses to swing the debate in favor of open-sourcing AI. And Beff would argue, compellingly I’ll add, that it’s through instilling the fear that these systems will be dangerous that someone could justify centralizing control of the technology. This is a good point.
On the other hand, it’s not clear that we shouldn’t fear it. The argument for AI safety normally goes something like this:
we will design ai that is more intelligent than us
we will grant them agency
they will use that intelligence and agency to act in ways that are surprising to us
“surprising”, as used here, doesn’t mean birthday party.
I view this series of inferences as essentially inarguable.
We’re already quite close to AI that’s smarter than us. For example, it’s quite helpful in writing code, and what it lacks in precision it’s currently making up for in speed.
Among the first things people did was try to grant them agency through things like LangChain. And in other domains, like autonomous embodied robotics, capability is improving a mile a minute.
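For concreteness, the “agency” being granted is usually nothing fancier than a loop like the one below: a minimal Python sketch of the general pattern, where llm, tools, and agent_loop are my placeholder names, not LangChain’s actual API.

# Minimal agent loop: the model proposes an action, we execute it,
# feed the observation back, and repeat. All names are illustrative.
def agent_loop(llm, tools, goal, max_steps=10):
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Ask the model what to do next, given everything so far.
        action = llm("\n".join(history) + "\nNext action (tool: input) or DONE:")
        if action.strip() == "DONE":
            break
        tool_name, _, tool_input = action.partition(":")
        # This is the line where text becomes action in the world.
        observation = tools[tool_name.strip()](tool_input.strip())
        history.append(f"Action: {action}")
        history.append(f"Observation: {observation}")
    return history

The safety question lives almost entirely in that tool-execution line; everything else is bookkeeping.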
If they don’t use their intelligence and agency in ways that surprise us, then they must not be very intelligent. The problem is that what they choose to do will very much depend on what prior preferences they’re endowed with. If those preferences don’t perfectly match ours, their processing velocity means that, for them, playing games against us will be like trying to outrun a tree.
This is where I’m tempted to drop into a discussion about paths to safe AI, but, wisely I think, Beff stays out of it. I don’t think Beff’s point is that AI safety isn’t important; he’s just representing the other side of the argument, the side concerned with the equally threatening costs that might come from placing unilateral control over those priors in the hands of one group.
So, fine: regulatory capture and AI monopoly = bad. Agreed.
But if you’re reading this and you’re particularly staunch on AI safety, you might be tempted to point out the incentives. After all, Beff and friends are all building technology that could be worth fortunes if AI isn’t regulated, and regulation threatens those returns. To that I say: this is just more of the Bootlegger & Baptist fallacy.
In the bootlegger and Baptist fallacy, you point out that there are some people who are sincere about a topic, and other people who have a horse in the race and therefore benefit from one of the sides winning. The bootleggers — those with something to gain — benefit from the work of the Baptists, whose intentions are unimpeachable and who can therefore act as the face of the movement.
What makes this a fallacy is that it’s true about every movement. Just try to think of a single thing you like about society that didn’t have some group it benefitted. In fact, it’s excellent when there exist two groups with almost equal but opposite benefits from the two possible outcomes, because they can balance one another out; we call this competition, and we like it. Historically, many important issues have not had this shape. Cigarettes and climate change are two oft-cited examples: who gets richer because fewer cigarettes are smoked? No one, and so the incentives are all skewed — you don’t have any anti-cigarette bootleggers to fund the anti-cigarette Baptists, and so they’re perennially outgunned. Not coincidentally, this is why governments create agencies for public health: the coordination costs for individuals are too high.
And this is especially true in the AI context. There are literally trillions of dollars to be made by AI companies, so of course there’s incentive and funding to build it, build consensus, and, yes, shape regulatory environments.
This brings up the next good point, which is that, if anything, we should be much more concerned about the asymmetric incentives in favor of building unsafe AI rather than safe AI. Safe AI could slow us down another 10 years and would require public-goods-style investments. But superhuman AI will happen independent of our public-interest investments — it’s totally clear that the incentives are such that we’re just going to keep sprinting toward the thing that looks like a cliff and hope our wings work by the time gravity sets in. What makes this rapid progress hard to challenge comes down to two facts (I only call information a fact when it’s not in my favor):
there aren’t any clearly threatening or dangerous AI incidents to point to (yet?)
AI will massively (to the power of Graham’s number) improve our lives if it’s aligned
The absence of major failure cases so far means that it’s hard to tell a compelling story other than through syllogisms that are only convincing to nerds (this nerd included). I mean, at this point in the automobile revolution at least a thousand people had been crushed in horrible accidents and workers were getting their arms ripped off by manufacturing equipment (at least, as used in this sentence, refers to a minimum quantity, not a lament).
This is contrasted with the humongous upside of building AI. I mean, we’re talking about reaching energy and actuation levels that will allow us to entirely rebuild cities in mere days (at no environmental cost). Everyone decides they want to remake New York, but this time hot pink, for 4 days? Done. Travel costs will be near 0. Medicine will be extremely personalized and affordable, and we’ll live much longer. If we also fix our permission systems, food insecurity will become an anachronism.
In fact, my guess is that the primary vector for political attack will be exactly this dimension: AI poses such a threat to so many people’s existing incomes, and offers so much opportunity for enrichment, that it can be used to terrify people about the imminent loss of their jobs.
In response to this you normally say something like, “Yeah, but every time this has happened in the past, new jobs have been created.” And I certainly agree with that. There will be new work; in the best outcome, most of our work will actually be to want things and to make up our minds about things. But doubtless the reskilling process will not be so simple, and if we know anything about people, it’s that they would prefer to keep the dog gnawing at their ankles rather than be surprised by that dreadful thing called change. Good luck to whoever has the job of combating those unions.
Somewhere near the beginning of this rant there was a mention of a movement called e/acc. Since then we’ve traversed the stochastic mountainsides of idea space, always predicting that next most likely token found just one step downhill. You might have noticed that most of these tokens have little to do with that movement. That’s because I feel much closer to the twin questions (how dangerous is decentralized AI? how dangerous is centralized AI?) and quite distant from the exact ways those have manifested as collectives. What’s great about the discussion landscape I see shaping up so far is that it seems quite sensitive to good arguments. That’s great news! It’s only possible due to the kind of audience that’s showing up to watch these things — collectively we maintain a fairly high bar, and that has influenced which memes have lift. I mean, for godssake, the primary countermovement justifies itself with (faulty, we’ll get to that) entropic arguments. Pretty cool!
This finally brings me to the less substantial disagreements I have with e/acc. First phlowers, then fysics.
E/acc has what they call a “cyberpunk” aesthetic. Basically, picture Blade Runner: dank, mostly nighttime, red city lights, no green, not a tree in sight. I guess all their food is grown hydroponically underground or something. They’ve done a great, though perhaps a bit too eager, job of visually distinguishing themselves. Ironically, I happen to know that many of the people involved in the movement are avid outdoors, live-off-the-land, wilds-of-nature type people.
I’m going to make up a thesis: I think it’s basically an attempt to signal “not associated with climate-concerned people”. Green implies climate, climate action requires government, and government is the enemy here. If no greenery is what people want, that’s fine, but, idk, it probably provides some useful fodder for their opposition when it comes time to point out that these are just city people with no appreciation for, nor interest in, the rest of the country, or whatever the people with the mics will say. Personally, it bums me out, because I really like plants.
In that sense, though, e/acc is kinda endearing. It’s weird, like it passed a bunch of filters it shouldn’t have. “Accelerationism” as a term is actually a communist idea: it aims to hasten the downfall of capitalism by speeding it up and making things bad enough for people that they develop a class consciousness and suddenly revolt, finally producing a dictatorship of the proletariat! Who knows which direction that psyop goes. It’s sorta cute, almost quaint, to see the term attached to a decidedly bottom-up, anti-government, pro-capitalism movement. In many ways it is unplanned, organic, ad hoc, unpolished, figure-it-out-as-we-go. Good signs, and fairly hard to fake those signals, too.
Some of that retrojustification, though, is a bit weird. The physics arguments in particular are just about nonsensical, and I don’t understand why they’re worth anyone’s time.
There’s this thread to the argument that goes something like, “entropy can only increase, and the universe favors entities that increase entropy the fastest, therefore we must increase entropy as quickly as possible!”
What’s weird is that it’s a transparently useless argument for directing action, and yet it’s used to try to point in a particular direction. As if one day a civil engineer will show up to work and just start slinging I-beams onto the build site without any plan, because “entropy, it must increase”. Yes, there are many ways to increase entropy, but the universe is very much not interested in how messy you can make your room, and much more interested in whether you can build a wind turbine that converts the predictable patterns of fluid flow into electricity, which can heat the engineer’s Cup of Noodles so that she can design a new and improved small modular reactor. I promise that most of the methods for accelerating entropy production don’t maximize its long-run integral, and the ones where we go extinct certainly don’t. The lioness pauses in the underbrush; as long as she remains utterly frozen she’s invisible to her prey. Beff Jezos can be heard in the distance: “GET MOVING! ENTROPY MUST BE MAXIMIZED!” Taking the safe path in the context of dangerous paths is entropy maximizing! For all e/acc’s appeals to evolutionary arguments, does it escape them that fear is adaptive (if perhaps oft overrepresented) as part of that wonderful and critical piece of free energy minimization called planning?
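To put that long-run-integral point in symbols (my notation, a sketch of the argument rather than anything e/acc has formalized): let \( \dot{S}_\pi(t) \) be the entropy production rate of a civilization following policy \( \pi \). The naive reading of the doctrine picks
\[
\pi_{\text{greedy}} = \arg\max_{\pi} \, \dot{S}_\pi(0)
\qquad \text{versus} \qquad
\pi_{\text{integral}} = \arg\max_{\pi} \int_{0}^{\infty} \dot{S}_\pi(t)\, dt .
\]
If a policy leads to extinction at some time \( T \), then \( \dot{S}_\pi(t) \) falls to the ambient baseline for all \( t > T \), so a cautious policy can dominate a greedy one on the integral even while losing at every instant. The lioness’s pause is integral-maximizing.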
If it’s news to you that you are a walking puddle, enveloped in epithelia, prying free energy from the heavens like so many diamonds, permitted to exist only due to the entropic emissions of the universe, then I guess I see why you think what he says is interesting! It is cool! But that knowledge is about as informative for our AI policy as for our content policies. Lesser minds would invoke category errors, but literally, as usual, it all comes down to energy and coordination. E/acc is team energy; EA is team coordination. In the end there’s a beautiful synthesis where it’s easy in retrospect to say that they’re both right. We already know, today, sitting here, that that will happen. We just don’t know how right each of the respective views will be. And that’s the debate, one that I’m looking forward to watching unfold, lucky enough to have front-row seats at the unending inflection point we today call human progress, which is happening so fast that soon enough the modifier ‘human’ might seem too narrow.
See you out there, in our pink city.