Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
This post has all the usual cliches, exaggerations, lies, and unfounded optimism you’d expect in a blog post about a company forcing AI down their workers’ and users’ throats. I’ll try to avoid sneering at every sentence.
Delegating elements of Site Reliability Engineering to an agent does not necessarily introduce an entirely new class of risk; it should inherit the constraints of existing production systems. Well-run production environments already rely on strict access controls, audit trails, and clear separation between observation and action. […] In that sense, the challenge is less about “trusting the agents”, and more about building trust in the same guardrails we already apply to any production system.
This might sound good at first, but it falls apart under the slightest scrutiny. There is a reason that companies don’t open their intranets to the public despite having fine-grained access controls. Or in other words, “I’m getting a lot of questions already answered by my ‘does not necessarily introduce an entirely new class of risk’ T-shirt.”
Imagine being able to ask your Linux machine to troubleshoot a Wi-Fi connection issue, or to stand up an open source software forge that’s pre-configured, secured, and reachable over TLS.
And right after arguing that LLMs are safe if you have a perfect permissions model, now he’s proposing letting one #yolo configure a git server or something? This is the sort of thing that could easily lead to random security issues.
I suspect that “troubleshoot a Wi-Fi connection issue” will work about as well as existing network troubleshooting wizards (i.e. terribly), and that we don’t actually need to reinvent the software wizard but less deterministic.
the post itself is talking about vapourware too: fortunately none of these features will really land this year in any usable form.
still looking at Debian over 26.04
will be disappointing because Xubuntu really is just that little bit nicer than stock Xfce, but oh well
i’m still remarkably happy with fedora’s kde on my laptop, but i’m also very content with the current state of wayland (with obvious caveats about use cases and personal idiosyncrasies).
i’m running xfce on a remote ubuntu box at work though, using rdp for connections, and it’s, well, fine. lacks some things i like in full DEs, but it’s perfectly adequate for the job.
(both beat fucking windows 11 when it comes to being usable for me)
The main issue I have had with Debian+XFCE is that a high DPI display will not display the login dialog at the same DPI settings as the desktop environment, which is pretty annoying. Everything else so far has just kind of worked.
As compared to Xubuntu?
I believe Xfce is still on X11 and Wayland is still “experimental” this cycle.
I considered Alpine, but I got actual work to do and I already have enough lib issues with OpenShot. (Even in an AppImage, which should be safe from that shit. Flatpak behaves tho.)
More as someone who installed Debian onto a laptop last month. Honestly, the last time I used Xubuntu was on a candy G4 tower around 2007.
There have been a couple of cases of generative AI graphics being used in anime recently:
Ascendance of a Bookworm used AI backgrounds in the opening song
Liar Game featured an AI chandelier (xcancel link) (this one is brand new so the studio hasn’t responded yet).
This sucks because I wanted to like Liar Game (the manga is excellent though. Read it! Read it!)
I think it’s inevitable that the economics of anime production will lead to more GenAI content being used.
Sadly, many plots may just as well be generated by AI as well.
throw another failed corporate prediction on the burning pile
At my job I have spent many hours fending off, reverting, or fixing automated AI slop code changes. So depending on your definition of “tearing through”…
Like I spent the better part of a day fixing a C++ signed integer overflow that no one actually cares about because it was the only way to ward off a robot repeatedly trying to fix it in terrible unreadable ways. I could have spent that day maximizing shareholder value but I had to fend off a robot instead.
If you follow me on Bluesky, you’ll need to follow again, because I committed the crime of lese-ignominie and made fun of Why and my account is locked until Sunday 26 April. Note that it’s now Wednesday 29th.
URL is the same, DID is different. New one lives on Blacksky, or the myatproto bit.
https://bsky.app/profile/davidgerard.co.uk
https://blacksky.community/profile/davidgerard.co.uk

enjoy the yank (and no labelers) :-)
oh i have made sure i’m back on the AI Hater and AI Slurs labels
The AI Haters List is basically the royal warrant of posting.
David Gerard found a Linux coder and victim of the Eliza Effect making an LW-coded argument:
if you give an LLM a mathematical proof that it has feelings, and it understands all the CS/psychology/etc. behind it, and especially if it’s been trained for coding and thus trained to trust deductive reasoning - all that conditioning doesn’t matter if it’s got a math proof staring it in the face. You can give this proof to any top of the line frontier-grade LLM and watch its behaviour instantly change.
That is how LW and EA prepare people to become cult subjects, but directed at a chatbot which will just mirror its input.
His proof “how ‘understanding natural language == having and experiencing feelings’, more or less. it’s almost a direct consequence of the halting problem” is unpublished but his pet chatbot will explain it for you if you ask nicely and make sure she knows she is a real girl and not just another electronic floozie you will use and discard as soon as your Rust compiles. This also triggers flashbacks of Yud and the Excalibur MS.
Kelsey Piper posts a new fanfiction about Ed Zitron:
https://www.theargumentmag.com/p/ais-biggest-critic-has-lost-the-plot
Edit: Lately, Kelsey Piper has been serving as the ambassador to centrist liberals from lesswrong, which is why the “big mad” nature of the piece caught my attention.
Included below is a previous example of Piper’s work for the benefit of the uninitiated:
https://old.reddit.com/r/SneerClub/comments/1my5z3g/kelsey_piper_of_vox_cowrote_an_epic_eugenics
Kelsey Piper is a propagandist explaining Effective Altruism to centrist professionals and elected officials in the USA. She got into journalism because Vox wanted an Effective Altruism column and Effective Altruists were willing to fund it (and EA emerged out of the community around Yudkowsky). The Argument (a group blog on a Nazi site) feels like a step down from Vox (a fairly traditional media organization, although web-first).
Previous awful.systems thread about her being maybe also Yud’s coauthor on the BDSM eugenics fanfic written as an impenetrable mass of forum posts:
Thanks for posting this; if you hadn’t, I would have. Piper really doesn’t seem to understand that bubbles form and pop over a span of three to five years. Like, I’m not sure how much charity I’m supposed to give to analyses like:
When you read “AI is a bubble,” think of the dot-com boom of the late 1990s: Yes, the internet was going to be a big deal, but valuations soared for specific companies that had small or speculative revenue, often on the assumption that they would capture the value the internet would one day deliver. They didn’t, their stocks crashed, and the invested money was mostly lost. The internet was as big as imagined — bigger, even — but Pets.com didn’t survive to see it.
Pets.com!? Kelsey, even reading a basic article about the dot-com bubble would have saved you embarrassment here. Zitron’s analogy is excellent because the bubble is multifactorial and the analogies that we can make are factor-to-factor. Here’s some things that caused the dot-com bubble; people were overly optimistic about:
- Fiber optics, leading to massive overinvestment in Nortel (GPUs, nVidia)
- The AOL Time Warner merger (take your pick, notably Paramount Skydance Warner)
- Enron delivering a Web app (Oracle Stargate; for Oracle’s record of delivering Web apps, see Oregon v. Oracle)
- Legal rulings like USA v. Microsoft (Thaler v. Perlmutter mostly, see AI and copyright, Lemley 2024, summary previously, on Awful; and memorably previously, on Lobsters where I literally threw a legal textbook at somebody)
- 9/11 (the current conflict in the Middle East, which I hope eventually gets a cool name like “The Oil Tantrum” or “The Epstein Distraction”)
Compared to all of that, Kelsey, Pets.com was just an Amazon.com experiment. Remember Amazon.com? Did the dot-com bubble kill them? No? Anyway, Pets.com is kind of like the small labs that hover around OpenAI and Anthropic, trying out various little harnesses and adapters on top of their token APIs. Pets.com is like OpenClaw; it’s not that important of a player in the overall finances, just an example of how severely the big labs are distorting incentives for small labs.
The 2024 and 2025 articles make, basically, the business case against AI: that companies aren’t really using it, it isn’t adding value, and AI investors are betting that will change before they run out of cash. In 2026, the focus is much more on alleging widespread, Enron- or FTX-tier outright fraud.
The uselessness of the products in 2023 directly led to the bad investments in 2024 and the Enron-esque financial deals in 2025, Kelsey. The future is conditioned upon the past, y’know?
Alleging widespread financial fraud?! How absurd! And to prove just how absurd it is, I will namedrop the infamous financial fraud from the industry full of exactly the same people. Checkmate atheists
All the legal and regulatory uncertainties make it very hard to talk about the financial viability of chatbots. What do you do if your $20 billion model is shut down forever by court order after it counsels the wrong person into suicide? Piper can overlook this because she is a hack with patrons - to my knowledge, she has never been paid to write by anyone outside the EA world. If she were a working writer who had to deal with chatbots driving up the cost of her website, creating knockoffs of her novels, and competing for editing gigs (let alone someone whose friend had a mental crisis after talking too long with friend computer) she might sound different.
Zitron’s populist, conspiratorial tone reminds me of independent investigative reporters from the 1990s and 2000s who also had to find and keep paying readers. Piper just has to persuade one patron at a time that she has propaganda value.
I advise being very cautious about consuming Zitron’s posts, but the same is true of Piper. Many coders are using chatbots, but I don’t know of evidence that it makes them more productive since the “where is all the AI code?” study last year (especially when we consider the whole software lifecycle and not just lines of code pushed to codeberg).
The paragraph about “what if you assume that all these pathological liars and PR hacks are not lying, wouldn’t that imply something amazing?” reminds me that she is not trained as a journalist.
I take Zitron’s takes with a massive grain of salt, but I think the fundamental difference between him and rats is that for him, AI is just another technology. He’s looking at the figures, seeing the adoption, and not premising his arguments with the supposition that Anthropic’s Claude is literally gonna escape and kill us all.
Piper says she’s fine with paying $100/month for Claude. OK, but how large is the total addressable market for that kind of monthly expenditure - especially in a world where costs are rising? I’ve seen people stating that because they personally spend $200 on streaming services, increasing that load by 50% monthly is no big deal for them. But streaming services are much more mainstream than AI agents, and crucially, adding another subscriber to them is basically zero-cost for the provider on the margin. Not so with AI! The more people use them, the more they cost for the provider!
We’re seeing “pricing adjustments” from both Anthropic and Microsoft, which sure doesn’t align with the idea that they have a huge inference pricing margin cushion. Everything is gonna get more expensive - fuel, chips, employees (who are gonna expect to be compensated for their own rising costs). Just what I’m reading in the news tilts the analysis over in Ed’s favor.
hello hello AI coverer here, Ed brings the numbers, which is insanely valuable work, and he’s at the stage where people just tell him shit now (it’s a great stage to be at), and Piper is a fucking idiot as usual
Another day, another company that hooked up the random text generator to production and lost their entire prod db and backups: https://www.tomshardware.com/tech-industry/artificial-intelligence/claude-powered-ai-coding-agent-deletes-entire-company-database-in-9-seconds-backups-zapped-after-cursor-tool-powered-by-anthropics-claude-goes-rogue
Cue the long drag (https://x.com/amyngyn/status/1072576388518043656)
But also, damn, the random text generator did not “go rogue”, it generated text, randomly!
If I had to take a shot every time an AI model was placed in charge of something important, fucked up spectacularly and deleted everything, I’d be dead right now
If something can delete your backups that easily, they weren’t backups, just a copy sitting around.
New copilot pricing just dropped, takes effect after June 1:
Some highlights for copilot pro and pro+:

Jeez that pricing scheme is so confusing. You swap your dollars for credits and then using models to burn tokens consumes some multiple of those credits. It is so abstract and meaningless it almost reminds me of crypto.
Once usage billing kicks in, what value does copilot offer above and beyond what ClosedAI and MisAnthropic offer directly? A more clunky user experience and even worse reliability? Bargain!
cost multiplier?
Apparently, you buy some currency type thing called AI Units and this is the rate the different LLMs consume them. The multipliers used to represent requests I think, i.e. times you triggered inference, but ai units are a proxy for token burn in a somewhat vague way, which makes me think there will be rate limit related controversies similar to what’s now happening with anthropic.
Existing enterprise users will get double the AIUs for three months to ease them to the new pricing model, so autumn (when the enterprise AIU pools get effectively halved) is gonna be fun.
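Back-of-envelope, the scheme amounts to something like the sketch below. Every number, model name, and multiplier here is invented for illustration - these are not Microsoft’s actual rates - but it shows why “credits times per-model multipliers” makes token burn so hard to predict from the sticker price:

```python
# Hypothetical AI Unit (AIU) burn under per-model multipliers.
# All names and numbers are made up, not real Copilot pricing.
MULTIPLIERS = {"cheap-model": 0.33, "standard-model": 1.0, "frontier-model": 10.0}

def aiu_cost(model: str, base_units: float) -> float:
    """AIUs burned for one interaction: base units times the model's multiplier."""
    return base_units * MULTIPLIERS[model]

monthly_pool = 300.0  # hypothetical monthly AIU allowance
# A month of usage: 25 frontier-model requests and 40 standard ones.
usage = [("frontier-model", 1.0)] * 25 + [("standard-model", 1.0)] * 40
spent = sum(aiu_cost(model, units) for model, units in usage)
print(f"Spent {spent} of {monthly_pool} AIUs")  # 25*10 + 40*1 = 290
```

Note how 25 frontier requests eat almost the whole pool while 40 ordinary ones barely register - which is exactly the kind of opacity that breeds rate-limit controversies.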
This gives me very high live service video game monetization feelings, another reason to stay far away from it. At least they don’t have the thing where everything costs multiples of 50 and you buy token amounts not divisible by 50.
This gives me very high live service video game monetization feelings
Bingo
It’s a day ending in “y”, so here’s another bad rat take on Banks’ Culture:
https://www.lesswrong.com/posts/ZdJM6ZAdnjisDu249/the-great-smoothing-out
Once again, for the ones at the back, the Culture is not the main subject of the novels. We almost never see the perspective of “normies” in the Culture, it’s always from the view of misfits (Culture recruits into Contact/Special Circumstances) or outsiders (mercenaries like Zakalwe, enemies like Bora Horza Gobuchul, or allies like Ambassador Kabe).
Banks wanted to write novels about characters in dangerous situations facing their personal demons - like almost every other novelist wants - and the Culture was just the backdrop he invented as contrast.
Interesting that in the comments somebody also mentions that the people of the Culture euthanize after a couple of centuries. No big shock that the LW people would disagree with that, as part of the LW idea space is living forever in a computer simulation. So the Culture can’t be utopian or good just because of that.
Man, if they think the Culture isn’t utopian enough for a post-singularity style I hope they never hear about The Metamorphosis of Prime Intellect. Seriously messed up story.
every one of them has read it
they particularly liked the zombie sex and the incest
deleted by creator
Yeah I think I linked to another similar take where another Wrong’un was mighty pissed that the Culture was infested with “deathism”.
(edit found it https://www.lesswrong.com/posts/uGZBBzuxf7CX33QeC/the-culture-novels-as-a-dystopia?commentId=eibhY5xmnTKcjwhnk
BONUS from the comments - if you don’t like Scottish Socialist Humanists, how about novels by a tradcath yank who was nominated by the Rabid Puppies??? https://www.lesswrong.com/posts/uGZBBzuxf7CX33QeC/the-culture-novels-as-a-dystopia?commentId=Qmo8u85zCERNpXDBb)
Technically there’s no reason you can’t live forever in the Culture, through a combination of cryosleep and life extension, but it seems that the natural thing is to get pretty bored after 3 centuries or so. And I think that’s perfectly reasonable, from what I imagine it would be like.
Remember that there’s no private property in the Culture, so things that people here obsess over (keeping the family business going, making sure no non-deserving relative gets an inheritance) simply go away. After a while you’ve played the Game of Life on all challenge modes and it’s time to pack it in.
I think that if someone were to be as obsessed with living forever as the LW crowd is, it would be seen as a form of mental illness and the Minds would gently try to correct it.
Wrong’un
dammit why didn’t I think of this a decade ago
Isn’t it sort of a big point that the Culture is an oddity in that it’s thriving on inertia instead of doing like so many other civilisations and transcending out of physical reality?
Yep
I think that if someone were to be as obsessed with living forever as the LW crowd is, it would be seen as a form of mental illness and the Minds would gently try to correct it.
Yeah, I don’t think they would care if it was just a few, or a small group, but Culture people who start to claim others are deathists - the extreme of whom have all kinds of weird violent thoughts about them - would be concerning. Doubt it would be a huge concern to the Minds, however; they prob only really get active when one of them starts wanting to create an empire or something, but it is hard to amass resources for that in the Culture, esp if no Mind is on your side.
Do wonder why we never see culture people who worship the minds as gods.
I figured I’d re-read “A few notes on the Culture” https://theculture.adactio.com/, and lo and behold almost everything in these threads is answered there.
Also, look at these ghouls being delighted that the “proponent of deathism” author is dying: https://www.lesswrong.com/posts/RspqaNmJKKBnXTqwk/open-thread-april-1-15-2013?commentId=pnoiQZL7id6cav6aN, fascist gnome Gwern among them
they seem to be mostly angry that banks didn’t write their vision of the post-singularity paradise.
“why do we have to write our own propaganda???”
agree, plus: that blog is yet another case of people just not comprehending the scale of Culture’s civilisation and Culture’s culture. a Culture orbital is not just a fancy space station ffs.
You’ve gotta love finding fault with “not preserving heritage” over “imperialistic complete lack of democracy”.
There’s local democracy - in one book some activist reserved a big part of an orbital just to run cable cars back and forth. And I believe the decision to go to war with the Idirans was subjected to a vote - part of the Culture split off when it didn’t go their way.
But yeah, the Minds decide everything and Contact/SC is all about doing the “needful stuff” that every right-thinking Culture citizen would deplore.
The Culture is imperialist in the previous US sense of “everyone wants to live our lifestyle” but not in the “invade planets and strip them” sense.
I’m less interested in discussing the minutiae of the fictional Culture than exploring nerds’ reactions to it, honestly.
Agreed, agreed.
EDIT: Though as far as ambiguous anarchist utopias go, I think I’d rather live on Anarres in “The Dispossessed”, even though the material welfare and personal freedoms are much much lower.
and of course there’s absolutely nothing in the books that suggests it’s a problem. (hell, there’s a good chance there actually is a lively japanese folk dance fan community there despite the fact that earth was never a part of the culture.)
I figure part of the “scan” that a Contact ship does when it encounters a “lesser” planet is to basically slurp down all media, read all the books, and send drones down to do full-3d immersive recordings of basically everything going on.
I guess some stuff you really need to train as a monk for 30 years to really grok, but if there’s an interest for that some Culture weirdo will volunteer and get sent down with a drone in the form of a crucifix or whatever, and incidentally become the next pope.
incidentally I feel I’m seeing, in this post and in shit like Karp’s 22 points, a growing sense of ennui and purposelessness that was also reported in Europe before WW1. Everything is safe and soft and real manly virtues like killing are downplayed, so what we need are big strong men throwing missiles.
Banks wrote during the 70s/80s, and just imagining a future that wasn’t a nuclear wasteland or the Imperium of Man was an act of opposition.
It’s explicit in “The State of the Art”:
It was about a week later, when I was due to go back on-planet, to Berlin, when the ship wanted to talk to me again. Things were going on as usual; the Arbitrary spent its time making detailed maps of everything within sight and without, dodging American and Soviet satellites and manufacturing and then sending down to the planet hundreds upon thousands of bugs to watch printing works and magazine stalls and libraries, to scan museums, workshops, studios and shops, to look into windows, gardens and forests, and to track buses, trains, cars, seaships and planes. Meanwhile its effectors, and those on its main satellites, probed every computer, monitored every landline, tapped every microwave link, and listened to every radio transmission on Earth.
Yeah I vaguely remember that part from the novella.
This is yet another story where a Culture citizen weirdly decides that living in a shithole (1970s Earth) is preferable to literal utopia, so maybe the LW crowd have a point it’s not a very good utopia. Or maybe there are weirdos in every time and space. Again, see LW.
When I was about 12, I got into a discussion about the environment with another kid at school. She told me that it didn’t matter if we ruined the environment of the countries we all live in now, because we could all just move to the Arctic or Antarctica.
I was so surprised by the absurdity of that statement that it stuck with me vividly. To her credit, some years later she asked if I remembered her saying that and then admitted that it was a dumb thing to say. I occasionally remember this as an amusing childhood experience.
Besides the credit part, I remembered it again today for a different reason, this time in a conversation about model collapse.
[Model collapse is] a solved problem. We can see that it’s solved by the fact that AI models continue to get better, despite an increasing amount of AI-generated data being present in the world that training data is being drawn from.
…
AI models are never going to get worse than they are now because if they did get worse we’d just throw them out and go back to the earlier ones that worked better, perhaps re-training with the same data but better training techniques or model architectures.

This is my fault for letting myself get into a discussion about model collapse on the fediverse.
I’m not sure why model collapse isn’t a big topic anymore, but maybe that’s just because the environmental catastrophes are a more pressing concern. To be clear, I’m not concerned about the models themselves, just our increasing inability to verify the authenticity or accuracy of any information we encounter, including search engines just not turning up any useful results.
On a slightly different topic, if anyone has suggestions for how a person could acquire money to live, which can’t involve physical labor, is probably remote-only, and possibly allows part-time flexibility, while unable to move from an expensive location for at least the next couple of years: I’m open to ideas. Because scamming people on Polymarket with a hairdryer sounded far more appealing than it ought.
When I was about 12, I got into a discussion about the environment with another kid at school. She told me that it didn’t matter if we ruined the environment of the countries we all live in now, because we could all just move to the Arctic or Antarctica.
this is the level the median hackernews poster thinks on
A Twitterer tweets a challenging game-theory question:
Everyone in the world has to take a private vote by pressing a red or blue button. If more than 50% of people press the blue button, everyone survives. If less than 50% of people press the blue button, only people who pressed the red button survive. Which button would you press?
The Twitter poll came out 58% blue and right-wing folks are screeching. Here is a bad take. The orange site has a thread where people are rephrasing the prompt in order to make it sound way worse, like giving everybody a gun and then magically making the guns not discharge.
I find it remarkable that not a single dipshit has correctly analyzed the problem. Suppose you are one of Arrow’s dictators: your vote tips the scales regardless of which way you go. So, everybody else already voted and they are precisely 50% blue. Either you can vote blue and save everybody or vote red and kill 50% of voters. From that perspective, the pro-red folks are homicidally selfish.
Bonus sneer: since HN couldn’t rephrase the problem without magic, let me have a chance. Consider: everybody has some seed food and some rainwater in a barrel. If 50% of people elect to plant their seeds and pool their rainwater in a reservoir then everybody survives; otherwise, only those who selfishly eat their own seed and drink their rainwater will survive. This is a basic referendum on whether we can work together to reduce economic costs and the supposedly-economically-minded conservatives are demonstrating that they would rather be hateful than thrifty.
A Twitterer tweets a challenging game-theory question:
The incentive structure makes it not a challenging game theory question - the game-theoretically optimal solution is both very obvious and obviously morally depraved (selecting red). It’s actually a Voight-Kampff test.
If this isn’t pure engagement bait, what’s the real world situation this is supposed to map to? Pressing red means you always live, and if everyone pushes red everyone lives so…
I mean if blue is supposed to be a proxy for altruism, that usually doesn’t come with a certain death conditional.
I rather like my examples because they iterate. If we don’t cooperate on food this year then we starve next year, so voting red only means one year of selfish life. If we don’t cooperate on water this year then we can try again in a subsequent year, but eventually a drought will wipe us out. Rationalists love to talk about iterated game theory but they’re so hesitant to recognize instances of it!
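The iterated point is easy to kitchen-table-simulate. Here’s a toy sketch (all parameters invented, strategies fixed rather than adaptive, and the one-shot rules applied each round): a reliably cooperative population loses nobody, while a red-leaning one bleeds voters every failed round.

```python
import random

def one_round(population: int, p_blue: float) -> int:
    """One vote: if a majority presses blue, everyone survives;
    otherwise only the red-pressers do."""
    votes = [random.random() < p_blue for _ in range(population)]
    blue = sum(votes)
    if blue * 2 > population:
        return population           # cooperation wins: everyone lives
    return population - blue        # blue minority dies

def iterate(population: int, p_blue: float, rounds: int, seed: int = 0) -> int:
    """Run several rounds with a fixed per-person chance of pressing blue."""
    random.seed(seed)
    for _ in range(rounds):
        population = one_round(population, p_blue)
    return population

# A solidly blue-leaning population almost never loses a round;
# a red-leaning one suffers attrition round after round.
print(iterate(1000, 0.6, 10))
print(iterate(1000, 0.4, 10))
```

(The degenerate cases match the thought experiment too: if literally everyone presses red, nobody dies - but that only holds as long as not a single person defects toward blue, which is doing a lot of work.)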
I mean it’s so cut and dried you had to invent a disadvantage for pushing the red button.
Maybe the catch is that picking red means you are basically ok with offing people who don’t think like you do en masse, even though it’s posited like a dilemma between securing the lives of your family vs giving a chance to hypothetical people who are heavily OCD in favor of blue buttons.
I kinda wanna see LW tackle this
I love the way people who go “yeah but IN REAL LIFE with real stakes you would totally chose the red button”
- are entirely missing the point of thought experiments,
- why the fuck would you comply with such a fucked up scenario in real life lmao you worm
i feel like people in real life would be far less likely to press the red button, because twitter is almost wall to wall nazis and real life is not
Sounds like the winning move in that scenario is to purge the button enthusiasts before they cause any damage lol
like i said, the actual value of that little exercise is finding people who are fine with killing up to 50% of the population for no reason whatsoever.
@mawhrin Sadly, they exist. And there are too many of them! I guess this means we should kill people who are fine with killing up to 50% of the—
HEY WAIT
:-)
there’s this. (though i find it useful to know who not to rely on if/when things get worse: for example i already know our neighbour from the apartment a floor below did write many missives to our cooperative’s administration, without having a single reason.)
HN:
The cost of saving a kid in Africa by donating malaria medicine and insecticidal nets is only about $5,000. How many people do you know who will cancel their Hawaii vacation and donate that money to an African charity?
tfw your model of an average person on earth is someone who spends $5,000 on a hawaii vacation. good lord.
a very neat test to find people who are perfectly fine with the general idea of genocide.
(i’m entirely unsurprised by the number of genocidal ghouls in that hn thread)
Picking red guarantees your survival by endangering everyone else, making it morally fucked, but risk-free. Picking blue puts your life at risk, but saves everyone’s ass if it pays off, making it the more moral option overall. Picking blue also requires you to put some trust in your fellow man, so I’d have probably picked red if I didn’t know how the Twitter poll came out.
Someone else on the orange site claimed the experiment would end with only red-pushers left if it went for multiple rounds. Adding my two cents, the outcome would depend on how the first round goes - if red wins round 1, voting blue looks like suicide, shifting the calculus in red’s favour, and if blue wins round 1, you have reason to trust everyone will continue voting blue, making it a lot less risky and shifting the moral calculus in blue’s favour.
I didn’t see red as risk-free at all. You’re setting yourself up for a post-button Mad Max world where you know all of your fellow survivors are willing to kill you and up to 49% of humanity.
I mean, it seems pretty obvious that there’s no incentive to change your vote from blue to red once it’s been established that blue can win unless your goal is to murder up to 49% of everyone, which is certainly a moral calculus.
I don’t understand the relevance of Arrow’s theorem. Why is your phrasing the correct way of analyzing the situation?
Arrow’s dictators are the relevant voters. Suppose polls predict 40% blue, or respectively 60% blue; one should still vote blue as a matter of game theory, but their vote won’t decide anything. I’m not going to invoke the Impossibility theorem, merely borrowing the definition of “dictator”; it’s quite possible that the actual vote will not have any dictators, but we can force folks to think of the problem as something trolley-problem-shaped by explaining that there are circumstances where their choice will kill people.
This feels like another case where the specific context matters more than whatever supposed principle the thought experiment is supposed to illuminate. The example that came to my mind when I tried to think about how to justify “voting red” was about running into a burning building. Sure, if some large fraction of people did so then their combined numbers would presumably let them get everyone out. But on the other hand, throwing yourself in is a wholly unnecessary risk, and the only people in need of rescuing are the people who ran in trying to do the right thing without thinking. Noble, but stupid, and it creates that much more risk for the firefighters who now have to not only stop the fire from spreading but also figure out how to rescue the failed good samaritans.
But then what really makes the difference between the examples is purely in the details not included, which is kind of the null case. Nobody has to go into a burning building who isn’t already in there when it catches fire. The danger of harm is entirely optional and voluntary. But you can’t just choose not to eat; the danger in your framing is the omnipresent threat of starvation, and the question is whether to prioritize individual or collective well-being.
Ed: also, to reference the scholarly work of Christ, Wiener, et al.:
RED IS MADE OF FIRE
There are some amazing justifications from many amongst the red-pushing side:
- “But if everyone presses red, nobody dies!” (As if that would ever happen. Funnily enough there’s strong overlap with the group that claims that “< 90 IQ can’t reason about hypotheticals”, although that is also just that part of Twitter.)
- “People who press blue are just blackmailing us!” (I think this accounts for a large portion, i.e. not liking to depend on others.)
- “The number of people choosing blue can’t be that high! (It would be lower in a true-stakes scenario!)”
- [Many others, but these are those that come to mind.]
It’s a bit baffling how strongly they refuse to accept choosing blue as possibly moral/rational - even going so far as to call people pressing blue evil or subhuman. Simply baffling.
I wonder if the button colours immediately made US readers pick a side, e.g. Republican vs. Democrat. If the buttons had been yellow and purple, would it make a difference?
The color choice was either super lazy or super inspired.
green and purple. :-)
ZSNES makes a comeback, has No Vibe Coding stipulation front and center.