Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
I might be the only person here who thinks that the upcoming quantum bubble has the potential to deliver useful things (but boring useful things, and so harder to build hype on) but stuff like this particularly irritates me:
Quantum fucking ai? Motherfucker,
- You don't have ai, you have a chatbot
- You don't have a quantum computer, you have a tech demo for a single chip
- Even if you had both of those things, you wouldn't have "quantum ai"
- if you have a very specialist and probably wallet-vaporisingly expensive quantum computer, why the hell would anyone want to glue an idiot chatbot to it, instead of putting it in the hands of competent experts who could actually do useful stuff with it?
Best case scenario here is that this is how one department of Google gets money out of the other bits of Google, because the internal bean counters cannot control their fiscal sphincters when someone says "ai" to them.
Quantum computing reality vs quantum computing in pop culture and marketing follows precisely the same line as quantum physics reality vs popular quantum physics.
- Reality: Mostly boring multiplication of matrices, big engineering challenges, extremely interesting stuff if you're a nerd who loves the frontiers of human knowledge
- Cranks: Literally magic, AntMan Quantummania was a documentary, give us all money
Best case scenario here is that this is how one department of Google gets money out of the other bits of Google, because the internal bean counters cannot control their fiscal sphincters when someone says "ai" to them.
That's my hope too - every dollar spent on the technological dead-end of quantum is a dollar not spent on the planet-killing Torment Nexus of AI.
this is how one department head at Google gets more money for his compensation package out of the other bits of Google
New article from Axios: Publishers facing existential threat from AI, Cloudflare CEO says
Baldur Bjarnason has given his commentary:
Honestly, if search engine traffic is over, it might be time for blogs and blog software to begin to deny all robots by default
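For compliant crawlers, "deny all robots by default" is a one-file change - a minimal robots.txt sketch (the `archive.org_bot` user-agent is the one the Internet Archive is commonly reported to use; treat it as an assumption to verify before relying on it):

```text
# Block every crawler by default
User-agent: *
Disallow: /

# Optional carve-out for the Internet Archive's crawler:
# under the robots exclusion standard, a bot obeys its most
# specific matching group, and an empty Disallow allows everything.
User-agent: archive.org_bot
Disallow:
```

Of course, this only filters the well-behaved bots - the AI scrapers that prompted the suggestion largely ignore robots.txt altogether.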
Anyways, personal sidenote/prediction: I suspect the Internet Archive's gonna have a much harder time archiving blogs/websites going forward.
Up until this point, the Archive enjoyed easy access to large swathes of the 'Net - site owners had no real incentive to block new crawlers by default, but the prospect of getting onto search results gave them a strong incentive to actively welcome search engine robots, safe in the knowledge that they'd respect robots.txt and keep their server load to a minimum.
Thanks to the AI bubble and the AI crawlers it's unleashed upon the 'Net, that has changed significantly.
Now, allowing crawlers by default risks AI scraper bots descending upon your website and stealing everything that isn't nailed down, overloading your servers and attacking FOSS work in the process. And you can forget about reining them in with robots.txt - they'll just ignore it and steal anyways, they'll lie about who they are, they'll spam new scrapers when you block the old ones, they'll threaten to exclude you from search results, they'll try every dirty trick they can because these fucks feel entitled to steal your work and fundamentally do not respect you as a person.
Add in the fact that the main upside of allowing crawlers (turning up in search results) has been completely undermined by those very same AI corps, as "AI summaries" (like Google's) steal your traffic through stealing your work, and blocking all robots by default becomes the rational decision to make.
This all kinda goes without saying, but this change in Internet culture all but guarantees the Archive gets caught in the crossfire, crippling its efforts to preserve the web as site owners and bloggers alike treat any and all scrapers as guilty (of AI fuckery) until proven innocent, and the web becomes less open as a whole as people protect themselves from the AI robber barons.
On a wider front, I expect this will cripple any future attempts at making new search engines, too. In addition to AI making it piss-easy to spam search systems with SEO slop, any new start-ups in web search will struggle with quality websites blocking their crawlers by default, whilst slop and garbage will actively welcome their crawlers, leading to your search results inevitably being dogshit and nobody wanting to use your search engine.
FWIW, due to recent developments, I've found myself increasingly turning to non-search engine sources for reliable web links, such as Wikipedia source lists, blog posts, podcast notes or even Reddit. This almost feels like a return to the early days of the internet, just in reverse and - sadly - with little hope for improvement in the future.
Searching Reddit has really become standard practice for me, a testament to how inhuman the web as a whole has gotten. What a shame.
Sucks that a lot of reddit is also being botted. But yes, reddit still good. Still fucked that bots take a reddit post as input, rewrite it into llm garbage, and those then get a high google ranking, while google only lists one or two reddit pages.
I don't like that it's not open source, and there are opt-in AI features, but I can highly, highly recommend Kagi from a pure search result standpoint, and it's one of the only alternatives with its own search index.
(Give it a try - they've apparently just opened up their search for users without an account to try it out.)
Almost all the slop websites aren't even shown (or they're put in a "Listicles" section where they can be accessed but are not intrusive and do not look like proper results), and you can prioritize/deprioritize sites (for example, I have github/reddit/stackoverflow set to always show on top, and quora and pinterest to never show at all).
Oh, and they have a fediverse "lens" which actually manages to reliably search Lemmy.
This doesn't really address the future of crawling, just the "Google has gone to shit" part.
In other news, I got an "Is your website AI ready" e-mail from my website host. I think I'm in the market for a new website host.
"we set out to make the torment nexus, but all we accomplished is making the stupid faucet and now we can't turn it off and it's flooding the house." - Every AI company, probably.
Pre GPT data is going to be like the steel they fish up from before there were nuclear tests.
E: https://arstechnica.com/ai/2025/06/why-one-man-is-archiving-human-made-content-from-before-the-ai-explosion/ oh look, my obvious prediction was obvious.
Alright OpenAI, listen up. I've got a whole 250GB hard drive from 2007 full of the Star Wars/Transformers crossover stories I wrote at the time. I promise you it's AI-free and won't be available to train competing models. Bidding starts at seven billion dollars. I'll wait while you call the VCs.
Do you want shadowrunners to break into your house to steal your discs? Because this is how you get shadowrunners.
dark forest internet here we go!!!
deleted by creator
preprint, but this looks like it'll be making a splash soon
I wouldn't be shocked about it - the general throughline of "AI rots your brain", plus the ongoing discussion of AI in education, would give any shrewd politician an easy way to pull a Think Of The Children™ on the AI industry, with minimal risk of getting pushback.
deleted by creator
First confirmed openly Dark Enlightenment terrorist is a fact. (It is linked here directly to NRx, but DE is a bit broader than that, it isn't just NRx, and his other references seem to be more garden variety neo-nazi type (not that this kind of categorizing really matters)).
They have a badge now, JFC
I misinterpreted this reply as the guy in the post being hired as a police officer. Thank god.
Nope. That would be more immediately concerning but less dumb than the reality.
his other references seem to be more garden variety neo-nazi type
Also apparently pro LGBT neo-nazis, which I refuse to believe are not a parody. See this cursed screenshot:
Modern twitter is a parody of itself.
LGBT neo-nazis
That felt like a trollish misdirection to me tbh.
It's a thing, believe it or not.
The Nashville chud's handle is even "rohmstuff".
Me, a Nashvillian: I would like one bonaroo please
God: sorry, I'm fresh out of bonaroo, have some weird nazis instead
One of the pedo crew to boot apparently.
So us sneerclubbers correctly dismissed AI 2027 as bad scifi with a forecasting model basically amounting to "line goes up", but if you end up in any discussions with people that want more detail, titotal did a really detailed breakdown of why their model is bad, even given their assumptions and trying to model "line goes up": https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models
tl;dr: the AI 2027 model, regardless of inputs and current state, has task time horizons basically going to infinity at some near-future date because they set it up weird. Also, the authors make a lot of other questionable choices and have a lot of other red flags in their modeling. And the picture they had in their fancy graphical interactive webpage for fits of the task time horizon is unrelated to the model they actually used, and is missing some earlier points that make it look worse.
If the growth is superexponential, we make it so that each successive doubling takes 10% less time.
(From AI 2027, as quoted by titotal.)
This is an incredibly silly sentence and is certainly enough to determine the output of the entire model on its own. It necessarily implies that the predicted value becomes infinite in a finite amount of time, disregarding almost all other features of how it is calculated.
To elaborate, suppose we take as our "base model" any function f which has the property that lim_{t → ∞} f(t) = ∞. Now I define the concept of "super-f" function by saying that each subsequent block of "virtual time" as seen by f takes 10% less "real time" than the last. This will give us a function like g(t) = f(-log(1 - t)), obtained by inverting the exponential rate of convergence of a geometric series. Then g has a vertical asymptote to infinity regardless of what the function f is, simply because we have compressed an infinite amount of "virtual time" into a finite amount of "real time".
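The finite-time blowup is easy to check numerically - a minimal sketch of my own (not titotal's actual code): if each successive doubling of the horizon takes 10% less calendar time than the last, the calendar time for n doublings is a partial geometric sum bounded by 1/(1 − 0.9) = 10 first-doubling-lengths, so infinitely many doublings fit inside a finite interval.

```python
# If doubling k of the task-time horizon takes 0.9**k units of calendar
# time, the calendar time consumed by the first n doublings is a partial
# geometric sum. It never exceeds 1 / (1 - 0.9) = 10, so the horizon
# doubles infinitely often - i.e. diverges - before t = 10.

def calendar_time(n_doublings: int, first: float = 1.0, shrink: float = 0.9) -> float:
    """Calendar time used by the first n_doublings doublings."""
    return sum(first * shrink**k for k in range(n_doublings))

for n in (10, 100, 1000):
    print(n, calendar_time(n))  # partial sums creep up to, but never past, 10
```

No matter what the "base model" f predicts, gluing this schedule on top of it forces a vertical asymptote, which is titotal's point: the asymptote is an artifact of the setup, not of any input data.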
Yeah, AI 2027's model fails back-of-the-envelope sketches as soon as you try working out any features of it, which really draws into question the competency of its authors and everyone that has signal-boosted it. Like, they could have easily generated the same crit-hype bullshit with "just" an exponential model, but for whatever reason they went with this model. (They had a target date they wanted to hit? They correctly realized adding in extraneous details would wow more of their audience? They are incapable of translating their intuitions into math? All three?)
Good for him to try and convince the LW people that the math is wrong. Do think there is a bigger problem with all of this. Technological advancement doesn't follow exponential curves, it follows S-curves. (And the whole "the singularity is near" "achtually that is true, but the rate of those S-curves is in fact exponential" thing is just untestable unscientific hopeium, but it is odd the singularity people are now back to exponential curves for a specific tech.)
Also lol at the 2027 guys believing anything about how grok was created. Nice epistemology y'all got there, how's the Mars base?
Also lol at the 2027 guys believing anything about how grok was created.
Judging by various comments the AI 2027 authors have made, sucking up to the techbro side of the alt-right was in fact a major goal of AI 2027, and, worryingly, they seem to have succeeded somewhat (allegedly JD Vance has read AI 2027), but lol at the notion they could ever talk any of the techbro billionaires into accepting any meaningful regulation. They still don't understand their doomerism is free marketing hype for the techbros, not anything any of them are actually treating as meaningfully real.
Yeah, think that is prob also why Thiel supports Moldbug: not because he believes in what Moldbug says, but because Moldbug says things that are convenient for him if others believe them. (Even if Thiel prob believes a lot of the same things, looking at his anti-democracy stuff, and the "rape crisis is anti men" stuff (for which he apologized - wonder if he apologized for the apology now that the winds have seemingly changed).)
titotal??? I heard they were dead! (jk. why did they stop hanging here, I forget…)
We did make fun of titotal for the effort they put into meeting rationalists on their own terms and charitably addressing their arguments and, you know, being an EA themselves (albeit one of the saner ones)…
Ah, right. That. Reminds me of that old adage about monsters and abysses. "Fighting monsters and abyss staring is good and cool, actually. France is bacon." Something like that, don't fact check me.
Unilever are looking for an Ice Cream Head of Artificial Intelligence.
I think I have found a new favorite way to refer to true believers.
This role is responsible for the creation of a virtual AI Centre of Excellence that will drive the creation of an Enterprise-wide Autonomous AI platform. The platform will connect to all Ice Cream technology solutions providing an AI capability that can provide [blah blah blah…]
it's satire, right? brilliantly placed satire by a disgruntled hiring manager having one last laugh out the door, right? no one would seriously write this, right?
I mean it does return a 404 now.
maybe they filled that position already
AllTrails doing their part in the war on genAI by disappearing the people who would trust genAI: https://www.nationalobserver.com/2025/06/17/news/alltrails-ai-tool-search-rescue-members
Amazing. Can't wait for the doomers to claim that somehow this has enough intent to classify as murder. I wonder if they'll end up on one of the weirdly large number of "bad things that happen to people in the national parks" podcasts.
Darwin Award-as-a-service
Don't make me tap the sign:
Don't feed the bears!
My AllTrails told me bears keep eating his promptfondlers, so I asked how many promptfondlers he has, and he said he just goes to AllTrails and gets a new promptfondler afterwards, so I said it sounds like he's just feeding promptfondlers to bears, and then his parks service started crying.
OT: boss makes a dollar, I make a dime, that's why I listen to audiobooks on company time.
(Holy shit I should have got airpods a long time ago. But seriously, the jobs going great.)
Orange site being orange again… "Pwease don't hurt the fascists' feelings 🥺"
Irrelevant. Please stay on topic and refrain from personal attacks.
I think if someone writes a long rant about how Germany wasn't at fault for WW2 in a CoC for one of their projects, it's kinda relevant.
New lucidity post: https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/
The author is entertaining, and if you've not read them before, their past stuff is worth a look.
looking forward to mastodon awful dot systems
There should be a weekly periodical called Business Idiot.
Re-begun, the edit wars over EA have:
That hatchet job from Trace is continuing to have some legs, I see. Also a reread of it points out some unintentional comedy:
This is the sort of coordination that requires no conspiracy, no backroom dealing - though, as in any group, I'm sure some discussions go on…
Getting referenced in a thread on a different site talking about editing an article about themselves explicitly to make it sound more respectable and decent to be a member of their technofascist singularity cult diaspora. I'm sorry that your blogs aren't considered reliable sources in their own right and that the "heterodox" thinkers and researchers you extend so much grace to are, in fact, cranks.
And sure enough, just within the last day the user "Hand of Lixue" has rewritten large portions of the article to read more favorably to the rationalists.
The user was created earlier today as well. Two earlier updates from a non-account-holder may be from the same individual. I did a brief dig through the edit logs, but I'm not very practiced in Wikipedia auditing like this, so I likely missed things. Their first couple changes were supposedly justified by trying to maintain a neutral POV. By far the larger one was a "culling of excessive references", which includes removing basically all quotes from Cade Metz's work on Scott S and trimming various others to exclude the bit that says "the AI thing is a bit weird" or "now they mostly tell billionaires it's okay to be rich".
I suppose you could explain that on the talk page, if only you expressed it in acronyms for the benefit of the most pedantic nerds on the planet.
Also, not sure if there's anything here, but the Britannica page for Lixue suggests that there's no way in hell its hand doesn't have some serious CoIs.
Ed:
Also shout-out to the talk page, where the poster of our top-level sneer fodder defended himself by essentially arguing "I wasn't canvassing, I just asked if anyone wanted to rid me of this turbulent priest!"
also: lol @ good faith edits.
A glorious snippet:
The movement ~~connected to~~ attracted the attention of the founder culture of Silicon Valley ~~and~~ leading to many shared cultural shibboleths and obsessions, especially optimism about the ability of intelligent capitalists and technocrats to create widespread prosperity.

At first I was confused at what kind of moron would try using shibboleth positively, but it turns out it's just terribly misquoting a citation:

Rationalist culture - and its cultural shibboleths and obsessions - became inextricably intertwined with the founder culture of Silicon Valley as a whole, with its faith in intelligent creators who could figure out the tech, mental and physical alike, that could get us out of the mess of being human.
Also lol at insisting on "exonym" as descriptor for TESCREAL, removing Timnit Gebru and Émile P. Torres and the clear intention of criticism from the term; it doesn't really even make sense to use the acronym unless you're doing critical analysis of the movement(s). (Also removing mentions of the especially strong overlap between EA and rationalists.)
It's a bit of a hack job at making the page more biased, with a very thin veneer of still using the sources.
So many of those changes are just weird and petty, too. Like, I can't imagine a good reason to not reference Vitalik Buterin as "Ethereum Founder" rather than just a billionaire. I'm sure that I can level the same critique at some pages that are neutrally trying to meet Wikipedia's standards, but especially in this context it's pretty straightforward to see that it's an attempt to remove important context and accurate information that might make them look bad.
There might be enough point-and-laugh material to merit a post (also this came in at the tail end of the week's Stubsack).
The opening line of the "Beliefs" section of the Wikipedia article:
Rationalists are concerned with improving human reasoning, rationality, and decision-making.
No, they aren't.
Anyone who still believes this in the year Two Thousand Twenty Five is a cultist.
I am too tired to invent a snappier and funnier way of saying this.
I do think Ed is overly critical of the impact that AI hype has had on the job market - not because the tools are actually good enough to replace people, but because the business idiots who impact hiring believe they are. I think Brian Merchant had a piece not long ago talking about how mass layoffs may not be happening, but there's a definite slowdown in hiring, particularly for the kind of junior roles that we would expect to see impacted. I think this actually strengthens his overall argument, though, because the business idiots making those decisions are responding to the thoughtless coverage that so many journalists have given to the hype cycle, just as so many of the people who lost it all on FTX believed their credulous coverage of crypto. If we're going to have a dedicated professional/managerial class separate from the people who actually do things, then the work of journalists like this becomes one of their only connectors to the real world, just as it's the only connection that people with real jobs have to the arcane details of finance or the deep magic that makes the tech we all rely on function. By abdicating their responsibility to actually inform people in favor of uncritically repeating the claims of people trying to sell them something, they're actively contributing to all of it, and the harms are even farther-reaching than Ed writes here.
I don't like to speculate, but it seems tptacek is not happy about it for some reason: https://news.ycombinator.com/item?id=44291623
If tptacek weren't a chicken he'd go ask Ed directly. Ain't like he's hard to find.
Given the relative caliber of those two, I think this may be considered an attempted inducement to suicide by better writer. Not that I'm complaining, mind you.