Getting flashbacks to the people who thought the GameStop guy was a leftist
I'm wondering if this might have stemmed from A) OpenAI making it a nightmare for him, B) feeling despondent about the case, or C) personal things unrelated to the lawsuit. Kind of like what happened with the Boeing whistleblower after he had been fighting them for years and Boeing retaliated against him and got away with it. I don't know if we'll ever know though.
Friends don't let friends OSINT
i can stop any time I want I swear
The youtube page you found is less talked about, though a reddit comment on one of them said "anyone else thinking burntbabylon is Luigi?". I will point out that the rest of his online presence doesn't really paint him as "anti tech" overall, but who can say.
apparently there was an imposter youtube channel too I missed
not sure what his official instagram is, but I saw a mention of the instagram account @nickakritas_ around the beginning of his channel (assuming it's his). didn't appear in the internet archive though.
also saw these twitter & telegram links to promote his channel, the twitter one was deleted or nuked (I use telegram to talk with friends who have it but the lack of content removal + terrible encryption means I don't touch unknown telegram links with a 10ft pole, so I have no idea what's in there):
I missed a couple videos which survived on the internet archive but I couldn't make it through 5 seconds of any of them. one of them ("How Humans Are Becoming Dumber") cites that tech priest guy Gwern Branwen and "Anti-Tech" was gone from the channel name by then. he changed the channel name a lot so maybe he veered away from it being an anti-tech channel?
edit: channel names were a little wrong, I put them in the parent comment
EDIT: this probably isn't him, but I'll leave it up. the real account appears to be /u/mister_cactus
Unsure where to put this or if it's even slightly relevant, but I've had some fun looking up the UH shooter guy.
I think I've found both his Reddit account and YouTube channel (it's been renamed a couple times). Kinda just wanted to see how much I could dig up for the hell of it. Big surprise that he's completely nuts
He got raked over the coals for this: https://www.reddit.com/r/collapse/comments/126vycx/why_scientists_cant_be_trusted/
https://api.pullpush.io/reddit/search/comment/?author=burntbabylon
here's my chain of reasoning to get to the youtube channel:
burntbabylon
his early channel had some thumbnails made for him by "bastizopilled", an ironic/unironic "bastizo futurist" who does interviews in a black mask with a gun on him. he leads right into a bunch of other groypers and the guy in the screenshot I posted below. kind of wonder if that "black mask with a gun" aesthetic influenced the clothes he brought to the shooting.
the channel names he used in 2023:
here's a big pile of crazy tags he wrote on one of those videos (were people still writing tags in their video descriptions in 2023?):
unabomber, kaczynski, ted kaczynski, unabomber cabin, kasinski, kazinski, industrial society and and its future, unabomber manifesto, the industrial revolution and its consequences, transhumanism, futurism, anprim, anarchoprimitivism, anarchism, leftism, liberalism, chad haag, nick akritas, gerbert johnson, hamza, anti tech collective, what did ted kaczynski believe, john doyle, hasanabi, self improvement, politics, jreg, philosophy, funny tiktok, kaczynski edit, ted kaczynski edit, zoomer, doomer, A.I. art, artifical intelligence, elon musk, AI art, return to tradition, embrace masculinity, reject modernity, reject modernity embrace masculinity, reject modernity embrace tradition, jReg, Greg Guevara, sam hyde, oversocialized, oversocialization, blackpilled, modernity, the industrial revolution, self improvement
edit again: holy shit these people all suck. assuming the youtube channel is the shooter, he's a friend-of-a-friend of this guy:
and if that's true, he'd be a friend-of-a-friend-of-a-friend of nick fuentes
I'm not super familiar with Lobsters but I love how they represent bans: https://lobste.rs/~SuddenBraveblock
I saw this linked in the weekly thread and thought it was about Godot at first, but I thought that was just me. Didn't expect to see 90% of the people here thought the same thing lol
edit: oh man, some of those comments. I still get culture shock from true believers, I forgot this probably got some attention on the orange site
Hidden horses is too good of a phrase to leave buried here
We lost "Mechanical Turk" as a descriptor for AI because it's literally the name of the service they use for labeling training data. "Actually Indians" is still on the table.
edit: context https://www.independent.co.uk/tech/chatgpt-david-mayer-name-glitch-ai-b2657197.html
Time for another round of Rothschild nutsos to come around now that ChatGPT can't say one of their names.
At first I was thinking, you know, if this was because of the GDPR's right-to-be-forgotten provisions or something, that might be a nice precedent. I would love to see a bunch of people hit AI companies with GDPR complaints and have them actually do something instead of denying their consent-violator-at-scale machine has any PII in it.
But honestly it's probably just because he has money
I think Sam Altman's sister accused him of doing this to her name a while ago too (semi-recent example). I don't think she was on a "don't generate these words ever" blacklist, but it seemed like she was erased from the training data and would only come up after a web search.
RationalWiki really hits that sweet spot where everybody hates it and you know that means it's doing something right:
From Prolewiki:
RationalWiki is an online encyclopedia created in 2007. Although it was created to debunk Conservapedia and Christian fundamentalism,[1] it is also very liberal and promotes anti-communist propaganda. It spreads imperialist lies about socialist states including the USSR[2] and Korea[3] while uncritically promoting narratives from the CIA and U.S. State Department.
From Conservapedia:
RationalWiki.org is largely a pro-SJW atheists website.
[ . . . ]
RationalWikians have become very angry and have displayed such behavior as using profanity and angrily typing in all cap letters when their ideas are questioned by others and/or concern trolls (see: Atheism and intolerance and Atheism and anger and Atheism and dogmatism and Atheism and profanity).[33]
From WikiSpooks (with RationalWiki's invitation for anyone to collaborate highlighted with an emotionally vulnerable red box for emphasis):
Although inviting readers to "register and engage in constructive dialogue", RationalWiki appears not to welcome essays critical of RationalWiki[3] or of certain official narratives. For example, it is dismissive of the Journal of 9/11 Studies, terming it, as of 2017, a "peer- crank-reviewed, online, open source pseudojournal".[4]
And a little bonus:
My site questions Darwinism but that's become quite mainstream. But my rationalwiki page has over 20 references to me being a creationist, and is tagged "pseudoscience." Untrue
I don't think the main concern is with the license. I'm more worried about the lack of open governance and Redis prioritizing their own functionality at the expense of others. An example is client-side caching in redis-py, https://github.com/redis/redis-py/blob/3d45064bb5d0b60d0d33360edff2697297303130/redis/connection.py#L792. I've tested it and it works just fine on Valkey 7.2, but there is a gate that checks whether the server is Redis and throws an exception if it isn't. I think this is the behavior that might spread.
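The gate being described is roughly this shape. This is a hypothetical sketch, not the actual redis-py code; the class and key names are made up for illustration:

```python
# Sketch of a vendor gate: the client keys on the server's vendor string
# instead of testing the capability itself, so a compatible fork
# (e.g. Valkey 7.2) gets rejected even though the feature works there.
# All names here are illustrative, not the real redis-py implementation.

class FeatureGatedClient:
    def __init__(self, server_info):
        # server_info stands in for whatever the client learns from the
        # server handshake (e.g. an INFO/HELLO response)
        self.server_info = server_info

    def enable_client_side_caching(self):
        # The gate: vendor-string check, not a capability check
        if self.server_info.get("server_name", "redis") != "redis":
            raise ValueError(
                "Client-side caching is only supported on Redis servers"
            )
        return True
```

A capability-based check (does the server actually speak the required protocol?) would let forks through; a vendor-string check like this is what locks them out.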
Jesus, that's nasty
That kind of reminds me of medical implant hacks. I think they're in a similar spot where we're just hoping no one is enough of an asshole to try it in public.
Like pacemaker vulnerabilities: https://www.engadget.com/2017-04-21-pacemaker-security-is-terrifying.html
caption: "AI is itself significantly accelerating AI progress"
wow I wonder how you came to that conclusion when the answers are written like a Fallout 4 dialogue tree
I've seen people defend these weird things as being "coping mechanisms." What kind of coping mechanism tells you to commit suicide (in like, at least two different cases I can think of off the top of my head) and tries to groom you?
Hi, guys. My name is Roy. And for the most evil invention in the world contest, I invented a child molesting robot. It is a robot designed to molest children.
You see, itās powered by solar rechargeable fuel cells and it costs pennies to manufacture. It can theoretically molest twice as many children as a human molester in, quite frankly, half the time.
At least The Rock's child molesting robot didn't require dedicated nuclear power plants
One of my favorite meme templates for all the text and images you can shove into it, but trying to explain why you have one saved on your desktop just makes you look like the Time Cube guy
I love the word cloud on the side. What is 6G doing there
Oh wow, Dorsey is the exact reason I didn't want to join it. Now that he jumped ship maybe I'll make an account finally
Honestly, what could he even be doing at Twitter in its current state? Besides I guess getting that bag before it goes up or down in flames
e: oh god it's a lot worse than just crypto people and Dorsey. Back to procrastinating
I know this shouldn't be surprising, but I still cannot believe people really bounce questions off LLMs like they're talking to a real person. https://ai.stackexchange.com/questions/47183/are-llms-unlikely-to-be-useful-to-generate-any-scientific-discovery
I have just read this paper: Ziwei Xu, Sanjay Jain, Mohan Kankanhalli, "Hallucination is Inevitable: An Innate Limitation of Large Language Models", submitted on 22 Jan 2024.
It says there is a ground truth ideal function that gives every possible true output/fact to any given input/question, and no matter how you train your model, there is always space for misapproximations coming from missing data to formulate, and the more complex the data, the larger the space for the model to hallucinate.
Then he immediately follows up with:
Then I started to discuss with o1. [ . . . ] It says yes.
Then I asked o1 [ . . . ], to which o1 says yes [ . . . ]. Then it says [ . . . ].
Then I asked o1 [ . . . ], to which it says yes too.
I'm not a teacher but I feel like my brain would explode if a student asked me to answer a question they arrived at after an LLM misled them on like 10 of their previous questions.
Cambridge Analytica even came back from the dead, so thatās still around.
(At least, I think? I'm not really sure what the surviving companies are like or what they were doing without Facebook's API)
Former staff from scandal-hit Cambridge Analytica (CA) have set up another data analysis company.
[Auspex International] was set up by Ahmed Al-Khatib, a former director of Emerdata.
I'm in the same boat. Markov chains are a lot of fun, but LLMs are way too formulaic. It's one of those things where AI bros will go, "Look, it's so good at poetry!!" but they have no taste and can't even tell that it sucks; LLMs just generate ABAB poems and getting anything else is like pulling teeth. It's a little more garbled and broken, but the output from an MCG is a lot more interesting in my experience. Interesting content that's a little rough around the edges always wins over smooth, featureless AI slop in my book.
slight tangent: I was interested in seeing how they'd work for open-ended text adventures a few years ago (back around GPT-2 and when AI Dungeon was launched), but the mystique did not last very long. Their output is awfully formulaic, and that has not changed at all in the years since. (of course, the tech optimist-goodthink way of thinking about this is "small LLMs are really good at creative writing for their size!")
I donāt think most people can even tell the difference between a lot of these models. There was a snake oil LLM (more snake oil than usual) called Reflection 70b, and people could not tell it was a placebo. They thought it was higher quality and invented reasons why that had to be true.
Orange site example:
Reddit:
For storytelling or creative writing, I would rather have the more interesting broken English output of a Markov chain generator, or maybe a tarot deck or D100 table. Markov chains are also genuinely great for random name generators. I've actually laughed at Markov chains before with friends when we throw a group chat into one and see what comes out. I can't imagine ever getting something like that from an LLM.
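For anyone who hasn't played with one: a word-level Markov chain generator is only a few lines. This is a generic sketch (not any particular library), using first-order word transitions:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each state (tuple of `order` consecutive words) to the words
    observed to follow it in the training text."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Random-walk the chain to produce `length` words."""
    rng = random.Random(seed)
    state = rng.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length - len(state)):
        followers = chain.get(state)
        if not followers:  # dead end (last words of the corpus): restart
            state = rng.choice(list(chain.keys()))
            followers = chain[state]
        word = rng.choice(followers)
        out.append(word)
        state = tuple(out[-len(state):])
    return " ".join(out)
```

Feed it a group chat export and `generate(build_chain(text))` produces exactly the kind of slightly-broken, weirdly funny output described above; raising `order` makes it more coherent but more prone to quoting the corpus verbatim.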