WTF, they MUST KNOW which ones have shitty microphones. F***, they have never asked, “Was it painful to shout your order at someone who may or may not be trying?” And the screen that shows you what the human they paid as little as legally allowed has transcribed is broken half the time.
They just want to make an economy they don’t have to pay anyone to profit from. That’s why slavery became Jim Crow became migrant labor, and with modernity came work-visa servitude to exploit high-skilled laborers.
The owners will make sure they always have concierge service with human beings as part of upgraded service, like they do now with concierge medicine. They never personally suffer through approvals for care. They profit from denying their livestock’s care.
Meanwhile we, their capital-battery livestock property, will be yelling at robots about refilling our prescriptions as they hallucinate and start singing happy birthday to us.
We could fight back, but that would require fighting the right war against the right people and not letting them distract us with subordinate culture battles against one another. Those are booby traps laid between us and them by them.
Only one man, a traitor to his own class no less, has dealt them so much as a glancing blow, while we battle one another over one of the dozens of social wedges the owners stoke through their for-profit megaphones. “Women hate men! Christians hate atheists! Poor hate more poor! Terfs hate trans! Color hate color! 2nd Gen immigrants hate 1st Gen immigrants!” On and on and on and on, as we ALL suffer less housing, less food, less basic needs being met. Stop it. Common enemy. Meaningful Shareholders.
And if you think your little 401k makes you a meaningful shareholder, please just go sit down and have a juice box, the situation is beyond you and you either can’t or refuse to understand it.
“In this company we’re all like family, you don’t have to worry about anything.”
“You want 15 an hour? A machine could do your job!”
So that was a fucking lie.
I mean, I don’t know how it is where you live, but here order-taking has been 99% supplanted by touch screens (without AI). So yeah, a machine can do that job.
Current AI is just going to be used to further disenfranchise citizens from reality. It’s going to be used to spread propaganda and create noise so that you can’t tell what is true and what is not anymore.
We already see people like Elon using it in this way.
McDonald’s removes AI drive-throughs “after order errors” (read: because they aren’t generating increased profits). Schools, doctor’s offices, and customer support services will continue to use them, because reducing quality of service appears to have no impact on private profit margins.
For those interested, here’s the link to that news story from last June: https://www.bbc.com/news/articles/c722gne7qngo
Healthcare. My god they want to use it for medicine.
Machine Learning is awesome for medicine. When they run your genetic sequence and then say “we should check for this weird genetic illness that very few people have, because it’s likely you’ll have it,” that comes from Machine Learning algorithms finding patterns in the old patient data we feed them.
Machine Learning is great for finding discrepancies in big data sets, like statistics of illnesses.
Machine Learning (AI) is incapable of making good decisions based on that statistical analysis though, which is why it’s still a horrible idea to totally automate medicine.
It also makes tons of mistakes and false positives.
There’s a right way to use it, and the wrong way is using proprietary algorithms that haven’t been published openly and reviewed by the government and experts. The right way also includes failsafes to override the decisions the algorithms make, in recognition that they often make terrible mistakes that disproportionately harm minorities.
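A minimal sketch of what that right way could look like (all of the data, feature names, and thresholds here are made up): the model only flags patients for human review, and a clinician can always override it.

```python
# Minimal sketch, assuming hypothetical patient data. The model only
# flags patients for human review; it never diagnoses on its own.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historic records: three invented markers and a label.
records = pd.DataFrame({
    "marker_a": [0.1, 0.9, 0.8, 0.2, 0.7, 0.1],
    "marker_b": [0.3, 0.8, 0.9, 0.1, 0.6, 0.2],
    "marker_c": [0.2, 0.7, 0.6, 0.3, 0.9, 0.1],
    "has_rare_condition": [0, 1, 1, 0, 1, 0],
})
X = records[["marker_a", "marker_b", "marker_c"]]
y = records["has_rare_condition"]

model = LogisticRegression().fit(X, y)

def screen(patient_features, flag_threshold=0.2):
    """Return a recommendation for a clinician, never a diagnosis."""
    risk = model.predict_proba([patient_features])[0][1]
    if risk >= flag_threshold:
        return f"FLAG for clinician review (risk ~{risk:.0%})"
    return "no flag; routine care"

# The failsafe: a human can overrule the flag in either direction.
print(screen([0.85, 0.75, 0.7]))
```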
Not a good argument. Applying a specific technology in a specific setting does not invalidate its use or power in other settings. It also doesn’t tell you how good or bad an entire branch of technology is.
It’s like saying “fuck tools”, because someone tried to loosen a screw with a hammer.
Tbh if I told half the doctors and top scientists in the world to take my burger order, or flip the patty, they’d fall apart and fuck it up. It’s apples and oranges.
Assuming you taught them how to enter orders into the till (the AI was “trained” on how to input orders, let’s compare apples to apples here) no, they wouldn’t fuck it up. They would be slower than a regular employee but they wouldn’t fuck up what people wanted.
Oh, and if they weren’t sure for some reason they would ask somebody for help instead of making shit up.
I mean they likely would, because employees regularly fuck up my order. I don’t really go to fast food anymore, but when I do, it’s almost inevitable that there’s at least one minor fuck-up, even when I try to be very, very clear.
I do my best to be one of those people who is clear and concise and says the items exactly as they are listed on the menu, but somehow I still end up with mistakes in my order pretty regularly when I do go.
And do you think those people fucking up your order would do well if you put them in an education, research, or any other high-stakes position?
If given proper education and training? Yeah, sure; even the stupidest people you know are capable of learning at the end of the day. But most people don’t have the means, and they are increasingly discouraged from even trying, since we constantly hear about people with expensive high-end degrees ultimately just starting at the bottom like everybody else.
So you don’t think Doctors or Scientists could take orders correctly, but you do think the people fucking up your orders could do well as a Doctor or Scientist. I can only conclude from this that you think order taking is more complicated than Medicine or Science.
My point was everyone fucks up the orders, regardless of knowledge. You’re being asked to work at an incredible pace during rush hour, and you do the same thing hundreds of times per day. You’re going to default to whatever is most common, purely out of muscle memory, not lack of knowledge.
Like when I ask for a Quarter Pounder Deluxe, which is supposed to come with the tomatoes and lettuce, but I end up getting just a Quarter Pounder even though the receipt says Deluxe. They aren’t stupid; it’s just that 99% of the time it’s the plain Quarter Pounder, and out of muscle memory they didn’t even realize they fucked it up.
Science research and medical practice have some routines, sure, but not to the extent of fast-food orders, where 90% of your day is mindless repetition.
AI or someone muscle-memorying a mistake, my order was messed up all the same.
Considering they already had the AI in place, keeping it running is less expensive than having a human do it. The fact that they have decided to do away with the AI entirely tells me it was making far more mistakes than a human does.
In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate.
But that’s not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is “companies will buy our products so they can do more with less.” It’s not “business customers will buy our products so their products will cost more to make, but will be of higher quality.”
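A minimal sketch of what that double-read flow looks like (every name here is hypothetical): the human reads first, the AI reads independently, and disagreement buys a closer look instead of saving time.

```python
# Hypothetical double-read workflow: AI disagreement adds scrutiny;
# it never replaces the radiologist's first read.
def request_second_human_read(xray, first_read, ai_opinion):
    # Stub: in practice this would queue another radiologist's review.
    return f"escalated: human said {first_read!r}, AI said {ai_opinion!r}"

def double_read(xray, radiologist_read, ai_model):
    ai_read = ai_model(xray)      # AI reads independently, after the human
    if ai_read == radiologist_read:
        return radiologist_read   # concordant: sign off as usual
    # Discordant: take a closer look. This costs MORE time, by design.
    return request_second_human_read(xray, radiologist_read, ai_read)

# Toy usage with a stand-in "model":
print(double_read("xray-001", "no fracture", lambda _: "hairline fracture"))
```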
AI tools like this should really be viewed as a calculator. Helpful for speeding up analysis, but you still require an expert to sign off.
Honestly anything they are used for should be validated by someone with a brain.
A good brain or just any brain?
But that’s exactly what’s being said. Hire one person to sign off on radiology AI doing the work of ten doctors, badly.
Very much so. As a nurse the AI components I like are things that bring my attention to critical results (and combinations of results) faster. So if my tech gets vitals and the blood pressure is low and the heart rate is high and they’re running a temperature, I want it to call both me and the rapid response nurse right away and we can all sort out whether it’s sepsis or not when we get to the room together. I DON’T want it to be making decisions for me. I just want some extra heads up here and there.
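Something like this at its simplest, as a sketch (the cutoffs are made up, not clinical guidance):

```python
# Hypothetical combined-vitals page: thresholds are illustrative only.
def needs_rapid_response(systolic_bp: int, heart_rate: int, temp_c: float) -> bool:
    return systolic_bp < 90 and heart_rate > 110 and temp_c > 38.0

if needs_rapid_response(systolic_bp=84, heart_rate=121, temp_c=38.6):
    print("page nurse + rapid response; humans sort out sepsis at the bedside")
```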
You don’t need AI for this, and it’s probably not using “AI”.
Also, in other countries there is no bullshit separation between nurses and “techs”.
What they’re describing is the kind of thing where the “last-gen” iteration/definition of AI, as in pretrained neural networks, is very applicable: taking in various vitals as inputs and outputting a value for whether it should alarm. For simple things you don’t need any of that, but if you want to detect more subtle signs and give an early warning, it can be really difficult to manually write logic for that, while machine learning can potentially catch cases you didn’t even think of.
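A minimal sketch of that flavor of model (the data and numbers are invented): a tiny neural net mapping vitals to an alarm probability that pages humans rather than deciding anything.

```python
# Minimal sketch, assuming made-up vitals history: a small neural net
# mapping vitals to an alarm probability, the "early warning" flavor.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Columns: systolic BP, heart rate, temperature. Label 1 marks
# (hypothetical) patients who deteriorated soon after.
vitals = np.array([[118, 72, 36.8], [86, 124, 38.9], [102, 95, 38.1],
                   [124, 66, 36.5], [90, 118, 38.4], [115, 80, 37.0]])
deteriorated = np.array([0, 1, 1, 0, 1, 0])

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                      random_state=0).fit(vitals, deteriorated)

# Output is a heads-up probability, not a decision:
risk = model.predict_proba([[92, 115, 38.2]])[0][1]
if risk > 0.5:
    print(f"early-warning page (risk ~{risk:.0%}); humans decide at bedside")
```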
“Also, in other countries there is no bullshit separation between nurses and ‘techs’.”
do you want a sticker? ⭐
Hey, do you want to go fuck yourself?
I work in the hospital world and tele techs are working harder than nurses 99% of the time. So what job title do you have that makes you feel special? ⭐🖕⭐
In the hospital world where I work, there are nurses and doctors delivering care. No need for a thousand bullshit sub-jobs aimed at cutting wages and making patient care worse.
So there IS a difference between nurses and techs there.
Ideally, yeah: people would review and decide first, then check whether the AI opinion concurs.
We all know that’s just not how things go in a professional setting.
Anyone, including me, is just going to skip to the end, see what the AI says, and consider whether it’s reasonable. Then spend the allotted time goofing off.
Obviously this is not how things ought to be, but it’s how things have been every time some new tech improves productivity.
I mean… duh? The purpose of an LLM is to map words to meanings… to derive what a human intends from what they say. That’s it. That’s all.
It’s not a logic tool or a fact regurgitator. It’s a context interpretation engine.
The real flaw is that people expect that because it can sometimes (more than past attempts) understand what you mean, it is capable of reasoning.
Not even that. LLMs have no concept of meaning or understanding. What they do in essence is space filling based on previously trained patterns.
Like showing a bunch of shapes to someone, then drawing a few lines and asking them to complete the shape. And all the shapes are lamp posts but you haven’t told them that and they have no idea what a lamp post is. They will just produce results like the shapes you’ve shown them, which generally end up looking like lamp posts.
Except the “shape” in this case is a sentence or poem or self-insert erotic fan fiction, none of which an LLM “understands”; it just matches the shape of what’s been written so far with previous patterns and extrapolates.
Well yes… I think that’s essentially what I’m saying.
It’s debatable whether our own brains really operate any differently. For instance, if I say the word “lamppost”, your brain determines the meaning of that word based on the context of my other words around “lamppost” and also all of your past experiences that are connected with that word - because we use past context to interpret present experience.
In an abstract, nontechnical way, training a machine learning model on a corpus of data is sort of like trying to give it enough context to interpret new inputs in an expected/useful way. In the case of LLMs, it’s an attempt to link the use of words and phrases with contextual meanings so that a computer system can interact with natural human language (rather than specifically prepared and formatted language like programming).
It’s all just statistics though. The interpretation is based on ingestion of lots of contextual uses. It can’t really understand… it has nothing to understand with. All it can do is associate newly input words with generalized contextual meanings based on probabilities.
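A toy illustration of that “just statistics” point (a bigram counter, nowhere near a real LLM’s scale, but the same shape of idea): the “model” only knows which word tended to follow which.

```python
# Toy bigram "language model": pure pattern statistics, no meaning.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Pick the statistically most common continuation; no understanding.
    return follows[word].most_common(1)[0][0]

w = "the"
out = [w]
for _ in range(5):
    w = next_word(w)
    out.append(w)
print(" ".join(out))  # e.g. "the cat sat on the cat"
```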
I wish you’d talked more about how we humans work. We are at the mercy of pattern recognition. Even when we try not to be.
When “you” decide to pick up an apple it’s about to be in your hand by the time your software has caught up with the hardware. Then your brain tells “you” a story about why you picked up the apple.
I really don’t think that is always true. You should see me going back and forth in the kitchen trying to decide what to eat 😅
“I mean… duh?”
My same reaction, but scientific, peer-reviewed and published studies are very important if e.g. we want to stop our judicial systems from implementing LLM AI
plenty of people can’t reason either. the current state of AI is closer to us than we’d like to admit.
That’s just false. People are all capable of reasoning; it’s just that plenty of them reach terribly wrong conclusions when they do, often because they’re not “good” at reasoning. But they’re still able to do it, unlike AI (at least for now).
I wouldn’t put those people in charge of anything important, either.
they already are
Idk why you’re being downvoted. That is absolutely a true statement. But in my defense, I didn’t vote for them, and as I said I would not put them in charge.
They’re not federating for me so whatever, but look where we are.
I’m kind of defending AI (but not really) on a community called Fuck_AI.
DAE people are really stupid? 50% of all people are dumber than average, you know. Heh. NoW jUsT tHinK abOuT hOw dUmb tHe AverAgE PeRsoN iS. Maybe that’s why they can’t get my 5-shot venti caramel latte made with steamed whipped cream right. *cough* Where is my adderall.
As clearly demonstrated by the number of downvotes you are receiving, you well-reasoning human.
I sincerely hope that people aren’t using LLM-AI to do reasoning tasks. I appreciate that I am likely wrong, but LLMs are neither the totality nor the pinnacle of AI tech. I don’t think we are meaningfully closer to AGI than we were before LLMs blew up.
You know, OpenAI published a paper in 2020 modelling how far they were from human language error rates, and it correctly predicted the accuracy of GPT-4. DeepMind also published a study in 2022 with the same metrics and found that even with infinite training data and compute, the loss would never break below an irreducible 1.69.
These companies knew that their basic model was failing and that overfitting trashed their models.
Sam Altman and all these other fuckers knew, they’ve always known, that their LLMs would never function perfectly. They’re convincing all the idiots on earth that they’re selling an AGI prototype while they already know that it’s a dead-end.
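For reference, the fitted loss law from that DeepMind paper, as best I remember it (constants from memory, so treat them as approximate): the 1.69 is the irreducible term E, which the loss approaches but never beats no matter how big the model or dataset gets.

```latex
% Chinchilla-style parametric loss fit (Hoffmann et al., 2022):
% N = model parameters, D = training tokens.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\quad E \approx 1.69,\; A \approx 406.4,\; B \approx 410.7,\;
\alpha \approx 0.34,\; \beta \approx 0.28
% As N, D \to \infty, the loss L(N, D) \to E.
```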
As far as I know, the DeepMind paper was actually a challenge to the OpenAI paper, suggesting that models were undertrained and underperforming while using too much compute. They tested a model with 70B params and were able to outperform much larger models while using less compute by training on more data. I don’t think any general conclusion about a hard ceiling for LLM performance can be drawn from this.
However, this does not change the fact that there are areas (ones that rely on correctness) that simply cannot be replaced by this kind of model, and it is a foolish pursuit.
Just scan and simulate an actual human brain at 100x speed and gg
Human hardware is pretty impressive, might need to move on from binary computers to emulate it efficiently.
What do you mean by “might need to move on from binary computers to emulate it efficiently”?
Neurons produce multiple types of neurotransmitters. That means they can have an effective state different from just on or off.
I’m not suggesting we resurrect analogue computers, per se, but I think we need to find something with a little more complexity for a good middle ground. It could even be something as simple as binary with conditional memory, maybe. Idk. I see the problem not the solution.
I’m also not saying you can’t emulate it with binary, but I am saying it isn’t as efficient.
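To make the “more than on/off” point concrete, a toy sketch (the names and weights are invented, not neuroscience): emulating one neuron’s graded chemical state in binary already takes a whole vector of floats.

```python
# Toy illustration: one neuron's "effective state" as several
# neurotransmitter levels, emulated in binary with a float vector.
from dataclasses import dataclass

@dataclass
class NeuronState:
    dopamine: float      # each channel is a graded level,
    serotonin: float     # not a single on/off bit
    glutamate: float

    def firing_drive(self) -> float:
        # Some combined, graded output rather than a boolean.
        return 0.5 * self.glutamate + 0.3 * self.dopamine - 0.2 * self.serotonin

s = NeuronState(dopamine=0.4, serotonin=0.1, glutamate=0.9)
print(f"graded drive: {s.firing_drive():.2f}")  # not just 0 or 1
```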
Does it rat out CEO hunters though?
That’s probably its primary function. That and maximizing profits by charging flex pricing based on who’s the biggest sucker.
“I’ll take 990 quarter pounders to fuck up the market price”
For real though, the McRib is back, and that means pork prices have fallen enough that McDonald’s can buy up the surplus for as long as it’s possible for the McRib to stay profitable.
And a number 9 large
If I’ve said it once, I’ve said it a thousand times: an LLM is not AI. It is a natural-language tool that would allow an AI to communicate with us using natural language…
What it is being used for now is just completely inappropriate. At best this makes a neat (if sometimes inaccurate) home assistant.
To be clear: LLMs are incredibly cool, powerful and useful. But they are not intelligent, which is a pretty fundamental requirement of artificial intelligence.
I think we are pretty close to AI (in a very simple sense), but marketing has just seen the fun part (natural communication with a computer) and gone “oh yeah, that’s good enough. People will buy that because it looks cool.” Never mind that it’s not even close to what the term “AI” implies to the average person, and it’s not even technically AI either, so… I don’t remember where I was going with this, but capitalism has once again fucked a massive technical breakthrough by marketing it as something that it’s not.
Probably preaching to the choir here though…
We also have hoverboards. Well, “hoverboards”, because that’s the branding. They have wheels, and don’t hover.
Yep, a great summary.
I keep telling people that what they call AI (e.g. LLMs) are fancy autocomplete. Little more.
They’re sentence-constructing machines. Very advanced ones. There was one in the 80s called Racter that spat out a lot of legible text that was basically babble. Now it looks like it isn’t babble and that’s sometimes the case.
Essentially auto-predict 2.0
Fucking cool and it annoys me to no end that it gets slated because of unrealistic expectations.
Well, it seems like a pretty natural fallacy to think that if something talks to us in a language we understand, it must be intelligent. But it also doesn’t help that LLMs, aka fancy text generators built with machine learning algorithms, are marketed as artificial intelligence.
LLMs can also be EXTREMELY useful, if used correctly.
Instead of replacing customer service workers, use the speech processing to highlight keywords on the service worker’s PC so they can quickly find the right internal wiki page. Atlassian Intelligence works pretty neatly that way: a help desk ticket already has some keywords highlighted, and when you click on one, it shows an AI summary of what it means, drawn from resources in the Atlassian account. It helps inexperienced people get up to speed quickly, and it’s only helping, not replacing.
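A minimal sketch of that assist pattern (the index and transcript are made up, and a real product would use proper search instead of a dict): surface pages for the agent; don’t answer for them.

```python
# Hypothetical sketch: surface internal wiki pages from a call
# transcript instead of replacing the agent. All names are made up.
WIKI_INDEX = {
    "refund": "wiki/refund-policy",
    "warranty": "wiki/warranty-claims",
    "cancel": "wiki/cancellation-flow",
}

def suggest_pages(transcript: str) -> list[str]:
    # Highlight-and-lookup: match keywords, return candidate pages.
    words = transcript.lower().split()
    return [page for kw, page in WIKI_INDEX.items() if kw in words]

# The agent still talks to the customer; this just saves them a search.
print(suggest_pages("hi I want to cancel my order and ask about a refund"))
```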
Bitch just takes orders and you want to make movies with it? No AI wants to work hard anymore. Always looking for a handout.
This AI just needs to pull itself up by its bootstraps so it can move up from working fast food
Maybe AI needs to cut back on Avocado toast.