You might’ve noticed that ChatGPT — and AI in general — isn’t good at math. There’s a reason, and it has to do with how modern AI is built.
Basically, they’re autocorrect on steroids. Which some of us have been saying for, like, ages.
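A minimal sketch of the "autocorrect on steroids" idea: a toy bigram model that just predicts the statistically likeliest next token from counts. Everything here (the corpus, the `predict` helper) is illustrative, not how a real LLM is implemented — but it shows why "math" comes out as pattern-matching rather than computation.

```python
from collections import Counter, defaultdict

# Toy "autocorrect": count which token follows which, then always
# emit the most frequent continuation. Note the deliberately wrong
# fact in the training data -- the model will reproduce statistics,
# not arithmetic.
corpus = "2 + 2 = 4 . 2 + 3 = 5 . 3 + 2 = 4 ."
tokens = corpus.split()

nexts = defaultdict(Counter)
for a, b in zip(tokens, tokens[1:]):
    nexts[a][b] += 1

def predict(context):
    # Pick the likeliest next token -- no calculation happens here.
    return nexts[context].most_common(1)[0][0]

print(predict("="))  # prints "4": the digit that followed "=" most often
```

The point of the broken "3 + 2 = 4" line: the model happily favors "4" after "=" because it saw it most often, not because it added anything.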
Autocorrect that can program?
Programming languages are structured, with rigid syntax, which fits an LLM well: spitting out working code for simple tasks is like producing a sentence that reads the way a normal person would write it.
Even if it runs, the code might not do what you are actually trying to do, or it might work but inefficiently.
Both of those are user issues.
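A quick illustration of "works but inefficiently": both functions below run and return correct answers, so a casual test passes — but the recursive one redoes the same subproblems exponentially many times, the kind of code that looks fine until the input grows. (Hypothetical example, not anything a specific model produced.)

```python
def fib_naive(n):
    # Recomputes identical subproblems over and over: exponential time.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_fast(n):
    # Same recurrence, walked forward once: linear time.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_naive(10), fib_fast(10))  # both print 55
```

Spotting that the first version is unusable at scale — even though it "runs" — is exactly the judgment the user still has to supply.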