The bot spit out a lot of text, yet failed to answer the basic question: is OnlyOffice Russian, or not?
“…faced allegations and concerns”. Just about anything can be alleged. But is it true?
“it is important to note that OnlyOffice is officially headquartered in Riga, Latvia.” This is what is so exasperating about LLMs. Duh! Duh! That has to be the least informative line in the response. It was stated as a known fact in the original discussion, yet here the AI goes, parroting it back as if it were an answer.
Pretty much all of the AI tools available now have been shown to hallucinate, even if they start out with an internet search.
I’ve had AI tools spit out real-looking URLs that led to 404 pages, because they had hallucinated those links. It’s a place to start your research, maybe to refine your questions, but I wouldn’t trust it much with the actual research.
An LLM, a large language model, which is what an AI tool like Mistral is, doesn’t really use knowledge; it predicts what the next text is likely going to be, based on the information it has been trained on. It doesn’t think, it doesn’t reason, it just predicts what the next words are likely going to be.
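To make that concrete, here’s a toy sketch in plain Python of what “predicting the next word from training data” means at its simplest. This is nothing like Mistral’s actual model, which uses billions of learned weights over tokens rather than a raw frequency table, but the principle is the same:

```python
from collections import Counter, defaultdict

# "Train" on a tiny corpus by counting which word follows which.
training_text = "the cat sat on the mat and the cat slept".split()
next_words = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_words[current][following] += 1

def predict_next(word):
    # Return the most frequent follower seen in training.
    # No understanding, no reasoning: just counted co-occurrence.
    followers = next_words.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat": seen twice after "the", vs. "mat" once
```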
It doesn’t even understand text; that’s why all of them claimed that there were just two Rs in “strawberry”. It doesn’t treat text as text.
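The “strawberry” failure comes down to tokenization: the model receives chunks of text as opaque numeric IDs, not individual letters. A rough illustration, with the caveat that the splits and IDs below are invented for the example and real tokenizers differ per model:

```python
# Hypothetical subword split of "strawberry"; real tokenizers (BPE etc.)
# produce different chunks per model, and these IDs are made up.
tokens = ["str", "aw", "berry"]
token_ids = [5021, 872, 19772]  # what the model actually receives

# We can count the letters because we see the characters;
# the model only sees the IDs and never gets "r" as a separate symbol.
print(sum(chunk.count("r") for chunk in tokens))  # 3
```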
You can use it to rewrite a text for you, perhaps even to summarize one (though there’s still the possibility of hallucinations there), but I wouldn’t ask it to do research for you.
I like using OnlyOffice, too, which is based in Latvia.
deleted by creator
Oh no, people are using AI slop to generate Lemmy comments
deleted by creator
I don’t like LLMs. I don’t like the power use, I don’t like the question of suffering, I don’t like the spam. It’s slop because it comes from genAI.
AI chatbots are not, and probably never will be, good tools for researching information.
deleted by creator
Why are you using AI for research? It’s glorified predictive text.
deleted by creator
deleted by creator
deleted by creator