

In any case, I think we have to acknowledge that companies are capable of turning a whistleblower’s life into hell without ever physically laying a hand on them.
I would argue that such things do happen, the cult “Heaven’s Gate” probably being one of the most notorious examples. Thankfully, however, this is not a widespread phenomenon.
Yes, even some influential people at my employer have started to peddle the idea that only “old-fashioned” people are still using Google, while all the forward-thinking people are prompting an AI. For this reason alone, I think that negative examples like this one deserve a lot more attention.
From the original article:
Crivello told TechCrunch that out of millions of responses, Lindy only Rickrolled customers twice.
Yes, but how many of them received other similarly “useful” answers to their questions?
It is admittedly only tangential here, but it recently occurred to me that at school, there are usually no demerit points for wrong answers. You can therefore, to some extent, "game" the system by guessing as much as possible. My work, however, is related to law and accounting, where wrong answers can of course have disastrous consequences. That's why I'm always alarmed when young coworkers confidently turn to chatbots whenever they can't answer a question on their own. I suspect that in such moments they are simply treating their job like a school assignment, and I can well imagine this only getting worse in the future, for the reasons described here.