Honestly, I’m increasingly feeling that things like this are a decent use for a technology like ChatGPT. People suck and definitely have ulterior motives to forward their group. With AI, there’s at least some degree of impartiality. We definitely need to regulate the shit out of it and make clear expectations for transparency in its use, but we’re not necessarily doomed. (At least in this specific case.)
There’s no impartiality in the training data an LLM derives it’s answers from. This is no better than anyone who owns a media consortium or lobbying group writing a bill for a politician. An LLM can easily be directed to reflect or mirror the prompts that it is given. Prime example are the exploit prompts that have been found that can get chat gpt to reveal training data.
The issue is when bills are not written by politicians or when they skirt committee which is what lobbyists do. LLMs are just another tool for that, except they’re even worse as there are fewer humans employed in the process.
As far as answering
*What prompts exactly were used? Is it at all independently repeatable? *
Honestly, I’m increasingly feeling that things like this are a decent use for a technology like ChatGPT. People suck and definitely have ulterior motives to forward their group. With AI, there’s at least some degree of impartiality. We definitely need to regulate the shit out of it and make clear expectations for transparency in its use, but we’re not necessarily doomed. (At least in this specific case.)
There’s no impartiality in the training data an LLM derives it’s answers from. This is no better than anyone who owns a media consortium or lobbying group writing a bill for a politician. An LLM can easily be directed to reflect or mirror the prompts that it is given. Prime example are the exploit prompts that have been found that can get chat gpt to reveal training data.
https://www.businessinsider.com/google-researchers-openai-chatgpt-to-reveal-its-training-data-study-2023-12?op=1
https://news.mit.edu/2023/large-language-models-are-biased-can-logic-help-save-them-0303
https://www.technologyreview.com/2020/12/10/1013617/racism-data-science-artificial-intelligence-ai-opinion/
https://arxiv.org/pdf/2304.00612.pdf
I think that’s where the transparency comes it. What prompts exactly were used? Is it at all independently repeatable?
That’s where the advantage lies. With humans, the reasoning is truely a black box.
Also, I’m not arguing that LLMs are free of bias, just that they have a better shot at impartiality than any given politician.
The issue is when bills are not written by politicians or when they skirt committee which is what lobbyists do. LLMs are just another tool for that, except they’re even worse as there are fewer humans employed in the process.
As far as answering
*What prompts exactly were used? Is it at all independently repeatable? *
That’s all in the provided links.