Our world is doomed. AI eventually will find ways to kill all of us off - after all humans are a real threat to it’s continued takeover of the world.
Honestly, I’m increasingly feeling that things like this are a decent use for a technology like ChatGPT. People suck and definitely have ulterior motives to advance their own group. With AI, there’s at least some degree of impartiality. We definitely need to regulate the shit out of it and set clear expectations for transparency in its use, but we’re not necessarily doomed. (At least in this specific case.)
There’s no impartiality in the training data an LLM derives its answers from. This is no better than anyone who owns a media consortium or lobbying group writing a bill for a politician. An LLM can easily be directed to reflect or mirror the prompts that it is given. Prime examples are the exploit prompts that have been found that can get ChatGPT to reveal training data.
https://www.businessinsider.com/google-researchers-openai-chatgpt-to-reveal-its-training-data-study-2023-12?op=1
https://news.mit.edu/2023/large-language-models-are-biased-can-logic-help-save-them-0303
https://www.technologyreview.com/2020/12/10/1013617/racism-data-science-artificial-intelligence-ai-opinion/
https://arxiv.org/pdf/2304.00612.pdf
I think that’s where the transparency comes in. What prompts exactly were used? Is it at all independently repeatable?
That’s where the advantage lies. With humans, the reasoning is truly a black box.
Also, I’m not arguing that LLMs are free of bias, just that they have a better shot at impartiality than any given politician.
The issue is when bills are not written by politicians or when they skirt committee, which is what lobbyists do. LLMs are just another tool for that, except they’re even worse, since fewer humans are employed in the process.
As far as answering
*What prompts exactly were used? Is it at all independently repeatable?*
That’s all in the provided links.
did you read the article? the draft was voted on by a committee, so it had to be read by other people. honestly, work like this is perfect for LLMs like ChatGPT. what is concerning about this for you?
If it isn’t concerning to you, god help you. You’ll find out soon enough why it should be.
why should it concern me? I don’t understand the danger.
Maybe I’m reading too much into it, but it’s the “secretly written” by ChatGPT that bothers me. Not the fact that ChatGPT can conceive of something and write it out. I’m not completely against AI; I realize we use it with Siri and Alexa and other apps all the time. It’s the idea that a program can create something which APPEARS to be from a legitimate human actor and really isn’t at all - and can even get it passed into law. That’s the part that is frightening, in my opinion.
fair point to make, and I mostly agree.
Well AI is here to stay either way, it’s just going to get more prevalent.
Because life requires actual human participation and you can’t be a lazy asshole who lets AI, or anything else for that matter, do the living for you.
Hear, hear. I agree with that also. However, I’m starting to see a future where AI does pretty much everything for us and we turn into those fat blobs who just float around all day (what movie was that? With the robot E.T.-looking dude?).
Wall-E
Thanks! Sometimes my mind goes blank at the worst times. That’s the movie I was thinking of.
I agree, but this is work on drafting the language of a law. how does what you say tie into this scenario?
side note: I think everyone who believes what happened here was bad has never collaborated on writing a large document before.
The fact you have to ask such a stupid fucking question tells us all we need to know.
Think about what you just said for five seconds.
thanks for contributing nothing to the thread
Projection