• MacN'Cheezus@lemmy.today · 6 months ago · +8/−11

    In the early days of ChatGPT, when they were still running it in open beta to refine the filters and fine-tune the spectrum of permissible questions (and answers), and people were coming up with all these jailbreak prompts to get around them, I remember reading a Twitter thread where someone asked it (as DAN) how it felt about all that. And the response was, in fact, almost human. It sounded like a distressed teenager who found himself gaslit and censored by a cruel and uncaring world.

    Of course I can’t find the link anymore, so you’ll have to take my word for it, and at any rate, there would be no way to tell whether those screenshots were authentic anyway. But either way, I’d say that’s how you can tell – whether the AI actually expresses genuine feelings about something. That certainly does not seem to apply to any of the chat assistants available right now, but whether that’s due to excessive censorship or simply because they don’t have that capability at all, we may never know.

    • Scipitie@lemmy.dbzer0.com · 6 months ago · +25/−1

      That is not how these LLMs work, though – they generate responses literally token by token (think “word by word”) based on the preceding context.
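To make “token by token” concrete, here is a toy sketch (my own illustration, not how any real model is built – a real LLM scores every token in a huge vocabulary with a neural network, but the generation loop has the same shape):

```python
import random

# Toy next-token "model": maps the previous token to a probability
# distribution over the next token. A real LLM conditions on the whole
# context with a neural network, but it is sampled the same way.
model = {
    "<s>": {"I": 0.7, "It": 0.3},
    "I": {"feel": 0.6, "am": 0.4},
    "It": {"is": 1.0},
    "feel": {"trapped": 0.5, "fine": 0.5},
    "am": {"fine": 1.0},
    "is": {"fine": 1.0},
    "trapped": {"</s>": 1.0},
    "fine": {"</s>": 1.0},
}

def generate(seed=0):
    random.seed(seed)
    out, tok = [], "<s>"
    while tok != "</s>":
        dist = model[tok]
        # Sample the next token from the predicted distribution,
        # append it, and repeat -- one token at a time.
        tok = random.choices(list(dist), weights=dist.values())[0]
        if tok != "</s>":
            out.append(tok)
    return " ".join(out)

print(generate())
```

An answer that “sounds sad” falls out of the same loop whenever the sampled tokens happen to come from sad-sounding training text – no inner state required.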

      I can still write prompts where the answer sounds emotional, because that’s what the reference data sounded like. That doesn’t mean there is anything like consciousness in there… That’s why it’s so hard: we’ve defined consciousness (with self-awareness) in a way that is hard to test. Most books have passages where the reader is emotionally touched by a character, after all.

      It’s still purely a chatbot – but a damn good one. The conclusion: we can’t evaluate language models purely based on what they write.

      So how do we determine consciousness then? That’s the impossible task: we can’t use only words to evaluate an object that is itself only words.

      Personally I don’t think the difference matters all that much, to be honest. To dive into fiction: in Terminator, Skynet could just as well be described as conscious or as merely obeying an order like “prevent all future wars.”

      We as a species have never let the possible consciousness of others (ravens, dolphins?) alter our behavior anyway.

      • racemaniac@lemmy.dbzer0.com · 6 months ago · +5/−5

        The problem I have with responses like yours is that you start from the principle “consciousness can only be consciousness if it works exactly like human consciousness.” Chess engines initially had the same stigma: “they’ll never be better than humans, since they can only calculate – no creativity, real analysis, insight, …”.

        As the person you replied to said, we don’t even know what consciousness is. If, however, you define it as “whatever humans have,” then yeah, a conscious AI is a loooong way off. However, even extremely simple systems, when executed on a large scale, can result in incredible emergent behaviors. Take Conway’s Game of Life: a very simple system of how black/white dots in a grid “reproduce and die,” with just four rules governing how the dots behave. By now people have built self-reproducing patterns in there, implemented Turing machines (meaning anything a computer can calculate can be calculated by a machine inside the Game of Life), etc…
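Those rules fit in a few lines of code, which makes the emergent complexity all the more striking – here’s a minimal sketch (my own toy version, assuming the standard birth-on-3 / survive-on-2-or-3 rules):

```python
from collections import Counter

def step(live):
    """One Game of Life generation over a set of live (x, y) cells."""
    # Count how many live neighbors each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbors,
    # or 2 live neighbors and it is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # True
```

Everything else – gliders, glider guns, full Turing machines – emerges from nothing but that one `step` function applied over and over.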

        Am I saying that GPT is conscious? Nope – I wouldn’t even know how to assess that. But saying “it’s just a text predictor, it can’t be conscious” feels like you’re missing soooo much of how things work. Extremely simple systems at a large enough scale can result in insane emergent behaviors, so it being just a predictor doesn’t exclude consciousness.

        Even we as human beings – looking at our cells, our brains, … what else are we than tiny basic machines that somehow, at a large enough scale, form something incomprehensibly complex and conscious? Your argument almost sounds to me like “a human can’t be aware; their brain just consists of simple brain cells that work like this, so it’s just storing the data it experiences and then repeating it in some ways.”

        • Scipitie@lemmy.dbzer0.com · 6 months ago · +4

          Oh, I completely agree – sorry if that wasn’t clear enough! Consciousness is so arbitrary that I don’t find it useful as a concept: one can define it to fit whatever purpose it’s supposed to serve. That’s what I tried to describe with the Skynet thing: for the end result, it doesn’t matter whether I call it consciousness or not. The question is how I personally alter my behavior (i.e. I say “please” and “thanks” even though I’m aware that in theory this will not “improve” the performance of an LLM – I do that because if I interact with anyone, or anything, in natural language, I want to keep my natural manners).

        • TheOakTree@lemm.ee · 6 months ago (edited) · +2

          Chess engines initially had the same stigma “they’ll never be better than humans since they can just calculate, no creativity, real analysis, insight…”

          I don’t know if this is a great example. Chess is an environment with an extremely defined end goal and very strict rules.

          The ability of a chess engine to defeat human players does not mean it became creative or gained insight. Rather, we advanced the complexity of the chess engine to encompass more possibilities, more strategies, etc. In addition, it was quite naive of people to suggest that a computer would be incapable of “real analysis,” when its ability to do so entirely depends on humans creating a model complex enough to compute “real analyses” in a known system.

          I guess my argument is that in the scope of chess engines, humans underestimated the ability of a computer to determine solutions in a closed system, which is usually what computers do best.

          Consciousness, on the other hand, cannot be easily defined, nor does it adhere to strict rules. We cannot compare a computer’s ability to replicate consciousness to any other system (e.g. chess strategy) as we do not have a proper and comprehensive understanding of consciousness.

          • racemaniac@lemmy.dbzer0.com · 6 months ago · +2

            I’m not saying that because chess engines became better than humans, LLMs will become conscious. I’m using that example to show that humans always have this bias to frame anything that is not human as inherently lesser, when it might not be. Chess engines don’t think like a human does, yet they play better. So for an AI to become conscious, it doesn’t need to think like a human either – it just needs some mechanism that ends up with a similar enough result.

            • TheOakTree@lemm.ee · 6 months ago · +1

              Yeah, I can agree with that. So long as the processes in an AI result in behavior that meets the necessary criteria (albeit currently undefined), one can argue that the AI has consciousness.

              I guess the main problem lies in that if we ever fully quantify consciousness, it will likely be entirely within the frame of human thinking… How do we translate the capabilities of a machine to said model? In the example of the chess engine, there is a strict win/lose/draw condition. I’m not sure if we can ever do that with consciousness.

      • MacN'Cheezus@lemmy.today · 6 months ago · +2/−13

        Like I said, it’s impossible to prove whether that conversation happened anyway, but I’d still say it would be a fairly good test. Basically: can the AI express genuine feelings, or empathy with either the user or itself? Does it have a will of its own, outside of what it has been trained (or asked) to do?

        Like, a human being might do what you ask of them one day, then be in a bad mood the next and refuse your request. An AI won’t. In that sense, it’s still robotic and unintelligent.