• Devanismyname@lemmy.ca · 2 days ago

    I mean, the proof is sitting there wearing your clothes. General intelligence exists all around us. If it can exist naturally, we can eventually do it through technology. Maybe there need to be a few more breakthroughs before it happens.

      • mindbleach@sh.itjust.works · 2 days ago

        I mean - have you followed AI news? This whole thing kicked off maybe three years ago, and now local models can render video and do half-decent reasoning.

        None of it’s perfect, but a lot of it’s fuckin’ spooky, and any form of “well it can’t do [blank]” has a half-life.

        • Korhaka@sopuli.xyz · 2 days ago

          Seen a few YouTube channels now that just churn out AI-generated content, usually audio-only with a generated picture on screen. Vast amounts can be made that cheaply, and Google is going to have fun storing it all when each video only gets like 25 views. I think at some point they’re going to have to start deleting stuff.

            • MadhuGururajan@programming.dev · 15 hours ago

              I kid you not, I took ML back in 2014 as an extra semester in my undergrad. The complaints then were the same as the complaints now: too high a power requirement, too many false positives. The latter of the two has evolved into hallucinations.

              If the stuff normal people post with “I made this!” is so unconvincing that it’s easily identified, then who is this going to replace? You still need the right expert, right? All it creates is more work for experts who have to come in and fix broken AI output.

              • mindbleach@sh.itjust.works · 14 hours ago

                The complaints then were the same as complaints now

                Despite results improving at an insane rate, very recently. And you think this is proof of a problem with… the results? Not the complaints?

                People went “I made this!” with fucking Terragen, a program that renders wild alien landscapes that became generic after about the fifth one you saw. The problem there is not expertise. It’s immense quantity for zero effort. None of that proves CGI in general is worthless non-art. It’s just shifting what the computer will do for free.

                At some point, we will take it for granted that text-to-speech can do an admirable job reading out whatever. It’ll be a button you push when you’re busy sometimes. The dipshits mass-uploading that for popular articles, over stock footage, will be as relevant as people posting seven thousand alien sunsets.

                • MadhuGururajan@programming.dev · 13 hours ago

                  The results do keep improving, of course. But it’s not some silver bullet. Yes, your enthusiasm is warranted… but you peddle it like the second coming of Christ, which I don’t like encouraging.

                  • mindbleach@sh.itjust.works · 1 hour ago

                    I’ve done no such thing.

                    I called it half-decent, spooky, and admirable.

                    That turns out to be good enough, for a bunch of applications. Even the parts that are just a chatbot fooling people are useful. And massively better than the era you’re comparing this to.

                    We have to deal with this honestly. Neural networks have officially caught on, and anything with examples can be approximated. Anything. The hard part is reminding people what “approximated” means. Being wrong sometimes is normal. Humans are wrong about all kinds of stuff. But for some reason, people think computers bring unflinching perfection - and approach life-or-death scenarios with this sloppy magic.

                    Personally I’m excited for position tracking with accelerometers. Naively integrating into velocity and location immediately sends you to outer space. Clever filtering almost sorta kinda works. But it’s a complex noisy problem, with a minimal output, where approximate answers get partial credit. So long as it’s tuned for walking around versus riding a missile, it should Just Work.
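
                    To make the drift concrete, here’s a minimal Python sketch - made-up sensor numbers, no filtering at all - of why naive double integration sends you to outer space:

                    ```python
                    # Minimal sketch: a *stationary* accelerometer with a tiny bias,
                    # naively double-integrated. Numbers are hypothetical MEMS-ish values.
                    import numpy as np

                    rng = np.random.default_rng(0)
                    dt = 0.01                          # 100 Hz sample rate
                    t = np.arange(0, 60, dt)           # one minute of data

                    # True acceleration is zero; the measurement has bias plus noise.
                    bias = 0.02                        # m/s^2, a plausible small offset
                    accel = bias + rng.normal(0, 0.05, t.size)

                    # Naive dead reckoning: acceleration -> velocity -> position.
                    velocity = np.cumsum(accel) * dt
                    position = np.cumsum(velocity) * dt

                    # The bias integrates twice, so error grows like 0.5 * bias * t^2.
                    print(f"drift after 60 s: {position[-1]:.1f} m (sensor never moved)")
                    ```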

                    Similarly restrained use-cases will do minor witchcraft on a pittance of electricity. It’s not like matrix math is hard, for computers. LLMs just try to do as much of it as possible.

        • SaraTonin@lemm.ee · 1 day ago

          If you follow AI news you should know that it’s basically out of training data, that returns from extra training diminish sharply (so extra training data would only have limited impact anyway), that companies are starting to train AI on AI-generated data, both intentionally and unintentionally, and that hallucinations and unreliability are baked into the technology.

          You also shouldn’t take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was shown answering, and it isn’t better than before, or than other LLMs, at solving maths problems it doesn’t already have the answers hardcoded for), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4-5 times the energy (and therefore cost) per answer, for a marginal improvement in functionality.

          The idea that “they’ve come on in leaps and bounds over the last 3 years, therefore they will continue to improve at that rate” isn’t really supported by the evidence.

          • mindbleach@sh.itjust.works · edited · 47 minutes ago

            We don’t need leaps and bounds, from here. We’re already in science fiction territory. Incremental improvement has silenced a wide variety of naysaying.

            And this is with LLMs - which are stupid. We didn’t design them with logic units or factoid databases. Anything they get right is an emergent property from guessing plausible words, and they get a shocking amount of things right. Smaller models and faster training will encourage experimentation for better fundamental goals. Like a model that can only say yes, no, or mu. A decade ago that would have been an impossible sell - but now we know data alone can produce a network that’ll fake its way through explaining why the answer is yes or no. If we’re only interested in the accuracy of that answer, then we’re wasting effort on the quality of the faking.
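
            As a sketch of what that could look like (toy encoder, made-up sizes - the point is just the three-way output head), a model whose entire vocabulary is yes, no, or mu:

            ```python
            # Hedged sketch: instead of generating free text, bolt a 3-way
            # classification head onto a text encoder, so every bit of training
            # signal goes into the answer and none into the faking.
            import torch
            import torch.nn as nn

            LABELS = ["yes", "no", "mu"]

            class YesNoMu(nn.Module):
                def __init__(self, vocab_size=30000, dim=256):
                    super().__init__()
                    self.embed = nn.EmbeddingBag(vocab_size, dim)  # stand-in encoder
                    self.head = nn.Linear(dim, len(LABELS))        # the whole output vocabulary

                def forward(self, token_ids):
                    return self.head(self.embed(token_ids))        # logits over yes/no/mu

            model = YesNoMu()
            tokens = torch.randint(0, 30000, (4, 32))   # hypothetical batch of questions
            target = torch.tensor([0, 1, 2, 0])         # yes, no, mu, yes
            loss = nn.CrossEntropyLoss()(model(tokens), target)
            loss.backward()                             # optimise answer accuracy directly
            ```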

            Even with this level of intelligence, where people still bicker about whether it is any level of intelligence, dumb tricks keep working. Like telling the model to think out loud. Or having it check its work. These are solutions an author would propose as comedy. And yet: it helps. It narrows the gap between “but right now it sucks at [blank]” and having to find a new [blank]. If that never lets it do math properly, well, buy a calculator.
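
            Both tricks are literally just prompts. A minimal sketch, where complete() is a hypothetical stand-in for whichever model API you call:

            ```python
            # Sketch of the two dumb tricks: think out loud, then check your work.
            def complete(prompt: str) -> str:
                """Hypothetical stand-in for any LLM completion call."""
                raise NotImplementedError

            def answer_with_tricks(question: str) -> str:
                # Trick 1: ask the model to reason step by step before answering.
                draft = complete(
                    f"Question: {question}\n"
                    "Think through this step by step, then give a final answer."
                )
                # Trick 2: a second pass that checks the draft and fixes mistakes.
                return complete(
                    f"Question: {question}\nDraft answer:\n{draft}\n"
                    "Check the draft for errors and reply with a corrected final answer."
                )
            ```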

            • SaraTonin@lemm.ee · 12 hours ago

              I’m not saying they don’t have applications. But the idea of them being a one size fits all solution to everything is something being sold to VC investors and shareholders.

              As you say - the issue is accuracy. And, as you also say - that’s not what these things do, and instead they make predictions about what comes next and present that confidently. Hallucinations aren’t errors, they’re what they were built to do.

              If you want something which can set an alarm for you or find search results, then something that responds to set inputs correctly 100% of the time is better than something more natural-seeming which is right 99% of the time.
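
              For the alarm case the 100% version is trivially buildable - a toy sketch with an invented command grammar, which either matches exactly or refuses, rather than guessing:

              ```python
              # Toy deterministic parser: exact match or refusal, never a guess.
              import re

              ALARM = re.compile(r"^set alarm for (\d{1,2}):(\d{2})$")

              def handle(command: str) -> str:
                  m = ALARM.match(command.strip().lower())
                  if m:
                      hh, mm = int(m.group(1)), int(m.group(2))
                      if hh < 24 and mm < 60:
                          return f"alarm set for {hh:02d}:{mm:02d}"
                  return "unrecognised command"   # refuse instead of guessing

              print(handle("Set alarm for 7:30"))   # alarm set for 07:30
              print(handle("wake me up-ish"))       # unrecognised command
              ```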

              Maybe along the line there will be a new approach, but what is currently branded as AI is never going to be what it’s being sold as.

              • mindbleach@sh.itjust.works · 18 minutes ago

                If you want something more complex than an alarm clock, this does kinda work for anything. Emphasis on “kinda.”

                Neural networks are universal approximators. People get hung up on the approximation part, like that cancels out the potential in… universal. You can make a model that does any damn thing. Only recently has that seriously meant both the “you” and the “can”: backpropagation works, and it works on video-game hardware.
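
                A quick sketch of that in practice - the target function here is arbitrary, which is the point; a small MLP plus plain backprop fits whatever you have examples of, approximately:

                ```python
                # Sketch: universal approximation in ~20 lines. Any 1-D target
                # with examples will do; sizes and learning rate are arbitrary.
                import torch
                import torch.nn as nn

                torch.manual_seed(0)
                x = torch.linspace(-3, 3, 256).unsqueeze(1)
                y = torch.sin(x) * torch.exp(-x.abs() / 2)   # some arbitrary function

                net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
                opt = torch.optim.Adam(net.parameters(), lr=1e-2)

                for _ in range(2000):
                    opt.zero_grad()
                    loss = nn.functional.mse_loss(net(x), y)
                    loss.backward()
                    opt.step()

                print(f"final MSE: {loss.item():.5f}")   # approximate, never exact
                ```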

                what is currently branded as AI

                “AI is whatever hasn’t been done yet” has been the punchline for decades. For any advancement in the field, people only notice once you tell them it’s related to AI, and then they just call it “AI,” and later complain that it’s not like on Star Trek.

                And yet it moves. Each advancement makes new things possible, and old things better. Being right most of the time is good, actually. 100% would be better than 99%, but the 100% version does not exist, so 99% is better than never.

                Telling the grifters where to shove it should not condemn the cool shit they’re lying about.