Will Manidis is the CEO of AI-driven healthcare startup ScienceIO

  • ERROR: Earth.exe has crashed@lemmy.dbzer0.com · 6 days ago

    (Already said this before, but let me reiterate:)

    Typical AITA post:

    Title: AITAH for calling out my [Friend/Husband/Wife/Mom/Dad/Son/Daughter/X-In-Law] after [He/She] did [something so undeniably outrageous that anyone with an IQ above 80 should know it’s unacceptable]?

    Body of post:

    [5-15 paragraphs of infodumping that no sane person would read]

    I told my friend this and they said I’m an asshole. AITAH?

    Comments:

    Comment 1: NTA, you are absolutely right, you should [Divorce/Go No-Contact with/Disown/Unfriend] the person IMMEDIATELY. Don’t walk away, RUNNN!!!

    Comment 2: NTA, call the police! That’s totally unacceptable!

    And sometimes you get someone calling out OP… Comment 3: Wait, didn’t OP also claim to be [a totally different age, gender, and race] a few months ago? Here’s the post: [Link]


    🙄 C’mon, who even thinks any of this is real…

    • Way too many…

      I was born before the Internet. The Internet is always lumped into the “entertainment” part of my brain. A lot of people who have grown up knowing only the Internet think the Internet is much more “real”. It’s a problem.

      • ERROR: Earth.exe has crashed@lemmy.dbzer0.com · 6 days ago

        I’ve come up with a system to categorize reality in different ways:

        Category 1: Thoughts inside my own brain, formed by logic

        Category 2: Things I can directly observe via vision, hearing, or other direct sensory input

        Category 3: Other people’s words, stories, and anecdotes in face-to-face (IRL) conversations

        Category 4: Accredited news media: television, newspapers, radio (including amateur radio conversations), telegrams, etc.

        Category 5: The General Internet

        The higher the category number, the more distant the information is, and the more suspicious of it I am.

        I mean, if a user on Reddit (or any internet forum or social media, for that matter) told me X is a valid treatment for some disease without any real evidence, I’m going to laugh in their face (well, not their face, since it’s a forum, but you get the idea).
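
        For what it’s worth, here’s a minimal sketch of that hierarchy as code. The names and the numeric “suspicion” scale are purely my own illustration of the “higher number, more suspicion” idea, not anything official:

            from enum import IntEnum

            class SourceCategory(IntEnum):
                """Hypothetical labels for the five categories above."""
                OWN_REASONING = 1       # thoughts formed by my own logic
                DIRECT_OBSERVATION = 2  # things I can see/hear myself
                IN_PERSON_ACCOUNTS = 3  # face-to-face stories and anecdotes
                ACCREDITED_MEDIA = 4    # news, TV, newspapers, radio, telegrams
                GENERAL_INTERNET = 5    # forums, social media, random websites

            def suspicion_level(category: SourceCategory) -> int:
                # Higher category number = more distant information = more suspicion.
                return int(category)

            # A random forum post warrants far more suspicion than direct observation.
            assert suspicion_level(SourceCategory.GENERAL_INTERNET) > suspicion_level(SourceCategory.DIRECT_OBSERVATION)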

          • ERROR: Earth.exe has crashed@lemmy.dbzer0.com · 6 days ago

            So here’s the thing:

            I sometimes thought I saw a ghost moving in a dark corner, out of the corner of my eye.

            I didn’t see a ghost.

            But then later I walked through the same place again and saw the same thing. Since I already held the belief that ghosts don’t exist, I investigated, and it turned out to be a lamp (that was off) casting the shadow of another light source. When I happened to walk through the area, the shadow moved, and combined with my head-turning motion it made it look like a ghost was there. But it was just a difference in lighting, a shadow. Not a ghost. I bet a lot of “ghosts” are just people interpreting lighting wrong, not actual ghosts.

            Having your own thoughts/logic prioritized is important for finding the truth, instead of just believing the first thing you interpret, like a vision of a “ghost”.

      • DarkThoughts@fedia.io · 6 days ago

        I genuinely miss the 90s. I mean, yeah, early forms of the internet and computers existed, but not everyone had a camera, and not everyone got absolutely bukkaked with disinformation. Not that I think everything about the tech is bad in and of itself, but how we use it nowadays is just so exhausting.

    • LiveLM@lemmy.zip · 6 days ago

      Man, sometimes when I finish grabbing something I needed from Reddit, I hit the frontpage (always logged out) just out of morbid curiosity.
      Every single time, that r/AmIOverreacting sub is there with the most obvious “no, you’re not” situation ever.

      I never once saw that sub show up before the exodus. AI or not, I refuse to believe any frontpage posts from that sub are anything other than made-up bullshit.

    • WilderSeek@lemmy.world · 4 days ago

      It’s “reality television” on a discussion forum, used to karma-farm and help push other kinds of misinformation.

    • samus12345@lemm.ee · 6 days ago

      If it’s well-written enough to be entertaining, it doesn’t even matter whether it’s real or not. Something like it almost certainly happened to someone at some point.

    • WilderSeek@lemmy.world · 5 days ago

      I’m thinking of pulling the plug on Reddit (at least for a while). My tipping point is how popular the “drone” story is becoming. At first it was intriguing and mysterious (the airport shutdowns and reports of large vehicles at low altitudes were fascinating), but I’m getting the vibe it’s a misinformation campaign to distract the US from how we are about to be changed.

      I was actually permabanned from the “News” sub for an innocuous comment. All I did was note that the federal authorities are probably correct in saying most of the “UFO” reports are likely airplanes and man-made drones, and, playing devil’s advocate, that there were probably some legitimate UAP reports too, but since the majority were probably misidentified planes, the federal agencies’ responses were technically truthful.

    • SerotoninSwells@lemmy.world · 6 days ago

      Look at that, the detection heuristics all laid out nice and neatly. The only issue is that Reddit doesn’t want to detect bots, because it is likely using them itself. Reddit was at one point using a form of bot protection, but it wasn’t for posts; it was for ad fraud.

    • zeca@lemmy.eco.br · 6 days ago

      There isn’t as much incentive here. No advertising, and upvote counters behave weirdly in the fediverse (from what I can see).

      • GamingChairModel@lemmy.world · 5 days ago

        “No advertisement”

        You don’t think commercial products can get good (or bad) coverage in a place like this? In any discussion of hardware, software (including, for example, video games), cars, books, movies, television, etc., there’s plenty of profit motive behind getting people interested in things.

        There are already popular and unpopular things here. Some of those things are pretty far removed from a direct profit motive (Linux, Star Trek memes, beans). But some are directly related to commercial products being sold now (current video games and the hardware to run them, specific types of devices from routers to CPUs to televisions to bicycles or even cars and trucks, movies, books, etc.).

        Not to mention the motivation to influence politics, economics, foreign affairs, etc. There’s lots of money behind trying to convince people of things.

        As soon as a thread pops up in a search engine, it’s fair game for bots to find it, and for the platform to be targeted by humans who unleash bots onto it. Lemmy/Mastodon aren’t too obscure to get noticed.

    • mtchristo@lemm.ee · 6 days ago

      There are no virtual points to earn on Lemmy, so hopefully it will resist the enshittification for a while.

        • Anarki_@lemmy.blahaj.zone · 5 days ago

          Account age and karma make an account look more legit, and thus more useful for spreading misinformation and/or guerrilla marketing.

        • Irelephant@lemm.ee · 5 days ago

          Same reason people play Cookie Clicker: watching the useless number go up.

          Also, some subs are downright hostile to people with low karma.

          • Zahille7@lemmy.world · 5 days ago

            I actually got auto-added to some bullshit sub that was “for people with a certain amount of karma.” I don’t remember what the threshold was, because it was such a useless sub that no one engaged with it.

        • mtchristo@lemm.ee · 5 days ago

          Some subreddits require a minimum karma score for posting. And you’re less likely to get shadowbanned the more karma you have.

          • weeeeum@lemmy.world · 5 days ago

            Ohhh right, I remember subs had that bullshit. I didn’t know about the shadowban thing though.

  • GuitarSon2024@lemmy.world · 5 days ago

    This is the whole reason that I discovered and came to Lemmy. Reddit is literally 90% bots, from the posts, to the filtering, to the censoring, to outright banning. It’s a mess.

    • GHiLA@sh.itjust.works · 5 days ago

      Or getting this shit after you comment somewhere:

      “Excuse me but could you please send a direct message to our admins to verify your account before placing a comment? Everyone has to do it.”

      I replied “go fuck yourself” and they banned me instantly and I never even submitted anything lmao.

  • UnderpantsWeevil@lemmy.world · 6 days ago

    In the age of A/B testing and automated engagement, I have to wonder who is really getting played: the people reading the synthetically generated bullshit, or the people who think they’re “getting engagement” on a website full of bots and other automated forms of engagement cultivation?

    How much of the content creator experience is itself gamed by the website to trick creators into thinking they’re more talented, popular, and well-received than a human audience would ever make them, so that they keep churning out new shit for consumption?

    • conicalscientist@lemmy.world · 6 days ago

      It’s ultimately about ad money. They’ve never cared whether it’s humans or bots; they keep paying out either way. This goes back long before the LLM era. It’s bizarre.

      It’s pretty much a case of POSIWID: the purpose of a system is what it does. The system is supposed to be about genuine human engagement; what it actually does is artificial at every step. Turns out its purpose is to fabricate things for bots to engage with. And all of this is propped up by people who, for some reason, pay to keep the system running.

      • aesthelete@lemmy.world · 5 days ago

        This reminds me of the ad-supported games that advertise other ad-supported games. I think I’ve even seen an ad-supported game run an ad for itself.

        I wonder if at some point people will walk away from these platforms and the platform and its owners won’t even be able to tell.

  • plunging365@lemm.ee · 5 days ago

    Does Lemmy have any features that resist this kind of astroturfing?

    No one would consider bots talking to one another a real conversation, but is there anything regular users can do?

  • ShadowRam@fedia.io · 4 days ago

    Anyone who did any kind of moderating on Reddit could see that the majority of posts and comments were bots.

    Bots competing with each other for upvotes, views, and clicks.

    Reddit’s been like that since ~2018.

  • Dave@lemmy.nz · 6 days ago

    Most people who have worked in customer service would believe every word because they have seen the absurdity of real people.

  • Mango@lemmy.world · 5 days ago

    Tbh, I see how this can be really good for people. We can never again believe that what people are saying online is really representative of the general population. It never has been, but now we have a really solid reason, one that doesn’t require much explanation, to dispel that belief.

    That said, we’ll need to combat this with more tight-knit communities where people can better identify themselves as human. Captcha doesn’t do that, but the goth girls on VF so long ago had it figured out. We gotta do proper “salutes”.

  • Thrife@feddit.org · 6 days ago

    Is Reddit still feeding Google’s LLM, or was it just a one-time thing? Meaning, will the newest LLM-generated posts feed LLMs to generate more posts?

    • shittydwarf@lemmy.dbzer0.com · 6 days ago

      The truly valuable data is the stuff that was created prior to LLMs; anything after that is tainted by slop. Verifiably human data is worth more, which is why they are simultaneously trying to erode any and all privacy.

      • gandalf_der_12te@discuss.tchncs.de · 6 days ago

        I’m not sure about that. It implies that only humans are able to produce high-quality output. But that seems wrong to me.

        • First of all, not everything humans produce is high quality; often it’s the opposite.
        • Second, as AI develops, I think it will be entirely possible for AI to generate good-quality output in the future.
        • Danquebec@sh.itjust.works · 4 days ago

          They can produce high-quality answers now, but that’s just because they were trained on things written by humans.

          Any training on things produced by LLMs will just reproduce the same stuff, or actually even worse, because it will include hallucinations.

          For an AI to discover new things and truly innovate, or learn about existing products, the world, etc. it would need to do something entirely different than what LLMs are doing.

        • morrowind@lemmy.ml · 6 days ago

          Microsoft’s Phi-4 is primarily trained on synthetic data (generated by other AIs). It’s not a future thing; it’s been happening for years.

    • whotookkarl@lemmy.world · 6 days ago

      These days the LLMs feed the LLMs, so you end up with models trained on models unless you exclude all public data from the last decade. You have to assume any public, user-generated data is tainted when used for training.

  • leadore@lemmy.world · 6 days ago

    But I mean, AI is the asshole, so maybe that’s why they went to the front page?

  • sbv@sh.itjust.works · 6 days ago

    Why not? r/AmITheAsshole is about entertainment, not truth. It would be an indictment of AI if it couldn’t replicate a short, funny story.