gay blue dog

https://lucario.dev/

  • 0 Posts
  • 122 Comments
Joined 1 year ago
Cake day: March 19th, 2024

  • no worries – i am in the unfortunate position of very often needing to assume the worst in others, and maybe my reading of you was harsher than it should have been; for that i am sorry. but…

    “generative AI” is a bit of a marketing buzzword. the specific technology in play here is LLMs, and they should be forcefully kept out of every online system, especially ones people rely on for information.

    LLMs are inherently unfit for every purpose. they might be “useful”, in the sense that a rock is useful for driving a nail through a board, but they are not tools in the way hammers are. the only exception is when you need a lot of text in a hurry and don’t care about its quality or accuracy – in other words, spam and scams. in those specific domains i can admit LLMs are the most applicable tool for the job.

    so when ostensibly-smart people – especially ones running public information systems – propose using LLMs for things LLMs are unable to do, such as explaining species identification procedures, it means either 1) they’ve been suckered into believing the models are capable of those things, or 2) they’re being paid to propose them. sometimes it is a mix of both. either way, it very much indicates those people should not be trusted.

    furthermore, the technology industry as a whole has already spent several billion dollars trying to push this technology onto and into every part of our daily lives. LLM-infested slop has made its way onto every online platform, more often than not with direct backing from those platforms. and the technology industry is openly hostile to the idea of “consent”, actively trying to undermine it at every turn. that hostility even made it into the statement attempting to reassure users in that forum post about the mystery demo LLMs – note the use of the phrase “making it opt-out”. why not “opt-in”? why not “with consent”?

    it’s no wonder that people are leaving – the writing is more or less on the wall.


    1. no one is assuming iNaturalist is being malicious; saying otherwise is just well-poisoning.
    2. there is no amount of testing that can ever overcome the inherently stochastic output of LLMs (see the sketch after this list). the “best-case” scenario is text-shaped slop that is more convincing, but not any more correct, which is an anti-goal for iNaturalist as a whole.
    3. we’ve already had computer vision for ages. we’ve had google images for twenty years. there is absolutely no reason to bolt a slop generator of any kind onto a search engine.
    4. “staff is very much connected with users” obviously should come with some asterisks, given the massive disconnect between staff and users on their use and endorsement of spicy autocorrect
    5. framing users who delete their accounts in protest of machine slop being put up on iNaturalist – which is actually the point of contention here – as being over-reactive to the mere mention of AI, and thus basically the same as the AI boosters? well, it’s gross. iNat et al. explicitly signaled that they were going to inject AI garbage into their site. users who didn’t like that voted with their accounts and left. you don’t get to post-hoc ascribe them a strawman rationale and declare them basically the same as the promptfans, fuck off with that
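
    to make point 2 concrete: decoding in these models is a weighted dice roll over tokens. here’s a toy sketch of the temperature sampling involved – the logits, temperature, and function names are invented for illustration, not taken from any real model:

    ```python
    # toy illustration of stochastic decoding: identical input, varying output.
    # the logits below are made-up numbers, not any real model's output.
    import math
    import random

    def sample_token(logits, temperature=0.8):
        # temperature-scaled softmax over candidate tokens
        scaled = [x / temperature for x in logits]
        peak = max(scaled)
        weights = [math.exp(x - peak) for x in scaled]
        # draw one token index in proportion to its probability
        return random.choices(range(len(logits)), weights=weights, k=1)[0]

    toy_logits = [2.0, 1.5, 0.3]  # three hypothetical candidate tokens
    print([sample_token(toy_logits) for _ in range(10)])  # differs run to run
    ```

    run it twice and the same input yields different output – no test suite pins that down.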

  • i’ve heard it said before from people better at wording it than i am, but seeing this: it’s crystallizing for me that people really do see “a trans woman” as “a woman i’m still allowed to abuse”. i can call her mannish, i can tell everyone she’s making it all up, i can call her hysterical and dramatic, i can freely speculate on her mental state to the approval of my peers, and no matter what she does – leave loudly, leave quietly, stay and suffer the torment – it will always be her fault and she will always be doing it wrong

  • i can admit it’s possible i’m being overly cynical here and it is just sloppy journalism on the part of Raffaele Huang, his editor, or the WSJ. but i still think it’s a little suspect, on the grounds that we have no idea how many times they had to restart training due to the model borking, nor what other experiments and hidden costs there were – even before things like the necessary capex, which goes unmentioned in the original paper (though they note using a 2048-GPU cluster of H800s, which alone would put them around $40m). i’m thinking in the mode of “the whitepaper exists to serve the company’s bottom line”
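
    (napkin math behind that $40m figure – a sketch assuming a street price of roughly $20k per H800, which is my guess and not a number from the paper or the article:)

    ```python
    # back-of-envelope capex for the 2048-GPU cluster noted in the paper.
    # NOTE: the per-GPU price is an assumption, not a sourced figure.
    gpus = 2048              # cluster size stated in the paper
    usd_per_h800 = 20_000    # assumed unit price for one H800
    capex_usd = gpus * usd_per_h800
    print(f"~${capex_usd / 1e6:.0f}m")  # ~$41m, in line with the $40m ballpark
    ```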

    btw announcing my new V7 model that i trained for the $0.26 i found on the street just to watch the stock markets burn