

- no one is assuming iNaturalist is being malicious; saying otherwise is just well-poisoning.
- there is no amount of testing that can ever overcome the inherently stochastic output of LLMs (see the toy sketch after this list). the “best-case” scenario is text-shaped slop that is more convincing, but not any more correct, which is an anti-goal for iNaturalist as a whole
- we’ve already had computer vision for ages. we’ve had google images for twenty years. there is absolutely no reason to bolt a slop generator of any kind to a search engine.
- “staff is very much connected with users” obviously should come with some asterisks, given the massive disconnect between staff and users on their use and endorsement of spicy autocorrect
- framing users who delete their accounts in protest of machine slop being put up on iNaturalist, which is actually the point of contention here, as being over-reactive to the mere mention of AI, and thus basically the same as the AI boosters? well, it’s gross. iNat et al. explicitly signaled that they were going to inject AI garbage into their site. users who didn’t like that voted with their accounts and left. you don’t get to post-hoc ascribe them a strawman rationale and declare them basically the same as the promptfans, fuck off with that
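
to make the “inherently stochastic” point above concrete, here’s a toy sketch (plain python, with a made-up vocabulary and made-up scores, not any real LLM or API) of temperature sampling, which is how LLM output is typically generated. the same input produces different output from run to run, which is exactly why no finite test suite can certify the output as correct:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # turn raw scores into a probability distribution; any temperature > 0
    # leaves the choice of token up to a random draw
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical "next token" scores for one fixed prompt
vocab = ["correct", "plausible-but-wrong", "nonsense"]
logits = [2.0, 1.8, 0.5]
probs = softmax(logits)

for run in range(5):
    # same prompt, same weights, same code -- and still a fresh random draw
    print(f"run {run}: {random.choices(vocab, weights=probs, k=1)[0]}")
```

a test that passes on one run says nothing about the next run. you can push the temperature toward zero, but the problem lives in the distribution itself, not in any particular draw.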
no worries. i am in the unfortunate position of very often needing to assume the worst in others, and maybe my reading of you was harsher than it should have been; for that i am sorry. but…
“generative AI” is a bit of a marketing buzzword. the specific technology in play here is LLMs, and they should be forcefully kept out of every online system, especially ones people rely on for information.
LLMs are inherently unfit for every purpose. they might be “useful”, in the sense that a rock is useful for driving a nail through a board, but they are not tools in the same way hammers are. the only exception to this is when you need a lot of text in a hurry and don’t care about the quality or accuracy of the text: in other words, spams and scams. in those specific domains i can admit LLMs are the most applicable tool for the job.
so when ostensibly smart people, especially ones who are running public information systems, propose using LLMs for things they are unable to do, such as explaining species identification procedures, it means either 1) they’ve been suckered into believing they’re capable of doing those things, or 2) they’re being paid to propose those things. sometimes it is a mix of both. either way, it very much indicates those people should not be trusted.
furthermore, the technology industry as a whole has already spent several billion dollars trying to push this technology onto and into every part of our daily lives. LLM-infested slop has made its way onto every online platform, more often than not with direct backing from those platforms. and the technology industry is openly hostile to the idea of “consent”, actively trying to undermine it at every turn. it’s even made it all the way into the statement attempting to reassure users on that forum post about the mystery demo LLMs: note the use of the phrase “making it opt-out”. why not “opt-in”? why not “with consent”?
it’s no wonder that people are leaving: the writing is more or less on the wall.