• 0 Posts
  • 677 Comments
Joined 1 year ago
Cake day: March 22nd, 2024

  • I feel like cult orthodoxy probably accounts for most of it. The fact that they put serious thought into how to handle a sentient AI wanting to post on their forums does also suggest that they’re taking the AGI “possibility” far more seriously than any of the companies that are using it to fill out marketing copy and bad news cycles. I for one find this deeply sad.

    Edit to expand: if it wasn’t actively lighting the world on fire I would think there’s something perversely admirable about trying to make sure the angels dancing on the head of a pin have civil rights. As it is, they’re close enough to actual power and influence that they’re enabling the stripping of rights and dignity from actual human people instead of staying in their little bubble of sci-fi and philosophy nerds.

  • Behind the Bastards just wrapped their four-part series on the Zizians, which has been a fun trip. Nothing like seeing the live reactions of someone who hasn’t been at least a little bit plugged into the whole space for years.

    I haven’t finished part 4, but so far I’ve deeply appreciated Robert’s emphasis on how the Zizian nonsense isn’t that far outside the bounds of normal Rationalist nonsense, and on how the Rationalist movement itself has a long history as a kind of cult incubator, even if Yud himself hasn’t fully leveraged his influence over a self-selecting high-control group.

    Also the recurring reminders of the importance of touching grass and talking to people who haven’t internet-poisoned themselves with the same things you have.

  • I mean, it does amount to the US government - aka “the confederation of racist dunces” - declaring their intention to force the LLM owners - all US-based companies (except maybe those guys out of China, a famous free speech haven) - to make sure their model outputs align with their racist dunce ideology. They may not have a viable policy in place to effect that at this point, but it would be a mistake to pretend they’re not going to implement one.

    The best case scenario is that it ends up being designed and implemented incompetently enough that it just crashes the AI markets. The worst case scenario is that we get a half-dozen buggy versions of Samaritan from Person of Interest but with a hate-boner for anyone with a vaguely Hispanic name. A global autocomplete that produces the kind of opinions that made your uncle not get invited to any more family events. Neither scenario is one that you would want to be plugged into and reliant on, especially if you’re otherwise insulated by national borders and a whole Atlantic ocean from the worst of America’s current clusterfuck.

  • Surely there have to be some cognitive scientists who are at least a little bit less racist who could furnish alternative definitions? The actual definition at issue does seem fairly innocuous from a layman’s perspective: “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.” (Aside: it doesn’t do our credibility any favors that for all the concern about the source I had to track all the way to Microsoft’s paper to find the quote at issue.)

    The core issue is obviously that they either took the definition completely out of context or else decided it wasn’t important that their source was explicitly arguing in favor of specious racist interpretations of shitty data. But it also feels like breaking down the idea itself may be valuable. Like, is there even a real consensus that those individual abilities or skills are actually correlated? Is it possible to be less vague than “among other things”? What does it mean to be “more able to learn from experience” or “more able to plan” in a way that is rooted in an innate capacity rather than in the context and availability of good information? And on some level, if that kind of intelligence is a unique and meaningful thing not emergent from context and circumstance, how are we supposed to see it emerge from statistical analysis of massive volumes of training data? (Machine learning models are nothing but context and circumstance.)

    I don’t know enough about the state of non-racist neuroscience or whatever the relevant field is to know if these are even the right questions to ask, but it feels like there’s more room to question the definition itself than we’ve been taking advantage of. If nothing else, the vagueness means that we haven’t really gotten any more specific than “the brain’s ability to brain good.”