• 1 Post
  • 105 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • I don’t like the idea of restricting the model’s corpus further. Rather, I think it would be good if it used a bigger corpus, but added the date of origin for each element as further context.

    Separately, I think it could be good to train another LLM to recognize biases in various content, and then use that to add further context for the main LLM when it ingests that content. I’m not sure how to avoid bias in that second LLM, though. Maybe complete lack of bias is an unattainable ideal that you can only approach without ever reaching it. A rough sketch of the kind of preprocessing I have in mind is below.
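
    A minimal Python sketch of that idea, assuming a hypothetical `bias_annotator` callable that stands in for the second LLM (none of the names here refer to any real library or API): each corpus element keeps its full text, but gets wrapped with its date of origin and a bias note before the main model ever sees it.

    ```python
    from dataclasses import dataclass
    from datetime import date
    from typing import Callable


    @dataclass
    class Document:
        text: str
        origin_date: date


    def annotate(doc: Document, bias_annotator: Callable[[str], str]) -> str:
        """Wrap a document with date-of-origin and bias context for the main LLM."""
        # The second model's judgement, e.g. "leans promotional".
        bias_note = bias_annotator(doc.text)
        return (
            f"[origin: {doc.origin_date.isoformat()}]\n"
            f"[bias note: {bias_note}]\n"
            f"{doc.text}"
        )


    # Toy usage with a stub standing in for the bias-recognition LLM.
    if __name__ == "__main__":
        stub = lambda _text: "unknown (stub annotator)"
        doc = Document(text="Some archived article text.", origin_date=date(2015, 6, 1))
        print(annotate(doc, stub))
    ```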

  • But the bribe amounts have very little to do with how unfathomably rich the “donors” are! If you look at all those bribes, the amounts are still within the realm of what the 99% could put together.

    But I don’t think it would even cost the 99% that much: their participation would force the 1% to up their game (in other words, there’d be bribe inflation) until the 99% could no longer follow suit, at which point the 99% wouldn’t need to pay at all. And the higher price would make some bribers think twice, which might lead to less bribery happening overall.