Once you’ve trained your large language model on the entire written output of humanity, where do you go? Here’s Ilya Sutskever, ex-OpenAI, admitting to Reuters that they’ve plateaued: [Reuters] The…
This is a misleading headline. The "scaling" in the quote refers to the size of the text data the models get trained on. Since the vast majority of written text is already in the training library, that library can't be scaled up much further.
BUT that doesn't mean ai isn't going to be able to improve and expand its capabilities. Increasing the training library size is a 'quick' and 'easy' way to improve ai output, but it's not the only way. At a bare minimum there are a lot more potential ways that current ai tech can be applied than are currently commercially available, and those are very much already in the development stages.
I get that many of us don't like the idea of ai, but we don't do ourselves any favors by falling for misinformation just because it says what we want to hear.
There are many companies right now employing (read: exploiting) contract workers to rate chatbot responses in order to improve the models. I have firsthand experience with this work and let me tell you it's… a complete fucking shitshow. I can't imagine they're getting much good data from it, but they are absolutely throwing money at the problem hand over fist.
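(For anyone wondering what "rating chatbot responses" actually feeds into: the usual recipe is collecting pairwise human preferences and training a reward model on them, RLHF-style. A minimal sketch of what one such record might look like and how you'd check whether a scoring function agrees with the raters; the field names and the toy scorer are made up for illustration, not any lab's real pipeline:)

```python
# Illustrative only: a toy pairwise-preference record of the kind human
# raters produce, plus a check of how often a scoring function agrees
# with the human choices. Real reward models are trained on thousands
# of these judgments; the "reward_fn" here is a deliberately dumb stand-in.
from dataclasses import dataclass


@dataclass
class PreferenceRecord:
    prompt: str
    response_a: str
    response_b: str
    preferred: str  # "a" or "b", as chosen by the human rater


def agreement_rate(records, reward_fn):
    """Fraction of human choices that the scoring function agrees with."""
    hits = 0
    for r in records:
        scored = "a" if reward_fn(r.prompt, r.response_a) >= reward_fn(r.prompt, r.response_b) else "b"
        hits += scored == r.preferred
    return hits / len(records)


if __name__ == "__main__":
    data = [
        PreferenceRecord(
            prompt="Explain recursion briefly.",
            response_a="Recursion is when a function calls itself on a smaller version of the problem.",
            response_b="Recursion is a type of loop.",
            preferred="a",
        ),
    ]
    # Stand-in scorer: longer answers win. Prints 1.0 for this toy example.
    print(agreement_rate(data, lambda prompt, resp: len(resp)))
```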
100%, the anti-AI hype is as misinformed as the AI hype. We have so much work ahead of us to effectively utilize the current LLMs.