Google says Gemini, launching today inside the Bard chatbot, is its “most capable” AI model ever. It was trained on video, images, and audio as well as text.
Not likely. They may have tested it as an adversarial feedback tool, but it would be far more accurate and efficient to obtain the source data directly rather than paying OpenAI for output that may or may not be correct.
Researchers did, I believe, trick ChatGPT into regurgitating some of its training data, though it amounted to only a few hundred MB.
For the fine-tuning stage at the end, where you turn the base model into a chatbot, you need specific instruction-tuning data (e.g., OpenOrca). People have used ChatGPT to generate such data. Come to think of it, if you source labels through Mechanical Turk, you almost certainly end up including ChatGPT-generated text anyway.
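To make the fine-tuning data concrete, here is a minimal sketch of what an OpenOrca-style instruction-tuning record might look like, serialized to JSON Lines. The field names approximate the public OpenOrca schema, and the response is a hard-coded placeholder; in practice it would come from a teacher model (e.g., via an LLM API).

```python
import json

def make_record(rec_id, system_prompt, question, response):
    """Build one OpenOrca-style instruction-tuning record.

    Field names approximate the public OpenOrca dataset schema;
    other datasets use different keys (e.g. "instruction"/"output").
    """
    return {
        "id": rec_id,
        "system_prompt": system_prompt,
        "question": question,
        "response": response,
    }

def to_jsonl(records):
    """Serialize records to JSON Lines, a common fine-tuning input format."""
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in records)

# Placeholder data standing in for teacher-model output.
records = [
    make_record(
        "demo.0001",
        "You are a helpful assistant.",
        "Explain overfitting in one sentence.",
        "Overfitting is when a model memorizes its training data instead of "
        "learning patterns that generalize to new inputs.",
    )
]

print(to_jsonl(records))
```

A dataset like this, with thousands to millions of such records, is what the fine-tuning stage consumes; the debate above is only about where the `response` field comes from.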
Yes, it could be done that way, and maybe GPT models were used, but calling these APIs isn't free, and there are plenty of open (and surely internal) models that could serve the same purpose.