OpenAI spends about $700,000 a day just to keep ChatGPT running, and that figure does not include its other AI products like GPT-4 and DALL-E 2. Right now, it is pulling through only because of Microsoft's $10 billion investment.
“Using a feedback loop. ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.”
So basically create its own Fox News and see how that goes.
This is widely known to destroy your model very quickly.
The feedback loop is already happening, and is called model collapse.
It’s not a good thing.
The full suggestion includes “This would allow ChatGPT to learn from its own mistakes”, which implies that the text it generated would be evaluated and curated before being fed back in for training. That, along with mixing in non-AI-generated text alongside the AI-generated material, should prevent model collapse.
Model collapse is basically inbreeding, with similar causes and similar solutions. A little inbreeding is not inherently bad; indeed, it’s used frequently when you’re trying to breed an organism to have specific desirable characteristics.
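The curate-then-mix idea above can be sketched as a simple data-pipeline step. This is a minimal illustration, not anything OpenAI has described: the function names, the quality heuristic, and the 70/30 mixing ratio are all made up for the example.

```python
import random


def quality_filter(text: str, min_len: int = 20) -> bool:
    # Hypothetical stand-in for the "evaluated and curated" step:
    # drop generations that are too short or contain an obvious
    # low-quality marker phrase.
    return len(text) >= min_len and "as an AI language model" not in text


def build_training_mix(generated: list[str], human_written: list[str],
                       human_fraction: float = 0.7) -> list[str]:
    """Curate model outputs and blend them with human-written text.

    Mixing fresh, non-AI-generated data into the training set is the
    usual mitigation for model collapse: the model never retrains on
    its own raw, unfiltered output alone.
    """
    curated = [t for t in generated if quality_filter(t)]
    # Add enough human-written samples to hit the target fraction.
    n_human = int(len(curated) * human_fraction / (1 - human_fraction))
    mix = curated + human_written[:n_human]
    random.shuffle(mix)  # avoid ordering bias between the two sources
    return mix
```

The key design point is that the curation filter and the human-data fraction are the two knobs: tighten the filter and raise the human fraction, and the feedback loop stops compounding the model's own errors.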
If having an AI tell researchers that they should base its next iteration off of Megatron isn’t the plot of a Michael Bay Transformers movie already, it should have been.