> Distillation isn’t an uncommon practice, but OpenAI’s terms of service prohibit customers from using the company’s model outputs to build competing AI.
I have the absolute tiniest of violins for this given OpenAI's behaviour vs everyone else's terms of service.
“Copyright must evolve into the 21st century (…so that AI can legally steal everything produced by people)”
And also “Don’t steal our AI!”
The world is not prepared for the mental gymnastics that OpenAI/Google/etc will employ to defend their copyright if their big models ever get leaked.
I see no evidence that Google is doing this. Any sources?
I'm still unclear how they are able to claim this considering their raw thinking traces were never exposed to the end user, only summaries.
The raw thinking traces were shown previously in the API.
At this point, they're all using each other's output, because so much of the new content they're scraping for data is itself generated.
These models will converge and plateau because the datasets are only going to get worse as more of their training content becomes incestuous.
The default Llama 4 system prompt even instructs it to avoid using various ChatGPT-isms, presumably because they've already scraped so much GPT-generated material that it noticeably skews their models' output.
I recall that AI trained on AI output over many cycles eventually degrades into something akin to a noise texture (the so-called model collapse effect), with output quality deteriorating rapidly.
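A minimal toy sketch of that loop, assuming the degradation described is the model-collapse phenomenon: fit a simple model to data, sample from the fit, refit on the samples, and repeat. With small training sets, the fitted spread performs a downward-biased random walk, so the learned distribution tends to narrow toward degenerate output.

```python
import numpy as np

# Toy "model collapse" loop: each generation is trained only on samples
# drawn from the previous generation's fitted model. The fitted standard
# deviation drifts downward over generations, a 1-D caricature of the
# degradation described above.
rng = np.random.default_rng(0)
n = 50                                     # small "training set" per generation
data = rng.normal(0.0, 1.0, size=n)       # generation 0: "human" data
mu, sigma = data.mean(), data.std()

for gen in range(1, 201):
    synthetic = rng.normal(mu, sigma, n)   # train on the previous model's output
    mu, sigma = synthetic.mean(), synthetic.std()
    if gen % 40 == 0:
        print(f"generation {gen:3d}: mu={mu:+.3f}, sigma={sigma:.3f}")
```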
Won’t most AI produced content put out into the public be human curated, thus heavily mitigating this degradation effect? If we’re going to see a full length AI generated movie it seems like humans will be heavily involved, hand holding the output and throwing out the AI’s nonsense.
Some will be heavily curated, by those who care about quality. That is a lot slower to produce and requires some expertise to do right, so there will be far less of it.
The vast majority of content will be (already is) whatever is fastest and easiest to create: AI slop.
Yes indeed, some studies have already been done on this.
There might be a plateau coming, but I'm not sure that will be the reason.
It seems counterintuitive, but there is some research suggesting that using synthetic data might actually be productive.
I think there's probably a distinction to be made between deliberate, careful use of synthetic data, as opposed to blindly scraping 1PB of LLM generated SEO spam and force-feeding it into a new model. Maybe the former is useful, but the latter... probably not.
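A hypothetical sketch of that distinction: a deliberate pipeline gates synthetic text behind a quality filter before training, whereas blind scraping ingests everything. The `quality_score` heuristic below is purely illustrative, a stand-in for whatever real filter (a reward model, dedup, verification) a lab might use; nothing here reflects an actual lab's pipeline.

```python
# Hypothetical curation gate: keep synthetic samples only if they clear a
# quality threshold, instead of ingesting a scraped corpus wholesale.

def quality_score(text: str) -> float:
    """Crude heuristic: penalize spammy repetition, reward some length."""
    words = text.split()
    if not words:
        return 0.0
    uniqueness = len(set(words)) / len(words)
    return uniqueness * min(len(words) / 50, 1.0)

def curate(synthetic_corpus: list[str], threshold: float = 0.5) -> list[str]:
    """Deliberate use of synthetic data: only gated samples reach training."""
    return [doc for doc in synthetic_corpus if quality_score(doc) >= threshold]

corpus = [
    "buy cheap buy cheap buy cheap buy cheap",   # SEO-spam-like: filtered out
    " ".join(f"token{i}" for i in range(60)),    # varied and longer: kept
]
print(curate(corpus))  # blind scraping would train on both; curation keeps one
```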
Interesting. The tonal change has definitely been noticeable. It also seems a bit more succinct and precise with its word choice, less flowery. That does seem to be in line with Gemini's behavior.
I wonder if at this point it really matters who used whose data ...