
Generative AI Goes ‘MAD’ When Trained on AI-Created Data Over Five Times


A new study has found an inherent limitation in current-generation AI networks such as those behind ChatGPT and Midjourney. Models trained on AI outputs (such as text generated by ChatGPT or images produced by a Stable Diffusion model) tend to go "MAD" after five training cycles on AI-generated data. As the study's example images show, the result is oddly mutated outputs that no longer reflect reality.

MAD, short for Model Autophagy Disorder, is the acronym the Rice and Stanford University researchers behind the study use to describe how AI models and their output quality collapse when repeatedly trained on AI-generated data. As the name implies, the model essentially eats itself: it loses information from the tails (the extremes) of the original data distribution and starts producing outputs that cluster around the mean, much like the Ouroboros of myth devouring its own tail.


In essence, training an LLM on its own (or another model's) outputs creates a convergence effect in the data the LLM is built on. This can be seen in a graph shared on Twitter by scientist and research team member Nicolas Papernot, where successive training iterations on LLM-generated data gradually (yet dramatically) cause the model to lose access to the data at the extremities of the bell curve: the outliers, the less common elements.

The data at the edges of the spectrum, the samples with fewer variations and less representation, essentially disappears. What remains in the model is less varied and regresses toward the mean. According to the results, it takes around five of these rounds for the tails of the original distribution to vanish; that is the moment MAD sets in.
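
To make the effect concrete, here is a minimal, hypothetical sketch of such a self-consuming training loop. It is not the researchers' code: it stands in for a full generative model with a one-dimensional Gaussian fit and, as an assumption on our part, adds a mild "keep the typical-looking outputs" bias of the kind real generation pipelines tend to introduce. Watch the fitted spread and the mass in the original distribution's tails shrink with each generation.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10_000        # synthetic samples per generation (assumed)
GENERATIONS = 5   # roughly the five rounds described above
TRIM = 0.01       # assumed mild bias: drop the most extreme 1% from each tail

# Generation 0: "real" data drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=N)

for gen in range(1, GENERATIONS + 1):
    # "Train" the next model generation: fit a Gaussian to the current data.
    mu, sigma = data.mean(), data.std()
    # "Generate" the next training set entirely from that fitted model.
    data = rng.normal(loc=mu, scale=sigma, size=N)
    # Mild quality bias (our assumption): discard the most extreme samples.
    lo, hi = np.quantile(data, [TRIM, 1.0 - TRIM])
    data = data[(data > lo) & (data < hi)]
    # Track how much probability mass survives beyond +/-2 sigma of the
    # ORIGINAL distribution: this is the "tail" that MAD eats away.
    tail_mass = np.mean(np.abs(data) > 2.0)
    print(f"generation {gen}: fitted std = {sigma:.3f}, "
          f"tail mass beyond +/-2 sigma_0 = {tail_mass:.4f}")
```

Even in this crude stand-in, each round narrows the fitted distribution and the rare, extreme samples become scarcer, which is the same regression toward the mean the article describes.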





