
New work explaining the inner workings of artificial intelligence could provide a way around the threat of AI “model collapse,” potentially averting a future rise in AI hallucinations.
The term “model collapse,” coined in 2024, refers to a scenario in which an AI model trained on AI-produced data ceases to give accurate results, instead producing “gibberish” because of the poor quality of its training data.
Some have warned that the supply of high-quality text for training systems such as large language models (LLMs) could run out as early as this year, so data produced by models themselves has taken on a larger training role—inviting the threat of model collapse.
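The feedback loop behind model collapse can be illustrated with a toy simulation (a hedged sketch, not the method described in the article): a “model” that is just the empirical token distribution of its training corpus is repeatedly retrained on its own output. Because each generation can only emit tokens it has seen, rare tokens in the tail of the distribution tend to vanish over generations, and the vocabulary can only shrink. All names here (`train`, `generate`, the sample corpus) are illustrative inventions.

```python
import random
from collections import Counter

def train(corpus):
    # "Train" a toy language model: just the empirical token distribution.
    counts = Counter(corpus)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(model, n, rng):
    # Sample n tokens from the model's distribution.
    toks = list(model)
    weights = [model[t] for t in toks]
    return rng.choices(toks, weights=weights, k=n)

rng = random.Random(42)
# Generation 0: "human" data with a long tail of rare tokens.
corpus = ["the"] * 400 + ["cat"] * 80 + ["sat"] * 15 + ["on"] * 4 + ["mat"]
supports = []
for gen in range(10):
    model = train(corpus)
    supports.append(len(model))          # distinct tokens still surviving
    corpus = generate(model, 500, rng)   # next generation trains on model output
# The support is non-increasing: rare tokens that fail to be sampled
# in one generation are gone from every later one.
```

This loss of distributional tails is one commonly cited mechanism of collapse; real LLM training pipelines are of course far more complex than this frequency model.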
Simple statistical models reveal a fix
Through analysis of a simple yet powerful set of statistical ...


