Generative AI Runs into a Problem


Artificial intelligence has long been one of humanity’s greatest dreams. Whether in the biological design of Mary Shelley’s Frankenstein’s monster or the more mechanical and digital imaginings of The Matrix, the idea isn’t a new one. For all the effort poured into the concept over the years, it’s only recently that generative AI has started to match what we dreamed AI could be.

With systems like ChatGPT and Midjourney, generative AI has been producing output that appears outright human. Of course, like humans, no AI is perfect, and these programs are beginning to run into problems of their own unintentional making. The repercussions could be significant for the future of generative AI and the output it can produce.

Generative AI learns by example, and it’s in this learning process that problems are beginning to grow. AI learns by looking at millions of pieces of media and finding patterns within them. If Midjourney sees hundreds of thousands of pictures labelled ‘ball’, for example, it can eventually derive an idea of what a ball is. This worked well at first, because early systems had only human-made examples to draw from.
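The learning-by-labelled-example idea can be sketched in miniature. The toy below averages the feature vectors seen for each label, then classifies a new input by its nearest average; the two features and their values are invented for illustration, and real image models learn far richer representations than this.

```python
# Toy sketch of learning from labelled examples: average the features
# seen per label, then classify new inputs by the nearest average.
# The (roundness, size) features here are made-up illustration values.

def train(examples):
    """examples: list of (label, feature_vector) pairs."""
    sums, counts = {}, {}
    for label, vec in examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    # One "average example" (centroid) per label.
    return {
        label: [v / counts[label] for v in acc]
        for label, acc in sums.items()
    }

def classify(centroids, vec):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], vec))

# Invented two-feature examples: (roundness, size).
labelled = [
    ("ball", [0.9, 0.3]), ("ball", [0.8, 0.4]),
    ("box",  [0.1, 0.5]), ("box",  [0.2, 0.6]),
]
centroids = train(labelled)
print(classify(centroids, [0.85, 0.35]))  # a round-ish object
```

The more labelled examples the model sees, the better its "average ball" matches reality, which is why training data quality matters so much in the paragraphs that follow.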

In the couple of years since generative AI became mainstream, an enormous amount of AI-created media has been added to the zeitgeist. This media can be fantastic and accurate, but it’s often full of little mistakes that humans easily avoid. Fingers and hands are the most obvious examples in AI-generated images, but similar flaws appear in AI-generated text too. Over time, AI has begun to copy not human work, but the slightly imperfect work of other AI. The AI doesn’t know it’s copying AI, and like making a VHS copy of a VHS copy, the results continue to degrade.
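The VHS analogy can be put in numbers. In the toy sketch below, each "generation" re-encodes the previous output with a small systematic loss (here, a simple blur), and the drift from the original compounds; this is an illustration of copy-of-a-copy degradation, not of how real generative models actually train.

```python
# Toy illustration of copy-of-a-copy degradation: each generation is a
# slightly lossy copy of the last, and the error compounds.

def lossy_copy(signal):
    """Return a slightly blurred copy of the signal (the 'loss')."""
    n = len(signal)
    return [
        (signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3
        for i in range(n)
    ]

def drift(a, b):
    """Mean absolute difference between two signals."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# A sharp "original" work: a square pulse.
original = [0.0] * 20 + [1.0] * 20 + [0.0] * 20

copies = [original]
for _ in range(10):
    copies.append(lossy_copy(copies[-1]))

# Each generation sits further from the original than the last.
errors = [drift(original, c) for c in copies]
print([round(e, 3) for e in errors])
```

Swap the blur for any other imperfect re-encoding and the pattern holds: once outputs feed back into inputs, small errors accumulate instead of averaging out.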

It’s important to note, however, that this issue is confined to generative systems. AI is far broader than this one subsection, playing a huge part in other fields that avoid the problem. Consider, for example, the AI that goes into creating online casino games. Virtual slot games are tested by AI, but that AI is built on a static, strict set of rules. Such testing ensures the games are reliable and play equally well on different devices like tablets, smartphones, laptops, and desktops. It’s also here, in the idea of a static base design, that a potential solution to generative AI’s issues might appear.

A Limited Solution?

AI works best when limited to strict data sets. This is where generative AI began, and where it can still succeed when drawing from user data such as online search queries. For broader image and text learning, a solution could be to restrict training media to that created within a certain timeframe. Anything created before 2020 would be free of AI influence, so issues of copy-degradation could be avoided.
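The timeframe restriction amounts to a simple date filter over the training corpus. The sketch below shows the idea on hypothetical records; the `created_at` field and record shape are illustrative assumptions, not any real dataset’s schema.

```python
from datetime import datetime

# Hypothetical training records; this shape and the 'created_at' field
# are illustrative assumptions, not a real dataset's schema.
corpus = [
    {"id": 1, "created_at": "2017-06-01", "text": "pre-AI-era article"},
    {"id": 2, "created_at": "2019-11-20", "text": "human-written piece"},
    {"id": 3, "created_at": "2023-03-15", "text": "possibly AI-generated"},
]

# Cutoff suggested in the text: media created before 2020 predates
# mainstream generative AI, so it is unlikely to contain AI output.
CUTOFF = datetime(2020, 1, 1)

def predates_ai(record):
    return datetime.strptime(record["created_at"], "%Y-%m-%d") < CUTOFF

clean_training_set = [r for r in corpus if predates_ai(r)]
print([r["id"] for r in clean_training_set])  # ids of pre-cutoff records
```

The trade-off is obvious: the filter keeps AI output out of the training set, but it also freezes the corpus in time, excluding everything humans have made since the cutoff.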

Generative AI is still in its infancy, so it’s too early to tell how solutions will shake out. If not addressed soon, however, AI output could continue to suffer. It’s an unprecedented problem, but one which billions of dollars are riding on.

