Why fuss over originality?
The most recent iteration of AI technology, generative AI, has been controversial to say the least. Some praise it for the efficiency it brings; others blame it for killing individual creativity. Part of that criticism has merit: generative models do not invent out of intuition or by their "gut".
But what does “original thought” even mean? Is there such a thing as pure originality? And why does our civilization’s latest technology work by remixing and recombining what came before? That’s what I want to explore in this post.
A tale from the R&D multiverse
Let’s start with a story. I work in an R&D department of a multinational company; a big part of my job is feasibility testing for new product ideas. My team and I have been developing one particular pitch for about five years. At the project's start, management handed us pitch slides with polished graphs and a bold statement: this new technology could disrupt up to 30% of the relevant market.
Over those five years we did the work. We modelled the technology, ran safety and cybersecurity analyses, and, based on our new insights, re-estimated the potential disruption at roughly 38%.
That outcome felt satisfying: the number seemed to confirm and even exceed the original ambition. We kept the original 30% figure as a conservative threshold in our reports, papers and external messaging, even reusing the original graphs in modified form.
Here’s the twist. Recently, while assigned to strategic work, my team read consultancy reports submitted to the company over the past several years. To my astonishment, one report, dated less than a year before our project began, had warned that a new technology could disrupt about 30% of the market — and included graphs very similar to the ones we had been using. We were looking at the seeds of our own project.
The ouroboros of knowledge
This story about our little R&D project got me thinking. It probably started with one senior leader simply wanting to explore the potential threat the consultants had warned him about. A few years of research later, the worry was somewhat more substantiated, and a batch of new publications and papers had become public. This "new" knowledge will probably be processed by more analysts down the line. Eventually, a very similar "potential 30% disruption" report might land on someone's desk. And we might not even have been the first link in the chain.
I think I understand why this happened. It's all about institutional incentives. Consultants, managers, and analysts operate under reputational pressure. A neat, memetic number like “30%” is catchy: it travels. Slides get copied, summaries are made, and an idea acquires momentum independent of its original evidence. The process is social as much as intellectual.
Crucially, this chain of regurgitation happened before accessible generative AI became common. Music sampling and the recent obsession with nostalgic reboots and franchises show that remixing predates generative AI. The image that comes to mind is the ouroboros — a snake eating its own tail. Is that what our culture has (d)evolved into?
Break the loop
True creativity, however, does not necessarily come from "thin air". In fact, most discoveries have historically come from mistakes, or from applying known tools to a different domain. Having access to the same pool of data, but looking at it from a different angle, *can* still create new information. True originality is all about interpretation, not production.
Perhaps that’s why generative AI behaves the way it does: it connects patterns from the data we collectively produce. We have realised there is power in our shared information, but our risk aversion sometimes prevents us from spotting the patterns ourselves. So we build machines to connect the dots.
How do we use that to our advantage? Break the loop: take the insights the machines give you and own them. Ship the product; see what happens. It might succeed, it might fail — or it might do something surprising. Either way, making is how we learn. Generative AI is not the origin of idea recycling; it’s an accelerator and a mirror. If we want less echo and more novelty, change the incentives: experiment, tolerate failure, and value interpretation over repetition. Let’s find out!