Is History Repeating?
An apology and a reminder
I was recently interviewed for “heute journal,” a news program on the German TV network ZDF. They asked the usual questions about my artistic process of working with AI. But they also wanted to hear my thoughts about the people whose work has been used to train the latest round of AI systems (ChatGPT, Midjourney, etc.). Now that anyone can create work in their “style,” did I think they had been robbed? What advice did I have for creative people who might find themselves out of work, replaced by an AI system?
I wasn’t very sympathetic to the soon-to-be-unemployed creative. And I wasn’t very alarmist about the coming impact of AI. My response was an amalgam of things I’ve been thinking about for a while: 1) that AI is just the latest in an ongoing series of technological innovations, and that soon there will be something else demanding our attention; 2) that there is a whole history of creative disruptions caused by new technologies (think: photography or desktop publishing), and that, on the whole, they’ve spurred new forms of creativity that were never possible before; and 3) that copyright is a relatively new concept, one which has the potential to stifle creativity, and so maybe it’s time to revisit our dependence on it.
In retrospect I was wrong.
As I thought about the interview, I was surprised at my seemingly passive acceptance of AI and its inevitability. That was certainly not my attitude when I started working with “modern” machine learning in the mid-2010s. At that time I was seriously concerned about how AI would impact society. Looking back at my 2019 essay “Tabula Rasa,” I see that I wrote:
“…the future is being built on AI. It is poised to be the latest in the inevitable progression of technologies that brings greater efficiency and optimization of all aspects of our lives — with little regard for any unintended consequences. And so we are resigned to the inevitable disruption that will follow in AI’s wake, cynically aware of how it may be used for the benefit of those in power, and that most people will have very little say in what that future will be.”
My work at the time was based on the belief that beauty could act as a Trojan horse — drawing people in, getting them curious about the technology, allowing aesthetic experiences to be the basis by which diverse and non-technical voices could develop intuitions for what AI was and could become, and feel empowered to participate in shaping AI’s future.
I think I was caught off guard when the latest massive (or “large,” as they’re called) AI models were released. Systems like ChatGPT and Midjourney produced amazingly clever output, but my attitude was that people who used them were cheating. I questioned why people would want to create works that were merely backwards-looking “statistical renderings” (a great term from Hito Steyerl). I agreed with Ted Chiang, who thought we should be saying “applied statistics” instead of “AI,” and that all the talk of “intelligent” systems was a distraction.
And yet, despite my (and others’) skepticism about these new tools, people started using them en masse. AI moved from a coming curiosity to something being used everywhere. There was a near-universal belief that the world was in the midst of a radical change. And the tech companies certainly reinforced this with their endless hype and alleged “concerns” about the dangers of their “miraculous” and mysterious new technology.
I recently participated in an event where people were asked to place AI on a spectrum: where did we think it belonged, between “revolution” and “history repeating”? I landed fairly firmly on the “history repeating” side. This was partly because I’ve been working with AI for almost 40 years and have seen booms and busts in the field before, but also because I’ve seen numerous other disruptive technologies come along, and am resigned to the fact that there will be others after this one.
But the history that is repeating is not simply a story of yet another new tech. It’s one where a new technology, again, harms our humanity. This recent article on the early computer scientist Joseph Weizenbaum nicely summarizes the problem:
“…the computer revolution… was actually a counter-revolution. It strengthened repressive power structures instead of upending them. It constricted rather than enlarged our humanity, prompting people to think of themselves as little more than machines. By ceding so many decisions to computers… we had created a world that was more unequal and less rational, in which the richness of human reason had been flattened into the senseless routines of code.”
And so, going back to the interview that triggered this post, I was surprised at my lack of anger when I was asked about today’s AI systems. It wasn’t that I had forgotten the concerns that started my recent AI work in the 2010s. But I was prioritizing my narrative that each “new” thing is just one more in an endless sequence of technological developments, and that we can therefore take it less seriously, rather than considering the actual impact of new technologies.
Writing this post is a kind of mea culpa for my lackluster interview response. But it has also brought to the foreground a question I’ve recently been struggling with: Is it too late for art to have an impact on the future of AI? And, if so, is it time for me to put AI aside and move on to the next new?
To those questions… I haven’t decided yet. But I don’t think it’s too late. Despite all the hype, AI is still in its infancy and deeply flawed, as is the industry developing and promoting it. Those flaws need to be made visible, their dangers highlighted. It’s more important than ever that we each act to create a more equitable and humane future.
It’s a topic that I will explore in future posts. Stay tuned.