Most people are probably pretty familiar with artificial intelligence and the discourse around it by now. As much as it has been a subject of fear-mongering and misinformation, the fact that you have probably used ChatGPT before (even if just for fun) shows it has already been normalized. The more relevant conversation now is whether AI can be used ethically. On one hand, you have those who argue it is needlessly stealing jobs from real people, particularly artists, just so that executives can get quicker and bigger profits. On the other hand, its supporters claim it is an innovation that will unlock new creative potential and, like the technological innovations preceding it, create new jobs to replace the ones it takes. As a writer, the existence of something like ChatGPT puts me on edge, so it’s not surprising that I found myself in the first camp pretty quickly. However, even I have to admit that AI’s supporters are right about at least one thing: AI is not inherently evil. But that doesn’t mean we shouldn’t regulate it.
Supporters of AI usually ground their takes in history. They point to previous industrial revolutions, particularly the first, and compare AI’s critics to those who once feared, for instance, factory machinery. Workers of that period were also scared their livelihoods would be replaced by machines, and yet the spread of machinery in factories actually created more jobs. Just like those anxious workers, everyone scared of being replaced by AI will be proven wrong once it actually creates new work for everyone while improving the standards and output of creative industries, right?
Well, not quite. History does repeat itself, but generalizations about how entire industries change do not accurately reflect what might happen on an individual level. The eventual creation of new jobs does not inherently mean the people being replaced will be able to acquire (and be qualified for) these new positions. One might say certain jobs should be sacrificed for the overall enrichment of society and point to the exponential increase in profits and wealth following each technological revolution. But who is that wealth and profit really going to? Is it really helping those whose jobs are affected by AI?
In a world where the wealth gap grows wider and wider every day, inflation continues to outpace livable salaries, and rapidly growing industries bring us closer to the brink of environmental destruction, is the goal of more overall wealth really still the ideal? Especially when the money is mostly going to the top 1%? A movie written by AI, as opposed to actual writers, will not improve anyone’s life. It’s certainly not helping the writers who were not employed to produce it. And it’s not like those writers can suddenly go get one of the new jobs in AI development, either. Their only hope would be to be among the lucky few kept on to manage and refine the AI output. This is even more true for visual artists, whose skills in practical art-making may not transfer at all to the way AI art is generated using keywords.
However, I don’t think the solution is eradicating AI. In fact, its supporters are correct when they describe AI’s potential benefits. Everything from cancer screening to environmental protection can be done more efficiently with AI’s assistance than by humans alone. Even in the creative world, tools like MidJourney and ChatGPT can produce images and written passages quickly and cheaply, allowing creatives, especially those without access to big studio resources, to experiment with ideas they might not otherwise explore. Storyboards, concept visuals and even idea generation are all uses of AI that would actually aid smaller creators. But these potential benefits bank on AI being developed and utilized for the good of everyone, which is not our current reality.
With the ongoing writers’ and actors’ strikes, it is worth pointing out that the fight for fairer compensation is directly linked to the discourse around AI in these industries. This is not because Hollywood’s unions are blind to the potential of innovation. It’s because the way AI has already started to pop up in these industries has been explicitly exploitative. Disney recently scanned the likenesses of dozens of underpaid background actors so that it could use digital replicas of them in any future project it wishes. Those actors were paid for only one day of work. It’s basic restrictions against practices like these that the unions are fighting for, and the studios seemingly don’t want to agree to any of them. Nothing exists to incentivize these big companies and executives to use AI ethically. The only possible incentive would be constraints, including government regulation.
I recognize the broad strokes of history: I know that these kinds of technological revolutions are an integral part of our society and have happened many times before with positive outcomes. I know that people have complained about losing their jobs to new technologies just as many do now. I would love to take refuge in the assumption that my career will be fine in the future. But the truth is that this technology can become dangerously exploitative for creatives if it is not regulated at all. If executives don’t have to pay actual people to put out creative projects, they generate more profit, which is all that really matters to their balance sheets.
It is more important than ever to support artists’ efforts to ensure this new technology isn’t used to undermine their ability to make a living. The incredible creative potential of AI can and should be accessed without exploiting artists for profit. This can only be achieved by fighting for proper guardrails on the use of this technology in the arts. After all, these revolutions and the profits they produce should benefit all of us, not just a handful of executives.
Paulie Malherbe ’26 can be reached at paulie_malherbe@brown.edu. Please send responses to this opinion to letters@browndailyherald.com and other op-eds to opinions@browndailyherald.com.