the grok of it all
I received a routine call from my doctor’s office recently, expecting to hear the familiar voice of the receptionist calling with an appointment reminder. When I answered, I was taken aback to be greeted by a strange voice: “this call is being recorded with AI.” The receptionist clicked over to remind me of my appointment, so I thanked her and confirmed, then asked her why AI was on the call. She hastily informed me that this was a top-down decision from her bosses, to ensure that proper notes get taken. (Immediately, I wondered why her bosses would presume that she was incapable of taking sufficient notes.) I accepted her explanation (appreciating that this decision had not come from her), and lamented that I had not consented to any of this. I felt deeply uncomfortable with the idea of my voice being recorded and retained for unspecified future purposes, indefinitely, both without my consent and against my will.
To her credit, she conceded the point, and even acknowledged that a few other clients had expressed similar concerns. She reassured me that the AI would be used strictly for notetaking, but that she would pass my concerns on to her management. I thanked her, hung up, and paused for a few moments. Even in its brevity, that moment crystallized something I have been grappling with for quite some time now: AI integration has been accelerated because we live in a society where consent is neither valued nor prioritized.
Sometimes it feels like I blinked once in 2020, and now I cannot escape the reach of artificial intelligence. AI has very visibly been forcibly integrated into nearly every social media platform and software app. Whether or not these technologies serve any real need, have changed these apps for the better, or have improved user experiences, companies have rushed to adopt them. Not to be “left behind” in the global AI arms race, governments have incorporated AI into their software tools and tech. Even the nonprofit sector has finessed its way into the AI world, often qualifying its efforts under the messaging of “AI for good”.
As a result of this global convergence, and the strength of media in manufacturing consent, AI resistance is often met with defeatism, nihilism, or disgruntled calls for deference. Regular civilians are doing unpaid PR for tech billionaires, arguing people down about the so-called “inevitability” of AI. They surrender their data for free and help destroy the environment, all in exchange for AI-generated images of baby Steve Harvey and a hallucination machine masquerading as your all-encompassing virtual best friend.
Take a glance at LinkedIn and there is no shortage of fear-mongering about how a refusal to engage with the tech will render you unemployable, latching on to this newfangled notion of “AI literacy”. It feels nonsensical to insist that these new technologies (which we have all spent our lives without thus far) must now be embraced, or else we will be left behind. Yet again, force and a dissolution of consent become key components in propping up these kinds of arguments. But the efficacy of this messaging is clear when you see how many people have been convinced of their need for these technologies. Every day online, I am inundated with people who now use AI daily, defending its uses so fervently that they seem to truly believe they cannot continue on without these “tools”.
If it weren’t so socially damaging, it would be quite a remarkable feat from the tech industry. To have finally cracked the ultimate question of capitalism: how do you convince people that they need the product you have designed, when they have lived their entire lives without it? To have successfully engineered a new need within the psyche of the public, making your product indispensable and recession-proofing your company. Unfortunately, it’s becoming evident that the ploy has worked on a large swath of the populace.
It’s interesting because I have watched the discourse change considerably online over the last year. A few different factions have emerged: the “never AI” folks, who are adamant about never using AI. Then, there are the “AI evangelists”; oftentimes, these are tech folks with industry experience who staunchly believe that there are some legitimate use cases for AI. And then, there are the regular civilians who have been seduced into AI use. This category of users has offloaded the bulk of their everyday responsibilities onto these chatbots, which serve as their therapist, their confidant, their personal assistant, etc. Living in the United States, I do have to marginally suppress my condescension toward the third group at times. As much as I align most firmly with the “never AI” category, I often think about how, if we had universal healthcare, we likely wouldn’t have so many people turning to ChatGPT for medical advice and therapy. It is sobering to reflect on the myriad ways this country has failed us. And I make this distinction not to absolve people of their individual responsibility, but to add systemic failure as another layer in understanding how tech found an entry point through government-designed scarcity. Still, the weight of this systemic failure feels particularly substantial when you see some of the stories surfacing lately on AI-induced psychosis. But, alas. Greatest country in the world, am I right?
When you revoke consent by design, you create an environment sustained by a dissolution of consent. Consider that many instances of AI integration require the user to opt out, rather than opt in.
On a much larger scale, one of the clearest emerging examples of the dissolution of consent is in the music industry. Against my will, I have come across the phenomenon of “AI artists”. Xania Monet and Solomon Ray are two AI-generated Black “artists” who have amassed notable Spotify performance metrics (1.2m and 582.2k monthly listeners, respectively) and Instagram followings (196k and 135k, respectively). Their existence in the industry raises new, intersectional concerns for the future: the potential for non-Black studios and production companies to manipulate Black likenesses, and new avenues for digital blackface.
In theory, when you create these Black AI-generated “artists”, you can make them say whatever you want. You can force their likeness to engage with any politics, use their figure as the face of any agenda you want to push. Any pushback from the artist has been nullified by design. In turn, the listener has been duped into thinking they are supporting Black musicians. Consent gets erased from both sides of the equation. The average listener may not be able to detect the giveaways of an AI song that an audio engineer or any other skilled musician would pick up on. Without any labeling on the app to delineate AI-generated music, even if you want to abstain from listening to it, it feels like a statistical inevitability that you will be made to without your consent or knowledge. Suno (an app that allows people to generate music with AI) has only increased that likelihood: its users generate the equivalent of Spotify’s entire music catalog every two weeks, approximately 7 million songs per day.
Some of the more unsettling uses of AI I’ve seen involve generating people’s likenesses posthumously. Every time I remember the eerie hologram of Tupac at Coachella, I think about how technology has been shaping this future of perpetual digital servitude for years. Last year, the BBC used AI to resurrect Agatha Christie’s likeness for an online course. While these may seem like innocuous decisions to some, I personally think that using someone’s likeness in perpetuity is a gross violation of consent, and it presents a number of new ethical concerns. It feels especially heinous when you consider all of the people who never even had the chance to consent to this use, the likes of Tupac and Agatha Christie among them.
No exploration of AI and consent can truly be complete without reckoning with the monstrosity that is Grok, Twitter’s AI chatbot. Just last week, I came across stories of users tagging Grok to remove hijabs and saris, and to completely undress images of women. A new wave of crimes is now shaping the internet experience, as users gain the ability to generate nonconsensual deepfakes of women, children, etc. Because of course, the worst predators on the internet have also turned to using Grok to generate deepfakes of children and CSAM. The most vulnerable groups of people will continue to be exploited through this tech, rife with consent violations. With Grok generating thousands of deepfakes per day, what feels most devastating is the realization that we have truly only scratched the surface of the harm to come.
I can’t help but worry about the future of deepfakes, and what other novel, nefarious purposes people will find for them. Here, we have uncharted territory: how do you navigate the online world absent concrete distinctions between what is real and what is not? If regular Twitter users can use AI to generate something as iniquitous as a nonconsensual deepfake, they can also use that chatbot to fabricate events that never happened, or misrepresent people and politicians through speeches they never made. In an online environment already saddled with a deep misinformation problem, we are careening towards an unusable internet. No thanks to our governments that have propped up the broligarchy, and the tech companies that have done their best to quash any legitimate criticism from AI ethicists.
All things considered, I continue to anchor my stance on AI with an eye towards its future impacts. Most notably, I always return to AI’s environmental implications. It’s not lost on me that some of the most resource-intensive data centers (the facilities used to power AI) are being built in mostly poor, Black areas. To name just a few locations, Elon Musk’s xAI has hitched its wagon to Memphis, and Mark Zuckerberg’s Meta has sunk its talons into rural Louisiana. Capitalism, like AI integration, thrives in an environment that nurtures its extraction through force.
I find comfort in seeing communities start to protest these corporate efforts, and can only hope that public opinion continues to shift against AI for the betterment of the people and the environment. As the staggering costs of AI integration become too unmistakable to ignore, I wonder at what point people will reconsider whether these technologies were ever truly free.