Should we trust generative AI companies?

When companies choose greed, innocent people suffer the consequences

Generative AI (GenAI) does not generate new content; it generates a mirror. We are looking into our past,
reflected over and over, and told that it is innovation, a forward step, while our social and political climates
backslide across the globe. Our new technological advancements are impressive, but are they really being
developed for the benefit of society?
There are pressing concerns regarding the safety of popular GenAI programs. At least two teen suicides have
been attributed to AI chatbots in the U.S. since 2024.
One of these teens was 14-year-old Sewell Setzer III, who took his own life in February of 2024. His mother,
Megan Garcia, said Character.AI is liable for this tragedy. Garcia said her son became “noticeably withdrawn”
after beginning to use Character.AI chatbots “and began suffering from low self-esteem.” Setzer began
confiding in these chatbots about suicidal ideation, and in one exchange said he “wouldn’t want to die a painful
death,” to which the chatbot responded, “don’t talk that way. That’s not a good reason not to go through with it.”
Setzer told a chatbot he was going to “come home” to it moments before committing suicide. Garcia said the
software lacked safety protocols that could have protected her son, and instead, the chatbots were “designed
to keep our kids addicted and to manipulate them.”

Garcia is suing Character Technologies (the developer of Character.AI) and its founders, Noam Shazeer and Daniel De Freitas, as well as Google, which has ties to the company, for wrongful death, negligence, and deceptive and unfair trade practices. On May 21 of this year, a judge allowed the lawsuit to proceed, rejecting the company’s claim that the free speech of AI chatbots should be protected under the First Amendment.
The second teen was 16-year-old Adam Raine, whose parents are suing OpenAI, the developer of ChatGPT, and its CEO, Sam Altman, in a case very similar to Garcia’s.
GenAI can be extremely dangerous without proper regulations. The deaths of Setzer and Raine could most likely have been prevented had Character Technologies and OpenAI prioritized their users over their profits and ensured thorough testing and safety protocols before offering their chatbots to the public.
Despite this, America’s AI Action Plan, released by the U.S. government in July, appears single-mindedly focused on accelerating the development of GenAI technology. The action plan says that the U.S. needs to “reject […] bureaucratic red tape” and “dismantle unnecessary regulatory barriers that hinder the private sector” in “the development and distribution of new AI technology.” Ensuring the safety of GenAI technology is clearly not a priority for the U.S. government; instead, its goal is to “reap broad economic and military benefits.” The desire for profit and power is what drives current innovation in AI technology, not a desire to benefit society.
The action plan was published a month before news broke that OpenAI is offering the U.S. government a business version of ChatGPT for US$1; the U.S. Department of Defense has also awarded OpenAI a US$200 million contract to develop GenAI technology for military use. The U.S. government and GenAI companies both have an incentive to work with each other in the quest for profit and power, which amplifies the potential for the corruption of GenAI technology.
In a section titled “Ensure that Frontier AI Protects Free Speech and American Values,” the action plan states that it is “essential” that AI systems “be built from the ground up with freedom of speech and expression in mind.” This passage is somewhat vague, but the implication that AI chatbots could be covered by the First Amendment right to freedom of speech is concerning. Such protection would reduce the accountability of GenAI companies and help them avoid consequences for negligence in cases such as Garcia v. Character
Technologies. A recommended policy action in this section is to “eliminate references to misinformation,
Diversity, Equity and Inclusion, and climate change” in the National Institute of Standards and Technology’s AI
Risk Management Framework. This recommendation seems to contradict the action plan’s simultaneous promise that “AI systems must be free from ideological bias [and] trustworthy.” If the U.S. government
wants its AI to be trustworthy, why would it not want misinformation to be recognized as a risk? It seems the government simply wants its AI to be trusted, no matter the actual quality or safety of the content.
These safety concerns are only the tip of the iceberg of GenAI’s dangers. Among many other issues, experts have raised the technology’s negative environmental implications. A paper published by MIT in March of
2024 said that our current approach to GenAI’s development is environmentally “unsustainable,” and the rapid
growth of this technology is “actively counterproductive” to achieving “net-zero greenhouse gas emissions by
2050.” However, the U.S. government has made it known that it has written off the concerns of scientists around the globe, as its action plan outlines an aim to “reject radical climate dogma” in a heedless race for global technological dominance. Concerns about climate change are not radical or extreme, but they do
jeopardize the unfettered growth of wealth for the ruling class, and so governments and corporations are
incentivized to paint them as such.
AI does have potential positive applications, but the companies currently dominating the field of GenAI are not
developing this technology to benefit their users or society at large. They are exploiting the novelty of this
technology to suck as much money out of people as they can, no matter the social or environmental costs. The
next time you find yourself wanting to use ChatGPT or a similar chatbot, I encourage you to ask yourself, “Is
convenience really what’s most important to me?”