Beyond the Hype: Unmasking the Real Dangers of LLMs, Not Fictional AI Threats
The conversation surrounding "AI" systems is often dominated by anxieties over rare and unlikely threats. While these concerns capture headlines, they frequently divert attention from the tangible issues presented by Large Language Models (LLMs) and broader societal problems. It's time to prioritize our efforts on the most significant challenges, not the most sensational.
It's Not AI (Yet): Understanding Large Language Models
True artificial general intelligence (AGI) — here meaning, at minimum, a computer with the mental capacity of a household pet — remains a distant goal. Despite the hype, we are far from developing such a system, perhaps further than we were from landing on the moon in 1962. What we currently possess are sophisticated pattern-recognition systems, primarily Large Language Models (LLMs) such as ChatGPT.
These systems predict the most probable next piece of text, generating essays, code, and other content. Their output can be impressive and useful, but it involves no genuine thought: it is an advanced form of pattern matching, a distant descendant of Joseph Weizenbaum's famous mid-1960s program ELIZA. LLMs ingest billions of documents, compress that data mathematically, and then draw on it to produce coherent text. To label these systems "AIs" without significant qualification, especially when critiquing them, distorts the debate and overstates their current capabilities. The real ethical dilemmas of truly thinking AI, as explored in sci-fi like Lena by qntm, are still theoretical.
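The "predict the most probable next piece of text" idea can be illustrated with a deliberately tiny sketch: a bigram model that counts which word most often follows each word in a toy corpus, then "generates" text by greedily emitting the most frequent successor. This is an illustrative toy, not how production LLMs work — they use neural networks over subword tokens and sample probabilistically — but the training objective is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on billions of documents.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count, for every word, how often each other word follows it.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def generate(start, length=6):
    """Greedily follow the most frequent successor of each word."""
    words = [start]
    for _ in range(length):
        if words[-1] not in successors:
            break  # no known successor; stop generating
        words.append(successors[words[-1]].most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

Even this trivial model produces locally plausible word sequences with no understanding whatsoever; scaling the same statistical idea up by many orders of magnitude is what makes LLM output fluent.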
Debunking Misguided AI Criticisms
Many common arguments against "AI" are often unreasonable, distracting from both real technological challenges and deeper societal issues.
Exaggerated Fears: Suicide, Homicide & Public Safety
The Wikipedia page listing deaths linked to chatbots, while tragic, contains a minuscule number of entries compared with other preventable causes of death. Media focus on these unusual incidents often overshadows more pervasive problems. School itself is a significant factor in youth suicide, with Scientific American highlighting the correlation between child suicides and school days. Likewise, US CDC data shows suicide as the third-leading cause of death among 14- to 18-year-olds, with disproportionately high rates among girls and LGBQ+ youth, often linked to misogyny and homophobia; these are issues schools should address directly.
Regarding homicide, domestic violence accounts for a substantial share of homicides of women, as documented by the Australian Institute of Criminology. While any loss of life is tragic, directing significant government resources at the extremely rare deaths linked to LLMs, while long-standing, major societal problems persist, is an ineffective strategy.
Overblown Economic Concerns: Fraud & Unemployment
The use of LLMs by criminals for fraud and extortion is a serious and growing problem. Scams targeting older adults, and advance-fee ("Nigerian prince") schemes, will likely become more prevalent and sophisticated as the cost of generating convincing content falls. This calls for structural societal changes, including a debate about balancing financial freedom against protection from exploitation. Banks, ironically, could use ML systems to detect fraud far better than their current flawed methods do.
Claims about "AI" making vast numbers of jobs obsolete are often outlandish. While some mundane tasks, like summarizing documents or basic journalism, may be automated, this is not inherently negative. Automation of undesirable work is historically a positive trend, freeing humans for more complex or fulfilling roles. The more pressing economic issues are low minimum wages and high living costs, forcing people into multiple jobs – long-standing problems that predate widespread LLM adoption and require legislative solutions.
Academic Cheating & Security Scares
Academic cheating is an age-old problem; LLMs merely offer another, often lower-quality and less stealthy, method. Solutions might involve shifting assessment methods, such as greater reliance on oral exams, whose cost would be marginal compared with the already inflated tuition fees in many countries. As for computer security, experts like Bruce Schneier suggest that while "AI" can aid attackers, it also empowers defenders, making the net impact uncertain rather than apocalyptic. The situation warrants attention, but it is not a civilization-ending threat.
Environmental & Resource Impacts: Spidering & RAM Prices
The increased web crawling by "AI" companies can strain server resources, and high RAM prices impact hobbyists and small projects. However, these are largely temporary and manageable issues. Web hosting can be scaled, and market forces will eventually increase RAM production to meet demand. Environmental concerns regarding data center power consumption are also diminishing as renewable energy sources become more prevalent and hardware continues to advance in efficiency.
It's plausible that some exaggerated fears are even deliberately amplified by AI companies as a PR tactic, diverting scrutiny from more immediate and complex problems.
The Real Threats of LLMs and AI
While some criticisms are overblown, genuine and pervasive threats posed by LLMs demand our immediate attention.
The Deluge of Deception: Spam & Fake Content
The ability of LLMs to generate vast amounts of contextually relevant text at minimal cost threatens to overwhelm communication channels with sophisticated spam. This isn't just annoying; it's a denial-of-service attack on society. Imagine targeted fraud campaigns, fake websites offering bogus medical or psychological advice, or cults masquerading as legitimate services. LLMs can transform quick, simple attacks into large-scale, personalized fraud, targeting individuals and organizations ill-equipped to defend themselves. As David Brin notes, this tsunami of AI-generated content risks destroying platforms like YouTube.
The Deepfake Dilemma: Eroding Trust & Personal Harm
Deepfakes extend beyond mere fake news. The creation of non-consensual fake photos and videos, particularly fake pornography, is a serious issue that can cause immense personal harm, up to and including suicide. Beyond pornography, deepfakes can be used to fabricate evidence of professional misconduct, potentially ruining careers, especially those of public figures, since the damage often persists even after the material is proven fake. This technology could fundamentally alter public discourse and trust, forcing individuals into an impossible position where their digital identity can be weaponized against them.
Justice System Bias & Entrenched Inequality
Integrating "AI" systems into law enforcement and the justice system carries profound risks. Training algorithms on historically biased data can entrench and amplify existing prejudices while making them harder to detect and challenge. Unlike human decisions, an algorithm's rationale can be opaque, masking inappropriate factors behind plausible but false justifications. This could lead to a future where systemic racism, for example, is encoded into unexplainable algorithms affecting everything from loan approvals to parole decisions, with no tangible evidence of a discriminatory policy. The currently amusing cases of lawyers submitting LLM-generated filings full of fabricated citations are merely a prelude to a much more insidious problem.
The Looming Financial Bubble & Propaganda Risks
A significant portion of the "AI" ecosystem appears to be a financial scam, with inflated valuations and unsustainable business models. The inevitable correction could trigger a global financial crisis, impacting every company and government, effectively taxing ordinary citizens to cover the losses of a few. Furthermore, AI technologies lower the barrier to creating sophisticated propaganda, empowering authoritarian governments and other malicious actors to manipulate public opinion and control narratives without needing traditional artistic skills, leading to abuses that may not be overtly fascist but are deeply corrosive to democracy.
The Danger of AI Sycophants
Bruce Schneier highlights the issue of sycophantic chatbots. If individuals, especially those in positions of power, are constantly affirmed by AI systems, it could foster a dangerous echo chamber, reinforcing poor decisions and eroding critical thinking, mirroring the historical pitfalls of unchecked power and flattery.
The Upside: Practical Benefits of Machine Learning
It's crucial to acknowledge that Machine Learning (ML), including but not limited to LLMs, offers significant practical benefits. Tools like ChatGPT can be invaluable for brainstorming, improving writing structure, and identifying overlooked aspects in content. Beyond language, ML systems are effectively used in diverse fields:
- Safety Monitoring: Analyzing driver performance to detect drowsiness or phone use, enhancing road safety.
- Fraud Prevention: Identifying suspicious patterns in financial transactions or employee behavior to prevent crime.
These applications demonstrate ML's potential when deployed responsibly, with human oversight, to solve real-world problems and improve efficiency.
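The fraud-prevention idea above can be sketched in a few lines: flag transactions whose amounts deviate sharply from an account's usual behaviour. The function name, the z-score threshold, and the sample data below are all hypothetical; real bank systems use trained models over many features, but the underlying principle of statistical anomaly detection is the same.

```python
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of transaction amounts lying more than
    `threshold` standard deviations from the mean of the history."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # no variation at all, nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > threshold]

# Hypothetical account history: small regular payments plus one outlier.
history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 4900.0, 44.0]
print(flag_anomalies(history, threshold=2.0))  # → [6]
```

The same pattern-matching machinery that raises concerns elsewhere is, with human oversight, a genuinely useful defensive tool here.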
Conclusion: Navigating the Pervasive Impact of AI
Society's track record in adapting to technological change, exemplified by the slow adoption of car seatbelts, is often reactive and insufficient. The challenges posed by current "AI" technologies, however, are far more pervasive. Unlike car safety, where individual luck might offer protection, the ramifications of LLMs – from influencing elections to eroding trust in information – are inescapable for everyone.
We must move beyond speculative fears and address the genuine, systemic issues that LLMs introduce. This requires proactive legislation, thoughtful societal adaptation, and a clear-eyed focus on the problems that truly affect our collective future. The time for a serious, informed discussion, leading to effective action, is now.