Some Uncomfortable Truths
This section examines the externalities of artificial intelligence—the wide-ranging effects AI systems have on society and the environment beyond their intended use. While much attention is given to technical issues such as errors or so-called "hallucinations," the impact of AI reaches further. These systems influence labor markets, shape power dynamics, consume significant natural resources, and introduce social challenges. Many of these consequences are diffuse, indirect, or hard to quantify, yet they are no less important. Understanding these externalities is essential for assessing the full cost and responsibility of integrating AI into public and private life.
Environmental Costs: Energy, Water, and Minerals
If you are worried about the state of the planet, you will not be surprised to read that AI accelerates the pace at which we breach our planetary boundaries. We begin with essential facts and figures. AI-driven data centers currently account for approximately 2–3% of total electricity use in the United States, a figure projected to triple within the next five years. That increase could generate greenhouse gas emissions equivalent to those of 16 million gas-powered cars. Moreover, AI demands not only considerable energy but also significant water: data centers consume between 0.18 and 1.1 liters of water per kWh for cooling, a demand that is especially difficult to meet in arid regions. In parallel, the production of AI hardware depends on a suite of critical minerals, such as cobalt, tungsten, and lithium, whose extraction often entails substantial environmental degradation and is linked to conflict in some regions. These environmental pressures inevitably affect human communities as well, but AI’s influence on society goes beyond climate: it extends into the very fabric of our social and economic life.
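To make the water figures concrete, here is a minimal back-of-envelope sketch in Python. The 0.18–1.1 L/kWh cooling range is the figure cited above; the facility size and annual energy use are hypothetical example values chosen purely for illustration, not measurements of any real data center.

```python
# Back-of-envelope estimate of a data center's annual cooling-water use.
# The 0.18-1.1 L/kWh range is the figure cited in the text; the facility's
# energy consumption below is a hypothetical example value.

WATER_PER_KWH_LOW = 0.18   # liters per kWh (lower bound from the text)
WATER_PER_KWH_HIGH = 1.1   # liters per kWh (upper bound from the text)

def annual_water_use_liters(annual_energy_kwh: float) -> tuple[float, float]:
    """Return a (low, high) estimate of annual cooling-water use in liters."""
    return (annual_energy_kwh * WATER_PER_KWH_LOW,
            annual_energy_kwh * WATER_PER_KWH_HIGH)

# Hypothetical mid-sized facility drawing 50 MW continuously for a year:
# 50,000 kW * 24 h * 365 days = 438,000,000 kWh.
annual_kwh = 50_000 * 24 * 365

low, high = annual_water_use_liters(annual_kwh)
print(f"Estimated water use: {low / 1e6:.0f} to {high / 1e6:.0f} million liters/year")
```

Even at the lower bound, this one hypothetical facility would consume tens of millions of liters per year, which illustrates why siting such facilities in arid regions is so contentious.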
Social Costs: Hidden Labor and Economic Displacement
Beyond environmental issues, AI systems depend on extensive, often underrecognized human labor. Many tasks that underpin AI, such as data labeling and content moderation, are carried out by low-paid workers, sometimes under conditions that pose serious psychological risks. A related concern is that AI-driven automation could replace jobs across sectors, affecting low-wage workers disproportionately and concentrating wealth among a small group of corporations, thereby exacerbating economic inequality. A McKinsey report, for instance, estimated that automation could displace 400–800 million workers by 2030, underscoring the social cost of large-scale job losses.
Ethical Concerns: Bias, Transparency, and Accountability
AI systems also introduce ethical challenges related to fairness, accountability, and control. The toxicity of the commons emerges as a critical challenge when curating open pre-training data; at the same time, many closed models rely on proprietary data harvested without the creators' consent. This opaque process raises ethical concerns about data ownership and transparency and increases the risk of perpetuating harmful biases and toxic content. Facial recognition tools, for example, have shown disproportionately high error rates for marginalized populations, leading to serious real-world consequences such as false arrests. Without rigorous oversight and clear standards for data sourcing, AI systems risk reinforcing harmful societal patterns, undermining trust, and compromising the integrity of technologies that are increasingly integral to public and private decision-making.
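As a minimal illustration of how such disparities are measured in practice, the sketch below computes per-group false positive rates for a binary match/non-match classifier. The group names, labels, and predictions are all invented for illustration; they do not come from any real facial recognition system or dataset.

```python
# Minimal bias-audit sketch: compare a classifier's false positive rate
# across demographic groups. All data here is invented for illustration.
from collections import defaultdict

# (group, true_label, predicted_label): hypothetical evaluation records,
# where 1 = "match" and 0 = "non-match".
records = [
    ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

false_positives = defaultdict(int)  # predicted match on a true non-match
true_negatives_total = defaultdict(int)  # all true non-matches per group

for group, true_label, pred_label in records:
    if true_label == 0:
        true_negatives_total[group] += 1
        if pred_label == 1:
            false_positives[group] += 1

for group in sorted(true_negatives_total):
    fpr = false_positives[group] / true_negatives_total[group]
    print(f"{group}: false positive rate = {fpr:.2f}")
# A persistent gap between groups (here 0.33 vs. 0.67) is the kind of
# disparity that audits of facial recognition systems have reported.
```

A false positive in this setting corresponds to a wrongful match, precisely the failure mode behind the false arrests mentioned above, which is why auditors compare this rate across groups rather than looking at overall accuracy alone.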
These concerns do not end with data ownership or bias; they extend to how AI systems are designed, used, and interacted with in everyday contexts. The overuse of AI dialogue systems introduces ethical concerns (and cognitive impacts, discussed next), including a greater potential for academic dishonesty, plagiarism, and the propagation of biased or inaccurate information. These systems, while efficient, often operate as “black boxes,” producing responses through complex, non-intuitive processes that are difficult for users to trace or interpret, raising transparency issues. Without insight into a model's reasoning or data sources, verifying the reliability and trustworthiness of its output becomes difficult. This opacity further undermines users' ability to critically evaluate sources and cross-reference data, weakening their research rigor and independent analytical skills.
Cognitive Development: Rethinking Learning and Critical Thinking
A robust cognitive framework is essential for synthesizing complex ideas, evaluating evidence, and constructing well-reasoned arguments. When students (and non-students) depend excessively on AI for research and content generation, however, they risk undermining their ability to engage deeply with material. The convenience of AI-generated output can reduce the time spent on careful research and critical analysis, fostering complacency and diminishing originality. Moreover, the lack of comprehensive explanations accompanying AI-generated paraphrases can impede students' understanding of context and hinder verification of a claim's accuracy. This "cognitive offloading" shifts the mental effort of critical engagement onto machines, reducing opportunities for cognitive friction. As Ethan Mollick argues, using AI in educational settings can hinder deep learning and critical thinking by removing the intellectual struggle that is vital for real understanding. In both academic and professional contexts, over-reliance on AI tools can obstruct the gradual, reflective integration of knowledge that underpins meaningful learning. And if AI is used as a substitute for human mentors or educators, it risks weakening students' development of critical thinking and domain-specific expertise, essential components of a mature cognitive framework.
Psychological Well-Being: Mental Health in the AI Age
AI’s influence also extends to psychological well-being. Social media algorithms engineered to maximize engagement can foster addictive behaviors and polarization, along with other harms to mental health. Furthermore, the proliferation of AI-generated content, from deepfakes to misinformation, has eroded trust in traditional knowledge systems. As synthetic media becomes increasingly indistinguishable from authentic content, society faces mounting challenges in maintaining a shared understanding of truth and in making informed decisions. Studies have also found correlations between AI exposure at work and rising loneliness, insomnia, and stress.
—
As we have seen, AI is neither artificial nor an objective, neutral, and universal computational technique. It is deeply embedded in the social, political, cultural, and economic realities of those who build, use, and, above all, control it. By unpacking these less visible costs, we can develop a more complete picture of what it means to integrate artificial intelligence into society responsibly. Understanding these issues is not merely an academic exercise; it is essential for shaping future governance, technological design, and ethical use.
In the next section, we turn our attention to the constructive side of this conversation. Building on the insights presented here, Part 4 will explore what responsible and wise use of AI might look like—grounded in an understanding of what large language models are, what they can and cannot do, and how their capabilities and limitations intersect with the externalities we’ve just examined.