PART 3: AI’s Uncomfortable Truths

Our research project ‘SIxAI’ set out to explore how the continually expanding capabilities of artificial intelligence (AI) might be harnessed to augment the practice of systemic investing (SI). But this objective conceals an assumption. By asking how we could use AI for SI, we skip over the question of whether AI should be used at all. After all, these are technologies mired in controversy. Are they really as powerful as many claim them to be? And even if they are, what risks do they pose, and what costs do they incur? Which agendas and worldviews do they advance? And which do they hinder or obscure? This series is our effort to engage with these questions, temporarily setting aside our belief that there is in fact exciting potential for AI to be applied to SI, in order to reflect more deeply on what AI really is and how it should be approached and deployed in general, if at all. Our objective here is to ground all of our subsequent research in a robust and critical engagement with those thorny underlying philosophical questions of AI that often go unasked. We present a series of four articles: one introducing the technology itself, one unpacking its capabilities and their limitations, one discussing some of the uncomfortable truths underlying these capabilities, and one proposing a set of principles both for the responsible use of AI and for its wise use too. As always, we welcome your comments.

 

Some Uncomfortable Truths

This section examines the externalities of artificial intelligence—the wide-ranging effects AI systems have on society and the environment beyond their intended use. While much attention is given to technical issues such as errors or so-called "hallucinations," the impact of AI reaches further. These systems influence labor markets, shape power dynamics, consume significant natural resources, and introduce social challenges. Many of these consequences are diffuse, indirect, or hard to quantify, yet they are no less important. Understanding these externalities is essential for assessing the full cost and responsibility of integrating AI into public and private life.

Environmental Costs

If you are worried about the state of the planet, you will not be surprised to read that AI further accelerates the pace at which we breach our planetary boundaries. Consider some essential facts and figures. AI-driven data centers currently account for approximately 2–3% of total electricity use in the United States, a figure projected to triple within the next five years. This increase could generate greenhouse gas emissions equivalent to those of 16 million gas-powered cars. Moreover, AI demands not only considerable energy but also significant water: data centers consume between 0.18 and 1.1 liters of water per kWh for cooling, a process particularly challenging in arid regions. In parallel, the production of AI hardware depends on a suite of critical minerals, such as cobalt, tungsten, and lithium, whose extraction often entails substantial environmental degradation and is linked to conflict in certain regions. These environmental challenges inevitably impact human communities, but AI’s influence on society goes beyond climate, extending into the very fabric of our social and economic life.
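To make these figures concrete, here is a minimal back-of-envelope sketch in Python. Only the 0.18–1.1 liters-per-kWh cooling range comes from the figures above; the 10 MW facility size is a hypothetical assumption chosen purely for illustration.

# Back-of-envelope estimate of data-center cooling water, using the
# 0.18-1.1 liters-per-kWh range cited above. The 10 MW facility size
# is a hypothetical input, not a measured value.

LITERS_PER_KWH_LOW = 0.18   # lower bound of the cited cooling range
LITERS_PER_KWH_HIGH = 1.1   # upper bound of the cited cooling range

def annual_water_use_liters(annual_energy_kwh):
    """Return (low, high) estimates of cooling water consumed per year."""
    return (annual_energy_kwh * LITERS_PER_KWH_LOW,
            annual_energy_kwh * LITERS_PER_KWH_HIGH)

# A facility drawing a constant 10 MW uses 10,000 kW x 8,760 h,
# i.e. 87.6 million kWh per year.
low, high = annual_water_use_liters(10_000 * 8_760)
print(f"Estimated cooling water: {low/1e6:.0f}-{high/1e6:.0f} million liters per year")
# -> Estimated cooling water: 16-96 million liters per year

Even this crude range, spanning tens of millions of liters for a single mid-sized facility, shows why siting decisions matter so much in water-stressed regions.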

Social Costs

Beyond environmental issues, AI systems depend on extensive, often underrecognized human labor. Many tasks that underpin AI, such as data labeling and content moderation, are carried out by low-paid workers, sometimes under conditions that pose significant psychological risks. Another commonly debated theme is that AI-driven automation could replace jobs across various sectors, affecting low-wage workers disproportionately and concentrating wealth among a select group of corporations, thereby exacerbating economic inequality. A McKinsey report, for instance, estimated that automation could displace 400–800 million workers worldwide by 2030, underscoring the social cost of job losses.

Ethical Concerns: Bias, Transparency, and Accountability

AI systems also introduce ethical challenges related to fairness, accountability, and control. The toxicity of the commons is a critical challenge when curating web-scale pre-training data, and the problem is compounded by the fact that many closed models rely on proprietary data harvested without the creators’ consent. This opaque process raises ethical concerns about data ownership and transparency and increases the risk of perpetuating harmful biases and toxic content. Facial recognition tools, for example, have shown disproportionately high error rates for marginalized populations, leading to serious real-world consequences such as false arrests. Without rigorous oversight and clear standards for data sourcing, AI systems risk reinforcing negative societal patterns, undermining trust, and compromising the trustworthiness of technologies that are increasingly central to public and private decision-making.
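Disparities of this kind are typically surfaced by auditing a model’s error rate separately for each demographic group. The Python sketch below illustrates the idea with entirely synthetic records; real-world audits of facial recognition systems are considerably more involved.

# Minimal sketch of a disparate-error-rate audit. All records below are
# synthetic and purely illustrative; they do not describe any real system.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples.
    Returns a dict mapping each group to its error rate."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        if y_true != y_pred:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit records: (demographic group, true label, predicted label)
sample = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
for group, rate in sorted(error_rates_by_group(sample).items()):
    print(f"{group}: {rate:.0%} errors")
# -> group_a: 0% errors
# -> group_b: 50% errors

Note that an acceptable aggregate accuracy can hide exactly this kind of per-group gap, which is why overall accuracy alone is a poor fairness metric.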

These concerns do not end with data ownership or bias; they extend to how AI systems are designed, used, and interacted with in everyday contexts. The overuse of AI dialogue systems raises ethical concerns (alongside the cognitive impacts discussed next), including an increased potential for academic dishonesty, plagiarism, and the propagation of biased or inaccurate information. Large language models and other AI systems often operate as “black boxes,” producing responses through complex, non-intuitive processes that are difficult for users to trace or interpret. Without insight into a model’s reasoning or data sources, verifying the reliability and trustworthiness of its output becomes difficult. This opacity also undermines users’ ability to critically evaluate sources and cross-reference data, weakening their research rigor and independent analytical skills.


Cognitive Development: Rethinking Learning and Critical Thinking

A robust cognitive framework is essential for synthesizing complex ideas, evaluating evidence, and constructing well-reasoned arguments. However, when students and non-students alike depend excessively on AI for research and content generation, they risk undermining their ability to engage deeply with material. The convenience of AI-driven outputs may reduce the time spent on meticulous research and critical analysis, fostering complacency and diminishing originality. Moreover, the lack of comprehensive explanations accompanying AI-generated paraphrases can impede students’ understanding of context and hinder the verification of information accuracy. This “cognitive offloading” shifts the mental effort of critical engagement onto machines, reducing opportunities for cognitive friction. As Ethan Mollick highlights, AI in educational settings can hinder deep learning and critical thinking by removing the intellectual struggle that is vital for genuine understanding. In both academic and professional contexts, over-reliance on AI tools can obstruct the gradual, reflective integration of knowledge that underpins meaningful learning. And if AI is used as a substitute for human mentors or educators, it risks weakening students’ development of critical thinking and domain-specific expertise, essential components of a mature cognitive framework.

Psychological Well-Being: Mental Health in the AI Age

AI’s influence also extends to psychological well-being. Social media algorithms engineered to maximize engagement can inadvertently foster addictive behaviors, polarization, and other mental health challenges. Furthermore, the proliferation of AI-generated content—ranging from deepfakes to misinformation—has led to an erosion of trust in traditional knowledge systems. As synthetic media becomes increasingly indistinguishable from authentic content, society faces mounting challenges in maintaining a shared understanding of truth and fostering informed decision-making. Studies have also found correlations between AI exposure and rising workplace loneliness, insomnia, and stress levels.

As we have seen, AI is neither truly artificial, given its reliance on natural resources and human labor, nor an objective, neutral, and universal computational technique. Rather, it is deeply embedded in the social, political, cultural, and economic realities of those who build, use, and largely control it. By unpacking these less visible costs, we can develop a more complete picture of what it means to integrate artificial intelligence into society responsibly. Understanding these issues isn’t merely an academic exercise; it is essential for shaping future governance, technological design, and ethical use.

In the next section, we turn our attention to the constructive side of this conversation. Building on the insights presented here, Part 4 will explore what responsible and wise use of AI might look like—grounded in an understanding of what large language models are, what they can and cannot do, and how their capabilities and limitations intersect with the externalities we’ve just examined. 

Do you want to collaborate with us?

There is an urgent need to rethink the way we deploy financial capital for transformative impact in human and natural systems. The field of systemic investing has garnered significant momentum, and now is the time to scale deep and scale out. So we invite challenge owners, systems thinkers, innovation practitioners, investment professionals, ecosystem shapers, and creative voices to join us in figuring out how to redeploy financial capital in service of a prosperous and sustainable future for all.

How is systemic investing relevant to...

Foundations

...because the pots of capital operating under a philanthropic logic are orders of magnitude smaller than those operating under an investment logic. Systemic investing is thus a way for foundations to leverage their capital in the systems they care about.

Corporations

...because their supply chains are becoming increasingly fragile and societal expectations of business are growing. This requires companies to deploy all the tools in their finance toolbox (including direct investments, advance purchase agreements, and supply-chain financing) and to partner more strategically with governments, foundations, and NGOs.

Impact Investors

...because single technologies, start-ups, or social enterprises—no matter how ingenious their solutions and how brilliant their teams—are unlikely to change systems by themselves. So what matters is that these single-point solutions are synergistically nested within a broader systems change effort.

Institutional Investors

...because mainstream ESG investing doesn’t benefit places and communities at the pace, scale, and quality required. Institutional investors must therefore channel more capital into real-economy assets in a strategic and collaborative manner.

MDBs and DFIs

...because sustainable development in a VUCA world requires portfolio approaches to systems innovation, and these need to be funded with a different investment paradigm from the one dominant in development finance institutions today. And because the public sector cannot finance sustainability transitions alone, systemic investing is a way to crowd in private-sector capital in a smart way.

About

Who We Are

The TransCap Initiative is a think-and-do-tank operating at the nexus of real-economy systems change, sustainability, and finance. We operate as a multi-stakeholder alliance coordinated by a backbone team and composed of wealth owners, innovation leaders, systems thinkers, research institutes, and financial intermediaries. Our community is open to anyone committed to our cause and values.

Why We Exist

We exist to improve the way sustainable finance is purposed, designed, and managed so that money can become a transformative force in building a low-carbon, climate-resilient, just, and inclusive society. We believe that the key to accomplishing this vision is to inspire and enable investors to leverage the insights and tools of systems thinking and complex systems science for addressing the most pressing societal challenges of the 21st century.

What We Do

Our mission is to build the field of systemic investing. This means developing, testing, and scaling an investment logic at the intersection of systems thinking and finance. We do that by convening a multi-stakeholder alliance to develop a knowledge and innovation base, test novel concepts and approaches, and build a community of practice.

Our core ideas borrow from the disciplines of systems thinking and complex systems science, challenge-led innovation, human-centred design, new economic frameworks, and financial innovation. Our experiments are contextualised in those place-based systems that matter most for human prosperity—such as cities, landscapes, and coastal zones—as well as in value chains and other real-economy systems. We hope that our work produces knowledge and insights, methods and tools, and a self-organising community of inspired and enabled change makers.

The places and value chains we intend to transform act as centres of gravity for our work. In each of these systems, we will work with challenge owners, communities, innovators, investors, and other stakeholders to design, structure, and finance strategic investment portfolios nested within a broader systems intervention approach.