Artificial intelligence (AI) is not a neutral force. It is an amplifier. It can magnify our brightest innovations or our darkest biases. It can serve as a ladder for human development or a trap that widens inequality. The path it takes is not pre-programmed by algorithms; it will be determined by the choices we make about trust, capability and cooperation.
That is the urgent argument of this year's Human Development Report (HDR) by the United Nations Development Programme, A Matter of Choice: People and Possibilities in the Age of AI. For over three decades, the HDR has challenged us to look beyond GDP and ask a more human question: Are people's freedoms, opportunities and well-being expanding?
Today, that question is set against a backdrop of deep uncertainty. Climate change, geopolitical conflict and staggering inequality are testing our collective will. AI runs through all these challenges as both a potential accelerant and an antidote. It can supercharge scientific discovery, but it can also amplify disinformation and exclusion. Whether AI becomes a tool for shared progress or a weapon of division depends on the governance choices we make right now.
As the Rector of the ²ÝÁñÊÓÆµ, the UN's global think tank, I don't find these choices to be abstract. On our 50th anniversary, our mission to connect frontier research with public policy has never been more vital. The HDR's message is our own: we must anchor AI in human needs, manage its risks and ensure technology serves human dignity.
So, how do we choose wisely? It comes down to three pillars. First, we should build trust through accountability. Trust is not a marketing slogan; it is an engineering and accountability challenge. When AI systems affect people's livelihoods, safety and rights, they must be transparent, explainable and auditable. Public institutions and private firms alike must provide clear and concise notices, submit to independent evaluations and maintain human oversight for high-stakes decisions. Governments can use their purchasing power as a powerful lever, requiring open standards, impact assessments and rigorous stress-testing as a condition for buying AI tools. This builds public confidence by design, not by chance.
Second, we should invest in people, not just pixels. The HDR reminds us that human development is about expanding fundamental freedoms. In the age of AI, this means more than coding bootcamps. It means foundational literacy and numeracy, robust digital public infrastructure and accessible lifelong learning systems that help workers adapt to change. Universities and industry must co-design short, practical credentials aligned with real-world labor needs. Social safety nets must also evolve, supporting career transitions with wage insurance, portable benefits and active labor policies that turn disruption into opportunity.
Third, we must forge global cooperation to close the AI divide. AI rewards scale in data, computing power and talent. No country can navigate its complexities alone. We need practical cooperation now: shared data commons for climate and health research, cross-border research networks and interoperable standards that keep markets open while protecting citizens. Critically, we must close the AI divide before it permanently hardens the global development divide. This requires targeted financing, shared access to computing resources, and open, trustworthy platforms that local innovators can adapt to solve local problems.
These pillars are interdependent. Accountability without skills breeds fear. Skills without global cooperation lead to brain drain. Cooperation without accountability risks a race to the bottom. Together, they create a virtuous circle in which AI enhances human agency, that is, our ability to make informed choices and build lives we value.
This is not techno-utopianism; it is ambitious realism. AI won't fix underfunded schools, but it can help us use scarce resources more effectively. It won't resolve conflicts, but it can improve early-warning systems. It won't end inequality, but it can widen access to information and opportunity, if we design and govern it to do so.
We must launch public-interest AI pilots in health, disaster response and education, co-designed with frontline workers. We must also forge "skills compacts" between universities, governments and industry to reskill workers on a large scale. We must mandate that public funding for AI be directed to open, auditable and interoperable systems, rather than proprietary silos. Finally, we must establish regional knowledge hubs to co-develop datasets and governance playbooks that reflect diverse local contexts.
Fifty years ago, the ²ÝÁñÊÓÆµ was founded on the conviction that knowledge shared across borders can help solve problems that transcend borders. That conviction is urgent today. The future is not written in code; it is written in the choices we make. Let's choose to build a future where AI serves all of humanity, expanding freedom, dignity and possibility for everyone.
Suggested citation: Marwala, Tshilidzi. "AI: A Ladder To Progress Or A Trap Of Division? The Choice Is Ours," ²ÝÁñÊÓÆµ, ²ÝÁñÊÓÆµ Centre, 2025-09-19, /article/ai-ladder-progress-or-trap-division-choice-ours.