The Conversion Trap
Imagine this.
The model knows what the patient needs. The doctor knows too. The treatment exists somewhere in the world.
But the clinic has no oxygen. Or the hospital can no longer pay for the software. Or the medicine is stuck somewhere in the supply chain. Or the machine broke and nobody came to fix it.
That is the pattern I worry about most with AGI.
Not a world where poor countries are locked out of intelligence.
A world where intelligence shows up, but outcomes do not.
That is the conversion trap.
In 2021, we saw a version of it in real life. Scientists built COVID vaccines in record time. By the end of that year, roughly 9 billion doses had been administered worldwide. The science worked.
Delivery did not.
Rich countries reached high vaccination rates quickly. Many low-income countries were still below 10 percent coverage.
That gap was not mainly about knowledge. The formulas existed. The protocols existed. The manufacturing know-how existed in a few places. What decided who got protected was everything around the science: procurement power, manufacturing concentration, export restrictions, cold chain, electricity, local health systems, and trust.
That is the core idea of this essay.
AI may create a world where answers are everywhere, but outcomes are not. A student can get an AI tutor and still go to a bad school. A farmer can get better advice and still lack irrigation, storage, credit, and access to buyers. A nurse can get decision support and still work in a clinic without oxygen, diagnostics, or medicine in stock.
The same thing could happen with more futuristic breakthroughs too. Imagine AI helps create a real longevity treatment, a powerful cancer cure, or some kind of serious cognitive enhancement. The science may be real. But the benefits still go first to the places that control testing, regulation, manufacturing, insurance, specialists, and distribution. In that world, the rich do not just get treated first. They get upgraded first.
The problem is not just access to intelligence. It is the ability to turn intelligence into reality.
How the trap works
For intelligence to become development, a whole chain has to hold: power, internet, compute, local context, skills, devices, trust, payments, logistics, maintenance, procurement, and institutions that actually function.
The World Bank has a simple way to describe this: connectivity, compute, context, and competency. Intelligence does not arrive alone. It needs rails.
If too many of those rails are weak, intelligence stops cashing out into daily life.
That is the trap. Poor countries may be able to use world-class models, AI tools, and knowledge, while still depending on foreign infrastructure and still failing to get real gains in medicine, education, state capacity, or income.
The model knows.
The system cannot deliver.
Access without conversion becomes a new form of dependency.
COVID was the rehearsal
COVID made this brutally clear.
With vaccines, humanity solved the knowledge problem before it solved the delivery problem. By the end of 2021, G20 countries had secured enough vaccine supply to cover more than double their populations. The African Union, by contrast, could secure only around one fifth of its population through bilateral deals. COVAX was built to close that gap, but it delivered less than half of its original target.
The answer existed, but the conversion layer broke.
Oxygen was even harsher. Doctors already knew what many COVID patients needed. In early 2021, WHO estimated that more than half a million COVID patients in low- and middle-income countries needed oxygen every day.
People still died.
Not because the knowledge was missing. Because oxygen is not one thing. It is a whole system: production, refilling, transport, storage, cylinders, regulators, concentrators, electricity, maintenance, technicians, and hospital coordination.
The doctors knew. The system failed.
The pilot graveyard
This is not only a COVID story. It is already happening with AI itself.
In global health, AI tools already follow a familiar pattern: strong trial results, donor attention, and then decay before they become part of the health system. The grant ends. The server bill arrives. The subscription becomes too expensive. Trained staff leave. Integration with national systems never happens. Policymakers move on.
The tool worked.
The system could not keep it alive.
That is why the real question is not just whether AI can help low-resource settings. In many cases it clearly can. The harder question is whether the surrounding institution can absorb the tool, pay for it, regulate it, maintain it, and make it normal.
Latin America already showed the split
This is not just a story about rich countries and poor countries. The real divide runs through implementation capacity.
Latin America did not move as one block. Chile handled vaccination far better than Peru, not because Chile understood the vaccine better, but because it could deliver it better. It secured supply earlier, had stronger vaccination infrastructure, and could move doses through systems that worked.
Peru showed the other side of the pattern. It had a more fragmented health system, worse coordination, harder geography, and deeper political instability. The oxygen crisis made this brutally visible. Peru ended up with one of the worst mortality outcomes in the world. Before the pandemic, two business groups supplied almost all the medical oxygen contracted by the state. The country also kept a purity rule stricter than the WHO standard, which limited competition and slowed access. Later, many of the oxygen plants installed across the country ended up unused, broken, or abandoned.
That was not a failure to understand oxygen.
It was a failure to convert knowledge into delivery.
Peru had already seen a version of this before AI. The One Laptop per Child rollout put devices into rural schools, and evaluations still found no meaningful gains in reading or math. The artifact arrived. The surrounding system did not change enough to turn access into outcomes.
Why AI could deepen dependency
The optimistic story says intelligence will become cheap and benefits will spread naturally. That assumes the main bottleneck is knowledge. In many poor countries, it is not. It is implementation.
If the models, cloud, chips, medicine supply chains, robotics, and core platforms are owned elsewhere, then poor countries may gain access to intelligence mostly as renters.
They can query the future without being able to build it.
Recent work from groups like UNCTAD and the OECD points in this direction. AI capability, compute, and corporate R&D are highly concentrated, while preparedness and productivity gains are much stronger in advanced economies than in poorer ones. Intelligence can spread faster than value capture does.
That is the real danger. AI does not need to exclude poor countries to deepen inequality. It only needs to include them on dependent terms.
They get better answers, but the value capture happens elsewhere.
They get better tools, but the infrastructure remains foreign.
They get diagnosis support, but not the medicine.
They get educational help, but not stronger schools.
They get business intelligence, but not easier credit, better logistics, or local industrial capacity.
This may still help at the margin.
But that is not the same thing as development.
The positive counterexample
This is not an argument that poor countries can never convert digital tools into broad gains. They can.
M-PESA in Kenya is a useful reminder. Mobile money reduced friction in daily economic life and produced real welfare gains because it rode on rails that people could actually use. The technology fit the context, solved a real bottleneck, and became part of everyday life instead of staying as an external demo.
That is what successful conversion looks like.
Not magic.
Not just access.
A tool meeting functioning rails.
The real question
The central development question in the AI era is not whether poor countries can access intelligence.
It is whether they can build the capacity to convert intelligence into broad gains.
That means infrastructure, institutions, local talent, logistics, manufacturing, energy, procurement, and state capacity.
If those do not improve, AI may not close the development gap.
It may reorganize it.
The model knows.
The system cannot deliver.
AGI could scale that pattern far beyond public health.
The future may not be divided by who can ask.
It may be divided by who can convert answers into reality.