
Localizing Intelligence: Africa’s Fight for Inclusive AI
When Sudanese computer scientist Atika Elshazli tested a breast cancer detection model, it failed her patients: not through error, but through exclusion. Most AI training data comes from the West, leaving African realities invisible in the code. Across the continent, a new movement is rising to change that, anchoring artificial intelligence in ethics, data sovereignty, fairness, and access. If Africa grounds AI in its own values, such as Ubuntu, it won't just catch up to the future; it will redefine its moral frontier.
When Sudanese computer scientist Atika Elshazli ran a breast cancer detection model through its paces, the results were troubling. The algorithm repeatedly misread tumours in African patients, not because it was poorly built, but because it hadn’t been built with them in mind. Most of the training data came from North America and Europe. African bodies, in effect, were missing from the machine’s worldview. It was a quiet exclusion, but one with real consequences: diagnosis delayed, treatment compromised, lives at risk.
This isn’t an isolated flaw. It’s a pattern. Africa is home to nearly a fifth of the world’s people, yet contributes less than 1% of global AI research. Governments across the continent are racing to develop national AI strategies. Startups are growing in Kigali, Nairobi, and Lagos. But the code that underpins these futures often arrives pre-written, shaped by datasets and assumptions that reflect other realities. Without clear safeguards for privacy, fairness, and access, Africa risks adopting powerful tools that deepen, rather than narrow, inequality.
There is a familiar ring to this. In another era, foreign powers extracted Africa’s minerals. Today, the prize is data. But unlike before, Africa has the chance to set its own terms. Across Accra, Cape Town and beyond, a new generation of researchers, policymakers, and entrepreneurs is working to define what ethical AI should look like, drawing on local values like Ubuntu to build systems that are not just technically sound, but socially grounded.
The Pillars of Ethical AI in African Contexts
Building ethical AI in Africa is not about mimicking global frameworks. It means grounding technology in the continent’s lived realities: its histories, social values, and structural disparities. Four foundational pillars are emerging in this effort:
1. Ethics That Reflect African Realities: African thought systems offer more than cultural richness; they provide a compass for responsible technology. Concepts like Ubuntu, which centres community, dignity, and mutual responsibility, offer a different lens from the hyper-individualism often baked into Western tech norms. These values can guide how consent is sought, how algorithms weigh trade-offs, and how communities are represented in design choices. Ethical AI is not just about preventing harm; it is about shaping systems that reflect the people they serve.
2. Data Sovereignty and Privacy: Most African countries still lack strong legal protections for digital data. The African Union’s Malabo Convention, signed a decade ago, has yet to be widely ratified. In the meantime, data collection, often by foreign firms, goes largely unregulated. Without safeguards, personal information becomes just another resource to be extracted. A pan-African framework, backed by national laws and actual enforcement, is needed to ensure that data ownership rests where it should: with the people it describes.
3. Fairness in the Code: Bias in AI doesn't just emerge; it is built in when training datasets fail to represent real diversity. Many models struggle with African languages, dialects, facial structures, and medical indicators. This isn't just a technical flaw; it is exclusion on a massive scale. Initiatives like South Africa's SWiP project, which is building open datasets in local languages, and the Lacuna Fund, which supports African-led data collection, are critical. Without these efforts, AI risks reinforcing the very gaps it claims to close.
4. Equal Access to the Future: If AI is to serve everyone, it must be built by more than a few. But today, most AI investment is clustered in a handful of urban hubs, leaving rural areas and marginalised groups behind. Connectivity gaps, low digital literacy, and underfunded schools all contribute to a growing divide. Institutions like the African Institute for Mathematical Sciences (AIMS) and Rwanda’s AMMI programme are early efforts to change that, training the next generation of African AI scientists not just to adapt to global tools, but to build their own.
Innovation with Guardrails
Across Africa, promising models are proving that speed and ethics are not incompatible. These early efforts offer more than inspiration; they show how AI can be developed with accountability built in from the start.
In Uganda, data-labelling company Sama has adopted a “human-in-the-loop” model, where workers annotate training datasets under ethical labour standards. Staff are paid above market rates, receive digital skills training, and are offered mental health support. The model is not perfect, but it shows that AI supply chains don’t have to be extractive. Ethical AI can be a viable business, not just a talking point.
In Rwanda, the government has partnered with the World Economic Forum to create AI policy sandboxes: controlled spaces for real-world testing of AI systems. The goal is to balance experimentation with public oversight. Rwanda is also drafting a national framework to safeguard biometric data, aiming to prevent the misuse of digital ID systems in areas like healthcare and social protection.
In South Africa, the SWiP project is building open datasets in underrepresented languages such as isiZulu, Setswana, and isiXhosa. This isn't just about inclusion; it's about usability. When voice assistants or translation apps fail to understand local speech, the result isn't inconvenience; it's exclusion. By anchoring AI in the continent's linguistic diversity, SWiP ensures the tools are built for real-world users.
Elsewhere, institutions like the African Institute for Mathematical Sciences (AIMS) and AMMI in Rwanda are training new AI researchers to solve African problems. From drought prediction to public health analytics, their focus is applied science with social impact. These centres are helping to rebalance the global talent pipeline: less imported expertise, more homegrown ownership.
Toward a People-Centred AI
Africa's AI future will not be shaped by code or capital alone, but by the choices it makes about inclusion, protection, and principles. The risk is not that the continent gets left behind in the AI race, but that it rushes forward on someone else's terms. To avoid becoming a passive adopter of imported systems, Africa must steer AI development toward its own priorities, and it must do so now.
First, make ethics enforceable. Pilot projects and discussion papers are not enough; countries need real laws with real consequences. Ratifying the Malabo Convention should be a beginning, not the end goal. Governments must adopt data protection frameworks that go beyond paper: laws that guarantee transparency, define accountability in high-risk AI applications, and create independent ethics oversight. Agility in innovation is vital, but it must be matched by public trust.
Second, widen the AI conversation. AI literacy should not be confined to elite labs or tech hubs. Secondary schools should teach its basics, while vocational programmes can open doors for those outside formal academia. Civil society, community groups, and the media all have a role in demystifying AI, from explaining how face recognition works to questioning who controls the data behind digital ID systems. A democratic AI future depends on a well-informed public.
Third, think continentally. The African Union's draft Continental AI Strategy is a starting point, but it must translate into aligned national actions. A shared approach to bias, surveillance, access, and transparency would not only strengthen local ecosystems; it would create a moral centre for AI governance globally. Coordinated public investment, particularly in infrastructure and talent development, can amplify Africa's negotiating power in international AI debates.
Finally, put dignity at the core. AI should not just automate services; it should advance rights. Whether it's health diagnostics or welfare targeting, systems must be designed with people, not just efficiency, in mind. That means consulting communities before deploying tools, building in channels for grievances, and setting hard limits on surveillance. Sovereignty in the digital age begins with agency over how people are seen, measured, and served by machines.
Africa doesn’t need to reinvent the algorithm. But it does need to write its own instructions. Justice over scale. Inclusion over speed. Context over universality. If it gets that balance right, the continent won’t just catch up to the AI future. It will help shape its ethical frontier.
Israel Olaniyan is a Regional Fellow of the Africa Program at the Sixteenth Council


