
Elizabeth Osanyinro belongs to a new generation of African tech professionals who see artificial intelligence not only as a tool for innovation but as a responsibility. With a career spanning digital marketing, business analysis, and now data science, she has built a reputation for bridging technical excellence with ethical impact. In this conversation, she speaks about the urgency of fairness in AI, Africa’s unique opportunity to build responsibly from the ground up, and why inclusive communities hold the key to the future of technology. Daniel Obi presents the excerpts.
Your background spans digital marketing, business analysis, and now data science. How has this multidisciplinary path shaped your approach to building ethical and inclusive AI systems?
My multidisciplinary background is one of my biggest inspirations for pursuing ethical and responsible AI. After working on several projects, I realised that many teams focus heavily on the technical side but overlook the human impact, often unconsciously. My journey through digital marketing, business analysis, and data science has taught me to see problems from multiple perspectives: the customer’s needs, the business’s strategy, the technology’s capabilities, and the product’s impact on lives. It has made me deeply aware that AI isn’t just a technical tool; it directly affects crucial areas of human life such as healthcare, education, and hiring. An algorithm that goes wrong can ruin the lives of an entire group of people. Ethical and inclusive AI requires more than just algorithms; it needs empathy, context, and a clear understanding of real-world consequences. My background helps me value collaboration and design systems that are not only accurate but also fair, transparent, and accessible to all.
As a Business Analyst, how do you bridge the gap between technical innovation and strategic business outcomes, particularly in data-driven environments?
When I wear my Business Analyst hat, I see myself as a translator between two worlds — the language of data and the language of decision-making. Bridging that gap begins with understanding the “why” behind a business challenge, then mapping strategies to ensure the data is telling the right story, and finally shaping technical solutions that align with that vision. The key is to understand the business goals and objectives first, because that clarity reveals the real needs at that moment. I focus on turning raw data into clear, actionable strategies that executives can implement and measure. For me, it’s about making sure innovation doesn’t sit in isolation but directly delivers measurable business impact.
AI ethics is a growing global concern. In your view, what does fairness in AI look like, and where are we falling short?
Fairness in AI means building systems whose outcomes are equitable, transparent, and free from systemic bias — whether in hiring, healthcare, lending, or policymaking. The reality is that many AI systems today mirror the biases in the data and the societies they come from. We’ve seen this when Amazon’s experimental recruitment algorithm began penalising CVs from women because it was trained on male-dominated hiring data. Similarly, a 2019 study published by UC Berkeley and the University of Chicago revealed that a widely used US healthcare risk prediction tool underestimated the needs of Black patients because it used past healthcare spending — shaped by systemic inequities — as a proxy for health needs. Fixing this bias in the algorithm could more than double the number of Black patients automatically admitted to these programs, showing the extent of harm that can occur when bias goes unchecked.
Where we’re falling short is in recognising that fairness is not a one-time checklist but an ongoing commitment. Most times, fairness is treated as a late-stage compliance step rather than a design principle embedded from the start. Many AI projects lack diverse datasets, inclusive development teams, and clear accountability structures, all of which are essential to prevent bias rather than fix it after deployment. Without these foundations, even well-intentioned AI can end up amplifying inequality instead of reducing it.
You’ve worked on diverse projects from credit card fraud detection to blockchain-based digital verification. Which project has been the most defining for you and why?
The most defining project for me was my AI-driven comparative analysis of customer satisfaction and service quality for Tesco Bank and Tesco Stores. I analysed 50,000 Trustpilot reviews using sentiment analysis and topic modelling to uncover the top service pain points, track sentiment trends over five years, and recommend strategic improvements.
What made it defining wasn’t just the scale of the dataset or the technical complexity — it was a project that truly stretched me. I drew on so much of what I’d learned and read about during my MSc program and applied it to a real-world problem. It was also genuinely fun to take raw customer feedback and turn it into actionable, strategic recommendations for two very different sectors under the same brand. The experience taught me how to bring together end-to-end technical execution — from data collection to advanced modelling — with insights that make sense to the business and can drive meaningful change.
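To make the approach concrete, here is a toy sketch of the two techniques she describes, sentiment analysis and topic grouping, using invented word lists and sample reviews. This is purely illustrative and is not her actual pipeline; a real project at this scale would use tools such as a trained sentiment model (e.g. VADER) and a topic model (e.g. LDA) rather than hand-written keyword lists.

```python
# Toy sketch: lexicon-based sentiment scoring plus naive keyword
# "topics". All word lists and reviews below are invented for
# illustration, not drawn from the Trustpilot study.
from collections import Counter

POSITIVE = {"great", "helpful", "fast", "friendly", "easy"}
NEGATIVE = {"slow", "rude", "broken", "waiting", "confusing"}
TOPICS = {
    "service": {"staff", "helpful", "rude", "friendly"},
    "delivery": {"delivery", "late", "fast", "waiting"},
}

def sentiment(review: str) -> int:
    """Return +1 (positive), -1 (negative) or 0 (neutral)."""
    words = set(review.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return (score > 0) - (score < 0)

def topic_counts(reviews: list[str]) -> Counter:
    """Count how many reviews touch each keyword topic."""
    counts = Counter()
    for review in reviews:
        words = set(review.lower().split())
        for topic, keywords in TOPICS.items():
            if words & keywords:
                counts[topic] += 1
    return counts

reviews = [
    "friendly staff and fast delivery",
    "rude staff and a broken app",
    "waiting too long for delivery",
]
print([sentiment(r) for r in reviews])  # [1, -1, -1]
print(topic_counts(reviews))
```

Aggregating these per-review signals by month or year is what allows sentiment trends to be tracked over time, as she describes doing across five years of reviews.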
What inspired you to establish PyData Bradford, and how do you see grassroots communities reshaping access to AI knowledge and opportunities, especially for under-represented voices?
Bradford is a growing tech community in the UK, and I started PyData Bradford because I understand the importance of community in shaping careers and creating opportunities. Most of the life-changing opportunities I’ve had were either inspired or pushed by communities I was part of, so I know first-hand how powerful they can be.
I could see so much curiosity and talent around me, but not enough spaces where people of different levels could learn, connect, and grow together. I wanted to create a local hub where students, professionals, and enthusiasts could talk about AI and data without feeling intimidated, a place where no question is too basic and no idea is too ambitious.
Grassroots communities like ours reshape access by breaking down the invisible barriers that keep people out, whether that’s lack of exposure or limited networks. When people have more access, they start to see themselves not just as learners but as contributors to the field. That shift in mindset can be the spark that transforms an entire career.
Many believe Africa has a unique opportunity to build AI responsibly from the ground up. What are the key enablers or blockers you see in achieving that vision?
Africa has a unique opportunity to build AI responsibly because artificial intelligence is still relatively young here compared to more developed regions; we’re not burdened by the legacy systems or entrenched biases that more mature AI ecosystems often have to undo. We can design with context, culture, and inclusivity in mind from day one.
The key enablers are our young, tech-savvy population, the rapid growth of innovation hubs across the continent, and our ability to leapfrog outdated systems. We can also treat mature markets as playbooks: by studying them, we can adopt best practices early and avoid some of their mistakes.
Blockers are numerous. Access to high-quality, representative datasets is still limited, and much of the data about Africa is collected outside the continent without local context or consent. There is also a significant infrastructure gap in sub-Saharan Africa, from internet connectivity to computing resources. Lastly, Africa’s voice is still under-represented in global AI governance, meaning policies are often written without our perspective.
If we invest in infrastructure, local AI research, and talent development while ensuring African voices help shape policy, the sky is our starting point.
As someone passionate about inclusive technology, how do you ensure that human-centered design and ethical considerations are embedded from the start of a project?
I approach every project with the belief that ethical and inclusive design starts long before the first line of code. For me, it begins with who you include. I love to involve diverse stakeholders early, especially the people most affected by the technology, and make sure their voices are part of the data collection and decision-making process. Next is how you collect the data: using balanced sampling, thorough documentation, and ethical methods that prioritise consent, privacy, and transparency.
Finally, I’m deliberate about what to watch for. I check for proxy variables that might introduce hidden bias, and review for historical bias embedded in the data.
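One common quantitative check of the kind she alludes to is the "four-fifths" disparate-impact ratio: comparing selection rates between groups and flagging ratios below roughly 0.8 for review. The sketch below uses an invented toy hiring dataset; it illustrates the idea only and is no substitute for a full bias audit, which would also inspect proxy variables such as postcode that can leak group membership.

```python
# Illustrative fairness check on a toy hiring dataset (all values
# invented): the "four-fifths" disparate-impact ratio between groups.

def selection_rate(rows, group):
    """Fraction of applicants in `group` who were selected."""
    members = [r for r in rows if r["group"] == group]
    return sum(r["selected"] for r in members) / len(members)

def disparate_impact(rows, group_a, group_b):
    """Ratio of selection rates; values below ~0.8 warrant review."""
    return selection_rate(rows, group_a) / selection_rate(rows, group_b)

applicants = [
    {"group": "A", "selected": 1}, {"group": "A", "selected": 1},
    {"group": "A", "selected": 1}, {"group": "A", "selected": 0},
    {"group": "B", "selected": 1}, {"group": "B", "selected": 0},
    {"group": "B", "selected": 0}, {"group": "B", "selected": 0},
]

ratio = disparate_impact(applicants, "B", "A")
print(f"disparate impact (B vs A): {ratio:.2f}")  # 0.33 -> flag for review
```

A ratio this far below 0.8 would not prove discrimination on its own, but it is exactly the kind of early signal that prompts the deeper review of proxy and historical bias she describes.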
What do you believe the next generation of African data scientists needs most to lead globally in AI innovation?
Three things: access, mentorship, and representation. Access to quality education, datasets, tools, and infrastructure so they can innovate at the same level as their global peers. Mentorship from experienced professionals who can guide them through both technical and career challenges. And representation in the rooms where decisions about AI are made, from research boards to policy committees, so African perspectives shape the technology, not just consume it. Skills are essential, but visibility and influence are what will enable African data scientists to set the agenda globally.
Looking ahead, what legacy do you hope to leave in the field of AI and data science, both within the Nigerian tech ecosystem and globally?
I want my legacy to be about opening doors and creating a path for others to follow. I want to be a role model, especially for women and under-represented people in tech, showing that they belong in AI and data science even at the highest levels. Beyond my own work, I want to mentor, share knowledge, and build communities that empower people to thrive. If I can help create an environment where more diverse voices enter, grow, and lead in technology, then I’ll know I’ve made a lasting contribution both in Nigeria and globally.
The tech industry has long faced challenges around gender diversity. How do you see the role of women in shaping the future of AI, and what needs to change to bring more women into the field?
Women bring perspectives and lived experiences that are essential for building AI systems that work for everyone. Yet they remain under-represented in research, technical, and leadership roles. My goal is to help change that by mentoring, building communities, and showing women they have a place in AI and data science. We need to start early: encouraging girls to see tech as a viable, exciting career path, providing mentorship, and making learning environments inclusive. More women in tech isn’t just about inclusion and diversity; it leads to stronger, fairer, and more innovative solutions.