News

April 22, 2023

Leading with data: Chidera Okafor on driving ethical AI and scalable impact

By Olugbenga Okeowo

Chidera Okafor has carved a bold path through the worlds of finance, consulting, and tech – armed with data and driven by purpose.

Whether she’s building AI systems that safeguard intellectual property at Meta or shaping strategy with analytics at Deloitte and JP Morgan, Chidera brings sharp technical chops, a deep sense of responsibility, and an unwavering commitment to impact.

In this candid interview, Okafor unpacks the highs, the hurdles, and the human side of leading data-driven transformation at scale. Excerpts:

You’ve held impactful roles across Meta, Deloitte, and JP Morgan. How did your career journey evolve across these diverse industries, and what guiding principles have shaped your path?

I’ve always been drawn to work that allows me to solve meaningful problems and help people thrive. My career has evolved around that purpose. It all started at Deloitte, where I gained early exposure to the power of data in shaping strategy across industries. After graduate school, I returned to Deloitte with deeper technical skills and a sharper focus on transformation through analytics. My internship at JP Morgan, within the Wholesale Credit Analytics and Solutions team, gave me insight into the rigor and scale of data in financial systems.

Joining Meta was a leap into a fast-moving, high-impact environment where data informs decisions that affect billions. Across every role, my growth has been shaped by curiosity, a strong sense of responsibility, and a desire to make a real difference. I’m passionate about building inclusive systems, telling stories with data, and leading work that drives impact across industries and communities.

At Meta, you drove significant operational efficiencies and savings. What project are you most proud of, and what were the biggest challenges you had to navigate?

One of the projects I’m most proud of was also my first at Meta – developing an algorithm that introduced a completely new approach to detecting copyright violations on Facebook. It was a deeply immersive experience that required me to dive into unfamiliar systems and data, learn quickly, and apply complex analytics techniques to build a methodology that hadn’t previously been conceptualised.

I collaborated closely with legal, product, and engineering teams, and interviewed operations specialists to understand the nuances of existing workflows and enforcement mechanisms. The result was a solution that significantly improved our ability to accurately detect violations and safeguard platform integrity.

It was both technically and strategically challenging – especially earning trust in the approach and integrating it into broader operational workflows at scale. But seeing the measurable impact it had on content enforcement and platform safety made it incredibly rewarding.

Can you share a time when a data-driven solution you implemented directly influenced a high-stakes business decision? How did you approach it?

One example was during a period of heightened scrutiny on platform enforcement decisions, especially concerning civic organizations. When nonprofit and civic groups were mistakenly penalized by automated systems, I led a data-driven investigation that resulted in new policies and internal controls to safeguard over 2,000 civic actors. This reduced false enforcement rates to near zero during the U.S. election cycle and reinforced the importance of using data not just for optimisation, but also for social responsibility.

You specialise in explainable AI and responsible data governance – how do you balance innovation with ethical AI practices in high-impact industries like technology and finance?

To me, innovation and ethics go hand in hand. I believe the long-term viability of AI systems depends on trust. That means ensuring models are interpretable, outcomes are fair, and that clear governance frameworks are in place. I embed checks like fairness audits and post-deployment monitoring into my workflows and advocate for cross-functional reviews that include policy, legal, and user experience teams.
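For readers curious what a fairness audit of the kind Okafor mentions can look like in practice, here is a minimal, illustrative sketch: it measures the gap in positive-prediction rates between groups on held-out model outputs and flags the result for cross-functional review. The column names, threshold, and data are hypothetical placeholders, not details of any system described in this interview.

```python
# Illustrative sketch only: a simple demographic parity check on model
# predictions. The column names ("group", "prediction") and the 0.05
# tolerance are hypothetical, not drawn from any system described above.
import pandas as pd


def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str = "group",
                           pred_col: str = "prediction") -> float:
    """Largest difference in positive-prediction rates between groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())


if __name__ == "__main__":
    # Toy held-out predictions for two groups.
    audit_df = pd.DataFrame({
        "group": ["A", "A", "B", "B", "B", "A"],
        "prediction": [1, 0, 1, 1, 1, 0],
    })
    gap = demographic_parity_gap(audit_df)
    print(f"Demographic parity gap: {gap:.2f}")
    # Route to policy/legal review if the gap exceeds the chosen tolerance.
    if gap > 0.05:
        print("Fairness check failed: escalate for cross-functional review.")
```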

How do you approach designing human-centered AI systems that remain transparent and accessible while delivering strong technical performance?

It starts with empathy and understanding the user journey. From there, I work to simplify model outputs and integrate explainability features – whether through feature importance visualisations, confidence intervals, or clear documentation. The goal is to ensure stakeholders feel empowered and can trust the system without needing a data science degree or someone to interpret every detail.
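As a purely illustrative example of the kind of explainability feature she describes, the sketch below surfaces permutation feature importances from a fitted model so that non-specialist stakeholders can see, in plain terms, which inputs drive its predictions. The dataset and model are generic stand-ins rather than anything from her work.

```python
# Illustrative sketch: reporting which inputs most influence a model's
# predictions. The dataset and model are generic stand-ins, not drawn from
# any system described in the interview.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: how much accuracy drops when each feature is shuffled.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the top five features in plain language for a non-technical audience.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda item: item[1], reverse=True)[:5]
for name, score in ranked:
    print(f"{name}: accuracy falls by about {score:.3f} when this input is scrambled")
```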

In your experience, what are the biggest barriers to cross-functional data collaboration in large organisations, and how have you overcome them?

Misaligned incentives and communication gaps are the most common barriers. I’ve found storytelling to be a powerful tool in bridging those divides – framing insights in terms that matter most to each stakeholder. I also focus on building strong relationships and co-creating solutions from the outset, so alignment is built into the process and everyone feels a sense of ownership.

You’ve led cross-functional teams and executive-level reporting initiatives. What leadership traits do you find most essential for success in data science and analytics roles today?

Empathy, clarity, and courage. Empathy to truly understand team dynamics and stakeholder needs – not just strategically, but personally. Clarity to communicate complex ideas simply. And courage to advocate for what’s right, whether that means pushing for ethical data use or standing by a strategic recommendation grounded in evidence.

How do you cultivate buy-in from non-technical stakeholders when advocating for data-driven initiatives or AI adoption?

I anchor the conversation in outcomes they care about – revenue, user experience, risk mitigation – and translate technical concepts into relatable analogies or stories. I also bring prototypes or dashboards into the room early so they can engage with the insights directly and build trust in the solution.

Your background combines consulting, finance, technology, and AI. How do you see the convergence of these fields shaping business strategy in the next five years?

We’re going to see more decisions driven by real-time data, powered by AI systems that can reason, not just predict. Strategy will shift from annual planning cycles to adaptive systems that respond dynamically to market shifts. Those who can bridge technical depth with business fluency will define the next generation of resilient, customer-focused organisations.

What advancements in real-time analytics or decision intelligence excite you most right now, particularly in dynamic environments like social media?

I’m most excited about decision intelligence platforms that blend real-time user behaviour data with predictive insights to optimise content delivery, integrity enforcement, and user experience at scale. The ability to respond to emerging trends and risks within milliseconds is a game changer.

As an active member of professional organisations like Women in Data and the Institute of Analytics, how has community involvement influenced your career growth?

These communities have been invaluable for knowledge sharing, mentorship, and staying at the cutting edge. But more personally, they’ve helped me feel seen – reminding me I’m part of something bigger. Being surrounded by like-minded, purpose-driven professionals keeps me grounded in the “why” behind my work. This field isn’t just about code or models; it’s about people, connection, and impact.

What advice would you give to young professionals aspiring to work at the intersection of AI, business strategy, and operational analytics?

First, find your “why” – it’s what will drive excellence and keep you grounded through challenges. Then, master the fundamentals: statistics, SQL, and storytelling. Cultivate curiosity – learn how businesses run, how decisions get made, and how to ask the right questions. And finally, stay anchored in ethics and impact. The future of AI belongs to those who use it responsibly.

What motivates you personally in your work, and how do you stay ahead of trends in such a rapidly evolving field?

I’m deeply motivated by the idea that my work can make life better – whether it’s empowering a team, protecting vulnerable groups online, or helping a business grow sustainably. Growing up in Nigeria, my parents instilled a strong sense of purpose and impact in me, and I carry that into everything I do. I stay ahead by being a lifelong learner – reading, connecting with peers, and engaging in the wider AI and data ethics discourse. Ultimately, I want to use technology not just to optimise, but to uplift.