We recently had the chance to connect with Uchechukwu Ajuzieogu and have shared our conversation below.
Uchechukwu, we’re thrilled to have you with us today. Before we jump into your intro and the heart of the interview, let’s start with a bit of an ice breaker: What do you think others are secretly struggling with—but never say?
The fear that they’re not actually good enough, and all their credentials are just elaborate performance to hide that fact.
I see it everywhere. In the fellowship applicants who obsessively polish every sentence because they suspect one wrong word will expose them as frauds. In the entrepreneurs who pivot constantly not because the market demands it but because each new direction feels like another chance to finally prove they belong. In the researchers who cite excessively because maybe if they reference enough established voices, nobody will notice they don’t trust their own.
The performance is exhausting, and nobody admits it’s happening.
Everyone’s secretly terrified that the institutional validation they’re chasing is the only thing preventing others from seeing what they already suspect about themselves. That maybe the twelve fellowship rejections weren’t about location bias or timing or fit. Maybe they were right to reject you. Maybe you’re not as exceptional as you need to be.
This fear drives bizarre behavior that people would never acknowledge. The person who applies to Y Combinator 21 times isn’t just persistent. They’re desperately hoping that maybe the 22nd application will finally confirm they weren’t delusional for trying. The academic who publishes 55 papers by 30 isn’t just productive. They’re building a wall of credentials high enough that nobody can question whether they actually know what they’re talking about.
I see it in how people respond to my work on AI governance. The defensive comments from Western tech professionals when I write about cobalt supply chains aren’t really about technical skepticism. They’re about the uncomfortable recognition that their comfortable narratives about meritocracy and innovation might be built on exploitation they benefit from. Acknowledging that means confronting whether their success is actually earned or just structurally advantaged.
The real struggle nobody talks about is the gap between the confident persona we project and the constant internal questioning of whether we’re fooling everyone or just ourselves.
People secretly struggle with knowing when persistence becomes delusion. After 21 Y Combinator rejections, are you admirably determined or pathetically stubborn? After 12 fellowship rejections despite 55 publications, are you facing systemic bias or are you genuinely not competitive? The uncertainty is brutal because admitting you might be wrong means all that effort was wasted, but continuing means potentially wasting even more.
Everyone’s terrified of being the person who didn’t know when to quit.
I see this in how people respond to career advice. When I tell entrepreneurs that institutional validation doesn’t matter as much as building something valuable, the relief in their responses reveals the weight they’ve been carrying. They desperately want permission to stop performing for gatekeepers, but they’re terrified that stopping means admitting defeat.
The secret struggle is living with the possibility that maybe you’re not the exception. Maybe you won’t be the one who makes it despite the odds. Maybe the system that seems rigged against you is actually just accurately assessing your abilities. That possibility is so terrifying that people would rather exhaust themselves chasing validation than sit with the uncertainty.
People are secretly struggling with impostor syndrome’s evil twin, which I call “reality syndrome.” The fear that maybe you’re not an impostor at all. Maybe the internal voice saying you’re not good enough isn’t lying. Maybe it’s the accurate assessment you’ve been running from.
I see this in the Global South innovators who overcompensate with excessive professionalism because they know they’re already assumed to be less capable. The immaculate applications. The perfect English. The over-explanation of obvious points. It’s not just about fighting bias. It’s about the secret fear that maybe the bias is partially right. Maybe you do need to work twice as hard because you are starting from behind, and admitting that feels like betraying everyone who looks like you.
Everyone’s secretly struggling with the economics of hope. Each fellowship application costs time, energy, emotional investment. Each rejection compounds the sunk cost, making it harder to walk away. But continuing means potentially wasting resources you can’t afford to lose. And nobody talks about the brutal calculus of deciding whether to keep investing in dreams that might be delusions.
People are secretly struggling with loneliness in ways that social media makes worse. Everyone’s posting wins while privately dealing with the same rejections, self-doubt, and exhaustion. The performance of success becomes so convincing that everyone feels like they’re the only one struggling, which makes asking for help feel like admitting weakness that apparently nobody else experiences.
I see it in how people respond to my transparency about rejection. The private messages thanking me for being honest about 21 Y Combinator applications or 12 fellowship rejections. They’re grateful because it gives them permission to acknowledge their own struggles without feeling like failures. The relief reveals how much energy they’ve been spending pretending everything’s fine.
The secret struggle is maintaining relationships while pursuing goals that demand everything. Every hour on another application is an hour not spent with people who care about you now, not conditionally based on future success. Nobody admits they’re sacrificing present connections for hypothetical future validation. But everyone’s doing it, and everyone feels guilty about it.
People are secretly struggling with the gap between the change they want to create and the compromises required to create it. You can’t reform systems from outside them, but getting inside requires accepting practices you find problematic. The African Institute for Artificial Intelligence Policy exists because Western institutions excluded me, but to influence those same institutions I need to speak their language and play their games. The cognitive dissonance is exhausting.
Most of all, people are secretly struggling with the possibility that all this effort might not matter. That you can do everything right, build genuinely valuable things, develop real expertise, and still fail because of factors beyond your control. And that possibility is so terrifying that everyone pretends success is purely about merit and effort, because admitting luck and structural advantage play huge roles means accepting that your effort might not be enough.
Nobody says it out loud because saying it makes it real. And keeping it secret makes the struggle even heavier because you think you’re carrying it alone.
You’re not.
Can you briefly introduce yourself and share what makes you or your brand unique?
I’m Uchechukwu Ajuzieogu, and I’ve spent the last decade building bridges between Africa and the global technology ecosystem in ways most people never see.
At 30, I’ve published 55 research papers on AI economics and governance that have been cited by institutions across four continents. I founded the African Institute for Artificial Intelligence Policy, the continent’s first think tank dedicated to ensuring Africa’s 1.4 billion people have a voice in shaping how artificial intelligence develops globally. Through Aylgorith, my investigative journalism platform, I travel to cobalt mines in the Democratic Republic of Congo and content moderation facilities in Kenya to tell the human stories behind every ChatGPT query, stories that expose how African children and university graduates power global AI infrastructure while earning under $2 per hour.
The same conviction shaped everything that followed. Through Migrz, I’ve helped over 128 skilled professionals navigate immigration pathways to permanent residency without exploitation or hidden fees, building transparency into an industry that thrives on opacity. Through Incubexus, I’ve nurtured 35 startups across Africa, with two achieving successful exits. Each venture operates on a foundation that says your zip code shouldn’t determine whether you’re heard in conversations shaping humanity’s technological future.
The African Institute for Artificial Intelligence Policy was born from 12 consecutive fellowship rejections. Despite my research record and expertise, being based in Nigeria seemed to automatically disqualify me from AI governance discussions. The feedback was polite but clear. Location bias. Yet Nigeria alone represents a $59 billion AI opportunity by 2030, and Africa’s unique development context offers insights into AI’s role in leapfrog economic advancement that you simply cannot get from developed economy perspectives.
So I built our own table. Today, AIAP conducts the rigorous economic analysis and policy research that ensures African perspectives shape global AI governance frameworks. Our work on Nigeria’s National AI Strategy has established us as an authoritative voice on African AI governance. Our research on foundation model economics fills critical gaps in AI economics literature. We’re building a continent-wide repository tracking AI policies across all 54 African nations because if we don’t document our own policy development, who will?
What makes this work special is that I understand both worlds intimately. I earned a decent education, giving me fluency in the language of international institutions and global capital markets. But I also know what it means to hustle in Lagos street markets, to navigate systems designed to exclude you, to build something meaningful from absolutely nothing. I can code-switch between Geneva policy forums and Nigerian cyber cafes, between academic research publications and investigative journalism that centers the voices of Congolese cobalt miners whose labor literally powers every AI system in existence.
My work sits at the intersection of research excellence, ethical storytelling, and community-rooted impact. Every investigation for Aylgorith pays sources fairly when others exploit them for exposure. Every policy recommendation from AIAP centers African realities rather than retrofitting Western frameworks. Every immigration strategy through Migrz operates with radical transparency about what pathways actually exist and what they genuinely cost.
Right now, I’m documenting how AI’s trillion-dollar future is being built on the backs of African communities who see almost none of the benefits. I’m researching how constitutional AI promises to encode human values but never asks whose values or who decides. I’m building policy frameworks that acknowledge Africa’s infrastructure constraints while refusing to accept that we should be excluded from technological progress.
The future of AI governance needs diverse voices. Not as an afterthought or a diversity checkbox, but as essential contributors whose perspectives fundamentally shape how these technologies develop. My generation gets to decide whether artificial intelligence becomes a force for human flourishing or a tool that perpetuates the injustices of the world that created it.
I’m ensuring that conversation includes everyone.
Appreciate your sharing that. Let’s talk about your life, growing up, and some of the topics and learnings around that. What part of you has served its purpose and must now be released?
The part of me that believed I needed external validation to matter.
I spent years chasing approvals I didn’t actually need. Twelve fellowship rejections. Endless application cycles to Western institutions. Waiting for someone in Geneva or Stanford or London to tell me my work was legitimate before I believed it myself. Every rejection felt like confirmation that being based in Nigeria meant my expertise didn’t count, that my 55 publications weren’t enough, that my perspective on AI governance was somehow less valuable because it came from Abuja instead of Cambridge.
But holding onto my integrity through all of it taught me something those twelve fellowship rejections never could. The validation I was chasing from institutions that would never see me fully was preventing me from building what the world actually needed.
So I stopped waiting for permission. I founded the African Institute for Artificial Intelligence Policy because Africa’s 1.4 billion people deserve a voice in shaping how AI develops, whether Western think tanks invite us to their tables or not. I launched Aylgorith to tell the stories of Congolese cobalt miners and Kenyan content moderators powering global AI infrastructure, because their experiences matter whether international media covers them or not. I built Migrz to help 128 skilled professionals navigate immigration pathways with radical transparency, because people deserve honesty whether the industry profits from opacity or not.
The part of me that needed external validation kept me small. It made me think my job was to prove myself worthy of existing conversations rather than creating new ones. It convinced me that being excluded from spaces meant I wasn’t ready, when the truth was those spaces weren’t ready for what I brought.
That version of me served a purpose. It drove me to publish 55 research papers. It pushed me to earn credentials from institutions. It taught me to speak the language of international policy forums. But now it’s just weight.
What I’m releasing is the belief that my worth is determined by who invites me where. What I’m keeping is the understanding that sometimes the most important work happens when you build your own table and invite others who’ve also been told they don’t belong.
At 30, I’ve learned that the people who changed my life most weren’t the ones from prestigious institutions who rejected my applications. They were the cyber cafe technician who taught 12 year old me how computers actually work. The university professor who saw potential when my GPA was ordinary. The Congolese miners who trusted me with their stories. The Nigerian entrepreneurs who believed in what we were building together.
I’m done performing for gatekeepers who will never truly see me. I’m building for the billions of people who deserve technology that serves them, policy frameworks that include them, and futures where their voices shape the conversations that determine their lives.
The validation I need now comes from the work’s impact, not from the institution’s letterhead.
If you could say one kind thing to your younger self, what would it be?
The rejections aren’t about your worth. They’re about fit, timing, and systems that weren’t designed to recognize what you bring.
I would tell the kid standing outside that cyber cafe at 12, watching the technician fix computers: you’re not broken just because you can’t afford what others have. That hunger you feel, that determination to understand how things work by watching and asking endless questions until you figure it out yourself, that’s your superpower. You’re learning resourcefulness that privileged kids will never develop.
I would tell the young researcher submitting his first academic paper with no institutional affiliation, no supervisor, no understanding of publishing norms: the three rejections before someone finally accepts it aren’t wasted time. You’re learning to write for skeptical audiences, to back every claim with evidence, to make African perspectives impossible to ignore. Those rejection letters are teaching you that legitimate scholarship comes from rigorous thinking, not from prestigious university letterhead.
I would tell the entrepreneur who applied to Y Combinator 21 times over five years: you’re not stupid for trying again. You’re collecting data. Each application teaches you something about market timing, team dynamics, pitch clarity. And when you finally stop applying, it won’t be because you failed. It’ll be because you realized you were building the wrong thing. You were trying to get into an accelerator when what you actually needed was to build institutions that create opportunities for people like you.
Most importantly, I would tell the version of me who got 12 consecutive fellowship rejections despite 55 publications and deep AI governance expertise: being based in Nigeria isn’t a liability. It’s a perspective that frameworks from developed economies can never replicate. Those fellowship programs telling you location disqualifies you from conversations shaping AI’s future? They’re wrong. And you’re about to prove it by founding the African Institute for Artificial Intelligence Policy and ensuring 1.4 billion people have a voice in these discussions whether Western think tanks invite you or not.
The sting you feel from each rejection is real. I won’t tell you it doesn’t hurt. The isolation of carrying family responsibilities at 14 while your friends complain about homework is real. The exhaustion of funding your own education through phone accessory hustles while classmates treat university as an extension of secondary school is real. The frustration of watching opportunities go to people with less expertise but better zip codes is real.
But here’s what I need you to understand: every single one of those struggles is building something you’ll desperately need later. The rejection resilience. The creative problem solving under resource constraints. The ability to see patterns others miss because you’ve had to work harder to understand systems that weren’t designed for you. The deep conviction that your perspective matters precisely because it comes from experiences most people in power will never have.
You’re not too late. You’re not behind. You’re exactly where you need to be, learning exactly what you need to learn.
The tables you’re not invited to? You’re going to build your own. And they’ll be better because you’ll remember what exclusion felt like and design for inclusion from the start.
The validation you’re chasing from prestigious institutions? You’ll eventually realize that the Congolese cobalt miners who trust you with their stories, the Nigerian entrepreneurs who build with your frameworks, and the policy makers across Africa who cite your research, that’s the validation that actually matters.
Everything you’re struggling through right now is preparing you for work those fellowship programs couldn’t even imagine was possible.
So be patient with yourself. Keep your integrity intact even when compromising would solve immediate problems. Document everything you’re learning because you’ll need it to help others walking similar paths. And trust that being underestimated is sometimes the greatest gift because it lets you build in peace until you’re ready to show the world what you’ve created.
You’re going to be okay. Better than okay. You’re going to be exactly who the world needs you to be.
Next, maybe we can discuss some of your foundational philosophies and views? Where are smart people getting it totally wrong today?
Smart people think AI governance is about safety and ethics when it’s actually about market control dressed in moral language.
The entire Global North conversation around “AI safety” and “responsible AI development” is a masterclass in weaponized concern. Europe spent years crafting the EU AI Act, positioning it as protecting citizens from algorithmic harm. Then they quietly added compliance costs so expensive that only companies with billion-dollar budgets can afford them. The result? European SMEs can’t compete with American Big Tech, but hey, at least we’re being “safe” about it.
Meanwhile, the United States restricts semiconductor exports to China claiming national security concerns, but the practical effect is locking out 118 countries from AI development infrastructure. Nigeria represents a $59 billion AI opportunity by 2030, but we can’t access the chips needed to build locally relevant systems because American policymakers decided our technological progress threatens their hegemony.
The smart people cheering these frameworks as “progress” are missing the forest for the trees. They’re so focused on preventing hypothetical AI harms that they’re ignoring the very real harm of systematic technological exclusion. When 75% of EU AI meetings exclude Global South voices entirely while Big Tech spends $957 million lobbying to shape these supposedly neutral standards, that’s not governance. That’s digital colonialism with a compliance certificate.
Here’s what else smart people are getting catastrophically wrong.
They think more data always makes AI better. The entire industry operates on the assumption that scaling requires ever-larger datasets, which conveniently justifies paying Kenyan university graduates $1.50 per hour to label training data while American tech executives capture billions in value. But Adaora’s vernacular AI system in Lagos proves otherwise. She built tools for informal traders using WhatsApp messages and voice notes instead of point-of-sale systems, helping 2,000 people increase income by 23% without the massive datasets everyone claims are essential. Sometimes understanding context matters more than scale.
They think accelerators determine startup success. I applied to Y Combinator 21 times over five years. Every single rejection. The conventional wisdom says this proves my startups weren’t good enough for “the best accelerator in the world.” But the companies I built during those rejection years achieved what accelerator acceptance supposedly promised. The African Institute for Artificial Intelligence Policy influences continental AI governance. Aylgorith’s investigations reach millions. Migrz has helped 128 skilled professionals navigate immigration pathways. None of that required Paul Graham’s approval. Smart people obsess over institutional validation when the actual work of building valuable things happens regardless of who stamps your application.
They think Western AI frameworks can just be “adapted” for Global South contexts. EU regulators would classify Adaora’s informal trader AI as “high risk” because it affects people’s economic opportunities. Their compliance requirements would make it impossible to serve the people who need it most. This isn’t an adaptation problem. It’s a fundamental mismatch between governance designed for formal economies with robust digital infrastructure and the reality of how 60% of African employment actually works. You can’t adapt frameworks built for societies with universal internet access to contexts where 63% of people lack basic connectivity. You need entirely different approaches.
They think diversity, equity and inclusion programs fix systemic exclusion. Tech companies love announcing their DEI commitments while maintaining supply chains built on Congolese children mining cobalt for AI data center batteries. They hire diversity officers in San Francisco while content moderators in Kenya develop PTSD reviewing harmful content for under $2 per hour. The smart people designing these DEI initiatives genuinely believe they’re making progress because they’ve increased representation in their Mountain View offices by 3%. Meanwhile, the extractive infrastructure powering their systems remains completely unchanged.
They think Constitutional AI solves alignment when it just codifies existing power structures. The entire promise of Constitutional AI is that we can encode human values directly into our most powerful technologies. But whose values? When Anthropic develops constitutional principles for Claude, they’re making decisions about what matters based on perspectives from people who’ve never experienced most of humanity’s lived reality. A constitutional AI trained on Western liberal democratic principles won’t understand communal data stewardship in Yoruba communities or traditional authority structures in Tanzanian villages. The technology isn’t neutral. It’s Western liberalism with algorithmic enforcement.
They think meritocracy explains who succeeds in tech. The narrative says anyone with talent and determination can make it in Silicon Valley. Just learn to code, build something valuable, hustle hard. This conveniently ignores that access to compute, training data, venture capital, elite networks, and regulatory favor concentrates in specific geographic regions controlled by specific demographic groups. When Nigerian innovators can’t access the semiconductor infrastructure needed for AI development because of US export controls, that’s not a merit problem. It’s structural exclusion defended by people who benefit from the status quo.
They think “lifting all boats” happens automatically through technological progress. The assumption underlying most AI policy is that innovation creates such enormous value that everyone benefits eventually. But AI economic projections show $15.7 trillion in global value by 2030, with only $1.7 trillion reaching the Global South. That’s 10.8% of benefits for 80% of the world’s population. The boats aren’t all rising. Some are sinking while others launch into orbit, and smart people keep insisting this is natural market dynamics rather than deliberate policy choices.
The most dangerous thing about smart people getting AI governance wrong is that they genuinely believe they’re doing good. They’re not mustache-twirling villains plotting global domination. They’re well-intentioned technologists and policymakers who’ve convinced themselves that frameworks designed to protect incumbent advantages are actually about safety, ethics and responsible development.
And that makes them much harder to challenge than open adversaries would be.
Okay, so let’s keep going with one more question that means a lot to us: If you knew you had 10 years left, what would you stop doing immediately?
I would stop chasing validation from institutions that will never truly see me.
Every single fellowship application to programs that end in radio silence. Every carefully crafted proposal to organizations that claim to value diverse perspectives but consistently choose candidates from the same five universities. Every hour spent tailoring applications to frameworks designed by people who’ve never experienced most of humanity’s lived reality.
I’ve applied to Y Combinator 21 times. Twelve AI policy fellowships. Economic research awards that never responded. Strategic partnerships with institutions that went silent after I invested weeks developing sophisticated research proposals. Each time, I told myself maybe this time would be different. Maybe this application would be the one where my work spoke loud enough to overcome location bias.
But here’s what 10 years taught me. The game is rigged, and playing it better doesn’t change the fundamental rules. When 75% of EU AI meetings exclude Global South voices while the organizations running those meetings claim to care about global representation, that’s not an oversight. It’s by design. When fellowship programs say they want diverse perspectives but consistently select candidates from the same elite networks, that’s not an accident. It’s a feature, not a bug.
If I had 10 years left, I would stop pretending that perfecting my applications will somehow make gatekeepers recognize expertise they’ve been systematically trained not to see.
I would stop responding to every GLG pre-screening call that interrupts my work for “market research” disguised as expert consultation. Stop entertaining partnerships where people want my insights for free while claiming limited budgets. Stop accepting speaking invitations at graduation ceremonies just because the cause feels important, when I typically command professional fees.
I would stop letting other people’s timelines dictate my urgency. Stop checking email obsessively hoping for responses that never come. Stop refreshing application portals waiting for decisions from committees that made up their minds before reading my materials.
I would stop doing things that drain energy without creating value.
Every hour spent on applications that go nowhere is an hour not spent building the African Institute for Artificial Intelligence Policy. Every day worrying about fellowship rejections is a day not investigating how Congolese children power AI infrastructure for Silicon Valley profits. Every week hoping for institutional validation is a week not helping Nigerian professionals navigate immigration pathways or nurturing startups through Incubexus.
The work that actually matters, the work that changes people’s lives, the work that ensures African perspectives shape global conversations whether we’re invited or not, that work doesn’t require anyone’s permission. It just requires focus, integrity, and the courage to build regardless of who’s watching.
I would stop treating my 55 research publications as credentials to prove I belong in conversations and start treating them as ammunition to fundamentally change those conversations. Stop asking to be heard and start speaking so loudly that ignoring becomes impossible.
I would stop explaining why African perspectives matter to people who benefit from our exclusion. They already know. They just don’t care enough to sacrifice their advantages. No amount of eloquent applications will change that calculus.
I would stop waiting for systems to fix themselves and start building alternatives that make those systems irrelevant.
Most importantly, I would stop measuring my worth by institutions’ responses and start measuring it by the cobalt miners whose stories reach millions through Aylgorith, the skilled professionals who achieve permanent residency through Migrz, the startups that succeed through Incubexus support, the policy frameworks that center African realities through AIAP research.
Those are the metrics that matter. Not whether Cambridge accepts my PhD application. Not whether some fellowship committee thinks my background is “interesting but not quite the right fit.” Not whether accelerators that claim to seek innovation actually fund innovators who don’t fit their pattern matching algorithms.
Ten years is too precious to waste on institutional validation from people who will never fully appreciate what I bring.
I would spend every single one of those 3,650 days building things that matter, for people who matter, in ways that create change regardless of who stamps approval.
The tables I’m not invited to can keep their seats. I’ve got my own to build, and they’ll be better because I’ll remember what exclusion feels like and design for the opposite.
Contact Info:
- Website: https://ajuzieogu.com
- Instagram: https://instagram.com/uche_ajuzieogu
- LinkedIn: https://linkedin.com/in/uchechukwu-ajuzieogu
- Twitter: https://twitter.com/apex_zy
- Facebook: https://www.facebook.com/uchechukwuajuzieogu/


Image Credits
All Images Courtesy of Stroberstock
