
2025 marked the Tech Governance Project's first full year of operations. During the year, we focused on piloting our approach, curating knowledge-based resources, and engaging in public policy discussions to promote inclusive and fair artificial intelligence (AI) and encourage responsible technological innovation across Africa.
In this review, we reflect on our progress in 2025 and outline our plans and priorities for 2026. We hope it provides valuable insights for those interested in our work and for similar organizations.
2025 Plan
In 2025, our main goal was to enhance African stakeholders' capacity in AI risk and governance engagement through seminars, roundtables, research, and other strategic initiatives. We aimed for our seminars and roundtables to empower stakeholders to engage and cooperate effectively on AI governance. Our research was intended to provide information and guidance to support better understanding and decision-making among stakeholders. Similarly, our strategic engagement sought to facilitate knowledge exchange, foster cooperation, and contribute to public policy discussions on AI governance. We have made significant progress toward our goals; our accomplishments include:
Seminar: We designed a 4-week seminar program for African stakeholders (policymakers, journalists, civil society, and government officials) on AI risks and governance, and successfully piloted it with Nigerian stakeholders. The program was designed to empower African stakeholders and amplify their impact in AI governance, and the inaugural cohort of Nigerian participants included civil servants, advisors, academics, and civil society actors engaged in the evolving global discourse on AI governance. Participants consistently reported a significant improvement in their understanding of AI governance, a high likelihood of recommending the program, and plans to act on what they learned, including conducting further research on AI governance, integrating the learning into their existing efforts, and engaging with their country's AI governance initiatives. The program also received public endorsements from participants (e.g., Kemfon, Miracle, Uche, Lois). See the full retrospective.

Publications and Guidance:
We published the "AI Diplomacy Guide for Africa," designed to provide African diplomats and policymakers with practical guidance for international AI governance.
We co-authored "Examining AI Safety as a Global Public Good" with the Oxford Martin AI Governance Initiative, Carnegie Endowment, and Concordia AI. The paper examines whether and how applying the "public good" framework can help us better understand and address the challenges posed by advanced AI systems; it defines a "public good" as a good available to all whose use by one party does not reduce its availability to others.
We published on AI readiness in low- and middle-income countries, exploring the idea that investing in foundational digital capacity, such as digital skills, internet access, mobile device penetration, and reliable electricity, could be one of the most effective and scalable ways to support both economic development and global AI inclusion.
Convening and Policy:
We contributed to the co-creation of a vision and research roadmap for responsible open-source AI (workshop with Meta, Microsoft, MIT, Google, etc.), organized by the BAIR Responsible AI Initiative at UC Berkeley.
We coordinated a strategic advocacy and policy dialogue with the Nigerian National Center for Artificial Intelligence and Robotics on the Nigerian AI strategy draft.
We joined the AIxBio global forum, contributing to its discussions and public statement on biosecurity risks at the intersection of AI and the life sciences.

International Dialogue: We contributed to the Paris AI Action Summit discourse, focusing on inclusive and responsible AI. Our commentary on the importance of inclusive AI governance frameworks received a significant number of votes in the AI Governance Priorities survey conducted by the Future Society and Renaissance Numérique.
Left photo: After the Council of Europe side event, in which we participated; the event explored opportunities to strengthen African engagement in AI governance. Right photo: Our Executive Director speaking at the Empowering Civil Society in Global AI Governance side event.
Collaboration with IBBIS: We collaborated with the International Biosafety and Biosecurity Initiative for Science (IBBIS) to raise awareness among African stakeholders of the need to strengthen nucleic acid synthesis governance and to support the development of a global DNA synthesis map. In this partnership:
We presented our research progress on policies and governance approaches to nucleic acid synthesis in Africa at the SynBio Africa conference in July 2025.
We contributed to the strategic meeting and dialogue convened by IBBIS and the Nigerian National Biosafety Management Agency in Abuja from 26-27 August 2025, under the theme "Safeguarding the Nigerian Bioeconomy." The meeting convened government officials, regulators, scientists, security experts, and representatives from academia and industry to examine how Nigeria's bioeconomy can be advanced in a manner that fosters innovation while ensuring security. The discussions reflected Nigeria's ambition to harness biotechnology for national development while embedding safeguards that meet both domestic needs and international obligations. Particular attention was paid to DNA synthesis screening as a cornerstone measure to prevent misuse of biotechnology, strengthen governance, and build public trust.
Under our collaboration and consultancy arrangement, we contributed to developing a global DNA synthesis map that illustrates how synthetic nucleic acids are produced, shipped, and regulated. Our specific focus was on mapping and synthesizing the policies that guide nucleic acid synthesis and its supply chain in African countries. The map serves as a key resource for policymakers, industry, and practitioners, filling a crucial information gap about current screening practices and providing a foundation for advocacy to address biosecurity vulnerabilities.
Left photo: Group shot at SynBio Africa 2025, where we presented our research project. Right photo: Group shot at the NBMA/IBBIS strategic meeting in Abuja, where we took part.
2025 Goals and Progress
As outlined in our 2025 plan, these were our goals. Below each goal, we assess its completion based on our progress.
Goal 1: Conduct two series of seminar programs with 30 African stakeholders.
Status: Partially Met (50% completion). We successfully conducted one of the two planned seminar series, with 15 of the target stakeholders engaged. The second round in the fall of 2025 was not possible due to unforeseen funding constraints (see the challenges section for more on funding constraints).
Goal 2: Host/co-host two roundtable/briefing sessions at a regional/international forum/conference with 30 African stakeholders for knowledge sharing and cooperation on AI governance.
Status: Not completed. This is partly due to funding constraints. We engaged the side event coordinator of the Paris AI Action Summit to organize a briefing for African stakeholders, aiming to facilitate discussion and knowledge sharing about the summit, enable their active participation, and foster networking and collaboration among attendees. However, this did not progress due to a lack of funding. We contributed to some summit events, but not as a host. See the retrospective and highlights of our engagement at the summit. In addition to the Paris AI Action Summit, we considered other events, such as the 3rd UNESCO Global Forum on the Ethics of AI and the AI for Developing Countries Vienna Summit, but we could not pursue them further because we could not cover the costs of organizing side events there.
Goal 3: Attend two AI-related conferences/events on inclusive AI.
Status: Completed. See our accomplishments section (Paris AI Action Summit, UC Berkeley Open Source Workshop).
Goal 4: Conduct and publish two policy-relevant research pieces on inclusive AI, and disseminate the reports through a conference, strategic workshop, website, or reputable journal.
Status: Completed. We co-authored "Examining AI Safety as a Global Public Good," published research on AI readiness in low- and middle-income countries, and released the AI Diplomacy Guide for Africa.
Goal 5: Attend two EAG/EAGx conferences for partnership, fundraising, and relationship-building.
Status: Completed. Our team attended EAGLDN 2025, EAGxNigeria25, and EAG NYC. These engagements resulted in onboarding two interns, speaking with three funders, and connecting with numerous like-minded organizations and individuals.
Goal 6: Develop two strategic partnerships with semi-reputable organizations or individuals on AI governance.
Status: Completed. We partnered with the Machine Intelligence Research Group at the University of Lagos to deliver our seminar program in Nigeria. We co-authored a publication with the Oxford Martin AI Governance Initiative, Carnegie Endowment, and Concordia AI.
Goal 7: Secure two funding commitments for our AI program.
Status: Partially Met (50% completion). We secured one funding commitment in late 2025, which is allocated to our 2026 budget.
Challenges
In 2025, one of our key challenges was funding. While we explored funding options aligned with our priorities, not all of them materialized, especially at the stage we expected. Different factors appear to have contributed to these outcomes, and not all of them apply across cases. The lack of concrete feedback from most of these funders makes it somewhat hard to determine where things fell short. On reflection, however, we can note the following factors:
We think we pursued some funding opportunities too early, especially considering funders' priorities and risk appetite (risk-averse vs. risk-tolerant). In early 2025, we explored two funding opportunities to deliver our second seminar program and two roundtables/briefings while we were still delivering our first seminar program. It seems we would have had a better chance if we had approached these opportunities after completing the first seminar, as that would have demonstrated progress and likely influenced funders' decisions.
Aside from progress and track record, there appears to have been a mismatch in target audiences between our focus (African stakeholders) and that of one funding opportunity we explored in early 2025 (early- to mid-career individuals). Both fall within the spectrum of capacity building, but the target groups differ, and we think this was a significant factor.
Visibility as a new actor in the space. While we maintain a strong network within the AI governance ecosystem, we are not yet mainstream, as we are a new actor in the space. We think this lack of visibility in the mainstream ecosystem contributes to some level of uncertainty when funders evaluate us.
Aside from funding, another notable issue we encountered early on was difficulty connecting with senior ministry officials. Email proved not to be an ideal channel, so we opted to work through existing stakeholder relationships to facilitate these engagements more efficiently.
Operational Updates
We operate under the fiscal sponsorship of Players Philanthropy Fund (PPF), a Maryland-based charitable trust. We maintained a team capacity of 1.2 FTE for most of 2025, which was reduced in the fall of 2025 due to funding constraints on program implementation. Our total turnover in 2025 was $124,269.00, raised very late in the year and intended to cover most of our 2026 budget. Our total expenses for 2025 were $34,558.91, funded by remaining seed funds from 2024. This included personnel salaries, program costs, software, miscellaneous expenses, and the fiscal sponsorship fee.
Our funding for 2024 and 2025 came from the AI Safety Tactical Opportunities Fund (AISTOF). In 2026 and the coming years, we are putting more effort into diversifying our funding base.
We are incredibly grateful to AISTOF for enabling our work, as our impact is AISTOF's impact.
Our Plan for 2026
Looking ahead to 2026, our priorities will focus on deepening African policymakers' engagement and cooperation on the governance of critical AI issues (frontier and dual-use AI risks, such as biosecurity and loss of control), and on promoting the implementation of global governance measures at the national level. Our approach will include:
Researching the ecosystem and leverage points: mapping and analyzing international forums, processes, engagements, and frameworks that are working to establish governance regimes for bio and loss-of-control risks, or that offer advocacy leverage on these issues; evaluating stakeholders' roles and influence, identifying their existing affiliations and interests, and using these insights to prioritize stakeholder engagement for capacity building aimed at enhancing cooperation.
Organizing regional capacity-building workshops with target stakeholders to raise awareness, build consensus, and promote cooperation.
Engaging strategically through conferences and summits to amplify advocacy on these issues and contribute to the discourse on meaningful stakeholder engagement and cooperation, reflecting on our delivered workshops and stakeholder engagement.
Operationally, we plan to expand our team capacity to 2 FTE across programming and operations. We encourage anyone interested in joining our team or who knows a suitable candidate to complete our general expression of interest form.
Connect with Us
We are pretty optimistic about our impact going into 2026. For instance, we secured funding that will cover most of our 2026 budget, and we have expanded our network within the AI governance ecosystem in Africa and beyond, both of which are crucial to our work.
To stay informed about our activities and programs, subscribe to our newsletter and follow us on LinkedIn.
If you have feedback or find flaws in our thinking and reasoning, please don't hesitate to reach out or leave a comment.