Humanitarian Frontiers in AI

By: Chris Hoffman and Nasim Motalebi
Listen for free

“Humanitarian Frontiers in AI” is a groundbreaking podcast series designed to explore the strategic and practical aspects of artificial intelligence (AI) in the humanitarian sector. This series aims to bring together thought leaders from academia, humanitarian innovation, and the tech industry to discuss the opportunities, risks, and real-world applications of AI in enhancing humanitarian efforts. Over a series of ten episodes, the project will delve into specific topics relevant to decision-makers and influencers within the sector, providing insights into how AI can be effectively and ethically integrated into humanitarian work. This podcast is graciously funded by Innovation Norway. https://en.innovasjonnorge.no/


This podcast is not affiliated with UNWFP, and the views expressed by the co-hosts are solely their own and do not represent the views of UNWFP.

Copyright © 2024 Humanitarian Frontiers in AI
Political Science · Economics · Management · Management & Leadership · Politics & Government
Episodes
  • Where to Next?
    May 14 2025

    During the tenth and final episode of Humanitarian Frontiers in AI, we discuss how the changes we have seen in the past year might influence the year to come. This broad conversation covers tech advancements and adoptions in the humanitarian sector, what is fuelling the need for partnerships, and how context-specific work can support the effective use of community-driven technologies. We also get into false perceptions about open source, the risk AI poses to open source, and why traditional ways of working are ill-suited to evolving tech. Next, we discuss what our sector can do to improve its relationship with technology and leverage it to achieve more, including shifting some of the perceptions that have informed its approach in the past. Join us as we wrap up this humanitarian AI 101, relevant to listeners of all backgrounds. Thanks for listening!


    Key Points From This Episode:

    • Welcome to the tenth and final episode of Humanitarian Frontiers in AI.
    • Why the conversation around AI and innovation in humanitarian work is so relevant.
    • How Nasim’s experiences over the past year may lead to future advancements.
    • Tech advancements and adoptions in the humanitarian sector.
    • The missing lexicon that highlights the need for partnerships.
    • Context-specific work and supporting community-driven technologies.
    • Why it’s important to distinguish between open source and zero cost.
    • How risks from AI are threatening open source.
    • The problem of applying traditional ways of working to AI.
    • How the humanitarian sector can improve its relationship to technology.
    • Distinguishing between humanitarian and international mandates.
    • The stumbling block posed by in-between spaces.
    • How we will continue this podcast’s mission in the future.


    Links Mentioned in Today’s Episode:

    AI Code Hallucinations Increase the Risk of ‘Package Confusion’ Attacks

    Nasim Motalebi
    Nasim Motalebi on LinkedIn
    Chris Hoffman on LinkedIn

    42 m
  • 3Ps: Policy, Product, Pragmatism: You Only Know What You Know
    Apr 30 2025

    What happens when the worlds of policy, product development, and pragmatic decision-making collide in the race to create responsible AI? In this episode of Humanitarian Frontiers in AI, we are joined by a panel of experts, Sabrina Shih, Hadassah Drukarch, Gayatri Jayal, and Jigyasa Grover, for an in-depth discussion of responsible AI development in humanitarian contexts. Together, they unpack the realities of applying AI technologies in crisis-affected settings and grapple with issues around trust, speed, cultural adaptation, and ethical responsibility. They examine how “human-in-the-loop” models must adapt depending on the context, how affected populations should be involved in AI design, and how to navigate scaling technologies quickly versus building them responsibly. They also explore the challenges of building context-specific tools, the evolving definitions of responsible AI, and how humanitarian organizations can stay rooted in people and processes, not just technology. Join us to discover insights into the crucial role of people and AI design in reshaping humanitarian work. Tune in now!


    Key Points From This Episode:

    • Introduction to today’s guests and their perspectives on the role of AI in humanitarianism.
    • Learn about the risks and opportunities of using AI for decision-making in humanitarian work.
    • Why AI is a “double-edged sword” and how organizations can set effective guardrails.
    • What “human-in-the-loop” means and why it depends on autonomy, context, and design.
    • Explore the role of affected populations in AI development, lifecycle, and implementation.
    • Challenges of balancing speed, cost, and responsible AI deployment in humanitarian work.
    • Unpack the colonial undercurrents of AI development and the power imbalances it causes.
    • How to identify the needs of an affected population and the potential AI-based solutions.
    • Measuring the cost and return of humanitarian AI solutions versus private-sector models.
    • Hear about the future of AI, how it will enable experts, and best practices for developing AI.


    Links Mentioned in Today’s Episode:

    Sabrina Shih on LinkedIn

    Hadassah Drukarch on LinkedIn

    Responsible AI Institute

    Gayatri Jayal on LinkedIn

    Dimagi

    Jigyasa Grover

    Jigyasa Grover on LinkedIn

    Nasim Motalebi
    Nasim Motalebi on LinkedIn
    Chris Hoffman on LinkedIn

    47 m
  • AI Regulations: Trickling up, Pouring Down, or Nowhere to Be Seen?
    Apr 14 2025

    Who sets the rules for AI and who gets left behind? In this episode of Humanitarian Frontiers in AI, we’re joined by Gabrielle Tran, Senior Analyst at the Institute for Security and Technology (IST), and Richard Mathenge, Co-founder of Techworker Community Africa (TCA), to explore the global landscape of AI regulation and its humanitarian impact. From the hidden labor behind AI models to the ethical and political tensions in governance, this conversation unpacks the fragmented policies shaping AI’s future, from the EU’s AI Act to the U.S.'s decentralized approach. Richard sheds light on the underpaid, invisible workforce behind AI moderation and training, while Gabrielle examines the geopolitical power struggles in AI governance and whether global policies can ever align. We also tackle AI’s high-risk deployment in humanitarian work, the responsibilities of NGOs using AI in the Global South, and potential solutions like data trusts to safeguard vulnerable populations. If you care about the future of AI in humanitarian efforts, this episode breaks down the challenges, risks, and urgent questions shaping the path forward. Tune in to understand what’s at stake (and why it matters)!


    Key Points From This Episode:

    • The hidden labor of AI: how AI models rely on underpaid human moderators.
    • AI ethics versus the ethics of AI and how ethical concerns are framed as technical fixes.
    • Insight into the sometimes murky origins of training datasets.
    • Contrasting the EU’s AI Act with America’s decentralized approach.
    • The risks of AI deployment in humanitarian work, particularly in crisis zones.
    • Accountability in AI supply chains: how new EU policies may enforce transparency.
    • Reasons that AI governance is a low priority in many African nations.
    • Why tech giants typically only comply with AI policy when it benefits them.
    • AI for surveillance versus humanitarian use: the double-edged sword of AI governance.
    • An introduction to the concept of data trusts to safeguard humanitarian AI data.
    • Ensuring informed consent for workers when building and monitoring AI tools.
    • The role of humanitarian organizations like the UN in enforcing “digital rights.”
    • What goes into building an ethical future for AI in humanitarian work.


    Links Mentioned in Today’s Episode:

    Richard Mathenge on LinkedIn

    Techworker Community Africa (TCA)

    Gabrielle Tran on LinkedIn

    Gabrielle Tran on X

    Institute for Security and Technology (IST)

    EU AI Act

    National Institute of Standards and Technology (NIST)

    AI Risk Management Framework (RMF)

    Occupational Safety and Health Administration (OSHA)

    The Alignment Problem

    Nasim Motalebi
    Nasim Motalebi on LinkedIn
    Chris Hoffman on LinkedIn

    Innovation Norway

    45 m