Episodes

  • A Question of Humanity with Pia Lauritzen, PhD
    Jul 9 2025

    Pia Lauritzen questions our use of questions, the nature of humanity, the premise of AGI, the essence of tech, whether humans can be optimized, and why thinking is required.


    Pia and Kimberly discuss the function of questions, curiosity as a basic human feature, AI as an answer machine, why humans think, the contradiction at the heart of AGI, grappling with the three big Es, the fallacy of human optimization, respecting humanity, Heidegger’s eerily precise predictions, the skill of critical thinking, and why it’s not really about the questions at all.


    Pia Lauritzen, PhD is a philosopher, author and tech inventor asking big questions about tech and transformation. As the CEO and Founder of Qvest and a Thinkers50 Radar Member, Pia is on a mission to democratize the power of questions.


    Related Resources

    • Questions (Book): https://www.press.jhu.edu/books/title/23069/questions
    • TEDx Talk: https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions
    • Question Jam: www.questionjam.com
    • Forbes Column: forbes.com/sites/pialauritzen
    • LinkedIn Learning: www.Linkedin.com/learning/pialauritzen
    • Personal Website: pialauritzen.dk

    A transcript of this episode is here.

    56 m
  • A Healthier AI Narrative with Michael Strange
    Jun 25 2025

    Michael Strange has a healthy appreciation for complexity, diagnoses hype as antithetical to innovation and prescribes an interdisciplinary approach to making AI well.

    Michael and Kimberly discuss whether AI is good for healthcare; healthcare as a global system; radical shifts precipitated by the pandemic; why hype stifles nuance and innovation; how science works; the complexity of the human condition; human well-being vs. health; the limits of quantification; who is missing in healthcare and health data; the political-economy and material impacts of AI as infrastructure; the doctor in the loophole; the humility required to design healthy AI tools and create a resilient, holistic healthcare system.

    Michael Strange is an Associate Professor in the Department of Global Political Affairs at Malmö University, focusing on core questions of political agency and democratic engagement. In this context, he works on Artificial Intelligence, health, trade, and migration. Michael directed the Precision Health & Everyday Democracy (PHED) Commission and serves on the board of two research centres: Citizen Health and the ICF (Imagining and Co-creating Futures).

    Related Resources

    • If AI is to Heal Our Healthcare Systems, We Need to Redesign How AI Is Developed (article): https://www.techpolicy.press/if-ai-is-to-heal-our-healthcare-systems-we-need-to-redesign-how-ai-itself-is-developed/
    • Beyond ‘Our product is trusted!’ – A processual approach to trust in AI healthcare (paper): https://mau.diva-portal.org/smash/record.jsf?pid=diva2%3A1914539
    • Michael Strange (website): https://mau.se/en/persons/michael.strange/

    A transcript of this episode is here.

    1 h
  • LLMs Are Useful Liars with Andriy Burkov
    Jun 11 2025

    Andriy Burkov talks down dishonest hype and sets realistic expectations for when LLMs, if properly and critically applied, are useful, although maybe not as AI agents.

    Andriy and Kimberly discuss how he uses LLMs as an author; LLMs as unapologetic liars; how opaque training data impacts usability; not knowing if LLMs will save time or waste it; error-prone domains; when language fluency is useless; how expertise maximizes benefit; when some idea is better than no idea; limits of RAG; how LLMs go off the rails; why prompt engineering is not enough; using LLMs for rapid prototyping; and whether language models make good AI agents (in the strictest sense of the word).

    Andriy Burkov holds a PhD in Artificial Intelligence and is the author of The Hundred Page Machine Learning Book and The Hundred Page Language Models Book. His Artificial Intelligence Newsletter reaches 870,000+ subscribers. Andriy was previously the Machine Learning Lead at Talent Neuron and the Director of Data Science (ML) at Gartner. He has never been a Ukrainian footballer.

    Related Resources

    • The Hundred Page Language Models Book: https://thelmbook.com/
    • The Hundred Page Machine Learning Book: https://themlbook.com/
    • True Positive Weekly (newsletter): https://aiweekly.substack.com/

    A transcript of this episode is here.

    47 m
  • Reframing Responsible AI with Ravit Dotan
    May 28 2025

    Ravit Dotan, PhD asserts that beneficial AI adoption requires clarity of purpose, good judgment, ethical leadership, and making responsibility integral to innovation.


    Ravit and Kimberly discuss the philosophy of science; why all algorithms incorporate values; how technical judgements centralize power; not exempting AI from established norms; when lists of risks lead us astray; wasting water, eating meat, and using AI responsibly; corporate ethics washing; patterns of ethical decoupling; reframing the relationship between responsibility and innovation; measuring what matters; and the next phase of ethical innovation in practice.


    Ravit Dotan, PhD is an AI ethics researcher and governance advisor on a mission to enable everyone to adopt AI the right way. The Founder and CEO of TechBetter, Ravit holds a PhD in Philosophy from UC Berkeley and is a sought-after advisor on the topic of responsible innovation.

    Related Resources

    • The AI Treasure Chest (Substack): https://techbetter.substack.com/
    • The Values Embedded in Machine Learning Research (Paper): https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533083

    A transcript of this episode is here.

    1 h
  • Stories We Tech with Dr. Ash Watson
    May 14 2025

    Dr. Ash Watson studies how stories, ranging from classic Sci-Fi to modern tales invoking moral imperatives, dystopian futures and economic logic, shape our views of AI.


    Ash and Kimberly discuss the influence of old Sci-Fi on modern tech; why we can’t escape the stories we’re told; how technology shapes society; acting in ways a machine will understand; why the language we use matters; value transference from humans to AI systems; the promise of AI’s promise; grounding AI discourse in material realities; moral imperatives and capitalizing on crises; economic investment as social logic; AI’s claims to innovation; who innovation is really for; and positive developments in co-design and participatory research.


    Dr. Ash Watson is a Scientia Fellow and Senior Lecturer at the Centre for Social Research in Health at UNSW Sydney. She is also an Affiliate of the Australian Research Council (ARC) Centre of Excellence for Automated Decision-Making and Society (CADMS).


    Related Resources:

    • Ash Watson (Website): https://awtsn.com/
    • The promise of artificial intelligence in health: Portrayals of emerging healthcare technologies (Article): https://doi.org/10.1111/1467-9566.13840
    • An imperative to innovate? Crisis in the sociotechnical imaginary (Article): https://doi.org/10.1016/j.tele.2024.102229

    A transcript of this episode is here.

    48 m
  • Regulating Addictive AI with Robert Mahari
    Apr 16 2025

    Robert Mahari examines the consequences of addictive intelligence, adaptive responses to regulating AI companions, and the benefits of interdisciplinary collaboration.

    Robert and Kimberly discuss the attributes of addictive products; the allure of AI companions; AI as a prescription for loneliness; not assuming only the lonely are susceptible; regulatory constraints and gaps; individual rights and societal harms; adaptive guardrails and regulation by design; agentic self-awareness; why uncertainty doesn’t negate accountability; AI’s negative impact on the data commons; economic disincentives; interdisciplinary collaboration and future research.

    Robert Mahari is a JD-PhD researcher at the MIT Media Lab and Harvard Law School, where he studies the intersection of technology, law and business. In addition to computational law, Robert has a keen interest in AI regulation and embedding regulatory objectives and guardrails into AI designs.

    A transcript of this episode is here.

    Additional Resources:

    • The Allure of Addictive Intelligence (article): https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/
    • Robert Mahari (website): https://robertmahari.com/
    54 m
  • AI Literacy for All with Phaedra Boinodiris
    Apr 2 2025

    Phaedra Boinodiris minds the gap between AI access and literacy by integrating educational siloes, practicing human-centric design, and cultivating critical consumers.

    Phaedra and Kimberly discuss the dangerous confluence of broad AI accessibility with lagging AI literacy and accountability; coding as a bit player in AI design; data as an artifact of human experience; the need for holistic literacy; creating critical consumers; bringing everyone to the AI table; unlearning our siloed approach to education; multidisciplinary training; human-centricity in practice; why good intent isn’t enough; and the hard work required to develop good AI.

    Phaedra Boinodiris is IBM’s Global Consulting Leader for Trustworthy AI and co-author of the book AI for the Rest of Us. As an RSA Fellow, co-founder of the Future World Alliance, and academic advisor, Phaedra is shaping a future in which AI is accessible and good for all.

    A transcript of this episode is here.

    Additional Resources:

    • Phaedra’s Website: https://phaedra.ai/
    • The Future World Alliance: https://futureworldalliance.org/

    43 m
  • Auditing AI with Ryan Carrier
    Mar 19 2025

    Ryan Carrier trues up the benefits and costs of responsible AI while debunking misleading narratives and underscoring the positive power of the consumer collective.

    Ryan and Kimberly discuss the growth of AI governance; predictable resistance; the (mis)belief that safety impedes innovation; the “cost of doing business”; downside and residual risk; unacceptable business practices; regulatory trends and the law; effective disclosures and deceptive design; the value of independence; auditing as a business asset; the AI lifecycle; ethical expertise and choice; ethics boards as advisors not activists; and voting for beneficial AI with our wallets.

    Ryan Carrier is the Executive Director of ForHumanity, a non-profit organization improving AI outcomes through increased accountability and oversight.

    A transcript of this episode is here.

    53 m