• OpenAI's o1 System Card, Literally Migraine Inducing

  • Dec 23 2024
  • Length: 1 hr and 17 mins
  • Podcast


  • Summary

  • The idea of model cards, introduced as a measure to increase transparency and understanding of LLMs, has been perverted into the marketing gimmick exemplified by OpenAI's o1 system card. To demonstrate the adversarial stance we believe is necessary to draw meaning from these press-releases-in-disguise, we conduct a close read of the system card. Be warned, there's a lot of muck in this one.

  • Note: All figures/tables discussed in the podcast can be found on the podcast website at https://kairos.fm/muckraikers/e009/

  • Timestamps
      • (00:00) Recorded 2024.12.08
      • (00:54) Actual intro
      • (03:00) System cards vs. academic papers
      • (05:36) Starting off sus
      • (08:28) o1, continued
      • (12:23) Rant #1: figure 1
      • (18:27) A diamond in the rough
      • (19:41) Hiding copyright violations
      • (21:29) Rant #2: Jacob on "hallucinations"
      • (25:55) More ranting and "hallucination" rate comparison
      • (31:54) Fairness, bias, and bad science comms
      • (35:41) System, dev, and user prompt jailbreaking
      • (39:28) Chain-of-thought and Rao-Blackwellization
      • (44:43) "Red-teaming"
      • (49:00) Apollo's bit
      • (51:28) METR's bit
      • (59:51) Pass@???
      • (01:04:45) SWE Verified
      • (01:05:44) Appendix bias metrics
      • (01:10:17) The muck and the meaning

  • Links
      • o1 system card
      • OpenAI press release collection - 12 Days of OpenAI

  • Additional o1 Coverage
      • NIST + AISI report - US AISI and UK AISI Joint Pre-Deployment Test
      • Apollo Research's paper - Frontier Models are Capable of In-context Scheming
      • VentureBeat article - OpenAI launches full o1 model with image uploads and analysis, debuts ChatGPT Pro
      • The Atlantic article - The GPT Era Is Already Ending

  • On Data Labelers
      • 60 Minutes article + video - Labelers training AI say they're overworked, underpaid and exploited by big American tech companies
      • Reflections article - The hidden health dangers of data labeling in AI development
      • Privacy International article - Humans in the AI loop: the data labelers behind some of the most powerful LLMs' training datasets

  • Chain-of-Thought Papers Cited
      • Paper - Measuring Faithfulness in Chain-of-Thought Reasoning
      • Paper - Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting
      • Paper - On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models
      • Paper - Faithfulness vs. Plausibility: On the (Un)Reliability of Explanations from Large Language Models

  • Other Mentioned/Relevant Sources
      • Andy Jones blogpost - Rao-Blackwellization
      • Paper - Training on the Test Task Confounds Evaluation and Emergence
      • Paper - Best-of-N Jailbreaking
      • Research landing page - SWE Bench
      • Code Competition - Konwinski Prize
      • Lakera game - Gandalf
      • Kate Crawford's Atlas of AI
      • BlueDot Impact's course - Intro to Transformative AI

  • Unrelated Developments
      • Cruz's letter to Merrick Garland
      • AWS News Blog article - Introducing Amazon Nova foundation models: Frontier intelligence and industry leading price performance
      • BleepingComputer article - Ultralytics AI model hijacked to infect thousands with cryptominer
      • The Register article - Microsoft teases Copilot Vision, the AI sidekick that judges your tabs
      • Fox Business article - OpenAI CEO Sam Altman looking forward to working with Trump admin, says US must build best AI infrastructure
