- 271 - 233 - Guest: J. Craig Wheeler, Astrophysics Professor
This and all episodes at: https://aiandyou.net/ . We are going big on the show this time, with astrophysicist J. Craig Wheeler, Samuel T. and Fern Yanagisawa Regents Professor of Astronomy, Emeritus, at the University of Texas at Austin, and author of the book The Path to Singularity: How Technology Will Challenge the Future of Humanity, released on November 19. He is a Fellow of the American Physical Society and a Legacy Fellow of the American Astronomical Society, has published nearly 400 scientific papers, authored both professional and popular books on supernovae, and served on advisory committees for NSF, NASA, and the National Research Council. His new book, spanning the range of technologies propelling us toward the singularity, from robots to space colonization, has a foreword by Neil deGrasse Tyson, who says, “The world is long overdue for a peek at the state of society and what its future looks like through the lens of a scientist. And when that scientist is also an astrophysicist, you can guarantee the perspectives shared will be as deep and as vast as the universe itself.” We talk about the evolution of Homo sapiens, high-reliability organizations, brain-computer interfaces, and transhumanism, among other topics. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 02 Dec 2024 - 41min - 270 - 232 - Special Panel: Educators on AI, part 2
This and all episodes at: https://aiandyou.net/ . We're extending the conversation about AI in education to the front lines in this episode, with four very experienced and credentialed educators discussing their experiences and insights into AI in schools. Jose Luis Navarro IV is the leading coach and consultant at the Navarro Group. He previously served as a Support Coordinator, leading innovative reforms in the Los Angeles Unified School District. Zack Kleypas is Superintendent of Schools in Thorndale, Texas, and was named 2023 Texas Secondary Principal of the Year by the Texas Association of Secondary School Principals. Jeff Austin is a former high school teacher and principal who now works as a coach for Teacher Powered Schools and Los Angeles Education Partnership. And Jose Gonzalez is Chief Technology Officer for the Los Angeles County Office of Education and former Vice Mayor of the city of Cudahy, near Los Angeles. In the conclusion, we talk about whether students need to read as much as they used to now that they have AI, fact checking, some disturbing stories about the use of AI detectors in schools, where the panel sees these trends heading, what they’re doing to help students learn better in an AI world, and… Iron Man. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 25 Nov 2024 - 35min - 269 - 231 - Special Panel: Educators on AI, part 1
This and all episodes at: https://aiandyou.net/ . We're extending the conversation about AI in education to the front lines in this episode, with four very experienced and credentialed educators discussing their experiences and insights into AI in schools. Jose Luis Navarro IV is the leading coach and consultant at the Navarro Group. He previously served as a Support Coordinator, leading innovative reforms in the Los Angeles Unified School District. Zack Kleypas is Superintendent of Schools in Thorndale, Texas, and was named 2023 Texas Secondary Principal of the Year by the Texas Association of Secondary School Principals. Jeff Austin is a former high school teacher and principal who now works as a coach for Teacher Powered Schools and Los Angeles Education Partnership. And Jose Gonzalez is Chief Technology Officer for the Los Angeles County Office of Education and former Vice Mayor of the city of Cudahy, near Los Angeles. We talk about how much kids were using GenAI without our knowing, how to turn GenAI in schools from a threat to an opportunity, the issue of cheating with ChatGPT, the discrepancy between how many workers are using AI and how many teachers are using it, how rules get made, confirmation bias and AI, using tools versus gaining competencies, and whether teachers will quit. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 18 Nov 2024 - 34min - 268 - 230 - Guest: Caroline Bassett, Digital Humanities Professor, part 2
This and all episodes at: https://aiandyou.net/ . Digital Humanities sounds at first blush like a contradiction in terms: the intersection of our digital, technology-centric culture and the humanities, like arts, literature, and philosophy. Aren't those like oil and water? But my guest illustrates just how important this discipline is by illuminating both of those fields from viewpoints I found fascinating and very different from what we normally encounter. Professor Caroline Bassett is the first Director of Cambridge Digital Humanities, an interdisciplinary research center at the University of Cambridge. She is a Fellow of Corpus Christi College and researches digital technologies and cultural change with a focus on AI. She co-founded the Sussex Humanities Lab, and at Cambridge she inaugurated the Master of Philosophy in Digital Humanities and last month launched the new doctoral programme in Digital Humanities. In the conclusion, we talk about how technology shapes our psychology, how it enables mass movements, science fiction, the role of big Silicon Valley companies, and much more. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 11 Nov 2024 - 30min - 267 - 229 - Guest: Caroline Bassett, Digital Humanities Professor, part 1
This and all episodes at: https://aiandyou.net/ . Digital Humanities sounds at first blush like a contradiction in terms: the intersection of our digital, technology-centric culture and the humanities, like arts, literature, and philosophy. Aren't those like oil and water? But my guest illustrates just how important this discipline is by illuminating both of those fields from viewpoints I found fascinating and very different from what we normally encounter. Professor Caroline Bassett is the first Director of Cambridge Digital Humanities, an interdisciplinary research center at the University of Cambridge. She is a Fellow of Corpus Christi College and researches digital technologies and cultural change with a focus on AI. She co-founded the Sussex Humanities Lab, and at Cambridge she inaugurated the Master of Philosophy in Digital Humanities and last month launched the new doctoral programme in Digital Humanities. In part 1 we talk about what digital humanities is, how it intersects with AI, what science and the humanities have to learn from each other, Joseph Weizenbaum and the reactions to his ELIZA chatbot, Luddites, and how passively or otherwise we accept new technology. Caroline really made me see in particular how what she calls "technocratic rationality," a way of thinking born of a technological culture accelerated by AI, reduces the novelty we can experience in the world, something we should certainly preserve. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 04 Nov 2024 - 41min - 266 - 228 - Guest: John Laird, Cognitive architect, part 2
This and all episodes at: https://aiandyou.net/ . Cognitive architecture deals in models of how the brain - or AI - does its magic. A challenging discipline to say the least, and we are lucky to have a foremost cognitive architect on the show in the person of John Laird. Is cognitive architecture the gateway to artificial general intelligence? John is Principal Cognitive Architect and co-director of the Center for Integrated Cognition. He received his PhD from Carnegie Mellon University in 1985, working with famed early AI pioneer Allen Newell. He is the John L. Tishman Emeritus Professor of Engineering at the University of Michigan, where he was a faculty member for 36 years. He is a Fellow of AAAI, ACM, AAAS, and the Cognitive Science Society. In 2018, he was co-winner of the Herbert A. Simon Prize for Advances in Cognitive Systems. We talk about relationships between cognitive architectures and AGI, where explainability and transparency come in, Turing tests, where we could be in 10 years, how to recognize AGI, metacognition, and the SOAR architecture. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 28 Oct 2024 - 34min - 265 - 227 - Guest: John Laird, Cognitive architect, part 1
This and all episodes at: https://aiandyou.net/ . Cognitive architecture deals in models of how the brain - or AI - does its magic. A challenging discipline to say the least, and we are lucky to have a foremost cognitive architect on the show in the person of John Laird. Is cognitive architecture the gateway to artificial general intelligence? John is Principal Cognitive Architect and co-director of the Center for Integrated Cognition. He received his Ph.D. from Carnegie Mellon University in 1985, working with famed early AI pioneer Allen Newell. He is the John L. Tishman Emeritus Professor of Engineering at the University of Michigan, where he was a faculty member for 36 years. He is a Fellow of AAAI, ACM, AAAS, and the Cognitive Science Society. In 2018, he was co-winner of the Herbert A. Simon Prize for Advances in Cognitive Systems. We talk about decision loops, models of the mind, symbolic versus neural models, and how large language models do reasoning. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 21 Oct 2024 - 36min - 264 - 226 - Guest: Sir Anthony Seldon, Historian, Author, Educator
This and all episodes at: https://aiandyou.net/ . My guest today founded the United Kingdom's AI in Education initiative, but Sir Anthony Seldon is known to millions more there as the author of books about prime ministers, having just published one about Liz Truss. Sir Anthony is one of Britain’s leading contemporary historians, educationalists, commentators and political authors. For 20 years he was a transformative headmaster (“principal” in North American lingo) first at Brighton College and then Wellington College, one of the country’s leading independent schools. From 2015 to 2020 he served as Vice-Chancellor of the University of Buckingham. He is now head of Epsom College. He is the author or editor of over 35 books on contemporary history, including insider accounts on the last six prime ministers. In 2018 he wrote The Fourth Education Revolution, which anticipates stunning, unprecedented effects of AI on education. He was knighted in 2014 for services to education and modern political history. Managing to avoid nearly all the potential Truss references, I talked with him about how teachers should think about the size of the impact of AI on education, the benefits of AI to students and teachers, what the AI in Education initiative is doing, and what the best role of teachers in the classroom is in the AI age. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.
Mon, 14 Oct 2024 - 22min - 263 - 225 - Guest: Ravin Jesuthasan, Bestselling Futurist, part 2
This and all episodes at: https://aiandyou.net/ . How is work shifting from jobs to skills, and how do companies and individuals adapt to this AI-fueled change? I talk with Ravin Jesuthasan, co-author with Tanuj Kapilashrami of the new book, The Skills-Powered Organization: The Journey to The Next Generation Enterprise, released on October 1. Ravin is Senior Partner and Global Leader for Transformation Services at Mercer. He is a member of the World Economic Forum's Future Skills Executive Board and of the Global Foresight Network. He is the author of the bestselling book Work without Jobs, as well as the books Transformative HR, Lead the Work, and Reinventing Jobs. He was featured on PBS’s documentary series “Future of Work,” has been recognized as one of the top 8 future of work influencers by Tech News, and as one of the top 100 HR influencers by HR Executive. In the conclusion, we talk about how AI is reshaping HR functions, including hiring, staffing, and restructuring processes, the role of AI in mentoring and augmenting work, the relationship between the future of work and the future of education, the real value of a degree today, and how AI affects privilege and inequality in the new work environment. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.
Mon, 07 Oct 2024 - 28min - 262 - 224 - Guest: Ravin Jesuthasan, Bestselling Futurist, part 1
This and all episodes at: https://aiandyou.net/ . How is work shifting from jobs to skills, and how do companies and individuals adapt to this AI-fueled change? I talk with Ravin Jesuthasan, co-author with Tanuj Kapilashrami of the new book, The Skills-Powered Organization: The Journey to The Next Generation Enterprise, released on October 1. Ravin is a futurist and authority on the future of work, human capital, and AI, and is Senior Partner and Global Leader for Transformation Services at Mercer. He is a member of the World Economic Forum's Future Skills Executive Board and of the Global Foresight Network. He is the author of the Wall Street Journal bestseller Work without Jobs, as well as the books Transformative HR, Lead the Work, and Reinventing Jobs. Ravin was featured on PBS’s documentary series “Future of Work,” has been recognized as one of the top 8 future of work influencers by Tech News, and as one of the top 100 HR influencers by HR Executive. In this first part, we talk about the impact of AI on work processes, the role of HR in adapting to these changes, and the evolving organizational models that focus on agility, flexibility, and skill-based work transitions. We also discuss AI's role in healthcare, and the importance of transferable skills in an AI-driven world. All this plus our usual look at today's AI headlines! Transcript and URLs referenced at HumanCusp Blog.
Mon, 30 Sep 2024 - 34min - 261 - 223 - Guest: Craig A. Kaplan, AGI Expert, part 2
This and all episodes at: https://aiandyou.net/ . Artificial General Intelligence - AGI - an AI system that’s as intelligent as an average human being in all the ways that human beings are usually intelligent. Helping us understand what it means and how we might get there is Craig A. Kaplan, founder of iQ Company, where he invents advanced intelligence systems. He also founded and ran PredictWallStreet, a financial services firm whose clients included NASDAQ, TD Ameritrade, Schwab, and other well-known financial institutions. In 2018, PredictWallStreet harnessed the collective intelligence of millions of retail investors to power a top 10 hedge fund performance, and we talk about it in this episode. Craig is a visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon. In the conclusion of the interview, we talk about the details of the collective intelligence architecture of agents, why Craig says it’s safe, the morality of superintelligence, the risks of bad actors, and leading indicators of AGI. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 23 Sep 2024 - 35min - 260 - 222 - Guest: Craig A. Kaplan, AGI Expert, part 1
This and all episodes at: https://aiandyou.net/ . Artificial General Intelligence - AGI - an AI system that’s as intelligent as an average human being in all the ways that human beings are usually intelligent. Helping us understand what it means and how we might get there is Craig A. Kaplan, founder of iQ Company, where he invents advanced intelligence systems. He also founded and ran PredictWallStreet, a financial services firm whose clients included NASDAQ, TD Ameritrade, Schwab, and other well-known financial institutions. In 2018, PredictWallStreet harnessed the collective intelligence of millions of retail investors to power a top 10 hedge fund performance, and we talk about it in this episode. Craig is a visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon. We talk about his work with Herb Simon, bounded rationality, connectionist vs. symbolic architectures, jailbreaking large language models, collective intelligence architectures for AI, and a lot more! All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 16 Sep 2024 - 43min - 259 - 221 - Guest: Markus Anderljung, AI Regulation Researcher, part 2
This and all episodes at: https://aiandyou.net/ . We are talking about international governance of AI again today, a field that is just growing and growing as governments across the globe grapple with the seemingly intractable idea of regulating something they don’t understand. Helping them understand that is Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI in the UK. He aims to produce rigorous recommendations for governments and AI companies, researching frontier AI regulation, responsible cutting-edge development, national security implications of AI, and compute governance. He is an Adjunct Fellow at the Center for a New American Security, and a member of the OECD AI Policy Observatory’s Expert Group on AI Futures. He was previously seconded to the UK Cabinet Office as a Senior Policy Specialist. I know “governance” sounds really dry and a million miles away from the drama of existential threats, and jobs going away, and loss of privacy on a global scale; but governance is exactly the mechanism by which we can hope to do something about all of those things. Whenever you say, or you hear someone say, “Someone ought to do something about that,” governance is what answers that call. In the conclusion, we talk about verification processes, ingenious schemes to verify hardware platforms, the frontier AI safety commitments, and who should set safety standards for the industry. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 09 Sep 2024 - 29min - 258 - 220 - Guest: Markus Anderljung, AI Regulation Researcher, part 1
This and all episodes at: https://aiandyou.net/ . We are talking about international governance of AI again today, a field that is just growing and growing as governments across the globe grapple with the seemingly intractable idea of regulating something they don’t understand. Helping them understand that is Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI in the UK. He aims to produce rigorous recommendations for governments and AI companies, researching frontier AI regulation, responsible cutting-edge development, national security implications of AI, and compute governance. He is an Adjunct Fellow at the Center for a New American Security, and a member of the OECD AI Policy Observatory’s Expert Group on AI Futures. He was previously seconded to the UK Cabinet Office as a Senior Policy Specialist. I know “governance” sounds really dry and a million miles away from the drama of existential threats, and jobs going away, and loss of privacy on a global scale; but governance is exactly the mechanism by which we can hope to do something about all of those things. Whenever you say, or you hear someone say, “Someone ought to do something about that,” governance is what answers that call. We talk about just what the Centre is, what it does and how it does it, and definitions of artificial general intelligence insofar as they affect governance – just what is the difference between training a system with 10^25 and 10^26 FLOPs, for instance? And also in this part Markus will talk about how monitoring and verification might specifically work. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 02 Sep 2024 - 37min - 257 - 219 - Guest: Sophie Kleber, Human-AI Relationship Expert, part 2
This and all episodes at: https://aiandyou.net/ . Virtually everything that’s difficult about getting computers to do work for us is in getting them to understand our question or request and in our understanding their answer. How we interact with them is the problem. And that's where Sophie Kleber comes in. She is the UX – that’s User Experience – Director for the Future of Work at Google and an expert in ethical AI and future human-machine interaction. She deeply understands the emotional development of automated assistants, artificial intelligence, and physical spaces. Sophie develops technology that enables individuals to be their best selves. Before joining Google, Sophie held the Global Executive Creative Director role at Huge, collaborating with brands like IKEA and Thomson Reuters. She holds an MA in Communication Design and an MBA in Product Design, and is a Fulbright fellow. In the conclusion of our interview, we talk about how she got into the user experience field, the emergence of a third paradigm of user interfaces, the future of smart homes, privacy, large language models coming to consumer devices, and brain-computer interfaces. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 26 Aug 2024 - 36min - 256 - 218 - Guest: Sophie Kleber, Human-AI Relationship Expert, part 1
This and all episodes at: https://aiandyou.net/ . Virtually everything that’s difficult about getting computers to do work for us is in getting them to understand our question or request and in our understanding their answer. How we interact with them is the problem. And that's where Sophie Kleber comes in. She is the UX – that’s User Experience – Director for the Future of Work at Google and an expert in ethical AI and future human-machine interaction. She deeply understands the emotional development of automated assistants, artificial intelligence, and physical spaces. Sophie develops technology that enables individuals to be their best selves. Before joining Google, Sophie held the Global Executive Creative Director role at Huge, collaborating with brands like IKEA and Thomson Reuters. She holds an MA in Communication Design and an MBA in Product Design, and is a Fulbright fellow. We talk about the Uncanny Valley and how we relate to computers as though they were human or inhuman, and what if they looked like Bugs Bunny. We talk about the environments and situations where some people have intimate relationships with AIs, gender stereotyping in large language models, and where emotional interactions with computers help or hinder. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 19 Aug 2024 - 35min - 255 - 217 - AI in Education
This and all episodes at: https://aiandyou.net/ . Teachers all over the world right now are having similar thoughts: Is AI going to take my job? How do I deal with homework that might have been done by ChatGPT? I know, because I've talked with many teachers, and these are universal concerns. So I'm visiting the topic of AI in education - not for the first time, not for the last. There are important and urgent issues to tackle; they become most acute at the high school level, but this episode will be useful for all levels. The reason it's so important to me to work with schools as an AI change management consultant is that there's no need for teachers to fear for their jobs. They are doing the most important job on the planet right now because they are literally educating the generation that is going to save the world. Generative AI has not created a learning problem; it's created learning opportunities. It has not created a teaching problem; it's created teaching opportunities. It has, however, created an assessment problem, and I'll talk about that. Kids need their human teachers more than ever before to model for them how to deal with disruption from technology, because change will never again happen as slowly as it does today, and all of their careers will be disrupted far more than anyone's is today. No student is going to remember something ChatGPT said for the rest of their life. The teacher’s job is to focus on the qualities that the AI cannot embody – the personal interactions that occur face to face when the teacher makes that lasting impression that inspires the student. Let's have honest, deep, and productive conversations about these issues now. A new school year is approaching and this is the time. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 12 Aug 2024 - 27min - 254 - 216 - Guest: John Danaher, Law Professor in AI Ethics, part 2
This and all episodes at: https://aiandyou.net/ . Is work heading for utopia? My guest today is John Danaher, senior lecturer in law at the University of Galway and author of the 2019 book, Automation and Utopia: Human Flourishing in a World without Work, which is an amazingly broad discourse on the future of work ranging from today’s immediate issues to the different kinds of utopia – or dystopia, depending on your viewpoint – ultimately possible when automation becomes capable of replicating everything that humans do. John has published over 40 papers on topics including the risks of advanced AI, the meaning of life in the future of work, the ethics of human enhancement, the intersection of law and neuroscience, the utility of brain-based lie detection, and the philosophy of religion. He is co-editor of Robot Sex: Social and Ethical Implications from MIT Press, and his work has appeared in The Guardian, Aeon, and The Philosopher’s Magazine. In the conclusion of the interview we talk about generative AI extending our minds, the Luddite Fallacy and why this time things will be different, the effects of automation on class structure, and… Taylor Swift. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 05 Aug 2024 - 37min - 253 - 215 - Guest: John Danaher, Law Professor in AI Ethics, part 1
This and all episodes at: https://aiandyou.net/ . Is work heading for utopia? My guest today is John Danaher, senior lecturer in law at the University of Galway and author of the 2019 book, Automation and Utopia: Human Flourishing in a World without Work, which is an amazingly broad discourse on the future of work ranging from today’s immediate issues to the different kinds of utopia – or dystopia, depending on your viewpoint – ultimately possible when automation becomes capable of replicating everything that humans do. John has published over 40 papers on topics including the risks of advanced AI, the meaning of life in the future of work, the ethics of human enhancement, the intersection of law and neuroscience, the utility of brain-based lie detection, and the philosophy of religion. He is co-editor of Robot Sex: Social and Ethical Implications from MIT Press, and his work has appeared in The Guardian, Aeon, and The Philosopher’s Magazine. In the first part of the interview we talk about how much jobs may be automated and the methodology behind studies of that, the impact of automation on job satisfaction, what’s happening in academia, and much more. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 29 Jul 2024 - 31min - 252 - 214 - Guest: Lord Tim Clement-Jones, Government AI Advisory Chair, part 2
This and all episodes at: https://aiandyou.net/ . Helping the British Government understand AI since 2016 is our guest, Lord Tim Clement-Jones, co-founder and co-chair of Britain's All-Party Parliamentary Group on Artificial Intelligence. He is also former Liberal Democrat House of Lords spokesperson for Science, Innovation and Technology and former Chair of the House of Lords Select Committee on Artificial Intelligence, which reported in 2018 with “AI in the UK: Ready, Willing and Able?” and its follow-up report in 2020, “AI in the UK: No Room for Complacency.” His new book, "Living with the Algorithm: Servant or Master?: AI Governance and Policy for the Future", came out in the UK in March, with a North American release date of July 18. In the second half, we talk about elections, including the one just held in the UK, and disinformation, what AI and robots do to the flow of capital, the effects of AI upon education and enterprise culture, privacy, and making AI accountable and trustworthy. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 22 Jul 2024 - 31min - 251 - 213 - Guest: Lord Tim Clement-Jones, Government AI Advisory Chair, part 1
This and all episodes at: https://aiandyou.net/ . Helping the British Government understand AI since 2016 is our guest, Lord Tim Clement-Jones, co-founder and co-chair of Britain's All-Party Parliamentary Group on Artificial Intelligence. He is also former Liberal Democrat House of Lords spokesperson for Science, Innovation and Technology and former Chair of the House of Lords Select Committee on Artificial Intelligence, which reported in 2018 with “AI in the UK: Ready, Willing and Able?” and its follow-up report in 2020, “AI in the UK: No Room for Complacency.” His new book, "Living with the Algorithm: Servant or Master?: AI Governance and Policy for the Future", came out in the UK in March, with a North American release date of July 18. In this first part, Tim gives a big picture of how AI regulation has been proceeding on the global stage since before large language models were a thing, giving us the context that took us from the Asilomar Principles to today’s Hiroshima principles, the EU AI Act, and the new ISO standard 42001 for AI. And we talk about long-term planning, intellectual property rights, the effects of the open letters that called for a pause or moratorium on model training, and much more. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 15 Jul 2024 - 38min - 250 - 212 - Guest: Antonina Burlachenko, AI Regulatory Consultant
This and all episodes at: https://aiandyou.net/ . As the European Union AI Act rolls out, there are so many questions about what it will mean to businesses trying to navigate the incredibly volatile and complex field of AI regulation. Here to answer those questions is Antonina Burlachenko, Head of Quality and Regulatory Consulting at Star Global Consulting, calling from Poland. She explains what the Act really means for businesses and consumers, comparing it with GDPR, and providing some technical information around standards and regulations and other aspects of what it’s like for businesses to engage with the Act at a practical level. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 08 Jul 2024 - 34min - 249 - 211 - Guest: Matt Beane, Future of Work Author, part 2
This and all episodes at: https://aiandyou.net/ . To help us get new and valuable insights into the future of work is Matt Beane, Assistant Professor in the Technology Management Program at the University of California, Santa Barbara. He has spent over a decade doing extensive field research on how workers, organizations, and even AI defy norms and rules in the 21st century. His new book, The Skill Code: How to Save Human Ability in an Age of Intelligent Machines, was just published by Harper Business, and he has given you, as a listener, a special deal: get a free copy of the first chapter by going to http://aiandyou.theskillcodebook.com. The book lays out a plan for us to protect our skills and, by extension, the human connection between experts and novices (which is the foundation of skill-building) even as AI continues to take hold in our lives. In the conclusion, we talk more about what AIs do to the mentoring and learning pipelines in the workplace, and how education should pivot to deal with the changes to the future of work. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 01 Jul 2024 - 39min - 248 - 210 - Guest: Matt Beane, Future of Work Author, part 1
This and all episodes at: https://aiandyou.net/ . To help us get new and valuable insights into the future of work is Matt Beane, Assistant Professor in the Technology Management Program at the University of California, Santa Barbara. He has spent over a decade doing extensive field research on how workers, organizations, and even AI defy norms and rules in the 21st century. His new book, The Skill Code: How to Save Human Ability in an Age of Intelligent Machines, was just published by Harper Business, and he has given you, as a listener, a special deal: get a free copy of the first chapter by going to http://aiandyou.theskillcodebook.com. The book lays out a plan for us to protect our skills and, by extension, the human connection between experts and novices (which is the foundation of skill-building) even as AI continues to take hold in our lives. In this first part, we talk about how Matt studied surgeons in operating rooms for his PhD thesis and saw how the introduction of a robotic surgical system stifled the time-honored process of mentoring new surgeons; he generalized this to other fields and observed the rise of “shadow learning,” where people bend or break the rules to get the learning they need. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 24 Jun 2024 - 33min - 247 - 209 - Guest: William A. Adams, Technologist
This and all episodes at: https://aiandyou.net/ . My guest is William A. Adams, a technologist and philanthropist recorded by the Computer History Museum as one of the first Black entrepreneurs in Silicon Valley. He was the first technical advisor to Microsoft’s CTO Kevin Scott and has founded and overseen global initiatives at Microsoft, from XML technologies as early as 1998 to DE&I initiatives in 2015. The Leap program, with a focus on diverse hiring, was named Microsoft’s D&I Program of the Year in 2020. We talk about William’s experience creating the Leap program, its impact, the relationship between AI and diversity, equity, and inclusion programs like Leap, and creating personalized chatbots. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 17 Jun 2024 - 36min - 246 - 208 - Guest: Oliver Burkeman, Philosophy Writer, part 2
This and all episodes at: https://aiandyou.net/ . Our relationship with time is dysfunctional. Here to help us explore possibly the most critical effect of AI on the pace of life is Oliver Burkeman, author of the best-selling self-help book Four Thousand Weeks: Time Management for Mortals and former author of the psychology column “This Column Will Change Your Life” in The Guardian. Most of us can attest to being severely overworked, with a shrinking amount of personal time left over. This is true despite the introduction into our lives of a huge amount of technology, from the PC to the Internet. Why have tools like email, Google, and instant messaging not reduced our workload and stress? In fact, it’s not hard to believe that they are responsible for making those things worse. In which case, we must ask: what effect will unleashing AI – which accelerates everything it touches – have on our work life? This is exactly the thought space that Oliver inhabits, and his work has made a major difference in my own life. Read Oliver's posts and subscribe to his newsletter at OliverBurkeman.com. In the conclusion of the interview, we talk about whether this is Luddism, the influence of the Silicon Valley billionaires’ pursuit of immortality, the appropriate use of AI to save us time, and what will remain constant throughout any amount of technological evolution. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 10 Jun 2024 - 28min - 245 - 207 - Guest: Oliver Burkeman, Philosophy Writer, part 1
This and all episodes at: https://aiandyou.net/ . Our relationship with time is dysfunctional. Here to help us explore possibly the most critical effect of AI on the pace of life is Oliver Burkeman, author of the best-selling self-help book Four Thousand Weeks: Time Management for Mortals and former author of the psychology column “This Column Will Change Your Life” in The Guardian. Most of us can attest to being severely overworked, with a shrinking amount of personal time left over. This is true despite the introduction into our lives of a huge amount of technology, from the PC to the Internet. Why have tools like email, Google, and instant messaging not reduced our workload and stress? In fact, it’s not hard to believe that they are responsible for making those things worse. In which case, we must ask: what effect will unleashing AI – which accelerates everything it touches – have on our work life? This is exactly the thought space that Oliver inhabits, and his work has made a major difference in my own life. Read Oliver's posts and subscribe to his newsletter at OliverBurkeman.com. In this first half of the interview we talk about the parable of the rocks in the jar and how it’s a pernicious lie, the psychology of perceiving life as finite, and how technology has not changed our work stress and may be making it worse through induced demand. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 03 Jun 2024 - 31min - 244 - 206 - Guest: Mounir Shita, AGI Researcher
This and all episodes at: https://aiandyou.net/ . Mounir Shita, CEO of Kimera Systems, is the author of the upcoming book The Science of Intelligence, which contains some interesting and thought-provoking explorations of intelligence that had me thinking about Pedro Domingos’ book The Master Algorithm. We talk about theories of AGI, free will, egg smashing, and Mounir's prototype smartphone app that learned how to silence itself in a movie theater! All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 27 May 2024 - 34min - 243 - 205 - Guest: Gary Bolles, Future of Work author, part 2
This and all episodes at: https://aiandyou.net/ . There is, perhaps, no more burning topic at the moment than the future of work, and so I am particularly grateful to welcome to the show Gary Bolles, author of The Next Rules of Work and a co-founder of eParachute.com, helping job-hunters & career changers with programs inspired by the evergreen book “What Color Is Your Parachute?” written by his father. Gary's courses on LinkedIn Learning have over 1 million learners and he is a former Silicon Valley executive and a co-founder of SoCap, the world’s largest gathering of impact entrepreneurs and investors. Gary is adjunct Chair for the Future of Work for Singularity University, and as a partner in the consulting agency Charrette, he helps organizations, communities, educators and governments develop strategies for “what’s next.” In the conclusion of the interview, we talk about unbossing and holacracies, how AI will impact organizational structures, fear, FOMO, and agency, and the Singularity University. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 20 May 2024 - 27min - 242 - 204 - Guest: Gary Bolles, Future of Work author, part 1
This and all episodes at: https://aiandyou.net/ . There is, perhaps, no more burning topic at the moment than the future of work, and so I am particularly grateful to welcome to the show Gary Bolles, author of The Next Rules of Work and a co-founder of eParachute.com, helping job-hunters & career changers with programs inspired by the evergreen book “What Color Is Your Parachute?” written by his father. Gary's courses on LinkedIn Learning have over 1 million learners and he is a former Silicon Valley executive and a co-founder of SoCap, the world’s largest gathering of impact entrepreneurs and investors. Gary is adjunct Chair for the Future of Work for Singularity University, and as a partner in the consulting agency Charrette, he helps organizations, communities, educators and governments develop strategies for “what’s next.” In the first half of the interview, we talk about the gig economy, the new rules of work, what ChatGPT did to the job market, and an interesting concept called the community operating system. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 13 May 2024 - 32min - 241 - 203 - Guest: Eleanor Drage, AI and Feminism Researcher, part 2
This and all episodes at: https://aiandyou.net/ . My guest is the co-host of the Good Robot Podcast, "Where technology meets feminism." Eleanor Drage is a Senior Research Fellow at The Leverhulme Centre for the Future of Intelligence at the University of Cambridge and was named in the Top 100 Brilliant Women in AI Ethics of 2022. She is also co-author of a recent book also called The Good Robot: Why Technology Needs Feminism. In the conclusion of the interview, we talk about unconscious bias, hiring standards, stochastic parrots, science fiction, and the early participation of women in computing. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 06 May 2024 - 35min - 240 - 202 - Guest: Eleanor Drage, AI and Feminism Researcher, part 1
This and all episodes at: https://aiandyou.net/ . My guest is the co-host of the Good Robot Podcast, "Where technology meets feminism." Eleanor Drage is a Senior Research Fellow at The Leverhulme Centre for the Future of Intelligence at the University of Cambridge and was named in the Top 100 Brilliant Women in AI Ethics of 2022. She is also co-author of a recent book also called The Good Robot: Why Technology Needs Feminism. We talk about all that, plus some quantum mechanics, saunas, ham, lesbian bacteria, and… well it’ll all make more sense when you listen. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 29 Apr 2024 - 26min - 239 - 201 - Guest: Fiona McEvoy, Tech Ethics Writer
This and all episodes at: https://aiandyou.net/ . My guest is a really good role model for how a young person can carve out an important niche in the AI space, especially for people who aren’t inclined to the computer science side of the field. Fiona McEvoy is author of the blog YouTheData.com, with a specific focus on the intersection of technology and society. She was named as one of “30 Influential Women Advancing AI in San Francisco” by RE•WORK, and in 2020 was honored in the inaugural Brilliant Women in AI Ethics Hall of Fame, established to recognize “Brilliant women who have made exceptional contributions to the space of AI Ethics and diversity.” We talk about her journey to becoming an influential communicator and the ways she carries that out, what it’s like for young people in this social cauldron being heated by AI, and some of the key issues affecting them. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 22 Apr 2024 - 34min - 238 - 200 - Guest: Jerome C. Glenn, Futurist for AI governance, part 2
This and all episodes at: https://aiandyou.net/ . At the end of February there was a landmark conference in Panama City and online, the Beneficial AGI Summit - AGI, of course, standing for Artificial General Intelligence, the Holy Grail of AI. My guest is Jerome C. Glenn, one of the organizers and sponsors, who has a long and storied history of pivotal leadership and contributions to addressing existential issues. He is the co-founder and CEO of The Millennium Project on global futures research, was contracted by the European Commission to write the AGI paper for their Horizon 2025-2027 program, was the Washington, DC representative for the United Nations University as executive director of their American Council, was instrumental in naming the first Space Shuttle the Enterprise and in banning the first space weapon (the Fractional Orbital Bombardment System) in SALT II, and shared the 2022 Lifeboat Guardian Award with Volodymyr Zelenskyy. He has over 50 years of futures research experience working for governments, international organizations, and private industry in Science & Technology Policy, Environmental Security, Economics, Education, Defense, Space, and much more. In this second half we talk about approaches developed at the conference for actually controlling the development of AGI, the AI arms race, and… why Jerome doesn’t like the term futurism. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 15 Apr 2024 - 25min - 237 - 199 - Guest: Jerome C. Glenn, Futurist for AI governance, part 1
This and all episodes at: https://aiandyou.net/ . At the end of February there was a landmark conference in Panama City and online, the Beneficial AGI Summit - AGI, of course, standing for Artificial General Intelligence, the Holy Grail of AI. My guest is Jerome C. Glenn, one of the organizers and sponsors, who has a long and storied history of pivotal leadership and contributions to addressing existential issues. He is the co-founder and CEO of The Millennium Project on global futures research, was contracted by the European Commission to write the AGI paper for their Horizon 2025-2027 program, was the Washington, DC representative for the United Nations University as executive director of their American Council, was instrumental in naming the first Space Shuttle the Enterprise and in banning the first space weapon (the Fractional Orbital Bombardment System) in SALT II, and shared the 2022 Lifeboat Guardian Award with Volodymyr Zelenskyy. He has over 50 years of futures research experience working for governments, international organizations, and private industry in Science & Technology Policy, Environmental Security, Economics, Education, Defense, Space, and much more. In this first half we talk about his recent work with groups of the United Nations General Assembly, and his decentralized approach to grassroots empowerment in both implementing AGI and working together to regulate it. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 08 Apr 2024 - 36min - 236 - 198 - Guest: Eve Herold, Science Writer on Robots, part 2
This and all episodes at: https://aiandyou.net/ . How is our relationship with bots - robots and chatbots - evolving and what does it mean? We're talking with Eve Herold, who has a new book, Robots and the People Who Love Them: Holding on to our Humanity in an Age of Social Robots. Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She writes about issues at the crossroads of science and society, and has been featured in Vice, Medium, The Boston Globe, The Wall Street Journal, Prevention, The Kiplinger Report, and The Washington Post and on MSNBC, NPR, and CNN. In this part we talk about how robots and AI can bring out the best and the worst in us, the responsibilities of roboticists, the difference between robots having emotions and our believing that they have emotions, and how this will evolve over the next decade or more. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 01 Apr 2024 - 32min - 235 - 197 - Guest: Eve Herold, Science Writer on Robots, part 1
This and all episodes at: https://aiandyou.net/ . How is our relationship with bots - robots and chatbots - evolving and what does it mean? We're talking with Eve Herold, who has a new book, Robots and the People Who Love Them: Holding on to our Humanity in an Age of Social Robots. Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She writes about issues at the crossroads of science and society, and has been featured in Vice, Medium, The Boston Globe, The Wall Street Journal, Prevention, The Kiplinger Report, and The Washington Post and on MSNBC, NPR, and CNN. In this part we talk about how people – including soldiers in combat - get attached to AIs and robots, we discuss ELIZA, Woebot, and Samantha from the movie Her, and the role of robots in helping take care of us physically and emotionally, among many other topics. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 25 Mar 2024 - 31min - 234 - 196 - Guest: Roman Yampolskiy, AI Safety Professor, part 2
This and all episodes at: https://aiandyou.net/ . Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. Roman is here to talk about his new book, AI: Unexplainable, Unpredictable, Uncontrollable. Roman has been central in the field of warning about the Control Problem and Value Alignment Problems of AI from the very beginning, back when doing so earned people some scorn from practitioners, yet Roman is a professor of computer science and applies rigorous methods to his analyses of these problems. It’s those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI. In this part we talk about how we should respond to the problem of unsafe AI development and how Roman and his community are addressing it, what he would do with infinite resources, and… the threat Roman’s coffee cup poses to humanity. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 18 Mar 2024 - 32min - 233 - 195 - Guest: Roman Yampolskiy, AI Safety Professor, part 1
This and all episodes at: https://aiandyou.net/ . Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. Roman is here to talk about his new book, AI: Unexplainable, Unpredictable, Uncontrollable. Roman has been central in the field of warning about the Control Problem and Value Alignment Problems of AI from the very beginning, back when doing so earned people some scorn from practitioners, yet Roman is a professor of computer science and applies rigorous methods to his analyses of these problems. It’s those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI. In this part we talk about why this work is important to Roman, the dimensions of the elements of unexplainability, unpredictability, and uncontrollability, the level of urgency of the problems, and drill down into why today’s AI is not safe and why it’s getting worse. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 11 Mar 2024 - 36min - 232 - 194 - Guest: Rachel St. Clair, AGI Scientist, part 2
This and all episodes at: https://aiandyou.net/ . Artificial General Intelligence: Once upon a time, this was considered a pipe dream, a fantasy of dreamers with no sense of the practical limitations of real AI. That was last year. Now, AGI is an explicit goal of many enterprises, notably among them Simuli. Their CEO, Rachel St. Clair, co-founded the company with Ben Goertzel, who has also been on this show. Rachel is a Fellow of the Center for Future Mind, with a doctorate in Complex Systems and Brain Sciences from Florida Atlantic University. She researches artificial general intelligence, focusing on complex systems and neuromorphic learning algorithms. Her goal is to “help create human-like, conscious, artificial, general intelligence to help humans solve the worst of our problems.” In the conclusion, we talk about the role of sleep in human cognition, AGI and consciousness, and… penguins. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 04 Mar 2024 - 37min - 231 - 193 - Guest: Rachel St. Clair, AGI Scientist, part 1
This and all episodes at: https://aiandyou.net/ . Artificial General Intelligence: Once upon a time, this was considered a pipe dream, a fantasy of dreamers with no sense of the practical limitations of real AI. That was last year. Now, AGI is an explicit goal of many enterprises, notably among them Simuli. Their CEO, Rachel St. Clair, co-founded the company with Ben Goertzel, who has also been on this show. Rachel is a Fellow of the Center for Future Mind, with a doctorate in Complex Systems and Brain Sciences from Florida Atlantic University. She researches artificial general intelligence, focusing on complex systems and neuromorphic learning algorithms. Her goal is to “help create human-like, conscious, artificial, general intelligence to help humans solve the worst of our problems.” In part 1 we talk about markers for AGI, distinctions between it and narrow artificial intelligence, self-driving cars, robotics, and embodiment, and… disco balls. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 26 Feb 2024 - 30min - 230 - 192 - Re-evaluating Existential Risk From AI
This and all episodes at: https://aiandyou.net/ . Since I published my first book on AI in 2017, the public conversation and perception of the existential risk - risk to our existence - from AI has evolved and broadened. I talk about how that conversation has changed, from Nick Bostrom's Superintelligence and the "hard take-off" (and what that means) through to the tossing about of cryptic shorthand like p(doom) and e/acc, which I explain and critique. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 19 Feb 2024 - 21min - 229 - 191 - Guest: Frank Sauer, AI arms control researcher, part 2
This and all episodes at: https://aiandyou.net/ . Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. In part two we talk about the psychology of combat decisions, AI and strategic defense, and nuclear conflict destabilization. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 12 Feb 2024 - 28min - 228 - 190 - Guest: Frank Sauer, AI arms control researcher, part 1
This and all episodes at: https://aiandyou.net/ . Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System. In this first part we talk about the ethics of autonomy in weapons systems and compare human to machine decision making in combat. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 05 Feb 2024 - 34min - 227 - 189 - Guest: Peter Norvig, AI professor/author/researcher, part 2
This and all episodes at: https://aiandyou.net/ . Literally writing the book on AI is my guest Peter Norvig, who is coauthor of the standard text, Artificial Intelligence: A Modern Approach, used in 135 countries and 1500+ universities. Peter is a Distinguished Education Fellow at Stanford's Human-Centered AI Institute and a researcher at Google. He was head of NASA Ames's Computational Sciences Division and a recipient of NASA's Exceptional Achievement Award in 2001. He has taught at USC, Stanford, and Berkeley, from which he received a PhD in 1986 and the distinguished alumni award in 2006. He’s also the author of the world’s longest palindromic sentence. In this second half of the interview, we talk about how the rise in prominence of AI in the general population has changed how he communicates about AI, his feelings about the calls for slowdown in model development, and his thinking about general intelligence in large language models; and AI Winters. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 29 Jan 2024 - 30min - 226 - 188 - Guest: Peter Norvig, AI professor/author/researcher, part 1
This and all episodes at: https://aiandyou.net/ . Literally writing the book on AI is my guest Peter Norvig, who is coauthor of the standard text, Artificial Intelligence: A Modern Approach, used in 135 countries and 1500+ universities. (The other author, Stuart Russell, was on this show in episodes 86 and 87.) Peter is a Distinguished Education Fellow at Stanford's Human-Centered AI Institute and a researcher at Google. He was head of NASA Ames's Computational Sciences Division and a recipient of NASA's Exceptional Achievement Award in 2001. He has taught at the University of Southern California, Stanford University, and the University of California at Berkeley, from which he received a PhD in 1986 and the distinguished alumni award in 2006. He’s also the author of the world’s longest palindromic sentence. In this first part of the interview, we talk about the evolution of AI from the symbolic processing paradigm to the connectionist paradigm, or neural networks, how they layer on each other in humans and AIs, and Peter’s experiences in blending the worlds of academia and business. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 22 Jan 2024 - 26min - 225 - 187 - Guest: Michal Kosinski, Professor of Psychology, part 2
This and all episodes at: https://aiandyou.net/ . The worlds of academia and political upheaval meet in my guest Michal Kosinski, who was behind the first press article warning against Cambridge Analytica, which was at the heart of a scandal involving the unauthorized acquisition of personal data from millions of Facebook users and impacting the 2016 Brexit and US Presidential election votes through the use of AI to microtarget people through modeling their preferences. Michal also co-authored Modern Psychometrics, a popular textbook, and has published over 90 peer-reviewed papers in prominent journals such as Proceedings of the National Academy of Sciences (PNAS), Nature Scientific Reports and others that have been cited over 18,000 times. Michal has a PhD in psychology from the University of Cambridge, as well as master’s degrees in psychometrics and social psychology. In the second half of the interview, we pivot to the Theory of Mind – which is the ability of a creature to understand that another has a mind – and research around whether AI has it. Michal has amazing new research in that respect. He also says, "Without a question, GPT-4 and similar models are the most competent language users on this planet." All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 15 Jan 2024 - 32min - 224 - 186 - Guest: Michal Kosinski, Professor of Psychology, part 1
This and all episodes at: https://aiandyou.net/ . The worlds of academia and political upheaval meet in my guest Michal Kosinski, who was behind the first press article warning against Cambridge Analytica, which was at the heart of a scandal involving the unauthorized acquisition of personal data from millions of Facebook users and impacting the 2016 Brexit and US Presidential election votes through the use of AI to microtarget people through modeling their preferences. Michal also co-authored Modern Psychometrics, a popular textbook, and has published over 90 peer-reviewed papers in prominent journals such as Proceedings of the National Academy of Sciences (PNAS), Nature Scientific Reports and others that have been cited over 18,000 times. Michal has a PhD in psychology from the University of Cambridge, as well as master’s degrees in psychometrics and social psychology, positioning him to speak to us with authority about how AI has and may shape the beliefs and behaviors of people en masse. In this first part of the interview, we delve into just that, plus the role of social media, and Michal's take on what privacy means today. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 08 Jan 2024 - 34min - 223 - 185 - Special Panel: AI Predictions for 2024
This and all episodes at: https://aiandyou.net/ . In our now-traditional end-of-year episode, we look back on the year to date and forward to the year to be. I am joined by previous guest Calum Chace, co-host of the London Futurists podcast and author of The Economic Singularity, and Justin Grammens, founder of the AppliedAI conference and podcast. Together, we review what happened with AI in 2023 and make some predictions for 2024. We look back at the impact of large language models such as #ChatGPT and forward to how they will evolve and change the workplace, economy, and society. We also discuss the future of regulation, the EU AI Act, the 2024 US elections, disinformation, and the future of education. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 01 Jan 2024 - 57min - 222 - 184 - Guest: Tabitha Swanson, Creative Technologist/Filmmaker
This and all episodes at: https://aiandyou.net/ . Making movies about AI with AI is Tabitha Swanson, who comes to tell us how that works - and what it was like exhibiting it at the Venice Film Festival during the writers'/actors' strikes. Tabitha is a Berlin-based multi-disciplinary designer, creative technologist, and filmmaker. Her practice includes 3D, animation, augmented reality, digital fashion, graphic design, and UX/UI. She has worked with brands including Vogue Germany, Nike, Highsnobiety, Reebok, and Origins, and has exhibited at Miami Art Basel, Fotografiska, Transmediale, and Cadaf Arts among others. Her part of the White Mirror project saw her doing everything from writing to cinematography with the latest AI tools like Runway Gen-2, ChatGPT, and Stable Diffusion, lowering typical animation costs from $10,000 per second to $10,000 per minute. She explains what those tools are good at and where their limitations are, and helps us understand how they will evolve and impact the roles of humans in the movie industry. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 25 Dec 2023 - 39min - 221 - 183 - Guest: Oren Etzioni, AI in Science, Professor Emeritus, part 2
This and all episodes at: https://aiandyou.net/ . At the intersection of scientific research and artificial intelligence lies our guest Oren Etzioni, professor emeritus of Computer Science at the University of Washington and most notably the founding CEO of the Allen Institute for Artificial Intelligence (AI2) in Seattle, founded by the late Paul Allen, co-founder of Microsoft. His awards include AAAI Fellow and Seattle’s Geek of the Year. Oren grew the institute to a team of over 200 researchers and created singularly important tools such as Semantic Scholar, a search engine that can understand scientific literature, and Mosaic, a knowledge base formed by extracting scientific knowledge from text. This is hugely important because the rate of research paper creation now far outstrips the ability of researchers to read it all. AI could transform the productivity of scientific research to an unprecedented degree. In this conclusion of the interview we talk about AI2’s scientific assistance project called Aristo, Oren’s views on the concerns about AI and how to address them, and his Hippocratic Oath for AI practitioners. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 18 Dec 2023 - 30min - 220 - 182 - Guest: Oren Etzioni, AI in Science, Professor Emeritus, part 1
This and all episodes at: https://aiandyou.net/ . At the intersection of scientific research and artificial intelligence lies our guest Oren Etzioni, professor emeritus of Computer Science at the University of Washington and most notably the founding CEO of the Allen Institute for Artificial Intelligence in Seattle, founded by the late Paul Allen, co-founder of Microsoft. His awards include AAAI Fellow and Seattle’s Geek of the Year. Oren grew the institute to a team of over 200 researchers and created singularly important tools such as Semantic Scholar, a search engine that can understand scientific literature, and Mosaic, a knowledge base formed by extracting scientific knowledge from text. This is hugely important because the rate of research paper creation now far outstrips the ability of researchers to read it all. AI could transform the productivity of scientific research to an unprecedented degree. In part 1 we talk about parallels between AI and the human brain, Semantic Scholar, and the potential for AI accelerating research through understanding scientific literature. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 11 Dec 2023 - 28min - 219 - 181 - Guests: Pauldy Otermans and Dev Aditya, AI Teacher Creators, part 2
This and all episodes at: https://aiandyou.net/ . There is a global teacher shortage, and Pauldy Otermans and Dev Aditya, founders of the Otermans Institute, are addressing that with #AI by creating a digital human AI teacher, called Beatrice. Their mission is to upskill 750 million underserved students globally by 2030. Beatrice appears as an on-screen avatar that converses with students. Pauldy is a neuroscientist and psychologist with a PhD in cognitive psychology and cognitive neuroscience from Brunel University. She was named one of the “22 most influential women in the UK of 2022” by Start-Up Magazine UK. Dev is a Young Global Innovator and under-30 Social Entrepreneur recognized by Innovate UK, with research experience at the Alan Turing Institute and Brunel University, London. In the conclusion of the interview they describe how the AI teachers work, and their definitions of Teaching and Learning 1.0, 2.0, and 3.0. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 04 Dec 2023 - 28min - 218 - 180 - Guests: Pauldy Otermans and Dev Aditya, AI Teacher Creators, part 1
This and all episodes at: https://aiandyou.net/ . There is a global teacher shortage, and Pauldy Otermans and Dev Aditya, founders of the Otermans Institute, are addressing that with #AI by creating a digital human AI teacher, called Beatrice. Their mission is to upskill 750 million underserved students globally by 2030. Beatrice appears as an on-screen avatar that converses with students. Pauldy is a neuroscientist and psychologist with a PhD in cognitive psychology and cognitive neuroscience from Brunel University. She was named one of the “22 most influential women in the UK of 2022” by Start-Up Magazine UK. Dev is a Young Global Innovator and under-30 Social Entrepreneur recognized by Innovate UK, with research experience at the Alan Turing Institute and Brunel University, London. In this first half of the interview we talk about the teacher shortage and the socioeconomic consequences of addressing it via an AI teacher. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 27 Nov 2023 - 31min - 217 - 179 - Guest: Jaan Tallinn, AI Existential Risk Philanthropist, part 2
This and all episodes at: https://aiandyou.net/ . We're talking with Jaan Tallinn, who has changed the way the world responds to the impact of #AI. He was one of the founding developers of Skype and the file sharing application Kazaa, and that alone makes him noteworthy to most of the world. But he leveraged his billionaire status conferred by that success to pursue a goal uncommon among technology entrepreneurs: reducing existential risk. In other words, saving the human race from possible extinction through our own foolhardiness or fate. He has co-founded and funded the Centre for the Study of Existential Risk, in Cambridge, England, and the Future of Life Institute, in Cambridge, Massachusetts. In the conclusion of the interview, we talk about value alignment and how that does or doesn’t intersect with large language models, FLI and their world building project, and the instability of the world’s future. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 20 Nov 2023 - 24min - 216 - 178 - Guest: Jaan Tallinn, AI Existential Risk Philanthropist, part 1
This and all episodes at: https://aiandyou.net/ . The attention of the world to the potential impact of AI owes a huge debt to my guest Jaan Tallinn. He was one of the founding developers of Skype and the file sharing application Kazaa, and that alone makes him noteworthy to most of the world. But he leveraged his billionaire status conferred by that success to pursue a goal uncommon among technology entrepreneurs: reducing existential risk. In other words, saving the human race from possible extinction through our own foolhardiness or fate. He has co-founded and funded the Centre for the Study of Existential Risk, in Cambridge, England, and the Future of Life Institute, in Cambridge, Massachusetts. He's also a member of the board of sponsors of the Bulletin of the Atomic Scientists, and a key funder of the Machine Intelligence Research Institute. In this first part, we talk about the problems with current #AI frontier models, Jaan's reaction to GPT-4, the letter calling for a pause in AI training, Jaan's motivations in starting CSER and FLI, how individuals and governments should react to AI risk, and Jaan's idea for how to enforce constraints on AI development. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 13 Nov 2023 - 33min - 215 - 177 - Guest: Bart Selman, Professor for responsible AI use, part 2
This and all episodes at: https://aiandyou.net/ . Giving us a long perspective on the impact of today's large language models and #ChatGPT on society is Bart Selman, professor of Computer Science at Cornell University. He’s been helping people understand the potential and limitations of AI for several decades, commenting on computer vision, self-driving vehicles, and autonomous weapons among other technologies. He has co-authored over 100 papers, receiving a National Science Foundation career award and an Alfred P. Sloan research fellowship. He is a member of the American Association for Artificial Intelligence, a fellow of the American Association for the Advancement of Science, and a contributing scientist at the two Asilomar conferences on responsible AI development. In the conclusion of our interview we talk about self-driving cars, the capability of large language models to synthesize knowledge across many human domains, Richard Feynman, our understanding of language, Bertrand Russell, AIs as co-authors on research papers, and where Bart places us on a scale of artificial general intelligence ability. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 06 Nov 2023 - 30min - 214 - 176 - Guest: Bart Selman, Professor for responsible AI use, part 1
This and all episodes at: https://aiandyou.net/ . Giving us a long perspective on the impact of today's large language models and #ChatGPT on society is Bart Selman, professor of Computer Science at Cornell University. He’s been helping people understand the potential and limitations of AI for several decades, commenting on computer vision, self-driving vehicles, and autonomous weapons among other technologies. He has co-authored over 100 papers, receiving a National Science Foundation career award and an Alfred P. Sloan research fellowship. He is a member of the American Association for Artificial Intelligence and a fellow of the American Association for the Advancement of Science. In the first part of the interview we talk about common sense, artificial general intelligence, computer vision, #LLMs and their impact on computer programming, and how much they might really understand. Bart will also give his take on how good they are, how to understand how they’re working, and his experiments in getting ChatGPT to understand geometry. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 30 Oct 2023 - 33min - 213 - 175 - AI and Education
This and all episodes at: https://aiandyou.net/ . Education was the first area to see a dramatic impact from #ChatGPT, which crushed term papers and sent teachers scurrying for new ways to assess their students. Now that we've had nearly a year to evaluate the impact of #AI on #education, I look at how assessments and teaching have been affected and how schools might adapt to the incredible opportunities of generative AI. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 23 Oct 2023 - 26min - 212 - 174 - AI and Jobs
This and all episodes at: https://aiandyou.net/ . What effect will #AI, especially large language models like #ChatGPT, have on jobs? The conversation is intense and fractious. I attempt to shed some light on those effects, and discuss some of the different predictions and proposals for distributing the dividend from reducing costs and increasing markets through deploying AI. How will that capital get to where it is needed? All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 16 Oct 2023 - 38min - 211 - 173 - The UK AI Summit, Reflections
This and all episodes at: https://aiandyou.net/ . The United Kingdom government is holding a Summit on Artificial Intelligence at the storied Bletchley Park on November 1 and 2. Luminaries of #AI will be helping government authorities understand the issues that could require regulation or other government intervention. Our invitation to attend may have been lost in the post. But I do have reflections on the AI risks that will (or should) be presented at this event, along with some analysis and thought-provoking questions prompted by excellent events on these topics that I recently attended, hosted by the London Futurists and MKAI. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 09 Oct 2023 - 39min - 210 - 172 - Guest: Matthew Lungren, Chief Medical Information Officer, part 2
This and all episodes at: https://aiandyou.net/ . Radiology found itself in the crosshairs of the debate about AI automating jobs when in 2016 AI expert Geoffrey Hinton said that AI would do just that to radiologists. That hasn't happened - but will it? To get to the bottom of this, I talked with Matthew Lungren, MD, Chief Medical Information Officer at Nuance Communications, a Microsoft company applying AI to healthcare workflows, and the name that comes at the top of the list when you look up #radiology and #AI. He also has a pediatric radiology practice at UCSF and previously led the Stanford [University] Center for Artificial Intelligence in Medicine and Imaging. More recently he served as Principal for Clinical AI/ML at Amazon Web Services in World Wide Public Sector Healthcare. He has an impressive oeuvre of over 100 publications, including work on multi-modal data fusion models for healthcare applications, and new computer vision and natural language processing approaches for healthcare-specific domains. In this interview conclusion, we talk about the details of how AI, including large language models, can be an effective part of a radiologist’s workflow, how decisions about integrating AI into medicine can be made, and where we might be going with it in the future. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 02 Oct 2023 - 27min - 209 - 171 - Guest: Matthew Lungren, Chief Medical Information Officer, part 1
This and all episodes at: https://aiandyou.net/ . Radiology found itself in the crosshairs of the debate about AI automating jobs when in 2016 AI expert Geoffrey Hinton said that AI would do just that to radiologists. That hasn't happened - but will it? To get to the bottom of this, I talked with Matthew Lungren, MD, Chief Medical Information Officer at Nuance Communications, a Microsoft company applying AI to healthcare workflows, and the name that comes at the top of the list when you look up #radiology and #AI. He also has a pediatric radiology practice at UCSF and previously led the Stanford [University] Center for Artificial Intelligence in Medicine and Imaging. More recently he served as Principal for Clinical AI/ML at Amazon Web Services in World Wide Public Sector Healthcare. He has an impressive oeuvre of over 100 publications, including work on multi-modal data fusion models for healthcare applications, and new computer vision and natural language processing approaches for healthcare-specific domains. The basis for Hinton's assertion was that AI can be trained to find tumors, for instance, in CT scans, and we know how good AI is at image analysis when it’s got lots of labeled data to be trained on, and we certainly have that with CT scans. We get to find out what's real about AI in #medicine in this episode. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 25 Sep 2023 - 35min - 208 - 170 - Guest: Michael Sharpe, AI Agent Platform CEO
This and all episodes at: https://aiandyou.net/ . The superheated large language model (LLM) revolution is only accelerating as these models are incorporated into #agents - systems that take independent action. Here to help us understand the state of that art is Michael Sharpe, CEO of Magick ML, a development environment that gives people a way of creating agents based upon generative #AI. Equally fascinating is Michael's previous job at Latitude, working on the virally popular online fantasy adventure game AI Dungeon, a role-playing simulation where the story was made up by an #LLM on the fly, and we talk about that too. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 18 Sep 2023 - 44min - 207 - 169 - Guest: Hod Lipson, Roboticist, part 2
This and all episodes at: https://aiandyou.net/ . Robots - embedded AI - haven't gotten the adulation that large language models have received for their recent breakthroughs, but when they do, it will be thanks in large part to Hod Lipson, professor of Mechanical Engineering at Columbia University, where he directs the Creative Machines Lab, which pioneers new ways to make machines that create, and machines that are creative. He received both DARPA and NSF faculty awards as well as being named Esquire magazine’s “Best & Brightest”, and one of Forbes’ “Top 7 Data scientists in the world.” His TED talk on building robots that are self-aware is one of the most viewed on AI, and in January 2023 he was centrally featured by the New York Times in their piece “What’s ahead for AI.” He is co-author of the award-winning books “Fabricated: The New World of 3D printing” and “Driverless: Intelligent cars and the road ahead”. Hod is a deeply passionate communicator who is driven to help people understand what’s going on with #AI and #robotics. In the conclusion of the interview we talk about robot cannibals, self-replicating robots, novel form factors for robots, the impact of #ChatGPT on higher education, and more of Hod's expansive vision for the future. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 11 Sep 2023 - 38min - 206 - 168 - Guest: Hod Lipson, Roboticist, part 1
This and all episodes at: https://aiandyou.net/ . Robots - embedded AI - haven't gotten the adulation that large language models have received for their recent breakthroughs, but when they do, it will be thanks in large part to Hod Lipson, professor of Mechanical Engineering at Columbia University, where he directs the Creative Machines Lab, which pioneers new ways to make machines that create, and machines that are creative. He received both DARPA and NSF faculty awards as well as being named Esquire magazine’s “Best & Brightest”, and one of Forbes’ “Top 7 Data scientists in the world.” His TED talk on building robots that are self-aware is one of the most viewed on AI, and in January 2023 he was centrally featured by the New York Times in their piece “What’s ahead for AI.” He is co-author of the award-winning books “Fabricated: The New World of 3D printing” and “Driverless: Intelligent cars and the road ahead”. Hod is a deeply passionate communicator who is driven to help people understand what’s going on with #AI and #robotics. In part 1 we talk about our future with #robots that might be creative, self-aware, sentient, or generally intelligent. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 04 Sep 2023 - 32min - 205 - 167 - AI and Our Relationship with Time
This and all episodes at: https://aiandyou.net/ . In this special episode, we look at our relationship with time: how it's broken, what that means to us, and how AI might make that better - or worse. We've let technology call the shots for so long that we don't realize that we're running around a hamster wheel of our own making, chasing a carrot on a stick in front of our heads that we will never catch. Now with large language models like #ChatGPT available to everyone, are we going to use that to make the wheel spin faster - or get out of the cage? All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 28 Aug 2023 - 30min - 204 - 166 - Guest: Babak Pahlavan, AI Executive Assistant Builder
This and all episodes at: https://aiandyou.net/ . After years of show guests projecting their visions of an executive assistant AI, Babak Pahlavan is building one, over at Silicon Valley startup NinjaTech AI, and he comes on the show to tell us about the challenges of building that and what it will do. He has been working on AI since 2008, when he was the Founder and CEO of his first AI startup named CleverSense. CleverSense was acquired by Google in 2011, where it became an important personalization layer in Google Maps. Babak went on to spend 11 years at Google as a Senior Director of Product Management, where he led and scaled several large products and teams including Google Analytics, Enterprise Measurement Suite and others. He left Google in October of 2022 to found NinjaTech AI in partnership with SRI, which is the original home of Siri. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 21 Aug 2023 - 45min - 203 - 165 - Guest: Boaz Mizrachi, AV Platform founder
This and all episodes at: https://aiandyou.net/ . If you drive by the seat of your pants, listen to our guest Boaz Mizrachi, calling from Israel, where he is co-founder of Tactile Mobility, an autonomous vehicle platform developer that evaluates what a car feels. You base a lot of your driving decisions on how you sense the road through the wheels and transmission, so why shouldn't your AV do so too? This is important when dealing with skidding, for instance. Boaz tells us how that works in fascinating detail and where it sits in the current state of the art in AV platform integration. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 14 Aug 2023 - 38min - 202 - 164 - Guest: Alan D. Thompson, AI Consultant, part 2
This and all episodes at: https://aiandyou.net/ . A one-man powerhouse of AI knowledge and analyses, Alan D. Thompson, calling from Perth, Australia, advises intergovernmental organizations, companies, and international media in the fields of artificial intelligence and human intelligence, consulting to the award-winning series Decoding Genius for GE, Making Child Prodigies for ABC (with the Australian Prime Minister), 60 Minutes for Network Ten/CBS, and Child Genius for Warner Bros. His 2021-2022 experiments with Leta AI and Aurora AI have been viewed over a million times. He is the former chairman for the gifted families committee of Mensa International. He writes The Memo, a monthly newsletter with bleeding edge AI news that I’m personally finding to be highly useful. In the conclusion of the interview, we talk about the present and future of keeping up with AI news, the future of artificial general intelligence, what the large language models are about to do, and much more. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 07 Aug 2023 - 34min - 201 - 163 - Guest: Alan D. Thompson, AI Consultant, part 1
This and all episodes at: https://aiandyou.net/ . A one-man powerhouse of AI knowledge and analyses, Alan D. Thompson, calling from Perth, Australia, advises intergovernmental organizations, companies, and international media in the fields of artificial intelligence and human intelligence, consulting to the award-winning series Decoding Genius for GE, Making Child Prodigies for ABC (with the Australian Prime Minister), 60 Minutes for Network Ten/CBS, and Child Genius for Warner Bros. His 2021-2022 experiments with Leta AI and Aurora AI have been viewed over a million times. He is the former chairman for the gifted families committee of Mensa International. He writes The Memo, a monthly newsletter with bleeding edge AI news that I’m personally finding to be highly useful. In this first part of the interview Alan compares the large language models like ChatGPT, relates human and artificial intelligence, and talks about superintelligence alignment. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 31 Jul 2023 - 32min - 200 - 162 - Guest: Ryan Donnelly, AI Governance Platform Founder
This and all episodes at: https://aiandyou.net/ . Giving us a peek behind the scenes of Number 10 Downing Street today is Ryan Donnelly, founder of Enzai, an AI governance platform that helps organizations manage AI risk through policy and organizational controls - allowing users to engender trust in, and scale, their AI systems. Before founding Enzai, Ryan worked as a corporate lawyer in London at some of the world’s leading law firms. Ryan was recently invited to 10 Downing Street to discuss AI and UK policy, along with some other very high-powered luminaries of AI. So we’re going to talk about what’s going on at that level of the UK government with respect to AI, and we'll learn about operationalizing AI risk management. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 24 Jul 2023 - 43min - 199 - 161 - Guest: Roman Yampolskiy, AI Safety Professor, part 2
This and all episodes at: https://aiandyou.net/ . What do AIs do with optical illusions... and jokes? Returning to the show is Roman Yampolskiy, tenured professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. He has published so much in the field of AI Safety for so long that he is one of the most eminent researchers in that space. He has written numerous papers and books, including Artificial Superintelligence: A Futuristic Approach in 2015 and Artificial Intelligence Safety and Security in 2018. Roman was last on the show in episodes 16 and 17, and events of the last seven months have changed the AI landscape so much that he has been in strong demand in the media. Roman is a rare academic who works to bring his findings to laypeople, and has been in high profile interviews like futurism.com and Business Today, and many mainstream/broadcast TV news shows, but he found time to sit down and talk with us. In the conclusion of the interview we talk about wider-ranging issues of AI safety, just how the existential risk is being addressed today, and more on the recent public letters calling attention to AI risk. Plus we get a scoop on Roman's latest paper, Unmonitorability of Artificial Intelligence. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 17 Jul 2023 - 32min - 198 - 160 - Guest: Roman Yampolskiy, AI Safety Professor, part 1
This and all episodes at: https://aiandyou.net/ . With statements about the existential threat of AI being publicly signed by prominent AI personalities, we need an academic's take on that, and returning to the show is Roman Yampolskiy, tenured professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. He has published so much in the field of AI Safety for so long that he is a preeminent researcher in that space. He has written numerous papers and books, including Artificial Superintelligence: A Futuristic Approach in 2015 and Artificial Intelligence Safety and Security in 2018. Roman was last on the show in episodes 16 and 17, and events of the last seven months have changed the AI landscape so much that he has been in strong demand in the media. Roman is a rare academic who works to bring his findings to laypeople, and has been in high profile interviews like futurism.com and Business Today, and many mainstream/broadcast TV news shows, but he found time to sit down and talk with us. In the first part of the interview we discussed the open letters about AI, how ChatGPT and its predecessors/successors move us closer to AGI and existential risk, and what Roman has in common with Leonardo DiCaprio. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 10 Jul 2023 - 32min - 197 - 159 - Guest: Louis Rosenberg, Human/AI Hybrid Intelligence Expert, part 2
This and all episodes at: https://aiandyou.net/ . What do honeybees have to teach us about AI? You'll find out from Louis Rosenberg on this episode. He's been working in AR and VR since he started over 30 years ago at Stanford and NASA. In 1992 he developed the first mixed reality system at the Air Force Research Laboratory. In 2004 he founded the early AR company Outland Research, which was acquired by Google in 2011. He received a PhD from Stanford, was a tenured professor at California State University, and has been awarded over 300 patents. He's currently CEO and Chief Scientist of Unanimous AI, a company that amplifies human group intelligence using AI technology based on the biological principle of Swarm Intelligence, which is where the bees come in. The Swarm AI technology that he created has an extraordinary record of making predictions like Oscar winners. In the conclusion of the interview, we talk about ways AI threatens privacy, and Louis' philosophy of using AI to empower human cooperation. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 03 Jul 2023 - 27min - 196 - 158 - Guest: Louis Rosenberg, Human/AI Hybrid Intelligence Expert, part 1
This and all episodes at: https://aiandyou.net/ . What do honeybees have to teach us about AI? You'll find out from Louis Rosenberg on this episode. He's been working in AR and VR since he started over 30 years ago at Stanford and NASA. In 1992 he developed the first mixed reality system at the Air Force Research Laboratory. In 2004 he founded the early AR company Outland Research, which was acquired by Google in 2011. He received a PhD from Stanford, was a tenured professor at California State University, and has been awarded over 300 patents. He's currently CEO and Chief Scientist of Unanimous AI, a company that amplifies human group intelligence using AI technology based on the biological principle of Swarm Intelligence, which is where the bees come in. The UNU Swarm Intelligence that he created has an extraordinary record of making predictions like Oscar winners. We talk about how AI can help humans cooperate rather than come into conflict, about threats to privacy, and about the convergence of AI and AR/VR technology like Apple's new headset. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 26 Jun 2023 - 29min - 195 - 157 - Should AI Be Able To Feel?
This and all episodes at: https://aiandyou.net/ . Should AI be able to feel? It may seem like the height of hubris, recklessness, and even cruelty to suggest such a thing - and yet our increasing unease and fears of what #AI may do stem from its lack of empathy. I develop this reasoning in my third TEDx talk, recorded at Royal Roads University. From my research into Joseph Weizenbaum's ELIZA to what developers of #ChatGPT and other AI are missing, I explore this most sensitive of issues. This podcast episode is the bonus track, the director's cut if you will, that expands on those 12 minutes of talk to give you added value and even more questions to take away. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 19 Jun 2023 - 41min - 194 - 156 - Guest: Dorian Selz, Business AI CEO
This and all episodes at: https://aiandyou.net/ . Large language models like #ChatGPT have thoroughly disrupted business today, and here to help us understand what's going on there and how business leaders should view LLMs is Dorian Selz. He called from Zurich, where he is the CEO of Squirro, making it easier for businesses to start using #AI. We talked about everything from where to be wary of LLMs to EU regulation. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 12 Jun 2023 - 44min - 193 - 155 - Guest: Ben Whately, Language Tutoring with AI
This and all episodes at: https://aiandyou.net/ . With so much talk about how large language models like #ChatGPT have learned our languages, we can forget that humans also want and need to learn other human languages, and that's what happens at memrise.com. CSO and co-founder Ben Whately came on the show to help us understand how they use GPT #AI models to help people with that process, and the fascinating and unexpected ways that human memory plays its part. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 05 Jun 2023 - 43min - 192 - 154 - Turning Anxiety About ChatGPT into Resilience
This and all episodes at: https://aiandyou.net/ . If you're feeling on edge due to all the rapid-fire development around #ChatGPT and its companion AIs, you're not alone. In fact most people feel some degree of anxiety around not knowing where all this is going and the impact on their jobs, their world, and their lives. Our core mission on this show is to help people understand #AI and turn that stress into empowerment, so that's exactly what we do in this special episode. This rate of disruption is unprecedented, and a lot of people are taking advantage of the situation to suggest that you ought to be on top of everything that's going on. Spoiler alert: They aren't on top of it all, and neither is anyone else. This episode lays bare some of that angst and gives you some perspectives that are useful for feeling empowered, without sacrificing our trademark dedication to realism over optimism or pessimism. (Our episode image is helpfully generated by AI. But not this text.) All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 29 May 2023 - 38min - 191 - 153 - Guest: Frank Stephenson, Legendary Car Designer
This and all episodes at: https://aiandyou.net/ . Frank Stephenson is the legendary designer of the BMW Mini Cooper reboot, and the Maserati MC12 and Ferrari F430 among other models. He went on to become Head of Design at McLaren Automotive, where he designed the MP4-12C, the successor to the F1. His latest projects include electric Vertical Take-Off and Landing vehicles at the design studio that bears his name. What does Frank have to do with AI? He came on the show to talk about the impact of generative models on the field of car design and how he's using them. There's a lot to unpack here for designers of all kinds. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 22 May 2023 - 44min - 190 - 152 - Guest: Eric Daimler, AI Entrepreneur and Policymaker, part 2
This and all episodes at: https://aiandyou.net/ . Feeling inundated with data? If you're running a business, that's no joke, and it's getting worse. Helping people dig through a mountain of data is Eric Daimler, founder and CEO of Conexus. He has over 20 years of experience as an entrepreneur, investor, technologist, and policymaker where he served under the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President. He was the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI and robotics. We had a freewheeling, thought-provoking discussion about regulation, business, and state of the art AI. In the conclusion of our conversation, Eric helps us understand how a business should think about and interface with today's AI to leverage it successfully. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 15 May 2023 - 28min - 189 - 151 - Guest: Eric Daimler, AI Entrepreneur and Policymaker, part 1
This and all episodes at: https://aiandyou.net/ . Feeling inundated with data? If you're running a business, that's no joke, and it's getting worse. Helping people dig through a mountain of data is Eric Daimler, founder and CEO of Conexus. He has over 20 years of experience as an entrepreneur, investor, technologist, and policymaker where he served under the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President. He was the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI and robotics. We had a freewheeling, thought-provoking discussion about regulation, business, and state of the art AI. In this first part of our conversation, we touch on everything from self-driving cars to ChatGPT and China. And category theory as the solution to data deluge. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 08 May 2023 - 30min - 188 - 150 - Guest: Alexandra Mousavizadeh, Strategic Intelligence Expert, part 2
This and all episodes at: https://aiandyou.net/ . Which companies are doing the best at adopting AI? That's a very easy question to ask and a very hard one to answer - well. But answering it today is Alexandra Mousavizadeh, who has done this sort of thing before with the Global AI Index and the Disinformation Index. Her new company, Evident, uses nearly 150 real-time indicators to measure the adoption of AI in each company, and their first iteration of the AI Adoption Index covers the banking industry. Alexandra is returning to the show and calling in from London. She was previously a partner at Tortoise Media, where she ran Tortoise Intelligence, its index and data business, and was the architect of the groundbreaking Global AI Index, released in 2019, the first to benchmark the strength of national AI ecosystems. Before Tortoise, she held roles including sovereign analyst for Moody’s and Head of Country Risk Management at Morgan Stanley. She was CEO of ARC Ratings, a global emerging-markets-based ratings agency; and before joining ARC, she was the Director of the Legatum Institute’s Prosperity Index of nations. In the conclusion of the interview we talk about the methodology behind the Index, what it means for the flow of talent and capital, the banking industry reaction to ChatGPT, and surprises about the leading companies in the Index. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 01 May 2023 - 33min - 187 - 149 - Guest: Alexandra Mousavizadeh, Strategic Intelligence Expert, part 1
This and all episodes at: https://aiandyou.net/ . Which companies are doing the best at adopting AI? That's a very easy question to ask and a very hard one to answer - well. But answering it today is Alexandra Mousavizadeh, who has experience in the founding of the Global AI Index and the Disinformation Index. Her new company, Evident, uses nearly 150 real-time indicators to measure the adoption of AI in each company, and their first iteration of the AI Adoption Index covers the banking industry. Alexandra is returning to the show and calling in from London. She was previously a partner at Tortoise Media, where she ran Tortoise Intelligence, its index and data business, and was the architect of the groundbreaking Global AI Index, released in 2019, the first to benchmark the strength of national AI ecosystems. Before Tortoise, she held roles including sovereign analyst for Moody’s covering Russia, Central Asia and the Middle East, and Head of Country Risk Management at Morgan Stanley. In the first part of the interview we talk about the methodology, rationale, and customers for the index, some surprises about the modern banking sector, and the open letter calling for a pause on LLM training. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 24 Apr 2023 - 34min - 186 - 148 - Guest: Missy Cummings, Robotics Professor and Former Fighter Pilot, part 2
This and all episodes at: https://aiandyou.net/ . If you want straight talk about today's overheated AI in robotics applications, you would want someone as direct as, say, an F-18 pilot. And that's what we've got, in Missy Cummings, one of the US Navy's first female fighter pilots (yes, that Top Gun) and now professor researching AI in safety-critical systems at George Mason University and director of Duke University's Humans and Autonomy Laboratory. She recently spent a year as Safety Advisor at the National Highway Traffic Safety Administration where she made some very candid statements about Tesla. In part 2 of our interview, hear what Missy thinks about Tesla, ChatGPT, and Boston Dynamics; the truth behind that dogfighting AI, the possibility of complete automation of air travel, how AI would handle air emergencies, and more. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 17 Apr 2023 - 42min - 185 - 147 - Guest: Missy Cummings, Robotics Professor and Former Fighter Pilot, part 1
This and all episodes at: https://aiandyou.net/ . If you want straight talk about today's overheated AI in robotics applications, you would want someone as direct as, say, an F-18 pilot. And that's what we've got, in Missy Cummings, one of the US Navy's first female fighter pilots (yes, that Top Gun) and now professor researching AI in safety-critical systems at George Mason University and director of Duke University's Humans and Autonomy Laboratory. She recently spent a year as Safety Advisor at the National Highway Traffic Safety Administration where she made some very candid statements about Tesla. From aircraft safety to the true performance and economics of autonomous vehicles, Missy gives us her unvarnished views in this first half of an unmissable interview (see what I did there?). All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 10 Apr 2023 - 32min - 184 - 146 - Guest: Tigran Petrosyan, Annotation Expert
This and all episodes at: https://aiandyou.net/ . With the advent of GPT-4, annotation has come to the forefront of attention as the power of interpreting images becomes prominent. But what is annotation, how does it work, what does it mean, and what can you do with it? Getting us those answers is Tigran Petrosyan, founder and CEO of SuperAnnotate, and expert on annotation. Tigran holds a master's degree in Physics from ETH Zurich and has post-graduate experience in biomedical imaging. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 03 Apr 2023 - 35min - 183 - 145 - Guest: Elizabeth Croft, Professor of Robotics, part 2
This and all episodes at: https://aiandyou.net/ . Robots - embodied AI - are coming into our lives more and more, from sidewalk delivery bots to dinosaur hotel receptionists. But how are we going to live with them when even basic interactions - like handing over an object - are more complex than we realized? Getting us those answers is Elizabeth Croft, Vice-President Academic and Provost of the University of Victoria in British Columbia, Canada, and expert in the field of human-robot interaction. She has a PhD in robotics from the University of Toronto and was Dean of Engineering at Monash University in Melbourne, Australia. In the conclusion of our interview we talk about robot body language, how to deal with a squishy world, and ethical foundations for robots. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 27 Mar 2023 - 28min - 182 - 144 - Guest: Elizabeth Croft, Professor of Robotics, part 1
This and all episodes at: https://aiandyou.net/ . Robots - embodied AI - are coming into our lives more and more, from sidewalk delivery bots to dinosaur hotel receptionists. But how are we going to live with them when even basic interactions - like handing over an object - are more complex than we realized? Getting us those answers is Elizabeth Croft, Vice-President Academic and Provost of the University of Victoria in British Columbia, Canada, and expert in the field of human-robot interaction. She has a PhD in robotics from the University of Toronto and was Dean of Engineering at Monash University in Melbourne, Australia. In the first part of our interview we talk about how she got into robotics, and her research into what's really happening when you hand someone an object and what engineers need to know about that before that robot barista can hand you a triple venti. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 20 Mar 2023 - 34min - 181 - 143 - Guest: Melanie Mitchell, AI Cognition Researcher, part 2
This and all episodes at: https://aiandyou.net/ . How intelligent - really - are the best AI programs like ChatGPT? How do they work? What can they actually do, and when do they fail? How humanlike do we expect them to become, and how soon do we need to worry about them surpassing us? Researching the answers to those questions is Melanie Mitchell, Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. She is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her recent book, Artificial Intelligence: A Guide for Thinking Humans is a thoughtful description of how to think about and understand AI seen partly through the lens of her work with the polymath Douglas Hofstadter, author of the famous book Gödel, Escher, Bach, and who made a number of connections between advancements in AI and the human condition. In this conclusion of our interview we talk about what ChatGPT isn't good at, how to find the edges of its intelligence, and the AI she built for making analogies like you'd get on the SAT. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 13 Mar 2023 - 29min - 180 - 142 - Guest: Melanie Mitchell, AI Cognition Researcher, part 1
This and all episodes at: https://aiandyou.net/ . How intelligent - really - are the best AI programs like ChatGPT? How do they work? What can they actually do, and when do they fail? How humanlike do we expect them to become, and how soon do we need to worry about them surpassing us? Researching the answers to those questions is Melanie Mitchell, Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. She is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her recent book, Artificial Intelligence: A Guide for Thinking Humans is a thoughtful description of how to think about and understand AI seen partly through the lens of her work with the polymath Douglas Hofstadter, author of the famous book Gödel, Escher, Bach, and who made a number of connections between advancements in AI and the human condition. In this first part we’ll be talking a lot about ChatGPT and where it fits into her narrative about AI capabilities. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 06 Mar 2023 - 37min - 179 - 141 - Special Episode: Understanding ChatGPT
This and all episodes at: https://aiandyou.net/ . ChatGPT has taken the world by storm. In the unlikely event that you haven't heard of it, it's a large language model from OpenAI that has demonstrated such extraordinary ability to answer general questions and requests to the satisfaction and astonishment of people with no technical expertise that it has captivated the public imagination and brought new meaning to the phrase "going viral." It acquired 1 million users within 5 days and 100 million in two months. But if you have heard of ChatGPT, you likely have many questions: What can it really do, how does it work, what is it not good at, what does this mean for jobs, and... many more. We've been talking about those issues on this show since we started, and I've been anticipating an event like this since I predicted something very similar in my first book in 2017, so we are here to help. In this special episode, we'll look at all those questions and a lot more, plus discuss the new image generation programs. How can we tell an AI from a human now? What does this mean for the Turing Test, and what does it mean for tests of humans, otherwise known as term papers? Find out about all that and more in this special episode. Transcript and URLs referenced at HumanCusp Blog.
Mon, 27 Feb 2023 - 1h 09min - 178 - 140 - Guest: Risto Uuk, EU AI Policy Researcher, part 2
This and all episodes at: https://aiandyou.net/ . I'm often asked what's going to happen with AI being regulated, and my answer is that the place that's most advanced in that respect is the European Union, with its new AI Act. So here to tell us all about that is Risto Uuk. He is a policy researcher at the Future of Life Institute and is focused primarily on researching policy-making on AI to maximize the societal benefits of increasingly powerful AI systems. Previously, Risto worked for the World Economic Forum, did research for the European Commission, and provided research support at Berkeley Existential Risk Initiative, all on AI. He has a master’s degree in Philosophy and Public Policy from the London School of Economics and Political Science. In part 2, we talk about the types of risk described in the act, types of company that could be affected and how, what it’s like to work in this field day to day, and how you can get involved. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 20 Feb 2023 - 31min - 177 - 038 - Guest: Beth Singler, Anthropologist and Filmmaker, part 1
This and all episodes at: https://aiandyou.net/ . When you combine anthropologist, filmmaker, and geek, you get Beth Singler, Research Fellow in Artificial Intelligence at the University of Cambridge. Beth explores the social, ethical, philosophical and religious implications of advances in artificial intelligence and robotics and has produced some dramatic documentaries about our relationship with AI: Pain in the Machine and its sequels, Friend in the Machine, Good in the Machine, and Ghost in the Machine. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 08 Mar 2021 - 34min - 176 - 037 - Guest: Steve Shwartz, AI entrepreneur/investor, part 2
This and all episodes at: https://aiandyou.net/ . Steve Shwartz is a serial software entrepreneur and investor who holds a PhD in cognitive science from Johns Hopkins University and did postdoctoral research in AI at Yale. He is the author of the new book Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity, published by Fast Company Press on February 9. In part 2 of our interview, we talk about "artificial intelligence and natural stupidity" (we had to get that one in eventually, didn't we?), impacts on employment and Steve's take on the Oxford Martin study, and... common sense. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 01 Mar 2021 - 25min - 175 - 036 - Guest: Steve Shwartz, AI entrepreneur/investor, part 1
This and all episodes at: https://aiandyou.net/ . Steve Shwartz is a serial software entrepreneur and investor who holds a PhD in cognitive science from Johns Hopkins University and did postdoctoral research in AI at Yale. He is the author of the new book Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity, published by Fast Company Press on February 9. We talk about bias, explainability, and other current problems with machine learning, plus... horse racing. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 22 Feb 2021 - 28min - 174 - 035 - Guest: Michael Wooldridge, Oxford University Professor, part 2
This and all episodes at: https://aiandyou.net/ . We continue the interview with Michael Wooldridge, head of the Department of Computer Science at Oxford University and author of A Brief History of Artificial Intelligence, an introductory look at AI published in January 2021 by Flatiron Books. He's been working on AI for 30 years and specializes in multi-agent systems, which we talk about. He's written over 400 articles and nine books, including the Ladybird Expert Guide to Artificial Intelligence. We cover a huge amount of ground, from autonomous weapons and self-driving cars to Michael's work on multi-agent systems and the potential for my Siri to talk to your Alexa. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 15 Feb 2021 - 40min - 173 - 034 - Guest: Michael Wooldridge, Oxford University Professor, part 1
This and all episodes at: https://aiandyou.net/ . My guest this week is Michael Wooldridge, head of the Department of Computer Science at Oxford University and author of A Brief History of Artificial Intelligence, an introductory look at AI published last month by Flatiron Books. He's been working on AI for 30 years and specializes in multi-agent systems, which we talk about. He's written over 400 articles and nine books, including the Ladybird Expert Guide to Artificial Intelligence. We cover a huge amount of ground, from recent changes in AI to ways of judging artificial general intelligence, to the challenges AI faces in dealing with the real world. All this plus our usual look at today's AI headlines. Here's the link to my live class mentioned in the episode: https://bit.ly/UVicAIandYou Transcript and URLs referenced at HumanCusp Blog.
Mon, 08 Feb 2021 - 40min - 172 - 033 - What Is AI? A quick tour of the tech
This and all episodes at: https://aiandyou.net/ . "What is AI?" That question is one of the ones in the opening credits of this podcast, and in this episode, I'm going to give you a whistle-stop tour of what AI is. No computer experience required; if you've no idea how AI is built and what makes it tick, this will get you off to a good start. If you've already got some chops in computer software, then this episode may help you grasp how to explain AI to your friends. I'll go from the beginnings of GOFAI to the latest capsule networks, talking about how they're built and some of their limitations. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Mon, 01 Feb 2021 - 34min
Podcasts similar to Artificial Intelligence and You
- Global News Podcast BBC World Service
- Kriminálka Český rozhlas
- El Partidazo de COPE COPE
- Herrera en COPE COPE
- The Dan Bongino Show Cumulus Podcast Network | Dan Bongino
- Es la Mañana de Federico esRadio
- La Noche de Dieter esRadio
- Hondelatte Raconte - Christophe Hondelatte Europe 1
- Affaires sensibles France Inter
- La rosa de los vientos OndaCero
- Más de uno OndaCero
- La Zanzara Radio 24
- Espacio en blanco Radio Nacional
- Les Grosses Têtes RTL
- L'Heure Du Crime RTL
- El Larguero SER Podcast
- Nadie Sabe Nada SER Podcast
- SER Historia SER Podcast
- Todo Concostrina SER Podcast
- 安住紳一郎の日曜天国 TBS RADIO
- The Tucker Carlson Show Tucker Carlson Network
- 辛坊治郎 ズーム そこまで言うか! ニッポン放送
- 飯田浩司のOK! Cozy up! Podcast ニッポン放送
- 武田鉄矢・今朝の三枚おろし 文化放送PodcastQR