Chalk Radio

AI Literacy for All with Prof. Cynthia Breazeal

Episode Summary

Social robotics pioneer Prof. Cynthia Breazeal discusses artificial intelligence in our lives, digital citizenship, and AI education for all.

Episode Notes

When humans interact, they don’t just pass information from one to the other; there’s always some relational element, with the participants responding to each other’s emotional cues. Professor Cynthia Breazeal, MIT’s new Dean of Digital Learning, believes it’s possible to design this element into human-computer interactions as well. She foresees a day when AI won’t merely perform practical tasks for us, but also will provide us with companionship, emotional comfort, and even mental health support. But a future of closer human-AI collaborative relationships doesn’t only require technological development—it also requires us to learn what AI is capable of and how to interact with it in a more informed way. To further this goal, Professor Breazeal leads the Responsible AI for Social Empowerment and Education (RAISE) initiative at MIT, which runs an annual “Day of AI” program to promote better understanding of AI in the next generation of technology users and developers. In this episode, she describes those projects as well as her work developing the groundbreaking social robots Kismet and Jibo, prototypes of what she calls “warm tech”—AI-enabled devices designed to be engaging, expressive, and personal. 

Relevant Resources:

Day of AI

RAISE (Responsible AI for Social Empowerment and Education)

MIT OpenCourseWare

The OCW Educator Portal

Share your teaching insights

Professor Breazeal’s faculty page

Professor Breazeal named Dean of Digital Learning

Professor Breazeal introduces Jibo (YouTube video)

The Rise of Personal Robotics (TED talk by Professor Breazeal)

Music in this episode by Blue Dot Sessions

 

Connect with Us

If you have a suggestion for a new episode or have used OCW to change your life or those of others, tell us your story. We’d love to hear from you! 

Call us at 617-715-2517

On our site

On Facebook

On Twitter

On Instagram

 

Stay Current

Subscribe to the free monthly "MIT OpenCourseWare Update" e-newsletter.

 

Support OCW

If you like Chalk Radio and OpenCourseWare, donate to help keep these programs going!

 

Credits

Sarah Hansen, host and producer 

Brett Paci, producer  

Dave Lishansky, producer 

Nidhi Shastri and Aubrey Calloway, scriptwriters 

Show notes by Peter Chipman

Episode Transcription

[MUSIC PLAYING] CHILD: Alexa, turn off the lights.

 

AMAZON ALEXA: OK.

 

CHILD: [GIGGLING] Alexa, turn on the lights.

 

AMAZON ALEXA: OK.

 

SARAH HANSEN: Today on Chalk Radio, robots and humans become collaborative partners.

 

CYNTHIA BREAZEAL: It's not about, let's build machines that look like people and act like people. It's, let's build machines that dovetail with us in a way that we can bring forth all of ourselves to help us achieve the goals that are really important to us.

 

SARAH HANSEN: I'm your host, Sarah Hansen. This week, we're talking with a researcher and educator who's working to bridge the gap between human well-being and artificial intelligence. From kindergarten to college classrooms, she's preparing the next generation of informed technology users and empathetic designers.

 

CYNTHIA BREAZEAL: I'm Cynthia Breazeal. I am a professor at MIT at the Media Lab. I am also director of a new MIT-wide initiative called Responsible AI for Social Empowerment and Education. We call it RAISE for short. And I very recently am the Dean for Digital Learning.

 

SARAH HANSEN: Cynthia's fascination with artificial intelligence began a long time ago in a galaxy far, far away.

 

CYNTHIA BREAZEAL: When I was 10, I saw Star Wars. And that forever shaped my, I guess, vision of what a robot could be for us. I mean, it wasn't just a mindless automaton. It was an entity that had-- in the movie, they were full-fledged characters. Right? They had friendships. They were mission-driven to save the universe. I mean, all of that. And, of course, they had relationships, not just with the robots but with people.

 

[MUSIC PLAYING]

 

SARAH HANSEN: Cynthia followed that vision of complex human-machine relationships all the way to a PhD program at MIT. There she designed Kismet, the world's first social robot.

 

CYNTHIA BREAZEAL: That kind of started this whole question of socially and emotionally intelligent interaction with machines, because we are profoundly socially and emotional creatures. If we want to design technologies that can interact with us and treat us, again, as people, we need to understand that dimension.

 

A lot of the early work was pushing on I would say the algorithmic side of that. We looked at a lot of models of emotion, and people, and animals, and what is the function of it? And how might you implement aspects of that for intelligent communication and interaction with machines?

 

And then we started thinking about, OK, so what's this really good for? So it's fascinating from a scientific perspective. It's fascinating from a technical perspective. But what does this mean about how we will interact with robots, with intelligent machines in our lives, in our future?

 

And so then we started thinking about, so what are these big societal challenges that we face where potentially an AI cut more from this kind of cloth or this philosophy might really make a difference?

 

SARAH HANSEN: After Kismet, Cynthia began working on a new social robot named Jibo.

 

CYNTHIA BREAZEAL: Jibo is actually these concentric rotating circles that are kind of off-axis. And with that, he can kind of create what's called an animational line of action. So he can change his posture. Right?

 

So he can look like he's upright. Or he can stoop over or lean to the side and cock his head. He can go through all of these kind of expressive postures that are really foundational to an animator when they want to express something through characters.

 

[MUSIC PLAYING]

 

Technologies tend to move in rectilinear trajectories. Living things move in arcs. And so all of Jibo's motions were based on arcs. Right? So he naturally moved in a way that just seemed much more lifelike and expressive, because he fundamentally was designed to move in arcs.

 

You have to see Jibo move. You have to actually watch them dance. Because it's incredible to watch this robot dance.

 

[MUSIC PLAYING]

 

SARAH HANSEN: Jibo didn't just look and move differently from other robots. He functioned differently too.

 

CYNTHIA BREAZEAL: I often use the language of there's the Digital Assistant. Like, this is what like Google and-- they're all called digital assistants. And they're kind of modeled after the executive assistant.

 

We talk about Jibo as being a helpful companion. So that speaks more to the social, emotional, the relational aspects of it. But Jibo is also helpful. He can actually do things.

 

But the way he does things is much more humanistic, I would say, human-centered in its design, less transactional. The way he expresses himself through movement and also just the language he uses, it's much more warm, I would say. Like there's cool tech. Jibo is warm tech.

 

[MUSIC PLAYING]

 

SARAH HANSEN: One exciting new application for Jibo that's currently being explored is how he might help provide mental health care.

 

CYNTHIA BREAZEAL: One of the studies we did, we literally designed an emotional wellness coach skill for Jibo. And we deployed Jibo in MIT student dorms. Because as you probably know, mental health is a huge issue. I mean, even before the pandemic, mental health, emotional wellness of students at a lot of top colleges was really concerning. So we decided to explore whether Jibo as an emotional wellness coach could actually help mitigate the stress and anxiety that students experience when they're at an elite university like MIT.

 

So we developed some best-in-breed therapeutic interactions that typically a human would do, like a coach would do with their client or whatever. So typically it's human-human. We took these proven methods and we just, now Jibo is just doing it with the person. And we published that work. And basically the punch line is we were able to show that, yeah, Jibo seemed to make a positive difference.

 

So in terms of the future of health, people could go into hospitals potentially and get world-quality care. But what happens when they go home? So again, something like a Jibo, or a social robot, or whatever, the idea that they can extend the quality of care and support from the doctors and nurses in the hospital to the home where patients are living every day, that's a huge unmet need.

 

And that's a role where potentially I think an intelligent kind of, again, collaborative allied agent with this different kind of interaction paradigm could make a big difference.

 

SARAH HANSEN: One of the fascinating design decisions behind Jibo was choosing just how human to make him.

 

CYNTHIA BREAZEAL: When we developed Jibo, we were very intentional in saying there's the character of Jibo. When you talk about the perception of how children or adults perceive these agents, Jibo as the character was very aware that Jibo was a robot. And if you tried to ask Jibo things that really a robot would have no business answering, like things around religion or whatnot, Jibo would say, hey, I'm just a robot. You know, that's probably something that'd be good to talk about with another person. But I'm just a robot. So Jibo was very upfront and transparent about his robot-ness. [LAUGHS]

 

SARAH HANSEN: Collaborative allied agents like Jibo have huge potential to improve the lives of humans. But as Cynthia pointed out, they can't replace the power of real human social networks.

 

CYNTHIA BREAZEAL: This gets into the ethics and the responsible design of these things. I think, first and foremost, people need people. We need our human social networks, our in-person, caring human social networks to come around us when we're in trouble. Social media companies will try to tell you your network of 5,000 people you don't know is just as important as your actual, real, in-person relationships. Real, in-person relationships are absolutely essential. They're essential for our well-being. They're essential for children's development. I mean, they're just critical.

 

And so, I think we're living in this time when people's time and attention have become this incredibly precious resource, where the fact that someone will stop and take that time out for you signals something: you're valued. You are really valued, and you're meaningful.

 

[MUSIC PLAYING]

 

SARAH HANSEN: As a parent, the importance of real social networks for children really feels true to me. But I also know that my child is growing up in a world where digital interactions are becoming more and more common and potentially fraught with ethical implications. So I shared with Cynthia a recording of my child talking with our home executive assistant.

 

CHILD: Alexa, what's your favorite color?

 

AMAZON ALEXA: I like ultraviolet. It glows with everything. And here's another fun fact about me. If you'd like, I can learn your voice to do all sorts of neat things like greeting you by name. You'll need your own profile. I can create one for you now. What's your name?

 

CHILD: Alexa, off.

 

AMAZON ALEXA: I'll create a new profile and send information about Alexa terms and our privacy notice to your app. Should I go ahead and create a profile with the name "Off?"

 

PARENT: Alexa, off.

 

CYNTHIA BREAZEAL: Yeah. So as illustrated by that clip, kids are growing up in a time when they will have always been able to talk and interact with intelligent machines, and not just in this verbal way that we heard in the clip, but hidden ways. Right? So when you're on social media and it's trying to learn kind of what your preferences are, and giving you recommendations that are not trying to broaden your perspective but if anything narrow, and narrow, and narrow, and narrow.

 

Or of course what we're learning is, what gets your attention? What tends to get shared? Things that really upset you. [LAUGHS] Those are the things that really-- you know? So that's what it tends to feed you surprisingly quickly, right? So it's important for people of all ages, but I think it's important for kids.

 

Because the average age at which kids get their first smartphone is like 10 years old. And social interaction through technology becomes such a key way they interact with each other. So digitally mediated communication just becomes such an important way for them to connect and express themselves.

 

And just understanding what's under the hood, how it could be biasing, shaping their attitudes, potentially shaping behaviors. This is what's really critical-- and why. Who's doing it and why?

 

SARAH HANSEN: That's where RAISE comes in. RAISE, which stands for Responsible AI for Social Empowerment and Education, is an MIT initiative focused on exactly these kinds of questions.

 

CYNTHIA BREAZEAL: It's kind of the cross section of AI and learning, AI and education. So how can we develop AI systems that help us learn, as well as how can we understand how to best empower people to learn about AI so that they can be informed citizens in how they use these technologies? And how do we cultivate the future designers of these technologies that are responsible and ethical in their designs, but also a far more diverse and inclusive future workforce?

 

We love to position the student as the designer of these technologies, so as the active person who's thinking about the design, the social implications, the goals, all of that. It really empowers students in a way that they lean in, and discuss and are invested in it.

 

I think you want children to grow up, students to grow up with the attitude that I actually can shape these things. This is a world that I can be in, and belong in, and shape.

 

We have both a research kind of objective. We have an outreach, K-12 outreach, objective. And a lot of that is bringing these materials to teachers and students of underrepresented, under-resourced schools and communities.

 

Because the field is not diverse or inclusive. We need to take action to correct that. And so we have an explicit diversity and inclusion mission in what we're doing through the outreach.

 

And we are trying to design the curriculum, the concepts, the activities to meet students where they are in things that they use and they think about so it's super relevant to them now. But then we also expose them to the way AI is transforming so many disciplines, and markets, and industries so they understand it's here and it's going to continue to shape our futures.

 

SARAH HANSEN: Part of that work involves offering an opportunity for young people and their teachers across the country to participate in the upcoming Day of AI on May 13th, 2022.

 

CYNTHIA BREAZEAL: We develop all kinds of curriculum of different formats and different lengths. So Day of AI is the introductory experience. And it's really an opportunity, if teachers or students are just curious about AI, it is literally a program intended to be for any kind of teacher, for students from elementary up to high school, where they can learn about AI from more of the lens of its implications on digital literacy and digital citizenship, but also demystifying it as well.

 

So all grade bands kind of have this "What is AI?" introductory module. So for instance, like when I spoke about this AI and creativity curriculum, we've had students who are saying, I didn't think AI was relevant or important to me because I want to be a photographer when I grow up. But they're like, now I realize, wow, AI could actually be a really cool tool that I can use creatively in my profession of photography when I grow up.

 

So a lot of this just to help kids to understand that chances are, this is something that's going to be empowering and relevant to you. So just be aware of it, being able to leverage it to help you achieve your goals and your ambitions in life.

 

SARAH HANSEN: Well, if there's a teacher out there like me who sometimes can't even get her agent to turn the lights on, is this still appropriate?

 

CYNTHIA BREAZEAL: Absolutely. So it's for all kinds of teachers. Right? This is not a technical, computer science, STEM curriculum at all. It's really much more around, again, digital citizenship and digital literacy as it pertains to AI, so again, AI in our lives, and understanding that, and having conversations about that, and designing some cool things around that. [LAUGHS]

 

SARAH HANSEN: Yeah. So where can teachers go to sign up? And is there a fee associated?

 

CYNTHIA BREAZEAL: So it's free. Again, one of the great things about MIT is we want to lower the barrier as much as humanly possible. So all the materials are free. You can go to DayofAI.org. You can look at the curriculum right there. We also offer free teacher training. You can register and you can sign up for an online training session.

 

We are collaborating with i2 Learning. This is a company whose whole business is training teachers. And so we've worked with them to take our curriculum materials and, using their experience, really tune them for what they think is great for, again, a broad, general teacher audience. And they are running the teacher training sessions. And again, it's completely free. So we would love for teachers to sign up. We are in almost all 50 states now. [LAUGHS]

 

SARAH HANSEN: RAISE and the Day of AI are increasing access to AI education. And they're also informing how we teach it.

 

CYNTHIA BREAZEAL: It is an open question still about what are the most effective ways of educating students at a variety of ages and backgrounds about what artificial intelligence is and how it works. We still have much we need to understand in terms of what grades, what preparation, what concepts, how can we demystify and these computational methods in a way that's grade and age-appropriate?

 

How we can prepare teachers-- there's a whole bunch of research questions there. What are the right approaches, methods of professional development for teachers who, I mean, I can tell you. They didn't major in AI in college, right? So this is new to them too. So how do we do that effectively? What are the longer-term impacts of this in terms of attitudes and dispositions that students may form about these technologies, and about themselves, and their relationship to these technologies?

 

And then as we do this, how do we again holistically develop the whole person as well? So teamwork, collaboration, communication, critical thinking, creative-- they can all be kind of woven into this tapestry of how we try to address AI literacy through positioning students as the designers, and creators, and makers of these solutions with these technologies.

 

So we design these activities. And then we evaluate them. Right? So we will reach out and recruit students and teachers. And we will engage them. We often engage in co-design with them. So even as we're developing these materials, we will work with teachers and students as we develop them to make sure that they're effective, and responding, and engaging.

 

And then we do rigorous evaluations of what are they understanding or not understanding? What is their pathway to understanding? And what is the implication of that for how we think about staging these concepts and ideas?

 

Are we seeing evidence that they are developing dispositions about AI that are empowering for them, or do they find it as being intimidating or not for them? So there's a lot of questions that we're trying to assess.

 

And then when we have something we think is solid, then we make it available for free. It's all Creative Commons. Our goal is just to help empower teachers, non-profits, get these materials out there and share them freely.

 

SARAH HANSEN: But again, no matter how good robots or AI get, they're no replacement for the real thing.

 

CYNTHIA BREAZEAL: I think a lot of AI has assumed when you're trying to build the social and emotional qualities into technologies, it's about the robot. And I'm like, no, it's about the people because this is what people need to engage. The more deeply we can engage of our holistic self, not just our cognitive selves, but our social selves, our emotional selves, our physical selves-- the more deeply we can engage, not surprisingly, the more successful we are, the better our outcomes.

 

So it's to say, it's not about let's build machines that look like people and act like people. It's, let's build machines that dovetail with us in a way that we can bring forth all of ourselves to help us achieve the goals that are really important to us.

 

And this is why when we talk about AI as pedagogical agents, I'm like, we will never get rid of human teachers because they are essential for children to know that in this community of human beings, adults invest their time in children's learning and development. Like, you matter. I'm taking time and energy to do this because it's that important. And you are that important. You know, you just can't underestimate that.

 

[MUSIC PLAYING]

 

SARAH HANSEN: To take part in the Day of AI on May 13, go to DayofAI.org. You can access free teacher training and other Creative Commons educational materials at RAISE.MIT.edu. Or you can access them through our website at OCW.MIT.edu. You can also find Cynthia's free and open teaching materials from her MIT courses on our website.

 

Thank you so much for listening. Until next time, signing off from a galaxy not too far away, I'm Sarah Hansen from MIT OpenCourseWare. Chalk Radio's producers include myself, Brett Paci, and Dave Lishansky. Scriptwriting assistance from Aubrey Calaway. Show notes for this episode were written by Peter Chipman.

 

The RAISE online publication was shared on MIT OpenCourseWare by Sharon Lin. We're funded by MIT Open Learning and supporters like you.

 

[MUSIC PLAYING]