How might software engineers use an ethics protocol to make more mindful decisions about the products they design?
In the previous episode we learned about a project undertaken as part of the Social and Ethical Responsibilities of Computing (SERC) initiative at MIT's Schwarzman College of Computing. In this episode we hear about another SERC project, from Prof. Daniel Jackson and graduate teaching assistant Serena Booth, who have partnered to incorporate ethical considerations into Prof. Jackson and Prof. Arvind Satyanarayan's course 6.170 Software Studio. Jackson and Booth explain that software can fail its users in three ways: First, it can simply work badly, failing to meet the purpose it was intended for. Second, it may do what the user wants it to while simultaneously accomplishing some insidious purpose that the user is unaware of. Third, as Prof. Jackson puts it, it may "contribute to a computational environment that has subtly pernicious effects" on the individual or on society, effects unintended not only by the user but also by the software designer. In their revised syllabus for 6.170, Jackson and Booth attempt to address the second and third types of failure by introducing ethical concerns early in the course and by sharing an ethics protocol to scaffold students' decision-making throughout the software design process.
Relevant Resources:
Social and Ethical Responsibilities of Computing (SERC) resource on OpenCourseWare
6.170 Software Studio ethics assignments
Professor Jackson’s faculty page
Serena Booth’s personal website
Music in this episode by Blue Dot Sessions
Connect with Us
If you have a suggestion for a new episode or have used OCW to change your life or those of others, tell us your story. We’d love to hear from you!
Call us @ 617-715-2517
Stay Current
Subscribe to the free monthly "MIT OpenCourseWare Update" e-newsletter.
Support OCW
If you like Chalk Radio and OpenCourseWare, donate to help keep these programs going!
Credits
Sarah Hansen, host and producer
Brett Paci, producer
Dave Lishansky, producer
Show notes by Peter Chipman
[MUSIC PLAYING] DANIEL JACKSON: These ethical concerns don't come about because of simple binary choices between being good or evil. They come about largely because of lack of awareness.
SARAH HANSEN: Today on Chalk Radio, we're talking about how the ethical decisions of engineers shape the software we use every day.
SERENA BOOTH: Ultimately, through going through this concrete process, the goal is to get you to identify what we call value-laden decision points in your engineering work and to choose better, just through being more informed and deliberate.
SARAH HANSEN: I'm your host, Sarah Hansen. This week, we're in conversation with two MIT scientists who are thinking deeply about how software design impacts our experiences in the world. They're also encouraging students to examine the fault lines between humans and technology and helping students to consider the social and ethical responsibilities of computing. My guests are Daniel Jackson and Serena Booth.
DANIEL JACKSON: I'm a Professor of Computer Science, and I'm also an Associate Director of CSAIL, the Computer Science and AI Lab at MIT.
SERENA BOOTH: I'm entering my fourth year as a PhD student here at MIT.
SARAH HANSEN: Before collaborating in the classroom, Daniel and Serena were both conducting research at the intersection of complex technology and everyday human experiences.
DANIEL JACKSON: The focus of my work in the last few years has been to try and understand design of software not from the point of view of its internal structure, which by the way, is what people actually mean when they talk about designing software. They mean, how do you write the code, how do you organize the code, how do you fit things together, and so on. But rather, design in the sense that anyone out there in the world would use the term, namely design at the interface between machine and human being. And I've become very interested in the question of, essentially, whether we can come up with a kind of grand theory of design that can explain how to design software so that it's more usable, more secure, more safe, and more pleasant.
SERENA BOOTH: I really want to create useful general-purpose robots which collaborate with people. So it's important to me that robots don't replace people, they accentuate the things that we like in our lives and take away some of the chores and the boring things. I think we often forget about the need for the person to work and collaborate with that robot. That's where I'm really interested, trying to get humans to understand how robots work to be able to predict what they'll do next so that you can make this collaboration safe and useful to both human and robot.
SARAH HANSEN: As they've pursued their own research, they've both witnessed how different kinds of seemingly simple design decisions have serious social and ethical implications.
DANIEL JACKSON: I would say I can put them very crudely into three different categories. The first is that it's very easy to design and build a device or a system or an application that simply doesn't fulfill its intended purpose well. A really compelling example of this, for me, is the panoply of medical devices often designed by companies that have great technology but don't have sufficient expertise in software and human-computer interaction design to think about questions related to what kinds of mistakes the users can make.
So there are many horrendous stories of infusion pumps, for example, in which nurses enter incorrect dosage rates, and people are killed or severely injured because, for example, a chemotherapy drug is delivered at much too high a concentration or much too high a volume. But medical devices are just one category of systems that are often poorly implemented in a way that, essentially, doesn't appreciate how important it is to accommodate the kinds of errors that users make. And I should say, there's also a well known and very unfortunate tendency in society to blame users for the errors they make.
As just one example, a horrendous accident, a series of accidents, due to a radiotherapy machine in Panama led to all the nurses involved in it being prosecuted and, I think, some of them going to jail, despite the fact that it was clearly a system design flaw. So that's the first category, things not meeting their purposes.
The second category is that they are designed to meet certain purposes, but those purposes are insidious ones. I'll give you an example of one which is insidious only in a mild way, and that is that a lot of applications provide some kind of notification feature. And you might imagine that's to serve what the word suggests, to notify the users. So Facebook tells you, for example, when there might be posts of interest to you.
But if you dig a little deeper, and you look into the question of how easily you can actually configure those settings and decide what you want to be notified about, you'll discover that, in fact, you have very little control. And what that reveals is that the real purpose of this notification concept is, in fact, to encourage engagement in the site and to draw people to visit more frequently than is actually good for them. That's the second category.
The third category is basically the respects in which designs can contribute to a computational environment that has subtly pernicious effects. And those effects include, for example, marginalizing certain groups of people, advantaging some users over others, encouraging antisocial or undemocratic behaviors, creating damage to community, or even just personally reducing quality of life by making us less reflective or creative or independent, by enslaving us to our devices, by sapping our attention, and all these things that we've become very concerned about with the devices we use, but which are not necessarily inherent in those devices and are instead exacerbated by the design decisions we make.
SERENA BOOTH: I can give you an anecdote from my days at Google. So I'd just graduated from college. I was, like, 22 years old. I'd been at Google for a few weeks, and I was working on Search. And my team came to me, and we were having this crisis. We were redesigning the dinosaurs search page. And nobody was quite sure what to do. We'd gotten a lot of bug reports from parents complaining about our dinosaurs search page because "pterodactyl" didn't appear there, and their children wanted it there.
And so we had this little crisis as a team. We're like, should we put "pterodactyl" on there? It's not really a dinosaur. It's technically a pterosaur, so we would be lying to the children. And because I was a product manager, they were like, OK, Serena, you decide. And so I just made a decision, and then we moved on.
And this is like totally unimportant, right? It doesn't really matter if the kids think pterodactyl is a dinosaur. But it just opens the door, the floodgates, to thinking about how much control we have over how people understand the world. And so, yeah, those sorts of decisions are things I think about a lot.
SARAH HANSEN: Serena and Daniel noticed a gap between the questions they were asking themselves as researchers and what students were being asked inside the classroom.
DANIEL JACKSON: I'd always been troubled by the way in which we had perhaps not only allowed but even encouraged our students to marginalize ethical concerns and sometimes think of them in rather trite terms. One thing that always comes to mind for me is that, when Google was founded, there was this slogan, "Do no evil." And there was this sense, I think, amongst our students and amongst a lot of young technologists that we're good people.
And so it's understandable that we thought we didn't want to do evil. But the truth is, most people don't want to do evil. The people in charge of the companies that do some of the most terrible things to the planet are really not evil people. Most of them are people who love their families and friends, who want to do good in the world.
And these ethical concerns don't come about because of simple binary choices between being good or evil. They come about largely because of lack of awareness. And as we've seen with Google, for example, money corrupts everything. And once you start to be hugely financially successful, unless you have a very strong ethical lens, it's very hard to resist the temptations of money.
And I thought it was very important for us to be able to inoculate our students against some of those temptations and for them to develop the kind of sophistication in their social and ethical understanding of the world that they have in their technological understanding. I will say, also, there was a very simple pragmatic reason that I wanted to be involved with SERC, which was that I didn't know how to do this myself.
SARAH HANSEN: You may remember the acronym SERC from our previous episode with Catherine D'Ignazio, Jacob Andreas, and Harini Suresh. It stands for Social and Ethical Responsibilities of Computing.
SERENA BOOTH: So recently, MIT announced its launch of the College of Computing, which is a new endeavor to try to bring cross-disciplinary computing all together. It's going to really change MIT as we know it. But as part of that, also, they've launched SERC. The goal is to bring all of the different perspectives on social and ethical responsibilities together-- so philosophers, social scientists, cognitive scientists, computer scientists, many more-- and get that coalition of people to help embed ethics education into MIT's curriculum, very broadly.
SARAH HANSEN: In collaboration with SERC, Serena joined Daniel as a graduate teaching assistant for the course 6.170 Software Studio. One of their primary goals was to bring social and ethical considerations into the engineering classroom. That's when they turned to a powerful tool called the Ethics Protocol.
SERENA BOOTH: The Ethics Protocol was developed here at MIT by Milo Phillips-Brown, who's now an assistant professor at Oxford University, and Abby Jaques. They are both philosophers by training, and as they matured in their careers, they started to get more involved in designing ethics-related materials specifically for computing.
So the Ethics Protocol is a tool which tries to assist in their mission, which is to teach ethics as a skill. Ultimately, through going through this concrete process, the goal is to get you to identify what we call value-laden decision points in your engineering work and to choose better just through being more informed and deliberate.
SARAH HANSEN: Daniel and Serena noted that it's neither possible nor ethical to simply hand students a single ethical principle to guide their decision-making. Instead, students are introduced to a set of three lenses that they can apply when assessing a project or idea. The three lenses of the Ethics Protocol are outcomes, structure, and process.
SERENA BOOTH: The outcomes lens asks you to analyze the consequences of a decision or intervention, and that's a very intuitive one to use. The process lens asks you to consider the treatment of your stakeholders. So you should think about things like your stakeholders' autonomy, their consent, their participation. And then that last lens, the structure lens, asks you to analyze how outcomes and process are distributed among the stakeholders. Another good and perhaps more descriptive name for this lens would be a justice lens.
[MUSIC PLAYING]
So in software design, we typically have one definition for stakeholders, somebody who's involved in making, using, or would be likely to use your software. You're building for a relatively small set of people, and you're trying to be very mindful of their needs as you build. But when you think instead about ethical implications, scoping your stakeholders this way is just not reasonable, right? You should also consider all of the stakeholders who are affected or could be affected by your software.
As a concrete example, perhaps we could think about a company like Uber or Lyft. So under the software definition of stakeholders, you probably think about drivers and customers, right? Those are the only two stakeholders who really matter. But under the Ethics Protocol definition of stakeholders, you should think about others, too. So the very obvious example here is traditional taxi drivers who are being undercut by these apps.
It's impossible for you to think of like every possible stakeholder and every potential effect that your software could have. It's just not tenable, right? And so there are things that will be out of your control, and you have to be OK with that. But nonetheless, you should still try to anticipate as much as possible to avoid those unintended consequences downstream.
SARAH HANSEN: I was curious what this all looks like in the classroom, when students begin to put the Ethics Protocol into practice.
DANIEL JACKSON: Our course consists of, essentially, two parts. In the first part of the term, they're learning the basic design ideas and the basic technologies and putting them together and applying them on small, well-contained problems. And then in the second half of the term, they join together in teams, and they take on a larger project of their own invention. And they build that project from concept to implementation and deployment and everything.
We introduce these social and ethical concerns incrementally. We start with the individual assignments by asking them to reflect on the ethical and social ramifications and implications of the assignment they're doing. And we slowly ramp up the intensity and the sophistication of the analysis that we ask from them.
And then, as we move towards the final project, we ask them to begin to take those considerations into account in their own design work. And in the final project itself, we encourage them to develop their ideas using the lenses that Serena described and in the context of this kind of analysis.
SERENA BOOTH: And it was part of their deliverable, which I think is really important. So it wasn't just them talking about it. It was part of their grade. And it felt very real and important. And I think we saw the students really participate and start to integrate this into their thinking, partially as a consequence of it being built into the deliverable.
DANIEL JACKSON: It was actually a surprise even to me, despite my enthusiasm for this material, that we found so many social and ethical concerns even in the relatively narrow context of the rather technical assignments at the beginning. So I'll just give you one example. We have them build what we call Fritter. Fritter is, essentially, a Twitter clone. But just questions regarding, for example, what kinds of posts are shown, what the criteria are for displaying posts (tweets, or "freets," as we call them), what order they appear in, and so on can have social ramifications.
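To make that point concrete, here is a minimal sketch, not taken from the 6.170 course materials, of the kind of value-laden decision point Jackson is describing: two ways a hypothetical Fritter-style feed might order freets, each embedding a different judgment about what users should see first. The type and function names are illustrative assumptions, written here in TypeScript.

```typescript
// Hypothetical illustration only; the names and structure are assumptions,
// not code from 6.170 Software Studio.

interface Freet {
  author: string;
  text: string;
  postedAt: Date; // when the freet was posted
  likes: number;  // a simple engagement signal
}

// Option A: strict reverse-chronological feed.
// Favors transparency and user control; surfaces nothing "for" the user.
function chronologicalFeed(freets: Freet[]): Freet[] {
  return [...freets].sort((a, b) => b.postedAt.getTime() - a.postedAt.getTime());
}

// Option B: engagement-ranked feed.
// Favors "relevance" as measured by likes, which can also amplify
// attention-grabbing content and encourage more frequent visits.
function engagementRankedFeed(freets: Freet[]): Freet[] {
  return [...freets].sort((a, b) => b.likes - a.likes);
}
```

Neither ordering is "correct"; the point of the Ethics Protocol is to make the choice, and the stakeholders it affects, explicit rather than incidental.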
SARAH HANSEN: At this point in the conversation, I started wondering what it was like to teach ethics in the context of cancel culture. But as Daniel and Serena explained, students aren't being taught what to think. They aren't being asked to espouse particular ideologies in the classroom. Instead, they're learning how to ask better questions.
SERENA BOOTH: We try not to be prescriptive. Sometimes that can be quite hard, because we all have quite a strong sense of right and wrong. But instead, what we're trying to do mostly is to get the students to question the decisions that they're making and to try to refine and iterate on their own moral and ethical compasses. And so we want them to develop that understanding for themselves through their own lens and leave MIT with that knowledge.
DANIEL JACKSON: Just because something is subjective, just because we might not all agree on the evaluation of particular ethical consequences, doesn't mean that we can't attain some kind of objective understanding of what it means to have a competent ethical analysis that is appropriately far-reaching, balanced, and deep in different respects. One of the things that we're trying to teach the students is that this isn't about any particular ideology or dogma.
As Serena explained, what we do is we really give them tools. And we provide them with these lenses. And we never suggest that any one ethical evaluation is correct and another one is incorrect. We don't say that Uber should make its decisions according to the impact on the livelihood of existing taxi drivers. Nor do we say that the shareholders of Uber are the most important stakeholders. Nor do we say that the drivers or the passengers are the most important.
The point is that any design decision must be made with a full awareness of which stakeholders you're actually taking into account. And then how you actually do that yourself, well, that's of course, tremendously important. We're not for a minute suggesting to the students that there's any kind of predefined ethical judgments that we expect them to make.
And on the contrary, I think, we'd actually like to complicate their understanding. My goal is always to persuade the students, and myself, that the preconceptions we come in with are too simple-minded and that there's a lot of work to be done to understand things more deeply.
SARAH HANSEN: As they develop these deeper understandings, students in Daniel and Serena's Software Studio course are also challenging stereotypes about the kind of work engineering students can be passionate about.
DANIEL JACKSON: Our students, it turns out, are absolutely passionate about making their mark in the world and about being part of a larger community. I think it's just a question of giving them the opportunity to do this. I suspect that if you asked them, "Would you like to take a course on the social and ethical ramifications of computing?" it might not be at the top of their list. It's actually very much parallel to the experience of the humanities at MIT, where students generally don't rush towards the humanities classes. They're required to take some as part of their General Institute Requirements.
But when they take those classes, typically they love them. And they love them not only because they're a break from the sometimes much more concrete and narrow thinking that they do in their technical classes, but I think because it allows them to bring their whole selves to their work. And it makes them feel that they have agency and that they are actually contributing in a broader and deeper way.
SARAH HANSEN: It isn't just the students who are being prompted to think more deeply and creatively about engineering coursework.
SERENA BOOTH: So I think one of the challenges in embedding SERC materials is that the university experience is like very much a bubble. Normally, you're working on kind of toy problems. You're not shipping software, right? Your decisions kind of don't have social and ethical implications, because probably your software isn't going to leave your hands, really. Your TA will see it. Your professor will see it. But it's still very closely held.
So something that I struggle with is trying to bridge that gap, so that the decisions you make really do have real-world consequences. And so in this class, the way we tried to approach it was by making it realistic. But it's still, to some extent, like building in a sandbox. And so I'm curious about what other educators do to try to make this extremely real to their students, right? How can we make the social and ethical responsibilities questions tangible?
DANIEL JACKSON: The other thing I would say that, for me, is a very big challenge is that we're constantly feeling overwhelmed by the amount of technical material that we need to teach. And we're finding ways to reduce that by preparing the students better in earlier courses, by flipping the classroom more. But I would welcome any ideas that people have on how to switch students out of this mindset, that just learning the details of the programming language or the web platform that they're using or whatever is all that matters and to get them to see beyond that.
[MUSIC PLAYING]
SARAH HANSEN: If you have insights to share about teaching the social and ethical responsibilities of computing, please get in touch with me through the link in our show notes. I'll pass your ideas along to Daniel, Serena, and other listeners. If you're interested in learning more about ethics and software design, or in remixing Daniel and Serena's open educational resources in your own teaching, head on over to our MIT OpenCourseWare website. You'll find all the materials there.
Thank you so much for listening. Until next time, signing off from Cambridge, Massachusetts, and about to go google "pterodactyl," I'm your host, Sarah Hansen, from MIT OpenCourseWare.
[MUSIC PLAYING]
Chalk Radio's producers include myself, Brett Paci, and Dave Lishansky. Script writing assistance from Aubrey Calaway. Show notes for this episode were written by Peter Chipman. And the SERC resource site on OCW was built by Cathleen Nalyzaty. We're funded by MIT Open Learning and supporters like you.
[MUSIC PLAYING]