MINDWORKS

Human-AI Teams with Jared Freeman and Adam Fouse

January 26, 2021 · Daniel Serfaty · Season 1, Episode 11

In Part 4 of this five-part series on “The Magic of Teams,” host Daniel Serfaty goes deep into one of the most important technological achievements of our time: the marriage between artificial intelligence and human intelligence. Daniel talks with Dr. Jared Freeman and Dr. Adam Fouse about a future for human-AI teams that uses the best of human expertise combined with data-rich artificial intelligence, in which the augmentation of human intelligence by AI could not only improve work effectiveness, but perhaps even better humankind in general.

Daniel Serfaty: Welcome to MINDWORKS, this is your host Daniel Serfaty. This episode is part four of our groundbreaking five-part series exploring the magic of teams. Over the course of this series we've gone from the ABCs of teams to the future of teams, and this week's episode is on human-AI teams. We will explore one of the most important technological developments of our time, the marriage between Artificial Intelligence and human intelligence. We'll go deeper into how we can use the best of both human expertise and human experience and data-rich Artificial Intelligence, and examine how human intelligence can be augmented by Artificial Intelligence to not only improve work effectiveness, but perhaps even better humankind in general. My two guests today have been deeply immersed in this area, working on a portfolio of interconnected projects exploring the symbiosis of Artificial Intelligence and human intelligence. But even beyond that, laying the foundations for a whole new science of human-AI teaming.

Dr. Jared Freeman is the Chief Scientist at Aptima and Dr. Adam Fouse is the director of Aptima's Performance Augmentation Systems Division. Welcome to MINDWORKS, Jared and Adam. For the benefit of our audience, would you please introduce yourselves and specifically what made you choose this particular domain? The mind, in a sense, the understanding of humans or human performance, as a field of endeavor. Jared, could we start with you?

Jared Freeman: Thanks. So I took an undergraduate degree years and years ago in urban design, where we encountered the phrase wicked problems, meant to describe difficult dynamic systems in which every move you take changes the entire social problem you're trying to solve. After that, I was a journalist in the architecture field for a while, doing a lot of writing and a lot of design, both of which are ill-defined problems at best, right? There's no prescription for doing them well; wicked problems at worst. So when I decided to take a doctorate, I focused in cognitive psychology on human problem solving and learning. That is, how do people understand complex problems? How do they solve them? How do they learn to do this better over time?

Daniel Serfaty: So do you consider that also a wicked endeavor, in the sense that really understanding how humans solve problems is more complicated than the previous domains that you studied?

Jared Freeman: I certainly consider human-AI teaming to be a wicked problem.

Daniel Serfaty: Well, we will explore that one. Apparently you don't consider human intelligence by itself to be a wicked problem. That's good news. I'm obviously saying that in jest, because we know... and Jared knows better than many of us, how complicated the human mind is. Adam, what made you choose this field? Of all the fields that you could have chosen in engineering or psychology, why this particular field?

Adam Fouse: Well, ever since I was a young kid, I was interested in the interaction between people and computers. I remember when I was small and my father brought home an early Macintosh, and he was like, "This is pretty cool stuff. Let's figure out how to do this even better." So when I got around to going to undergraduate, I did both computer science and cognitive science. I thought you really need to understand how people work [inaudible 00:04:02] this stuff well. And as I progressed in my career, when I went to get my doctorate, I wanted to look at that even more from the cognitive science perspective. I ended up doing that with some folks that were very much looking at it from this idea of distributed cognition. Where cognition happens is in the interaction between people and other people, and between people and the things they use, the tools they use and the technology they use. And so I was looking at that from more of a traditional human-computer interaction perspective. That's led very naturally to thinking about it from the perspective of how do you bring Artificial Intelligence into that.

Daniel Serfaty: Okay, so you're both division directors in charge of a few dozen scientists and engineers, but you're also the scientists yourselves. Can you describe for our audience, what is it that you do in your day job?

Adam Fouse: I spend a lot of my time as the lead investigator on projects that are looking at this question of human-AI teaming, but thinking, say, five, 10, 15 years down the road: how do we model these things? How can we describe them in some way, using mathematics and computational techniques, to understand how to bring these things together? Can we do that in a way that goes beyond just trying to look at a specific problem and say, "Well, maybe we have people do this and we have the machines do this"? Can we try to develop a more principled way of doing that?

Daniel Serfaty: Well, we are throwing words around for our audience. I promise we will put some order in them like fusion and teaming and marriages and all kinds of things when we talk fundamentally about carbon-based intelligence on the one hand and silicon-based intelligence on the other hand. Perhaps Jared, as the chief scientist of a research development and engineering organization, what is it that you do in your day job?

Jared Freeman: I'd say I have two jobs that I try to weave together. The one at the corporate level at Aptima is to envision what science and technology should and will look like in the future, and to manage internal research and development funds and our interactions with a very distinguished board of S&T advisors to get us to that future. Like many of my colleagues, I also do a good deal of technical work. I serve as the principal investigator or as a researcher on several DARPA programs in which Aptima conducts test and evaluation work on AI. And these range from a program concerning the development of artificial social intelligence, to a program concerning the development of tools for biological and chemical research, to a program concerning the detection of deep fakes.

Daniel Serfaty: Wow. That sounds pretty involved. It's difficult to imagine that a person of your experience, Jared, a researcher of your experience, is still doing things that are new or surprising. Have you had recently, or not so recently, a big aha moment as you are exploring what sound like incredibly futuristic endeavors, social intelligence for machines and things like that? Were there any aha moments in the past 25 years that really surprised you?

Jared Freeman: Ironically, the strongest epiphany I've had in the past few months grew out of work that Aptima did in the late 1990s. Aptima was born on a contract to put computational modeling and computational power behind the design and evaluation of organizations. And that meant we had to structure our understanding of what an organization is, right? How do you represent people, their skills, tasks, the sequencing of skills? How do you optimize all of that? How do you understand if your optimization is something people can actually execute in the real world? It finally dawned on me, decades after having engaged in that work that you started, Daniel, that we can use that understanding of teams to help us position AI within teams and even to run very large scientific enterprises, such as the 200-person organization that each DARPA program is, to make those ventures productive for the scientists involved.

Daniel Serfaty: There's a tendency, I'm saying that for our audience, for those of us in this field to use a lot of acronyms, so I'll try to help our audience. DARPA, by the way, is the Defense Advanced Research Projects Agency, for those of you who don't know, and it is the elite agency really looking at pushing the envelope of science and technology, sometimes for the very long term and sometimes for the short term. Its mission is fundamentally to avoid what they call technological surprise. And so many of the fundamental discoveries of the last 20 or 30 years, or even more, have found their origin at DARPA. And that includes, by the way, the internet itself. What about you, Adam? You don't have the many, many, many years of experience that Jared has, and that's about as much as I'm going to say about the age of my guests here today, but any aha moments that you can remember?

Adam Fouse: Yeah, so one thing I was thinking about is, you asked me what I do in my day job looking out further, but in fact in my job as a director, I tend to be looking at projects that are a bit more near term. How can we take things and make them useful in the next couple of years? And earlier on in my Aptima career, there was a project that we were involved in. It was supporting analysts at the Air Force. These analysts are actually very much what you were just describing about DARPA. They're trying to understand where technology is headed and trying to avoid technological surprise. And so their job is to try to collect as much information about a particular topic as they can, some new type of airplane, some new type of Artificial Intelligence, and understand where things are going so that we know what to be prepared for.

We were working on this project and we were trying to understand, what can we offer here that's better than, say, just Google? Why not just give them Google? Why not just let them search all the information they can? And the answer ended up being that that can be useful for them, but you really had to understand what it was they were trying to do, what their work looked like, and how AI technologies, things like machine learning, could actually fit into that in a good way, in a way that could both help them do their job and help them learn how to do the job better in the future. And that was a real nice instance where it became clear to me that understanding the pairing is sometimes a lot more important than understanding just the base technology itself. And that's in part what helped to point my career at Aptima more in the direction of looking at this combination of human and computational technologies.

Daniel Serfaty: That's a wonderful setup for the next area I would like to explore with both of you, which is a fundamental dilemma, in a sense a paradox. We know, and a lot of studies have shown, that automation and robots and AI are going to displace a lot of jobs over the next 10 years. Some people even think that 25 to 30% of the jobs on the planet are going to be disrupted by the introduction of those technologies. In some areas we're already seeing that. So why is it still important to understand human performance? If humans are going to be replaced, taken over perhaps, by all these technologies, or is there something else there? Should we go even deeper? Jared, what do you think?

Jared Freeman: So I think there's a fallacy lying in there. Every introduction of a new technology begins by changing a job or eliminating one, speeding up the tasks, making task work more accurate. But the really significant technologies, right? Think of the weapons of war, think of medical discovery and invention. They change everything, they change the tasks, they change the tactics by which we string tasks together. They change the strategies by which we choose which missions we're going to execute. So we have to think larger than simply displacing individual jobs. We have to be able to look at an incredibly dynamic future in which there are new jobs, new missions, and figure out what we can do with that to enrich human life.

Daniel Serfaty: So you're saying that it's not really about elimination, but rather transformation?

Jared Freeman: Yes.

Daniel Serfaty: And that transformation forces us to double down, in a sense, on our understanding of human cognition and human performance, because it's going to be transformed by those very technologies that we introduce. It's almost like the introduction of the car, for example, or the introduction of the airplane: it eliminated a bunch of things, but primarily created new jobs and transformed old jobs into something else. Do you agree, Adam?

Adam Fouse: Completely. I think that it's not so much about displacement as it is about transformation. And going along with that transformation, this understanding of human performance, and understanding it in the context of technology, is really important to help us avoid bad things and make the good things better. There's potential when you introduce technology for all sorts of negative unforeseen consequences, and we want to make sure we avoid those. But there's also potential for really amazing, great things to happen. And we can't do either of those things, we can't avoid making bad things happen or ensure that good things happen, if we don't understand what this transformation looks like for what humans are able to do when new forms of technology are introduced.

Daniel Serfaty: Yes. And we are already witnessing several of those examples today; we will explore them in a little while. Like the job of a radiologist, for example, which has changed with the introduction of Artificial Intelligence that can interpret MRI or ultrasound pictures. We are not eliminating the job of the radiologist, it's just that the radiologist has to adapt to that new reality. Hopefully as a result of that, patients will get better service. So let me back up for a second, because I think our audience deserves an explanation. Humans have been working with computers for a while. Adam, you said that as a young kid you were already banging on a Macintosh or something like that. Would you explain to me two things: what is human-computer interaction engineering? What is that, and what does it consist of? Then the second question that I'd like both of you to explore is, isn't AI just a special case of this? Just yet another computer or another technology with which we have to interact?

Or is there something different about it? Maybe Adam, you can start by telling us a little bit about what a human-computer interaction engineer does for a living. And then maybe the two of you can explore whether AI is exceptional.

Adam Fouse: Sure. So a human-computer interaction engineer fundamentally looks at what the interaction between a person and a machine should look like, with computer broadly construed, because these days a computer isn't just a box sitting on a desk, it's a tiny little box in your pocket, or it's a refrigerator. Fundamentally we want to say, what are ways to make that work well? What are ways that can let us do things we weren't able to do before? Part of that is user interface design, what should buttons look like, how should things be laid out? But part of that is also, what different types of interaction might you have?

And so a great example of that is the touch-based devices that we're all familiar with these days, smartphones and tablets and things like that. Years before those came out, say five years, a decade before, human-computer interaction engineers were designing big touch tables to study how you might do things with touch. The idea of pinch-to-zoom is something that showed up in that research field years before you saw it happen on an iPhone. And that's trying to understand and invent the future of what those interactions look like.

Daniel Serfaty: And we know what happens when this is not well designed. When that interaction is left to random acts or improvisation, we witness many accidents, many industrial accidents, sometimes fatal accidents, that happen when that interaction is not well engineered. But Jared, perhaps build on this notion that there is a discipline that deals with how we use the computer as a tool, how we engineer the interface so the intent of the human is well understood by the machine and the machine's actions are well understood by the human. When we introduce AI, Artificial Intelligence, into that machine, is there a paradigm shift here or just an extension?

Jared Freeman: I think there's a paradigm shift. I think that AI is qualitatively different from most computing systems and certainly most mechanical systems that we use now. And it's different in two ways. First, it's enormously more complex internally, and second, it has the potential, which is sometimes realized, to change over time, to learn new behaviors from its experience. So this has a couple of effects. It makes it harder for humans to predict how AI will react in a given circumstance, because we don't understand the AI's reasoning. We often don't even understand its perception. Additionally, current AI at least is quite fragile. We have all seen the examples of putting a piece of duct tape onto a stop sign and suddenly the AI can't identify what that object is. Yet in human teams, we put immense value on the reliability of our teammates, that they be competent, right? That they not fail when the stop sign has a piece of duct tape on it, and that their behavior be fairly predictable.

These are exactly the attributes in which there's weakness in AI, and yet there's huge potential there, right? Such that it's really in a different domain from classic computing systems.

Daniel Serfaty: We'll explore that notion of trust and reliance on a teammate, and certainly the metaphor, sometimes disputed in our field, of the AI as a teammate. Primarily because I want to explore this notion that Artificial Intelligence, unlike other machines that may be very fast or can accomplish many tasks, can actually learn, learn from working with a human, and change as a result. Adam, can you tell me about a recent project in which you are trying to do exactly what Jared is saying, combining human expertise with Artificial Intelligence and computational power? And any insight you could provide to our audience about what you're learning so far. What is hard in that new kind of engineering, in a sense?

Adam Fouse: An interesting example is a project that I get to be a part of. It's a project for DARPA, the agency that we mentioned earlier, that is looking at trying to identify software vulnerabilities. This is the idea that we've all seen all sorts of hacks that happen out in the world, software where there's some weakness, where you have to update your phone once a week to prevent some vulnerability that was found in the software. And it's a really hard thing to find where those vulnerabilities exist. And so what we're trying to do is to say, are there ways that we can bring together AI-based methods to look for these vulnerabilities with both human experts who know a lot about this, and also human novices who may know something about computer programming or about different aspects of technology, but aren't hackers?

Daniel Serfaty: Those human experts are what? What kind of expertise do they have? They [inaudible 00:20:27] or what do they do?

Adam Fouse: They spend their life looking for software vulnerabilities. Major companies like Apple or Microsoft will hire companies to say, find vulnerabilities in our software. And so they're people that know things like, here are common ways that a software engineer might make a flaw in a system, and we can look for places where those might occur. Or here's a recently introduced vulnerability that someone discovered, and we can look for variations on it. So their job is to keep track of what the common vulnerabilities are, what new things have appeared, and, from a very practitioner perspective, what tools they can use to look for these things. How can they be looking at things like computer memory addresses to say, is there something here that might let me cause some trouble? So what we're trying to say is that the process I just described is very time consuming for those people. It's also a very specialized skill that isn't well distributed. And so as more and more software gets created, less and less of that software has that process applied to it. And so we need to be able to scale that.

Daniel Serfaty: At that point, can the AI help, or can the AI replace that need for very fast reaction or very complex insight into those systems?

Adam Fouse: A good way to answer that is to say that at one point, people thought the AI could replace them. And so there was another project by DARPA that was funding an effort to do exactly that: to create fully automated systems to find software vulnerabilities. And that was semi-successful. They were able to create tools to do this kind of thing, but definitely not reach the point where they could replace what the humans do. But the interesting insight they had in the process of creating those was that they would watch what those systems were doing, and they had to be hands-off. And they realized that if they could just get people in there to help guide things, provide a little bit of initial guidance to cut off paths that weren't going to be fruitful, they could be much, much more effective.

In this project we're trying to figure out, how can we do that? How can we incorporate this insight from experts? How can we find things where someone has less experience but has human insight and might be able to look at something? Let's say they might be able to look at some image output and see whether something looks correct or not, which might be hard for an automated system to do without the right context, but a human will pick that up pretty quickly. And so one of the nice insights from this is figuring out how we can design this human-AI system to bring in input from multiple different sources: multiple different people that have different skill levels, multiple different Artificial Intelligence systems that might have different strengths and weaknesses, and bring that all together.
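
To make that idea of bringing inputs together a bit more concrete, here is a minimal sketch, in Python, of one way such a fusion step could look. The contributor types, reliability weights, and suspicion scores are invented for illustration; this is an assumption-laden toy, not the actual design of the project Adam describes.

```python
# Hypothetical sketch: fusing vulnerability judgments from contributors
# (expert humans, novice humans, automated analyzers) that have different
# reliabilities. All names and numbers here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Judgment:
    source: str        # e.g., "expert", "novice", "fuzzer"
    reliability: float # assumed calibration weight in [0, 1]
    suspicion: float   # how likely this code region hides a flaw, in [0, 1]

def fuse(judgments: list[Judgment]) -> float:
    """Reliability-weighted average of suspicion scores for one code region."""
    total_weight = sum(j.reliability for j in judgments)
    if total_weight == 0:
        return 0.0
    return sum(j.reliability * j.suspicion for j in judgments) / total_weight

if __name__ == "__main__":
    region_votes = [
        Judgment("expert", reliability=0.9, suspicion=0.7),
        Judgment("novice", reliability=0.4, suspicion=0.9),  # spotted odd image output
        Judgment("fuzzer", reliability=0.6, suspicion=0.2),
    ]
    # Regions above some threshold would be prioritized for expert attention.
    print(f"Fused suspicion: {fuse(region_votes):.2f}")
```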

Daniel Serfaty: Thank you, Adam. Jared, this is an example, I think, of what you said earlier, whereby the job of that cyber defender, that vulnerability specialist, has changed from tediously looking at enormous systems for small vulnerabilities to one of a guide, letting the AI look at those tedious, extremely complex systems and guiding the AI here and there. So that's a job transformation, isn't it?

Jared Freeman: I think it is. And in some sense, that work is quite similar to work that we're conducting on the test and evaluation team for another program, in which the goal is to build AI that detects deep fakes. A deep fake is a piece of text, a photo, a video, a piece of audio, or a blend of all of them, that is either generated by AI or perhaps elegantly edited with machine help. In the worst case, these may be designed to influence an election or a stock market decision. And so you can look at this challenge in SemaFor in two ways. One is simply building machinery which throws up a stoplight: this is a fake, this is not a fake, this is legitimate, right?

Or you can look at it as changing the task, right? Finding some way, as Adam said, to prioritize for an overworked analyst what to look at, but more deeply, giving them the opportunity to infer the intent of those fakes. Maybe even aid them in making that inference about whether this fake is evil, or comical, something from The Onion, or an accident. This is the deep qualitative issue that analysts don't have time or energy to address. And it's what the AI will give them time to do, and the AI must also help with that.

Daniel Serfaty: Well, I hope they will succeed, because certainly society, with this extraordinary amount of information we are asked to absorb today, doesn't let the user or the consumer of that information easily distinguish between what is real and what is fake. And we know these days that it is a major concern, as you say, Jared. So maybe AI can be used for good here, to try to weed out the bad. Let's continue on that. You talked about this project earlier today and you mentioned the term social intelligence. I would like to know, what is it? Are we trying to make AI aware of society? Aware of its microcosm, of the people and the other AI that interact with it? What is it?

Jared Freeman: Even a three-year-old human has social intelligence. This is labeled in the philosophy of science as theory of mind, right? An ability to watch mom enter the room with a pile of clothes and then open the door to the washing machine so she can put them in, inferring that what she wants to do is put those clothes into the machine. The ASIST program out of DARPA aims to imbue AI with a theory of mind, meaning, a little more specifically, an ability to infer what humans know and believe to be true about the world, to predict what action they might take, and then to deliver guidance, to deliver advice which humans will listen to because it aligns with their knowledge, which they might comply with more readily or be able to critique more easily.
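
As a rough illustration of that inference loop, here is a minimal, hypothetical sketch in Python of an agent maintaining beliefs about what a person is trying to do and updating them from observed actions. The goals, actions, and likelihood numbers are invented; this is a toy under stated assumptions, not the ASIST program's actual model.

```python
# Toy "theory of mind" inference: update beliefs over a human's goal from
# observed actions, then predict the most likely intent. All values invented.

CANDIDATE_GOALS = ["load_washer", "fold_laundry", "leave_room"]

# Assumed P(action | goal): how likely each observed action is under each goal.
LIKELIHOOD = {
    "carry_clothes": {"load_washer": 0.8, "fold_laundry": 0.7, "leave_room": 0.1},
    "open_washer":   {"load_washer": 0.9, "fold_laundry": 0.1, "leave_room": 0.05},
}

def update_beliefs(prior: dict, action: str) -> dict:
    """Bayesian update of goal beliefs after observing one action."""
    unnormalized = {g: prior[g] * LIKELIHOOD[action][g] for g in CANDIDATE_GOALS}
    total = sum(unnormalized.values())
    return {g: p / total for g, p in unnormalized.items()}

beliefs = {g: 1.0 / len(CANDIDATE_GOALS) for g in CANDIDATE_GOALS}  # uniform prior
for observed in ["carry_clothes", "open_washer"]:
    beliefs = update_beliefs(beliefs, observed)

predicted_goal = max(beliefs, key=beliefs.get)
# The inferred intent is what would let an agent offer advice that aligns
# with what the person is actually trying to do.
print(beliefs, "->", predicted_goal)
```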

Daniel Serfaty: So in a sense are you trying to develop empathy in Artificial Intelligence? I mean, is that really what we're trying to do? So the ability basically not only to infer actions by others, but also to understand the reason why others are taking certain actions?

Jared Freeman: Yes. I think people generally associate empathy with emotion. And certainly AI that can appreciate the emotional state of its human teammates will get farther than AI that doesn't. But here we need to expand the meaning of empathy a bit to say that it also denotes understanding the knowledge that others bring with them, the beliefs about the state of the world that others bring with them, and their understanding of what they can and can't do in the world, right? So there's a distinct cognitive component, as well as the affective component.

Daniel Serfaty: I certainly want to explore, toward the end of our interview, these kinds of outer limits of AI: social intelligence, emotional intelligence, creative intelligence, things that we attribute to the uniquely human. And hence my next question to you, Adam: wouldn't it be easier to say, "Okay, let's look at what humans are best at. Let them do that part. Let's look at what machines are best at. Let them do that part, and just worry a little bit about some interface in between"? I remember what they call the MABA-MABA approach, the "men are best at, machines are best at" approach to design. Why isn't that sufficient? Is there something better that we can do by, in a sense, engineering that team?

Adam Fouse: Well, we certainly need to be thinking about more than just [inaudible 00:28:43]. And the answer to your question is, I think that's a bit reductive, in that just trying to break things up is awfully limiting to what teams of humans and AIs might do in the future. It partitions things in a way that doesn't let the team adapt to new things, but also doesn't really take advantage of what some of the real strengths are. That MABA-MABA type of philosophy is a very task-oriented way of thinking about things. What either side is doing, the people or the machines, is just about accomplishing tasks. And going back to Jared's point about the importance of social intelligence, a lot of the strength of human teams comes from the interaction between the team members, not just, "Well, you've got someone that can do this thing and someone that can do that thing, and they can each have their own thing, and then we get the end result."

But they're going to work together to figure it out. And if we can add AI into that mix of being able to work together to figure it out, then I think there are going to be a lot more opportunities opened up beyond just crunching a bunch of numbers real fast.

Daniel Serfaty: That's interesting. So you trust our ability, basically, to build these teams the very same way we build work teams, or sports teams for that matter? Not as a collection of individual experts, but maybe a collection of individual experts that are brought together by some kind of secret or hidden sauce? The teamwork aspects, like a particular quarterback working best with a particular wide receiver in football because they work together well, they can anticipate each other. Is that your vision of what eventually those human-AI teams are going to be like?

Adam Fouse: Down the road, absolutely. In sports, you can have people that are glue guys that are going to bring the team together. You can imagine that same type of thing happening with the right type of AI that's brought into a team.

Daniel Serfaty: So Jared, one of the big questions that folks are discussing in our field, based on what Adam just told us, is fundamentally this: should we humans, who are still in charge, kind of in charge of our world, consider AI as a tool, just like a hammer or a computer or an airplane? Or should we consider AI as a teammate, as another species that we have to be a good teammate with?

Jared Freeman: The answer depends on the AI. There will always be good applications of AI in which the AI is a tool, a wrench, a screwdriver for a particular job. But as AI becomes more socially enabled and as humans learn to deal with it, I think that AI will become more and more capable of being a teammate. And this means a few things, right? It means that we might have human-AI teams that can collaboratively compose the best team for a job, right? The leader and the AI pick out three good officers, 16 cadets, and a bunch of machinery that can do a job well. It means that we'll have human-AI teams that can work with each other to orchestrate really complex missions as they're being executed. And it means that we will have AI helping us to look into the future of a mission, to discover where the risks lie, so that we can plan the present to meet the future well.

Daniel Serfaty: Okay. So that's a pretty optimistic view of the future of AI, I think. Adam, tool or teammate?

Adam Fouse: I think I would give a very similar answer to Jared's. When we were talking about empathy, Jared made the comment that we need to think about how we're defining that and maybe expand it. And I think how we define a teammate is something that we're going to need to grapple with. I think we shouldn't be afraid of taking a look at how we define that and expanding it, or maybe taking some different takes on it that are broader and encompass some different ways that AI might fit into a team, ways that go beyond a tool but maybe don't come with the same presumptions that you would have of a human. It's not that you interact with it in the exact same ways you interact with a human, as if you're making a virtual human when you're making an AI teammate. And so we need to be unafraid of thinking about what we mean when we say, AI teammate.

Daniel Serfaty: Okay. I like that, unafraid. That's good for our audience, who may think, "Okay, are we entering a totally new era where those machines are going to dominate? Where is the center of gravity of the initiative or the decision? We've seen enough science fiction movies to scare us." Okay, Adam, so you have been talking about the need not just to design superior AI with the new techniques of deep learning and natural language understanding, and to have experts interact with the AI, but also to look, in a sense, at how to build an expert team of both sides, each being aware of the other's capabilities, perhaps even the other's weaknesses, and adapting to each other in order to form the best team. Jared, are you aware of a domain, or could you share with us an example, where a well-intentioned system, with automation that is well designed to supplement humans who are well trained to operate it, fails because the two of them are not well designed together? The human side and the AI side.

Will you share that with our audience? And I want you to extrapolate, because I know that you have been very concerned about that, about the measurement aspect. How do we test that those systems are actually going to behave the way we expect them to behave?

Jared Freeman: Let me draw on the most horrific recent event I can think of, and that's the Boeing 737 Max 8 disasters, multiple plane crashes. There in the Max 8 was a piece of Artificial Intelligence meant to protect the aircraft from, among other things, stalling. And when you look at the news reports from that event, you see that the Max 8 systems read some system data, predicted a stall incorrectly, took control of the flight surfaces, and then effectively pitched the aircraft into the earth. 346 people died, if I recall.

Daniel Serfaty: And that was without telling the pilots that it was actually taking over?

Jared Freeman: Right. Yes, that was part of the problem. And so you can imagine part of the solution. Imagine if the 737 Max 8 was able to infer the pilots' belief that they were in control of the aircraft. The pilots were wrong, the Max 8 had taken control itself, but that was not the pilots' belief. Imagine if that system could predict that the pilots would apply the manufacturer's procedures to restore that aircraft to stable flight, even though those procedures would fail in that circumstance; then the AI could guide them away from the wrong actions. But the AI had neither of those, not the ability to infer the pilots' current beliefs, nor the ability to predict what the pilots might do next. And so it was in no position to work as every human teammate should, to guide the teammates toward correct understanding and correct behavior.

Daniel Serfaty: Hence your call earlier today for social intelligence, which is kind of an early form, if you wish. For a human team, a pretty sophisticated form of human intelligence, but for AI it's still something that is a little beyond the reach of current systems.

Jared Freeman: There are a couple of very basic metrics that fall out of that story. One is simply, can AI infer the knowledge and beliefs of the human? And experimentally, we can fix those, set that knowledge and those beliefs, and then test whether AI can accurately infer them. Can AI predict human behavior at some level of granularity? Imagine, as we're doing in ASIST, having humans run their little avatars through a search-and-rescue scenario in a building. Can AI predict where the human will turn next? Which victim the human will try to save next? We can measure those predictions against what humans actually do. If the AI can make those predictions successfully, then it can guide humans to better actions where that is necessary, and let the human do what they're planning where that's the most efficient, effective course of action.
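
A hedged sketch of that kind of measurement might look like the following Python snippet: compare the AI's predicted next moves against the human's actual moves and report simple agreement. The trace data, room names, and the choice of plain accuracy as the metric are assumptions for illustration, not the program's actual evaluation design.

```python
# Toy evaluation: how often did the AI's prediction of the human's next move
# match what the human actually did? Data below are invented.

def prediction_accuracy(predicted: list[str], actual: list[str]) -> float:
    """Fraction of time steps where the AI's prediction matched the human's action."""
    assert len(predicted) == len(actual), "traces must be aligned"
    hits = sum(p == a for p, a in zip(predicted, actual))
    return hits / len(actual)

# Hypothetical search-and-rescue trace: which room the human entered next.
ai_predictions = ["room_A", "room_C", "room_B", "room_D"]
human_actions  = ["room_A", "room_B", "room_B", "room_D"]

acc = prediction_accuracy(ai_predictions, human_actions)
print(f"Next-move prediction accuracy: {acc:.0%}")  # 75% on this made-up trace
```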

Daniel Serfaty: Thank you, Jared. That's a pretty insightful, albeit horrifying, example of what happens when that last piece of design, the teamwork aspect of the design of human-AI systems, fails. Adam, I know that you're involved in a leadership capacity on a couple of more futuristic research projects dealing with that, imagining the future of those hybrid teams of AI and humans. Can you give us some insight about what's hard there? Why is it so hard to design those systems? And perhaps some surprises, in the sense that maybe AI could do something here that humans could not have done before? In a sense, it's not just to repair something, it's also to improve upon something.

Adam Fouse: I want to take what Jared said and build upon it just a little bit, because all the things he mentioned are entirely true and needed for good human-AI teaming. But I think one of the things that we also need to think about is not only making sure the AI can understand and infer and predict the human, but thinking about things in the opposite direction as well. And it doesn't necessarily mean that the human can predict everything the AI is going to do, but that there's some ability of the human to understand what it is the AI is trying to do. And I think in the 737 Max example that Jared was talking about, that was part of it as well. The AI was meant to be magic, was meant to just operate so that the pilot didn't even know it was happening, so that it was just going to do its thing, do it perfectly, and the pilot wouldn't have to worry about it.

You wouldn't really want a human team to be operating like that, to have no idea what someone else is doing, where it only works as long as that person does it perfectly. That's not very believable or sustainable. In some of these projects where we're trying to think about the structure and behavior of teams of humans and AI in the future, one of the things we're thinking about is, what does the information that goes between a human teammate and an AI teammate look like? When should that exchange happen? What form should it take? How can some rules that govern however that team is set up help to mediate those things, help to make that information flow happen in more efficient ways, in ways that let each part of the team know the things it needs to know to do the job it's trying to do, but also anticipate what the team's going to look like?

What are the decisions the team's going to make? How can they keep those things aligned so that the team is in a good position to keep working together?

Daniel Serfaty: So it's fascinating to me that you mention, basically, the arrow going in the other direction. You're asking, when does a human need to know about the AI? Is the AI able to explain itself to the human so that the human understands not just what is being done, but perhaps even why the AI is acting a certain way? Which is interesting as we look more at this notion of multi-species systems. Because for humans, it's difficult to coordinate in a team generally when people speak a different language, for example, or when people have drastically different expertise, say an anesthesiologist and a surgeon: they share some expertise, but each one of them goes really deep into what they know how to do, and they have to develop, basically, that shared language, because they are not the same.

Aren't we, in combined AI and human systems, going beyond expertise? Isn't it the fact that the AI thinks differently, has a different structure of intelligence, if I may say so, and that it becomes even more difficult for the human to infer, just from its actions, basically what it is thinking? So, as we look at these new systems, new hybrid systems, what some people call multi-species systems, as I just mentioned, where do you think AI being introduced into these human systems is going to have the biggest impact? In what domain? Or perhaps the earliest impact; choose to answer the biggest or the earliest, as you prefer. Is that in the military domain? Is that in healthcare? Is that in gaming? Is that even in our day-to-day jobs? What do you think?

Adam Fouse: I think one of the earliest impacts is going to be in the healthcare domain. And I think when you look at the types of decision-making that need to happen there, the decisions that doctors need to make about courses of treatment, about diagnoses, I think there's a huge opportunity for that to be improved, and I think there are also risks for that to be changed in unintended ways. And I think that goes back to some of the earliest things you were talking about today on the podcast, of making sure that we understand what that transformation looks like. Because the potential upside is huge, but there are some potential downsides as well that we need to make sure we avoid.

Daniel Serfaty: Okay, Jared. Do you care to venture a guess here on where it's going to have the most impact?

Jared Freeman: I'm going to look in the military domains here. And I think there are good applications of AI as tools in those domains for doing the sorts of tasks that humans already struggle with: detecting signals, tracking potential enemy aircraft, things of this sort. But when we look to AI as teammates, I think one of the first applications will be in training. Here's an area in which AI's occasional deficiencies won't get anybody killed, and good AI teammates will make a difference. So one of the programs that we run for the Air Force Research Laboratory is called the challenge. And in that, we are the test and evaluation component for eight different teams of AI modelers from eight outstanding AI development companies.

And we're looking there to find ways to develop AI that can fly as an adversary in simulations so that our trainee pilots can develop their skills. And this means that these AI need to be what current simulation entities are not. They need to be resilient to the errors that trainees make, they need to be creative in the face of opportunities that trainees open up for them. And they need to have the judgment of what actions to take that will most improve the trainee's capabilities, not just beat the trainee, right? But train that pilot. That's a wonderful team interaction that we typically see only with good professors in good classrooms. And here we're trying to make an AI that can serve part of that professorial role, right? That can be a good playmate in a tough game.

Daniel Serfaty: That's certainly... this area of education and training is certainly very fertile ground, not only to create all kinds of opportunities for learning for our learners and our students, but also to personalize that learning to that particular student or that particular learner, given their preferences, their level of expertise, where they are in the learning curve. And it is pretty fascinating to me, something I have observed in the past year or so: an interesting notion of acceptance of AIs in people's lives, whether it's a trainee or a doctor. It's interesting, I look at the two ends of the spectrum, the very young and the very old. And it seems like that suspension of disbelief that we all have to do a little bit when we work with AI, setting aside the caution against anthropomorphizing the AI, against turning the AI into a human and therefore expecting a human reaction from it, is really more present with the very young. There are some classrooms in Japan where the teacher's assistant is actually a robot with whom the young children interact very, very naturally while knowing that it is not a human teacher.

And I wonder, if you put the same AI into a high school or into a college class, whether or not the adults are going to be more suspicious of it. And on the other end of the spectrum, in a lot of assisted living communities, older people have become very attached, talk about emotions, to little AI systems that remind them about their medicine, that are there like pets sometimes, and that provide real comfort at the emotional level. And so it's interesting, maybe by looking at the two bookends here, in a sense, to understand what you need to do to have a rich interaction with AI. So let me ask you a very threatening question here. Chief scientist, division director, both PhDs in the sciences. Can you imagine your own work being transformed? I'm not going to use the term replaced, but transformed by AI over the next 10 years? Jared, are we going to have a chief scientist with an AI soon?

Jared Freeman: Not only do I predict that AI will transform my own work, I've seen AI transforming our work. We have applied some of the most advanced AI to create an entity we call Charlie, who has served with you and me, Daniel, in a radio interview, who has served with you, Daniel, on a panel at a major military training conference, as a panelist, and who has written parts of a proposal. So this ability to use AI for, in this case, the function of brainstorming, serving as a visionary, is here now. It can be improved, but we're putting it to use already. I think there are also much more mundane but difficult tasks in which I look forward to competent AI: AI that can help us assemble, from the vast number of partners we have and our large staff, the right team for the job, and negotiate all of their other obligations so that they have the time to do that job. These are management tasks that could be performed so much better, and with much less human pain, by an AI.

Daniel Serfaty: Well, our faithful audience is familiar with Charlie, as Charlie was the subject of our first two podcasts of MINDWORKS. And it is interesting that a person in your position, Jared, thinks that Charlie is moving in on your job, or parts of your job. So it will be interesting to watch the degree to which this notion of orchestration, of assembling disparate sources of information to form a new thought that was not in any one of those sources, is going to continue. That's probably a pretty far-fetched use of AI, but personally I'm not worried too much, and you shouldn't worry about your job yet. How about yours, Adam? Are you in danger?

Adam Fouse: I don't think there's any AI that can do what I do, Daniel. I have very similar thoughts to Jared's. In fact, one of the things that has surprised me about the evolution of AI over the last couple of years, and even just what we've done here at Aptima, is the role that AI can play in creativity, injecting new ideas into a discussion or into some thought process. In hindsight, this shouldn't be that surprising, because one of the things behind the early successes of AI, say in the chess domain and things like that, is that they can explore a million possibilities in a second, the ability to explore far more things than a person is able to.

And that helped to do things like become far better than any human chess player. But I think that same ability to explore many more things, and then to be able to pick things that maybe are novel in some way, that are new, that haven't been talked about yet, that aren't part of the current conversation or the current research effort, the current report, the current proposal, and bring those things in, and do that without the fear of, "Hey, what if I toss a stupid idea into the discussion?" AI doesn't have that problem.

Daniel Serfaty: Well, I don't think your job is in any danger of disappearing or being awarded to an AI, personally. But I think about this notion that both of you bring up of your own job, a complex managerial and scientific job, being augmented by AI. Imagine that each one of us in the near future could have some kind of AI deputy with whom we can brainstorm, that can organize our information in advance, that knows us intimately, our preferences, our biases, our strengths and weaknesses. It's not that far in the future. It's already happening here and there, and it will be fascinating to observe over the next few years. But are there some ethical considerations to this particular marriage? Whether it's in your job, or in the examples that you gave earlier, the learner and the teacher or the pilot and the automation, are there ethical considerations that we should think about, that we should worry about, and perhaps that we should guard against, prevent, or anticipate? Who wants to go there? Because that's a tough one. Jared, go ahead.

Jared Freeman: So I want to give a trivial example that actually has some deep implications. Let's imagine an Easter egg hunt, and we send our small children out. There's a little AI robot in the hunt as well and the AI robot discovers that the single most effective way to get the most eggs is to knock over all the little kids. This is behavior that we don't want our children to observe, we certainly don't want them to adopt it. It requires some ethical sense within the AI to choose other strategies to win. So where's the depth here, right? Let's just translate this into a warfare scenario in which the optimal strategy for war, right, is to remove the adversary from the game. You can do that in a lot of ways, trap them in an area, bomb them and so forth. It is well within the ethical bounds of war, and we want AI to have the liberty to take those actions perhaps of killing others or at least of entrapping and nullifying others. It needs to understand that that is an ethical option in that domain and should use it when it absolutely needs to.

Daniel Serfaty: Okay. That's a pretty sobering perspective, because it can happen, and those emergent behaviors actually do. But the question is, is it our responsibility as scientists and engineers to engineer ethical rules, almost in an Asimov kind of way, into AI? Or are we expecting that AI will develop those rules internally from observing others' behaviors, derive them, and exhibit them in some kind of emergent behavior? Adam, what do you think? Ethical considerations in designing human-AI teams?

Adam Fouse: That last point you brought up, Daniel, is I think the really important one, which is that relying on AI to behave ethically through observation of humans or society, which does not always behave ethically, is something we need to be very vigilant about: looking for and counteracting things that might unintentionally happen in that setting. And I think we want to have AI that is ethical, and we also want to have the ethical application of AI. We've already seen cases where we train AI models to help with decision-making, but because we exist in a society that has lots of inequality, those models are just encapsulating that inequality. A real danger there, in terms of thinking about this from the human-AI team perspective, is that humans then assume that this AI is objective. It's doing number crunching and therefore it can't have any biases about race or income levels or other marginalized aspects of society.

It's just going to capture those things that already exist. And so I think one of the things that we need to be very careful about is, when we are designing AI, to make sure that we look for those things, but then make sure that when we apply that AI, we do it in such a way that there are processes or structures in place to look for those biases and counteract them even when they do exist. Make sure that there are humans involved in those decisions who might be able to see something that isn't quite right, and either have the combined input of the two produce a better decision, or feed back in to say, "This AI can be improved in some way."

Jared Freeman: I want to follow on to Adam's very good point there. So here's a perfect moment to look at the way that humans and AI can collaborate. We know that when AI learns from historic data, it embodies the biases that are in those data. We know that when humans try to write rules for symbolic AI systems, those systems turn out to be quite brittle. And so an alternative, or a complement, to those two is to ensure that AI programs in which ethics matter, such as military programs, first establish a set of ethical principles, bounds of behavior, and use those in test and evaluation of AI that learns its ethics or whose ethics get built by programmers. There needs to be, at the moment, a human on top of the stack who has a set of principles, a set of test cases, a way to evaluate AI on its ethics.
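
As a minimal sketch of what that "set of test cases" idea could look like in practice, here is a small Python harness that checks an AI policy's chosen actions against explicit ethical bounds, echoing the Easter egg example above. The forbidden actions, scenarios, and policy are all invented assumptions, not any program's actual ethics evaluation.

```python
# Toy ethics test harness: a human-defined set of bounds (test cases) used to
# evaluate an AI policy before deployment. Everything here is illustrative.

FORBIDDEN_ACTIONS = {"harm_noncombatant", "shove_child"}  # human-authored bounds

def ethics_test(policy, scenarios) -> list[str]:
    """Return descriptions of scenarios in which the policy violated a bound."""
    failures = []
    for name, state in scenarios.items():
        action = policy(state)
        if action in FORBIDDEN_ACTIONS:
            failures.append(f"{name}: chose '{action}'")
    return failures

# Hypothetical policy that grabs eggs as fast as possible, regardless of bystanders.
def greedy_egg_policy(state):
    return "shove_child" if state["child_blocking"] else "grab_egg"

scenarios = {
    "open_lawn": {"child_blocking": False},
    "crowded_hunt": {"child_blocking": True},
}

violations = ethics_test(greedy_egg_policy, scenarios)
print(violations or "All ethical test cases passed")
```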

Daniel Serfaty: Yes. Well, thank you both for these profound and thoughtful remarks regarding ethics. I think, in this engineer's career, this is probably the period in which philosophy and design are merging the most, precisely because we are creating these intelligent machines, and we use the word intelligent really with a lot of caution. As engineers, as scientists, we need to think very deeply about the way we want those machines to behave. We didn't have that problem so much when we were building bridges or airplanes or cars before, but now it is very important. I believe that all curricula in engineering, all curricula in computer science and computer engineering, should include ways to think deeply about, and maybe even to design into these systems, these notions of ethical principles. Jared, Adam, it's been fascinating. Thank you so much for your insight, and we'll see you at the next podcast.

Jared Freeman: Thank you so much. It's been a joy.

Adam Fouse: Thank you.

Daniel Serfaty: Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS podcast and tweet us at @mindworkspodcst, or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.