From the U.S. Government Accountability Office, www.gao.gov

Transcript for: Artificial Intelligence: Watchdog Report Deep Dig Edition

Description: Artificial intelligence or AI will improve human life and economic competitiveness in countless ways over the next few years, but it also raises new questions about how this technology is used and managed. We met with GAO's Chief Scientist Tim Persons and special guest, the former Federal Chief Information Officer Suzette Kent, to discuss.

Released: July 2020

[ Music ]

[Jacques Arsenault:] Artificial intelligence or AI will improve human life and economic competitiveness in countless ways over the next few years. For example, it could help reduce the amount of time it takes to bring lifesaving drugs to market and improve the speed of banking, but it also raises new questions about how this technology is used and managed. Welcome to this Deep Dig edition of the Watchdog Report, where we focus on larger issues. You're going to hear from the people behind the work at GAO, their efforts, and their experiences. You know, things we could dig deeper on. This episode is taking a look at artificial intelligence or AI.

[ Music ]

[Jacques Arsenault:] I'm Jacques Arsenault, and with me today to discuss this fascinating subject, we're talking not only to GAO's Chief Scientist Tim Persons but, for the first time on our Watchdog Report, we're happy to welcome a special guest, Federal Chief Information Officer Suzette Kent. Thank you both for joining me.

[Tim Persons:] Thanks very much.

[Suzette Kent:] Thank you for having us, Jacques.

[Jacques Arsenault:] So, Tim, why is there so much hype surrounding AI? Can you help us tell the difference between what's reality and what is fiction?

[Tim Persons:] Right. So, it's a great question, Jacques, because as you look at AI, the reality-versus-the-hype narrative is very important to understand. Essentially, AI has been a vacuum that's been filled by Hollywood for over half a century. I mean, remember, it's now been 52 years since "2001: A Space Odyssey," and it's a scary movie in parts even though it captured the imagination about technology. But, it was scary because the bad guy in that case was the HAL 9000 computer. I think it spoke to some of the key unsettling issues about AI that, if not properly understood, can engender fear in terms of the narrative on things. So, there are two things. One is the fear or perception of the loss of control, and a second one that really is pervasive in this day and age is the fear of the loss of meaningful work or being replaced. And so, that's the mythology. The reality is different in that it's a technology that's already here. It's pervasive. It's been commoditized, and it's a technology that has its limitations, but it has come a long way over the decades, and it is impacting work. It will impact more routine and repetitive, task-oriented jobs, but on the other hand, it could enable higher, more complex work for people. So, in many ways, it's much more of an augmented type of intelligence versus a replacement.

[Jacques Arsenault:] So, we shouldn't have any fears about the singularity coming in the next few years.

[Tim Persons:] I don't think that's a reasonable concern in this case. Where we are now is we can use AI for common, everyday tasks, but it's not as though AI is ready to, sort of, take control of everything and do it without human intervention.
[Suzette Kent:] And, Jacques, Tim hit the most important point there, which is that even though there's, you know, wild and vivid speculation about what it could do, and it is an incredibly powerful tool, it still requires human direction, human construction, and data that is put together by humans. So, there's a lot of pieces in it around that, you know, control and change, and, you know, there's also hype that it's the silver bullet that's going to be able to do everything. And, you know, kind of wrapping into what Tim said, a lot of power there, but it still requires deeply involved human directors in the process.

[Jacques Arsenault:] So, thinking about that process and the journey that AI has taken up to this point, Suzette, can you talk about how AI has changed over the years and what can we expect to see in the future?

[Suzette Kent:] We have become much more insightful about the critical link between what AI can do, those capabilities, and the data that we have available. Inside the federal agencies, we're getting much better at defining the specific questions that we want to answer, and we learn where AI can be most useful, and that has something to do with the clarity of the question, the available data, how comprehensive it is, the types of controls that are in place, and the types of things that are aligned with the values, ethics, and mission, you know, of those particular agencies. So, I think we're going from wild speculation to on-the-ground use, and as we continue to, I'll call it, move up the food chain on what those uses look like, we learn more about the operating environments that we have to have in place, the infrastructure, and the human interaction that's required, and what we will need to do in the future, not only for the workforce but for people who are consuming the outputs that may come from things that are driven by AI.

[Tim Persons:] How AI's changed over the years is really based upon the convergence of key technologies that we see today, which can be a challenge in one sense, but on the other hand, there's an asset with that in that you can now store that data, and you can compute on that data and do communications with the data with advanced algorithms and things that, you know, heretofore were unimaginable. So, all those things sort of merging together has created ideal conditions for AI.

[Jacques Arsenault:] So, knowing there are these powerful algorithms ingesting data and using it to impact our lives, Tim, could you talk about some of the changes you see AI having on everyday life in the future?

[Tim Persons:] I, for one, would be a big fan of an AI that could read all of my email and triage it and give me exactly the things I need to do today and exactly the most important decisions. I think there are assistants now trying to do that, but that's certainly something that I would love to see personally for my own workflow. But, I think we will see, again, more augmentation of tasks so that it actually replaces things that we might have done that are more mundane or routine or repeatable, and we're going to be leveraging more and more automation with the usage of statistical computing and data and things like that.
Now, I have my kids, for example, already learning to speak to devices, and they're capably interacting with the Internet of Things and AI algorithms to answer questions or help them out or play music or whatever, and they're totally unaware of, sort of, the computer science and mathematics and engineering miracles that have been behind what we have today, and they don't need to be. They can still use the technology and still do things without having to have a master's degree in computer science, and that just gives me hope that what they're going to be doing moving forward in their jobs is leveraging more and more of AI.

[Suzette Kent:] This is an area that actually gets me excited, and when we think about the government uses, Tim and I have talked about the things that happen in everyday life that citizens already experience because data's being collected from so many sources. And, because they experience these in everyday life, as we think about other applications, maybe that's a way for us to get more comfortable. And, what I mean by that is searching and marketing and entertainment, and many of us may have had something suggested to us based on something else that we did. And, that's, you know, using different types of capabilities and inputs to suggest or predict a likely behavior.

[Jacques Arsenault:] What about the ability of AI to take all of this data and allow us to be more prepared for things that as humans we're just unable to predict?

[Tim Persons:] Thinking about these systems, where they're well known is in being able to do a baseline amount of what's called descriptive analysis. So, you can use them, and they can say, well, here's a snapshot of an as-is, kind of, scenario. So, that's great. Now, with the power of AI, we're moving from descriptive into a predictive scenario, where you're talking about what you think will be. And, AI's going to help out with that, just like hurricane tracking and prediction and those sorts of things that we see in everyday life. But, even more exciting, beyond just the predictive part, is then the prescriptive, right? Given what you think will happen, here's what the AI might recommend should be done based upon massive amounts of data and calculations and things that no human, no matter how smart we may be, would be able to ingest and make sense of at one time.

[Suzette Kent:] Some of the things that we've had these great conversations inside the government about: think of something simple with a city, tree maintenance. I don't go just every third month and cut the trees. What if I understood the data from rain, wind, aerial views, you know, city maintenance, and I knew when to go take that preventative action so that a person, a building, or transportation wouldn't be disrupted? Or, if we saw a storm coming, and we know our weather model, you know, can predict what flooding might look like, and we know what it takes to recover from that. At the same time that we're taking actions to save people, we're already moving forward on bringing supplies, you know, food, lumber, construction goods, so we can get back to normal as quickly as possible.

[Jacques Arsenault:] Personally, if I find myself watching Netflix or Amazon Prime, listening to Spotify, or just seeing ads online, it gets me wondering about other ways that AI might be impacting me.
When we come back from the break, Tim and Suzette will talk about something called natural language processing and how that impacts not only everyday customer services but also government services.

[Tim Persons:] Customers that bought what you just bought also buy this.

[ Music ]

You're mentioning, like, movie suggestions as one thing that's had a big impact on our lives, but even in a similar way, just going to a major shopping site or online presence. You know, if I buy something, as soon as I buy it, they'll say, well, you know, customers that bought what you just bought also buy this. Or, if I'm buying a Lego set for what looks like a certain age range, they'll assume, like, well, okay, you must have a daughter of this age range, and let me suggest books by this author or something like that. So, certainly we see that in our everyday lives. I've been thankful in recent years, when I've been traveling on airlines, to be able to change flights by calling an agent that sounded a lot like a person, where just a few years ago, it would sound very robotic. You could absolutely tell that you were dealing with a machine. It was frustrating. You would have to repeat yourself. You would have to revert to pushing buttons. And, now, with what's called natural language processing, you can interact with an agent, so to speak, that's actually a computer AI at the other end and get very pristine service.

[Jacques Arsenault:] Suzette, what are ways federal agencies are using AI to improve government services?

[Suzette Kent:] One overarching one touches on kind of where Tim was going, but with the use of natural language processing, image data, or collected location data, we can do things to make interactions more secure. So, many agencies are using AI as they look at threats. They look at access, and they use it both to protect access to federal networks and data as well as to interpret, you know, what they're seeing. We have experiments in hydroponics. So, how do you use AI to maximize food production with, you know, light and water and the contents of the soil? There's been a lot of discussion across the Department of Transportation and others about autonomous vehicles of many types. And, think about the amount of data that you have to have, you know, for an effective autonomous vehicle. In HHS, you know, right now, we're seeing AI as well as our high-performance compute environments used to accelerate the work to find vaccines and to look at relationships and correlations. We've also used it inside that same department for some operational activities to help with regulations and things like that. We have a community of practice around AI inside the government that's operated between GSA and the CIO Council on the executive branch side so that agencies can share resources and insights and, most importantly, use cases as we are moving down this path of AI.

[Jacques Arsenault:] Suzette, what do you think is the most important component of AI?

[Suzette Kent:] I want to ensure that data stays top of mind. In many cases, we can imagine a lot of uses, but having the data to power that is where we find ourselves in need and spending time, ensuring that not only do we have the volume, but that it is free from bias. And, just because it's historical, there still may be things that we need to address. And, that's why we have side-by-side technology and data initiatives in many of the areas of the agencies, but also as we talk about controls and oversight.
[Jacques Arsenault:] Now, Tim, there have to be enormous challenges in many aspects in getting buy-in at the federal level.

[Tim Persons:] There's a benefits-and-challenges narrative with AI everywhere the federal government may engage with it, right? It could be a regulatory function. It could be a research and development function. It could be operational entities that are trying to solve real-world problems, and everywhere AI goes, there will be a benefits-and-challenges narrative, and understanding that quickly and being able to adapt in an agile manner more toward the benefits while recognizing and mitigating the challenges is the proper narrative across the board.

[Jacques Arsenault:] So, then, what is the federal role in overseeing AI?

[Tim Persons:] So, it's a simple answer, and yet there are complications behind it, meaning it's simple in that the federal government absolutely has a role of oversight in AI. The oversight role, however, is context dependent. Let's take the Food and Drug Administration, a key regulatory entity, determining medical device safety or drug safety. Or, Suzette mentioned vaccine development, a key and exciting area where AI is helping, and it can also help with de-risking the vaccine candidates through the clinical trials process, and so on. And so, it has a key oversight role in that regulatory sense. However, if you shift over to just a different part of HHS and look at NIH, what they're going to want to oversee might be the data quality issues and things that go into the research that are going to inform what we hope are those disease-curing and vaccine-discovering kinds of research activities around the institutes. And so, let me just mention one more, a third, just within the HHS umbrella, which is the Centers for Medicare and Medicaid Services. So, CMS, as it's called, is where you're talking about billing and so on, and they're going to want to have, and they do have, machine learning algorithms that try and make sure that legitimate claims for medical care billed to Medicare are properly and timely responded to or paid. But, at the same time, to protect taxpayer dollars, they want to be able to select out fraudulent or erroneous things and so on, and so you're going to want to have oversight into that process to make sure that the legitimate claims for urgent medical needs and things are timely responded to by CMS, but at the same time, money is finite and so on. And so, you want to protect the taxpayer resource so that more benefit is paid out properly and not improperly or fraudulently.

[Suzette Kent:] When you think about the federal government's role, we have a role both in supporting American industry, and that's some of the external things that we do around AI, as well as an important commitment to citizens about how we use it inside the federal government for those purposes. And, you know, as part of the industries of the future and the President's Management Agenda, there has been a clear marker stated that, you know, we want to be a world leader in this space. And, we want that leadership and the way that we approach AI to be in alignment with American values. And, that means that, you know, we still protect privacy and we don't, you know, discriminate, and we act in a lawful way, and that things that we do are traceable, repeatable, and explainable. And, those all sound like great words, but we have to be able to prove those things.
So, we have some responsibilities around how we think of it in our context as a world leader and, you know, what that looks like in our interactions with American industry. You know, in the federal government, a lot of the AI that we use, just like in our conversation about being a citizen, we're actually acquiring from somewhere else. So, we have to understand it, and those things that we build, we have to build with very specific intention to ensure that they fit the context and the outcomes, and that they are in alignment with the commitments that we've made to serving, you know, that particular mission or set of citizens.

[Jacques Arsenault:] So, context is key. One size clearly does not fit all in approaching oversight of AI, and the way that AI technology is applied to the safety of our automobiles, say, is different from how it's applied to the criminal justice system. But, make no mistake, AI is reliant on data, and many of us are worried about the personally identifiable information or PII in that data. When we come back, we'll talk about how policy makers can think strategically about the use of personal data in government programs.

[Suzette Kent:] What is appropriate in this context, and do I have appropriate, you know, permission from wherever the data came from? If it came from a citizen, is this in line with the purpose for which I collected it?

[ Music ]

One of the areas where there's a lot of debate across the federal agencies is the balance between protecting privacy and the accuracy of the data in the outcomes. An example: if you anonymize data, in doing that, you miss some of the information, like age or gender, that may be very important to the outcome. If you deidentify data, you know, you have to create, you know, tools and structure, depending on what you're going to do with the data on the other end, to put it back together. You have to authorize uses in different ways. So, as policy makers, we have to think back to that context question. What is appropriate in this context, and do I have appropriate permission from wherever the data came from? If it came from a citizen, is this in line with the purpose for which I collected it? If it has to do with, you know, health, safety, and welfare, do I have other reasons for using it? And then, in many cases, what we've currently been doing with the virus is a litmus test there. When the data changes very frequently, how do I ensure the integrity? Ensuring that the data is accurate, has not been manipulated, and is appropriate is another set of work.

[Jacques Arsenault:] So, what about the flipside of that? You know, not having enough data.

[Suzette Kent:] It's the right balance, you know, the right use in the right context. Those are the types of things where agencies are still going through examination, experimentation, and building that perspective.

[Jacques Arsenault:] So, Tim, what can policy makers plan for now in order to be able to adapt and balance opportunities and challenges that AI presents?

[Tim Persons:] What we've heard the community or experts, or even practitioners in their areas, talk about is, of course, baseline, you know, picking up on what Suzette was just saying: more research, clearly, is needed. We are awash in data. It's not always the right data. AI would still suffer from the risk of, you know, garbage in, garbage out, so we need to have a data-centric strategy in terms of research.
Whether you're a conventional research entity like the National Science Foundation or things like that, you also need to think about it if you are in an operational space or if you're a regulator or things like that. And so, more research is needed; that's true for AI across the board, regardless of your mission. The second issue is, again, looking at data. It is the lifeblood of AI. Just like we have blood in our veins that gives us life and so on, so too, you can think about an AI system where, you know, having good data leads to a healthier system and healthy operations. And so, you want to have not only the quality of the data but the access to that kind of data. What's the right-time, right-size, right-place kind of approach to data access? At times, you know, frankly, in the federal government, we don't think in terms of a responsibility to provide; we might think automatically about need to know. And, a lot of that is completely legitimate. It's just that with AI usage added onto data, increasing access can yield better performance and outcomes, albeit, again, you have to identify the risk you're trying to avoid, let's say, you know, to personally identifiable information, or violating privacy and civil liberties. The other thing that I think the government needs to do is essentially have a standardization kind of approach. This is often something that's left off. We do need more research, and we want to be able to have AI operate in its context, but can we think about standardization of data and uniform standards for data and algorithms that can help drive solutions? And, there are science entities or, you know, science societies and things, medical societies, that could help do this as well. This is, I would say, a [inaudible] thing. But, I think that would be important. And then, another thing policy makers can do, really a lot of this is about the human side of things. It's not strictly getting the algorithm and the machine right; we need to invest in human capital. We need to rethink how we're training the next generation to, again, use these technologies to augment how they practice in their various vocations, not necessarily to replace them, but to do so, so that their productivity, their outcomes, and so on can be far beyond what their predecessors could even imagine. And then, I think another thing the market is going to need, in terms of the private sector, because that's going to be important, is regulatory certainty. I think bringing about regulatory certainty is critical with these technologies if you're in a regulatory context. That could be something that's done in a process where, again, cross-sectoral usage of AI to help drive and bring more regulatory certainty could drive better outcomes in terms of American competitiveness and the innovation system.

[Suzette Kent:] Last year, towards the end of June, we updated the AI national research and development strategy, and that was issued directly by OSTP, but it was a representation of input from many of the agencies, and it made commitments around what the government will do for the private sector. One of those, the point Tim just made, was being very careful about partnering with the private sector in developing regulations and not jumping ahead before we have, you know, understanding. It was a commitment to long-term investment. It was collaborative discussions about the ethics and, you know, the legal and societal implications.
It had a lot of elements of reexamining safety, and it also made some pretty bold commitments around those public datasets and environments that we could understand and examine, and then one of the ones that I think is really important for how we go forward is a commitment around workforce needs. And, that was looking at the context of the workforce in light of some of the comments from how we started the conversation: what is the turning point when we need different types of pathways for people to become professionals and operate in this space? I know, as we formed the Select Committee on AI about two years ago, a few of the universities were the very first ones to actually even have a major in AI, and we know, you know, that that's not the only path, you know, into technology. But, I thought it was an interesting indication of where we are in preparing the workforce that it's not something that is completely covered through traditional academia. So, we've made commitments around regulations, commitments around funding, commitments around supporting the workforce, but we're going to continue to iterate on that to ensure that we are building out the technologies, the workforce, and those protocols that we need to be a world leader in this space.

[Jacques Arsenault:] And, Tim, what is GAO doing to ensure artificial intelligence accountability?

[Tim Persons:] From GAO's perspective, we have put together a Comptroller General forum on artificial intelligence accountability. The key goal is to develop an accountability framework around AI systems that can be used not only by GAO but by our federal partners or others to try and get a sense for how one conducts oversight of AI and machine learning systems.

[Jacques Arsenault:] When we come back from our last break, we'll talk to Suzette about her recent announcement that she's leaving her position as Federal Chief Information Officer and her next steps.

[Suzette Kent:] I will share that my next endeavor's going to be deeply focused on many of the topics that we talked about today.

[ Music ]

This has been an incredible opportunity, and the things that we talked about today are square in the center of one of the reasons that, you know, I came out of the private sector to spend time in the federal government, because it is so important that we have perspectives from both the private sector, you know, and government to advance the ball. And, in the near term, I'm not doing anything definitive, but I will share that my next endeavor's going to be deeply focused on many of the topics that we talked about today and how we continue to leverage technology to drive not only U.S. leadership in the world economy, but also improve the lives of citizens in the U.S. Whether it's recovery from a hurricane, or ways that we look at the dynamics of different types of support structures, to creating meaningful lives for citizens, those are the things that are really exciting, and I look forward to continuing to work with Tim and team because it's going to take a unified effort across the Executive Branch and Congress and deep interaction with citizens for us to get this right.

[Tim Persons:] I look forward to working with you as well, Suzette. Thanks very much for that, and thanks, again, for all your leadership and service as our Federal CIO and certainly your leadership in this AI area.

[Jacques Arsenault:] Well, my final question I'll pose to each of you in turn, and I'll start with you, Suzette.
What would you say is the bottom line when it comes to artificial intelligence?

[Suzette Kent:] I think we are still very early in the journey, and there are amazing possibilities that we can imagine. We have to move down that journey, you know, with caution but also with excitement, bounded by American values, the mission of the agency, appropriate context, and a focus on the outcomes, you know, that we're trying to achieve. And, along the way, as we're getting excited about a new technology, we have to ensure that the technology, the business processes, the data, and the workforce and people receiving those services are all at the table as a part of this journey. Unlike many other technologies, this has the ability to move faster, and we have to be very vigilant in ensuring that, you know, everyone has a voice in the discussion about how we use those powers.

[Jacques Arsenault:] And, Tim, what would you say is the bottom line?

[Tim Persons:] This is not the robot apocalypse. We're not going to lose control, and it's not going to destroy the world and take over things and so on. I think that's important because this is really about augmentation of human function and intelligence and not a complete replacement. Which leads to the second part of it, which is that this isn't the job-pocalypse either. It's not going to outsource us all. We just have to think differently. If you think about the way we do our jobs and break things down into tasks, then for those things that are more mundane or repeatable and so on, we should embrace this wholeheartedly to completely augment ourselves so that we can be a grander and better version of ourselves in whatever it is that we do. And then, related to that is that there's no vocation that won't be touched directly or indirectly by AI. I think that when we look back at this timeframe, we won't be able to imagine not having AI around to do things. It's, again, just like electrification, where we can't imagine not having outlets in our walls so that when we plug in our devices, you know, electrons will flow, and we'll be able to do things. So, AI will be that same kind of incredibly transformative technology. I think we should embrace it. I do think that we need to be cautious about it. So, skepticism is warranted, and that's where I think we just need to work on managing those risks around it in the proper context.

[Jacques Arsenault:] So, the risks of artificial intelligence are there, but the tremendous improvements in our lives and the economic advantages that AI provides make the management of these risks, in the proper context, so critical. Thank you to Tim Persons, Managing Director of GAO's Science, Technology Assessment, and Analytics team and GAO's Chief Scientist, and also to Suzette Kent, the Federal Chief Information Officer, for their work in this area and for sharing their efforts and experience on the subject of artificial intelligence technology. And, thank you for listening to this Deep Dig edition of the Watchdog Report. To hear more podcasts, subscribe to us on Apple Podcasts. Leave a review while you're there and let people know about the work we're doing. Follow us on Twitter, Facebook, LinkedIn, and now also on Instagram, and for all things GAO, visit us at gao.gov.

[ Music ]