[Amanda Prichard:] Hello and welcome, everyone. This is our second panel in the centennial series, Foundations for Accountability: Oversight Issues for the Next 100 Years. My name is Amanda Prichard. I am a senior analyst here in SI, GAO's Strategic Issues Team, and I'm so happy to welcome you to our second panel. Before we get started, I'd like to offer a huge thank you to the Centennial Planning Committee, especially Brody Garner, Carter Russell, Stephen Patansio [assumed spellings] and the other panel leads, as well as our panelists today. Additionally, I'd like to thank the Comptroller General for his leadership and support in our 100th year. And with that, I'll turn it over to Carter to play our Comptroller General's welcome. [Gene Dodaro:] Hello. I'm Gene Dodaro, Comptroller General of the United States and head of the US Government Accountability Office. 2021 marks GAO's 100th anniversary serving Congress and the American people. As part of our centennial celebration, we are pleased to present this webinar series called Foundations for Accountability: Oversight Issues for the Next 100 Years. We rely on a deep pool of expertise within and outside the agency to help monitor changes in public policy and management. In addition to our own people at GAO, we also consult with advisory panels, such as the Comptroller General's Educators' Advisory Panel, independent researchers, and agency managers who implement the policies and programs we audit. We are proud to bring these experts together for webinars covering the following topics: leading practices to manage, empower, and oversee the federal workforce; building integrated portfolios of evidence for decision-making; managing complexity across public policy challenges; the legal context of accountability; and major challenges for the next 100 years. These webinars will explore the goals, conflicts, tensions, and challenges that shape the need for rigorous evidence-based decision-making to improve government and support oversight. They will highlight promising and effective practices that can help achieve these goals and demonstrate what GAO has done and will continue to do to support an effective, economical, efficient, equitable, and ethical federal government. I hope you will find them informative. Please enjoy. [Amanda Prichard:] Excellent. So, before we really get into it, a few housekeeping details. This Zoom webinar will be recorded. We plan to have it posted on our website within the next couple of weeks. As you may have noticed upon joining, all of the audience is muted, and as such, please feel free to submit your questions in the chat, which I will be monitoring as we go. We'll get to as many as we can during the question-and-answer session that follows our panelists' presentations. As the CG mentioned, today's webinar is titled Building Integrated Portfolios of Evidence for Decision-Making. Essentially, today's topic is evidence-based policy-making. As underscored by the other webinars in this series, confronting the complex challenges facing the nation requires an interdisciplinary approach, one that the disciplines of evidence-based policy-making may help with. Evidence, which can include performance information, statistical data, program evaluation, and other rigorous research and analysis, can provide important insights that can guide federal, state, and local decisionmakers.
Over the past several decades, policymakers and academics, like the ones we have with us today, have driven a shift toward a more results-oriented culture across government. Recognizing the importance of evidence in these efforts, Congress and OMB have taken actions to formalize and strengthen federal evidence-building activities. For example, in 2018, the Foundations for Evidence-Based Policymaking Act put into statute a number of requirements for federal agencies to enhance their evidence-building capacities, make data more accessible, and strengthen privacy protections. Similarly, over the same period, advances in technology, computing power, and analytics capabilities provided decisionmakers with new tools to collect, analyze, and use exponentially larger data sets. At present, decisionmakers have more information available to them than at any time in history. GAO has a long-standing body of work encompassing the topics that make up evidence-based policy-making. However, in this work, we've noticed that realizing the promise each of these disciplines has to improve government performance and efficiency can only be achieved by linking the information evidence provides to results and accountability. This is no small task. Today we have five fantastic panelists with deep expertise across the various aspects of evidence-based policy-making. They're here to share their thoughts on this challenge. I'll introduce each of them before they present, and then we'll have about 25 minutes for a question-and-answer panel discussion afterwards. Our first panelist today is Michelle Sager, the managing director of GAO's Strategic Issues Team. Dr. Sager holds a doctorate in public policy and a master's degree in International Commerce and Policy from George Mason University. She earned a bachelor's degree in Communications and Political Science from Truman State University. Here at GAO, she oversees work on government-wide management, governance, strategy, performance, and other resource issues, including a portfolio on evidence-based policy-making. Today, she'll discuss GAO's oversight, insight, and foresight as Evidence Act implementation continues. Dr. Sager, if you'd like to turn on your camera and show your slides, the floor is yours. [Michelle Sager:] Thank you so much, Mandy. It's great to be here today. If you'll give me just a moment to share my screen. I'm having a brief technical difficulty, so please bear with me. [Amanda Prichard:] Yes. Michelle, we can see your screen. [Michelle Sager:] Okay, wonderful. Thank you. I cannot, but we will proceed. First and foremost, thank you so much for the invitation to be a part of this esteemed panel today. It's really a pleasure to be here and to share a little bit about GAO's work over a number of years on evidence-based policy. As we get started, I should note, of course, it is GAO's centennial year, and so it is truly an honor to be able to benefit from what has occurred over literally decades, as many people have been using evidence in a variety of ways since GAO's inception. So today, we're going to focus on some of our current work, but of course this work really reflects work that has occurred over many, many decades in a variety of ways and continues to evolve in the current environment. So very briefly, I'm going to talk about oversight, insight, and foresight as a way to frame the conversation about some of the recent work that GAO has done. So as we move forward, I know a number of our audience members are GAO employees.
And so as I highlight some of our work, all with the goal of more effective, efficient government that exists in a culture of evidence, one of the things that I would be remiss if I did not do is just note that evidence is infused in all of our work at GAO. For GAO employees, one of the things that we do is we are required to understand the yellow book as a way of framing much of our work. And so as we think about evidence and the long history of evidence at GAO, we think about words such as appropriate, relevant, valid, reliable, sufficient, objective, and credible evidence. And what you see here is just an excerpt from the yellow book, otherwise known as the Government Auditing Standards. And evidence is really part of the conversation and critical to how we do what we do at GAO. Nonetheless, it continues to evolve over time. And so what you see here on this current slide are two snapshots of one of those issues that continues to emerge. Certainly, it's something we're all talking about in the current environment of the Covid-19 pandemic, but yet it's the kind of topic that has emerged in our work over many decades. So what you see on the left-hand side in front of you is the front cover of a testimony statement from 1994, from something called our Program Evaluation and Methodology Division, PEMD, which is a prior iteration of the group that one of our later panelists, Lawrence Evans, now leads, our Applied Research and Methods team. And as you can see in this title page, vaccines for children was a topic of interest to Congress back in 1994. What you see on the right-hand side of the slide is a more recent, what we call a spotlight product, from our Science, Technology Assessment, and Analytics team, the STAA team, on vaccine safety, and of course that is a topic that is very important in the current environment and remains of much interest. I mention this just as an illustrative example of how evidence evolves over the decades. Sometimes the topics, the policy issues, remain the same, but of course the context changes. And so here is just one example of how that works. I won't provide an exhaustive inventory of GAO's work in evidence-based policy-making, but I will mention a couple of our signature products in terms of GAO's oversight role and how evidence comes to bear in this particular area. So what you see here are some icons for some of our signature efforts that include a wide range of policy topics, including cybersecurity. I already mentioned the Covid-19 pandemic, where we continue to do cross-cutting, whole-of-government reviews, providing oversight of the use of those funds. In addition, science and technology is increasingly a focus of much of our work, in addition to diversity, equity, inclusion, and accessibility. And this also includes- [Brody Garner:] Ms. Sager, I apologize for cutting in. We are seeing your notes slides instead of your actual slides. If you want to click up at the display settings at the top left of your screen. We're fine with that if you want, but under display settings, you might be able to switch what screen you're displaying. [Michelle Sager:] My apologies. Give me just a moment. And I am not able to change that at this moment. Mandy, I don't know if you're able to. [Amanda Prichard:] Yeah, let me present your slides. Just a second. If you want to just keep going, I will do that right now. [Michelle Sager:] Okay, give me just a moment. [Amanda Prichard:] All right. Can someone else let me know if you can see my screen? [Brody Garner:] Yes, I see it.
[Amanda Prichard:] Excellent. Thanks, Brody. [Brody Garner:] Michelle, are you there? Can you still see her screen? [Michelle Sager:] I am here. Give me just a moment to get my camera back on. Okay. Thank you. Apologies for that. We are back in action. So what you see here are some of the icons for some of our signature work. I believe I've mentioned all of them, but I will just end by saying a couple of them, looking at some of our more traditional areas, the areas at greatest risk of fraud, waste, abuse, and mismanagement, as well as duplication, overlap, and fragmentation, kind of round out our broad oversight portfolio. So as we look at the next slide, I realize this is a lot of information. Of course, I don't expect you to read all of the information in this particular slide. What this is, is a figure that was in our recent reports where we really started looking at agencies' evidence-building activities across the federal government, looking at some selected agencies. And what I want to focus on in this particular slide is just three particular areas of note. One is to note in the call-out box that there was an evidence-based policy-making commission that existed. And this was the impetus for what we're talking about in the current environment. And that commission produced a report in 2017. Many of those recommendations then were reflected in the Foundations for Evidence-Based Policymaking Act, which is currently being implemented. It continues to be implemented. And that will frame much of GAO's work going forward. We have issued a number of reports already and will continue to issue reports. Again, I realize there's a lot of information on this particular slide and I'm not going to talk through it. But I do want to take the opportunity to highlight a recent website that became available that captures much of what you see here. Evaluation.gov now exists. And that is a resource that has been made available where the memos that you see mentioned here, the act itself, and current agency activities are now reflected in a way that, as a resource, as a consumer, as an analyst, you can go to that website and see these resources as a way to keep your finger on the pulse of what is happening in the evidence-based policymaking world. So turning to the next slide, I do want to mention that there are a number of mandates within the Foundations for Evidence-Based Policymaking Act, which we refer to as the Evidence Act at GAO. There are a number of mandates for us to continue our oversight work going into the future. And so, what you see here is just a highlight of what we're required to do under the Evidence Act as we continue to assess the ongoing implementation of that act. I won't read the words on the page, but what you can see is that it spans open data requirements, it spans the federal government's use of evidence, and we're also required to report on agencies' evidence-building capacity going forward. We have started that work already. And this work really reflects some of the insight that we provide to Congress in a cross-cutting way as we look at multiple agencies. So looking at the next slide, you see just a couple of our recent reports, as we've begun to provide not only oversight, but also insight into some of those key areas, looking at data governance, looking at data inventories, looking at selected agencies' initial activities, and then most recently, as you see at the bottom, survey data that talks about agencies' capacity to use evidence.
So as we turn to the next slide, this is a figure from a recent GAO report that highlighted something that we've done for a number of years called the Federal Managers Survey. This particular survey was especially challenging in that it happened during the Covid-19 pandemic. So as we were surveying about 4,000 federal agency managers and following up with them, they had pivoted to working remotely. That required extensive follow-up efforts, and huge kudos to the many individuals at GAO who were part of this effort and made it happen. We had about a 56% response rate from those federal managers. And so we were really delighted to be able to highlight some of those initial findings in this first product, and you'll continue to see more products over time. But what you can see here is that one of the things we did ask about is agency managers' capacity to use evidence and the extent to which they were using it in their current work. And overall, it was a relatively good story in that agency managers are using evidence. Generally, they have the capacity to use evidence, but one of the things that was particularly interesting in this report is that when we disaggregated these results, we saw that there was great variation among federal agencies. And so that is something that we will continue to look at as we continue to analyze not only these results but agencies' ongoing implementation going forward. And so finally, the final slide talks about some of GAO's foresight activities. There's so much to talk about here that I'm only going to hit a couple of key areas, recognizing that I can't possibly begin to touch on everything that GAO has done and will continue to do as we move forward on these and other related topics. So a couple of things that I do want to focus on: at the outset I mentioned something called a culture of continuous learning. That is something that is being embraced, and we're seeing it being embraced at federal agencies across the government. And so one of the things that we will continue to look at is learning agendas, and that culture of continuous learning is also accompanied by a culture of evidence building. I mentioned the website evaluation.gov, and that is an important resource that you can use in the current moment to see what agencies are up to. And over time, one of the things that we also plan to continue to explore is how related initiatives fit together in an integrated portfolio of evidence, looking at areas such as budget, looking at performance. We've had many, many GAO reports looking at the Government Performance and Results Act and the related Modernization Act, but now we are integrating the work that I mentioned at the outset with some of the evidence-based policy-making work going forward. I would also be remiss if I didn't mention some of our recent STAA, or Science, Technology Assessment, and Analytics team, products and projects. We recently had a framework for evaluating artificial intelligence. And so that's really exciting as we think about evidence in an environment that continues to evolve and how we can make these connections across government. In addition, grants management has included a number of innovations over many years, and the entire grants management world continues to evolve. That includes not only performance provisions, but also requirements to continue to build the body of evidence for individual grant programs, as well as working with other grants managers across the government.
And then most recently, the American Rescue Plan Act included additional evidence provisions, so as we continue to provide oversight of the use of those funds by states, localities, territories, and tribes, we will be following their use of evidence to find out what works and what doesn't in this culture of learning and evidence. So in conclusion, I will just mention that, similar to many of the other federal programs, projects, and activities that GAO reviews through oversight, insight, and foresight lenses, one of the things that we have found and will continue to follow going forward is how federal entities and individuals are coordinating in this world of evidence-based policy-making, both across the federal government in an intragovernmental way and with their state and local counterparts in an intergovernmental way. And then, in addition, how they are integrating evidence-building activities with these other cross-cutting initiatives that I've mentioned. So I will end there. I look forward to the conversation, and thank you again so much for this opportunity. Back to you, Mandy. [Amanda Prichard:] Thank you so much, Michelle. All right. Our next presenter is Dr. Kathryn Newcomer. She is a professor at the Trachtenberg School of Public Policy and Public Administration at the George Washington University, where she teaches graduate-level courses on public and nonprofit program evaluation and research design. She is a fellow of the National Academy of Public Administration and currently serves on the Comptroller General's Educators' Advisory Panel. Dr. Newcomer earned a BS in Secondary Education and an MA in Political Science from the University of Kansas, and her PhD in Political Science from the University of Iowa. She has long been recognized as a leader for her work in performance management, government accountability, and program evaluation, and has published no fewer than six books on the subjects. Today, she's here to share her thoughts on the status of evidence building in the federal government. I see you have your camera on, Dr. Newcomer. I will share your slides and you are free to begin. [Kathryn Newcomer:] Thank you very much, Amanda. I'm delighted to be here. I'm a big fan of GAO, and I have had lots and lots of my students, alumni now, as well as grad students, who work there. So this is just a pleasure to be here. I'm going to be giving a very 10,000-feet-up sort of overview of where the federal government has been. Being at GW, I have the opportunity to be pretty close by the federal government and be in touch with what GAO, OMB, and other government agencies are doing. So, okay. What I'm going to be talking about, Amanda, thank you, from my viewpoint, is the momentum. So the good news, the promising elements of the Foundations for Evidence-Based Policymaking Act, and then I'll also talk about what I think are some of the interesting bumps in the road, and the possibilities that we will encounter. Okay. So, in thinking about evaluation in the United States, it seems to me that we have come through sort of three key evaluation imaginaries over the last forty years. We were talking about outcomes in the '80s. Then we were talking about results and results-based management. And then around the turn of the century, the terminology, or the guiding principle, became evidence-based policy. And I think what's interesting about that is that what that actually means is sort of different things to different people. Okay.
In other words, for some, it means that leaders, such as governors, legislators, and members of Congress, should be choosing proven, evidence-based interventions to fund, for example, or using cost-benefit analyses and other similar techniques to find the best, the most proven kinds of strategies. Others talk about evidence-based policy-making as really making programmatic decisions based on impact evaluations and other fairly rigorous evaluations. But then other people talk about data-driven decision making. And what they're talking about, really, are people at the operational level who are looking at and watching the data to figure out how to, you know, target resources, for example, or make changes in program delivery. And so, for example, the recent report that GAO issued about the survey results across government looked at a variety of uses of data that tend to be more at that operational level. Okay. So, as I sort of see it, and I've been able to be watching this for the entire 40 years, there are some seminal events, or I call them key moments, that I feel are extremely important to understand evidence-based policy making in the US. And I think that, going back, the involvement of Jon Baron and his Coalition for Evidence-Based Policy, back during the George W. Bush administration, was key, particularly his role in coming in to provide guidance on when evidence was good enough. And their definition of that was using randomized controlled trials, or in other words, research based on the model used in drug trials. And that was very key because there was a signaling across the federal government that there was a need for more rigorous evidence. You then also saw a variety of clearinghouses that have been established in the last 20 years. The one that's probably most visible is the What Works Clearinghouse. But there are a variety of other organizations, such as within the Administration for Children and Families and the Labor Department. There are a variety of other such clearinghouses that are promoting the use of rigorous studies of interventions. Then, you have, of course, the Commission on Evidence-Based Policymaking, and I just want to highlight two things. Now obviously the recommendations are critical. But one cool thing was that during those 18 months, there were a variety of hearings that were held across the country, not just within the Beltway, and there were a variety of people, including stakeholders, people that conduct research for and with government, that provided a lot of input that informed the excellent report the commission provided. Then, about half, literally half, of those recommendations became law in the Foundations for Evidence-Based Policymaking Act. I also just want to give a shout out, if you haven't seen it, to the June 30th memo that came out from the Office of Management and Budget. There was a lot of great information that provided guidance, for example, about the need for a portfolio of evidence. Not simply using impact evaluations, but figuring out how to, for example, link data across agencies, and working towards more common, should we say, terminology and embracing different kinds of evaluation strategies. Okay. As I mentioned, the Commission on Evidence-Based Policymaking had fabulous recommendations. And there were many that dealt with data and privacy. But the ones that obviously caught my eye, as a practicing evaluator, were those that dealt with building capacity within government. Okay.
So the key highlights that made it into the Foundations for Evidence-Based Policymaking Act are requiring evidence-building plans, which many of us call learning agendas. Which means we want to see a forward-looking organizational commitment to figure out where the information gaps are and how we are going to fill them, in line with strategic planning and mission-driven decision making within agencies. Very cool. The evaluation officers were called for in all 24 agencies, but that very cool June 30th memo from OMB said let's have them in more than the top twenty-four agencies. In fact, within organizations such as HHS, for quite a while, we have had evaluation officers at ACF and CDC, for example. And the idea that someone will be in charge and thinking about building evaluation capacity is extremely important. Relatedly, we're going to see the results of the inventories that agencies, again the top 24, are tasked with filling out in terms of what kind of capacity they have and where the gaps are. I just need to throw in the social equity executive order that was signed by President Biden on his first day in office. Because at the same time that the evaluation officers within the agencies are doing these inventories of evaluation capacity, with that focus on social, and particularly racial, equity, they are also conducting equity audits. This is very key and I'm going to come back to that. There were other points in what people call the Evidence Act, for example, having chief data officers, and a Chief Data Officer Council that OMB is currently running, as well as coming out with standards. Okay. I want to give a shout out to OMB for not only issuing standards, but actually reaching out to a variety of stakeholders to come up with some extremely relevant standards. And again, that was issued, I believe, in March 2020, and you can find those. But having those standards federal agency-wide, I think, is extremely helpful. Okay. Now I have kidded about the fact that I was giddy with happiness and joy when I read the June 30th OMB memo. And I won't go through all of it. Just to say that there were things that I have actually been writing about and talking about and prophesying about for 30 years that are in that one memo. For example, that it's going to take a lot to shape learning cultures within agencies. Some parts of some agencies are there. But there's a lot of work to do in moving towards what we might call a culture of learning. And they recognize the importance of evaluation as a central function that is mission critical. And these are all words, just, oh my gosh. I was so excited to see some of the things that were established in that OMB guidance. They also talked about the importance of having learning agendas, evidence-building plans, and the evaluation plans for exactly what's going to be done. And the importance of using techniques and methods that are appropriate for the questions that need to be addressed. And the point is that in a portfolio of evidence, we need to know more about implementation and processes and what's working. And then it calls for not only use of evidence, but more investment. Okay. Now briefly, I want to just say, I do think there are sort of five bumps in the road that we need to consider, which are, sort of, the location of evaluation capacity, the siloing, the potential for blocks and difficulties in the transferability of evidence, particularly evidence addressing inequities and racial disparities.
And then generating the demand for the use of evidence. Okay. So for example, within any particular federal agency, as you can see by my little Venn diagram, you have lots of little siloed offices, what we might call the mission support offices, which are separate from the mission-driven activities. And the larger the agency, the more separate some of these offices are. For example, a performance measurement office may be very separate, and typically is, from an evaluation office, which is very separate from maybe the chief economist's office, and so on. Next. And the siloing can be problematic. I think we overestimate how easy it is going to be to coordinate the performance measurement or monitoring folks with the evaluation office, with the folks that may be doing sort of behavioral economic sorts of tweaks. Okay. I think we also overestimate the ease of transferring evidence across contexts; just because there was a particular intervention that worked really well in Cleveland does not necessarily mean it's going to work well in Fremont, Nebraska, or in New Mexico. Next. Also, my colleagues and I, as part of a research project, are looking at the various clearinghouses to see how many of these interventions actually addressed inequities, and the answer, as of six months ago when we did this little audit, was not many. For example, the Department of Labor's clearinghouse, called CLEAR, had zero that dealt with racial equity. Of the 195 in the Campbell Collaboration, three. In the What Works Clearinghouse, 23. In other words, there's not a lot out there. Okay. So, I think we also overestimate the level of ongoing communication and understanding between the data or evidence generators, in other words the analysts, the data providers, and so on, and the potential users, whether we're talking about people within the agencies, the public, the knowledge brokers that are supporting certain approaches, and so on. Okay. And the point is that evaluation capacity, and I hope that the evaluation officers are really going to be focusing in on this, is not only providing evidence but getting people to figure out what they should or could be informed by, and to figure out where those gaps are, and literally to help the potential users figure out what questions to address, and then make sure that there are sufficient answers and evidence provided. Okay, I'm probably not going to have time to go into it, but I just want to give a big shout out to the potential for learning agendas to bridge the gap between the potential users and the producers. Because in a study that some colleagues and I recently conducted, we looked for promising practices at agencies that had actually had learning agendas before they were required to do so. And we found a lot of great benefits of having very user-oriented, inclusive, co-designed, iterative, and both top-down and bottom-up processes for developing the learning agendas. And then, lastly, some of the benefits are building relationships across senior officials and across all of those, you know, many pieces of agencies I talked about; institutionalizing and helping to move towards a learning culture; helping the evaluators and the leaders think about, for example, the theories of change underlying their programs; prioritizing evidence-building; and sharing thoughts and insights. And one of the things I thought was extremely interesting is that there are many existing datasets and stores of evidence that could be used.
But the program people don't even know about them. And this is one of the interesting things about having the inclusive development process: you literally can make sure that program management folks understand what's available to them. Okay, thank you very much. I know I'm talking pretty fast. I've been thinking about evidence-building in government for a very long time. And I'm very excited; our book will be coming out in a few months. Thank you very much. I look forward to questions. [Amanda Prichard:] Thank you so much. Our next presentation is from Mr. Robert Shea, the National Managing Principal for Public Policy at Grant Thornton. Mr. Shea has been working to improve government performance for 25 years, including 10 years at Grant Thornton and 15 years in the federal government, including six as an Associate Director at the US Office of Management and Budget, OMB. Mr. Shea received a bachelor's degree from Connecticut College and a law degree from the South Texas College of Law. He is a leading proponent of evidence-based policy making. He is a fellow and former chairman of the National Academy of Public Administration. And he served on the Commission on Evidence-Based Policymaking. His presentation today will provide us with his insights on the progress on implementing the Commission on Evidence-Based Policymaking's recommendations. Mr. Shea, I'll present your slides now. [Robert Shea:] Thank you, Amanda. I also want to take a minute before I start by fanboying on GAO. I'm so proud to be a part of your 100-year anniversary. My very first duty as a young staffer on Capitol Hill was ushering GAO's authorization through enactment. That law laid out the requirements of the organization, the appointments of the Comptroller General and the Deputy Comptroller General. And it really gave me a PhD in that organization. And it began a long love affair. And I've been working very closely with the organization ever since, including in my time in the executive branch when it was suing my boss at the time. But the contributions you've made to improving the effectiveness and efficiency of government are incalculable, even though I know you try to calculate them on an annual basis. So as Amanda said, I've been asked to talk about progress on the recommendations of the Commission on Evidence-Based Policymaking. I'll give a little background. What I'm going to say is going to be somewhat redundant of what Michelle and Kathy said, but hopefully you'll get, from the three of us combined, a really good sense of how this has evolved. So my first slide, if you could, Amanda. Let me talk about the genesis of the Commission on Evidence-Based Policymaking. From my vantage point, this has really evolved and transitioned between a number of administrations. Kathy talked about the Bush administration having a focus on rigorous evaluations. This was a point at which the administration was asking of all programs: to what extent has your program been evaluated, and to what extent was that evaluation independent and rigorous. Precious few programs had been evaluated at all. Fewer still would have met any academic standard of rigor based on the question that was being asked. And we were genuinely trying to drive towards impact evaluations, but there was a paucity of evaluations that would have met any standard of rigor that were supplied to us by the agencies being asked that question. The Obama administration sustained this focus on evidence building and evaluation.
And, I think somewhat surprisingly, that also carried through the Trump administration. I'm not sure what the secret was to sustaining it in that environment, but we did make enormous progress with the appointment and recommendations of the Commission on Evidence-Based Policymaking, and of course enactment of the Foundations for Evidence-Based Policymaking Act. And then of course the Biden administration; about a lot of what Kathy has talked about, I am equally excited, although I'm not sure I can mirror the enthusiasm that she showed. So let me talk about the commission first. The commission was born out of negotiations over welfare reform between then-Speaker Paul Ryan and Senator Patty Murray. They struck up a strong bipartisan relationship as a part of those negotiations. They didn't get welfare reform out of that. The only thing that wasn't left on the cutting room floor was a statutory requirement to establish a commission on evidence-based policymaking. And that law was enacted, and commissioners were appointed. It was made up of 15 commissioners appointed by bipartisan members of Congress and the administration. It was co-chaired by Katharine Abraham of the University of Maryland and Ron Haskins of the Brookings Institution. Kathy mentioned, we had seven public meetings, three open hearings, 350 comment letters. Moreover, we had vibrant, sometimes heated, discussion over what kinds of practical recommendations we wanted made. One of the things that the commission was focused on, one of the promises of welfare reform in the negotiations between Ryan and Murray, was that if we could unlock a lot of the administrative data already collected by agencies and programs, we wouldn't have to go about the expensive and time-consuming data collection that long-term rigorous evaluations require. So the commission was trying to come up with recommendations that would help basically make it easier to get access to data for the purposes of learning about which programs work and what made programs work better, so that we could share that across government and accelerate the adoption of these proven practices. So the commission issued a report four years ago yesterday, believe it or not. It made more than 20 recommendations in four categories. One was securing restricted access to confidential data. The second was enhancing privacy protection for federal evidence building. There were a number of commissioners whose expertise was specific to the privacy arena. And they provided important insights into concerns of the privacy community, and what evolving, innovative, privacy-protecting techniques could help improve the trust in data sharing in government. The third was modernizing America's data infrastructure for accountability and privacy. And the fourth was strengthening the evidence-building capacity within the federal government. So commissions aren't traditionally, in my view, a good way to catalyze action, but I'm proud to say I think this one was different. So let me walk through my perspective on where we are vis-a-vis the recommendations. Importantly, as the commission was finishing its work, staff of the House Oversight Committee were already working on bill text for the Foundations for Evidence-Based Policymaking Act, what was called a down payment on the recommendations in the commission's report. That bill was enacted, signed by the president just 16 months after the commission made its recommendations. And having worked zealously on advocating for enactment of that law, I can tell you it was not a fait accompli.
There were plenty of steps in that process that could have prevented that law from being enacted. So let's take a look at the recommendations by status. So prioritizing evidence building, creating learning agendas, appointing chief evaluation officers, chief data officers, statistical officials: that establishment of governance over evidence building was really one of the principal things embodied in the Foundations for Evidence-Based Policymaking Act. Kathy mentioned the OMB memo M-21-27. I'm trying to keep my emotions in check here, but I share her enthusiasm, and I highly recommend that those interested in this topic read and digest the concepts that are embedded in that memo. Foremost among them, I think, is that evaluation and evidence-building is a critical agency function, that it should be embedded into the way the agency does business, and that agencies should take seriously evidence building and its use in policymaking. But I also want to call out congressional appropriations. Now of course Congress enacted the law. It's a lot easier to get a management law enacted than it is to get authorizers and appropriators to pay much attention. But if you look closely through reports accompanying agency appropriations, as I have, sorry to geek out on you, there are a number of repeated references to and calls for evidence from agencies requesting appropriations. So I think this will continue to grow. And I think it's a very good sign that, with Congress also calling attention to this, the maturity of evidence building across government will continue to evolve. So let's get to the yellow, where I think we've made progress but are not quite where we want to be. The commission recommended the establishment of a National Secure Data Service. And it also recommended naming a governing committee drawn from the public, federal departments, state agencies, and academia. So there is an Advisory Committee on Data for Evidence Building that operates today. We're expecting a report from that committee in the next couple of days. I'm not sure it's the driving force we want it to be to oversee progress in this arena. Advisory committees aren't always the most effective tool to catalyze government action. But nonetheless we have this governing committee in place. The National Secure Data Service: we studied a number of international governments' efforts to provide a safe place in which researchers could come to access the government's most sensitive datasets. And it was really heartwarming to see that there were so many existing experiences to learn from as we were coming up with this recommendation. So the theory is that you can have a central place that is a proven, secure place to allow researchers to match datasets in order to unlock insights. That would accelerate the evaluations that could be conducted and reduce the cost. A pilot National Secure Data Service has been proposed and passed the House as part of the National Science Foundation for the Future Act. And that legislation has legs, as they say, because it's part of a competition-with-China, investment-in-technology legislative package that I expect will move sometime later this year. So now we get to where we haven't seen as much progress. One of the things that frustrated the commission, especially the researchers on the commission, is that there are a number of statutory bans on the sharing of data for any purpose other than that for which it was collected. Income data, education data. Those are just a couple of examples.
And the commission listed a number of critical datasets that it hoped we would see at least some limited secure access to. That has not happened. Streamlining data access for researchers is a little bit part and parcel with the National Secure Data Service, but it also runs into these statutory bans, and not a lot of progress has been made in that arena. And improving coordination of the federal government's evidence-building activities: this was specifically focused on OMB. And at OMB, you've got the Office of Performance and Personnel Management, an Office of the Federal CIO, a Chief Statistical Official, and an evidence team. Those four functions aren't perfectly integrated. And I think that their activities not being as coordinated as they could be limits their impact. And so I think integrating those functions and actually better collaborating across OMB, including with the budget offices, would provide enormous fuel for evidence-building activity. So in my next slide I simply listed a couple of resources. There's the report from the Commission on Evidence-Based Policymaking. There's the Foundations for Evidence-Based Policymaking Act text. And, if you're really interested, there are three memos from OMB that I provided you links to, and I think they really do show the evolution. M-21-27 really is OMB's evidence-building aspirations coming out into the sunlight, and again I commend it to you strongly. Evaluation.gov has already been mentioned; I think Michelle mentioned that. And then the Advisory Committee on Data for Evidence Building. This is where you can go to see the activities of this advisory committee, with a number of public meetings available. And the reports and resources that they have for you, you can find at that website. So I hope this was useful. I look forward to entertaining questions. [Amanda Prichard:] Thank you so much. Our next panelist is Dr. Ramayya Krishnan, the Dean of the Heinz College of Information Systems and Public Policy at Carnegie Mellon University, where I did my graduate work. Dr. Krishnan's research interests focus on consumer and social behavior in digitally instrumented environments, and he has led the establishment of funded research centers focused on data-driven decision-making in key societal domains. He serves on the IT Services Advisory Board chaired by Governor Tom Wolf of Pennsylvania, and is a member of the Comptroller General's Educators' Advisory Panel. He earned a bachelor's degree in mechanical engineering from the Indian Institute of Technology Madras, and a master's in industrial engineering and operations research and a PhD in management science, both from the University of Texas at Austin. Dean Krishnan is here today to share his thoughts on the nexus of data analytics and public policy. Dr. Krishnan, the floor is yours. [Ramayya Krishnan:] Thank you so much, Amanda. It's a pleasure to be here. And I'm honored to be on this panel with my colleagues. As others have noted, it's been great working with GAO, both on the Educators' Advisory Panel and on other work, most recently on data curriculum as well as the AI governance reports that have been produced. Can everybody see my slides? Okay. So what I wanted to do was offer something complementary to what my colleagues have presented thus far. So in addition to the Foundations for Evidence-Based Policymaking Act, which you've heard about from all three of my colleagues thus far, I also wanted to add the National AI Act of 2020. We've been in discussions with the team leading AI.gov.
I should note that, you know, at the Heinz School, as a public policy school, this topic of bringing together and studying how to better improve public policy through the use of data, through good evidence-based practices, and through quantitative methods has been core to what it is we have done, both to educate amazing students like Amanda, and in addition we've also been working with executives in government on these types of topics. So, the points that I'd like to make are going to be complementary to what my colleagues have presented, by providing very specific examples of the use of data and analytics in action, building on the Foundations for Evidence-Based Policymaking Act of 2018, which really calls for both the evidence-building activities that have been spoken to and the open government data aspects of the act, and the kinds of capabilities those require, both in terms of technology, but equally well in terms of personnel. So for instance, the federal evidence-building activity calls for leadership via the evaluation officer. I think evaluation.gov was mentioned, and Kathy mentioned evaluation officers, as well as officials with statistical expertise. And the Open Government Data Act calls for chief data officers. I'll mention these in passing. Now, I should note that based on prior participation in these panels I've put together a set of slides. I won't go through all of them. But for those of you interested, please contact me either directly or through Amanda, and I'm happy to share these slides with you. I'm going to use three examples: one from local government, one from the federal government, and one from state government. And the examples will span policy implementation as well as policy evaluation and policy formulation. The first two examples are resonant with the discussion thus far. Also, time permitting, I wanted to briefly mention how to support policymaking in a crisis, which is something that we have had to do, you know, over the past 18 months. We've been working with the state of Pennsylvania on recovery and reopening, and I thought some of the aspects there were interesting to highlight as well. Okay, let's dive right in. The first example that I wanted to offer is work that we've been doing with the Allegheny County Department of Human Services. In 2018, "The New York Times" profiled them, talking about how they're using statistical methods and tools to help prevent kids from being abused. And the statistics nationally are disheartening at one level: there are 4.3 million referrals involving 7.8 million children yearly, of which 3.5 million children receive an investigation or an alternative response. This is an example of policy implementation using data and analytics, and it might be useful to see how exactly a system like this works. Two things to highlight. One is the point about the data that are required to support this activity. So you see a caseworker here receiving a call, and the operational decision-making task here is to decide whether to investigate the allegation or to screen out the allegation. And the data that are assembled by the Department of Human Services come both from the county as well as from the state. And as you might imagine, they overrepresent information about people who are poorer. So in that sense, connecting to two points that were made earlier, the aspect of bias in the data is an important consideration, and important to keep in mind.
The second is that whenever a call comes in, there's a vast amount of data. For any given call, there are 800 variables or features associated with each referral, and it's hard for an individual to actually process all that information. And that's why you have an algorithm, or a statistical model, that's going to come up with a risk score which is then going to be used to determine what should be done with the call. So think of this as actually the way in which the policy of whether something should be screened out or screened in is actually implemented. Now in this example, and this is work that we did with the Allegheny County department, it's really important that trust be established, both between the caseworker and the tool, but importantly between the agency and the community that it's serving. So two kinds of trust: one is between the caseworker and the tool that she's using, and then there is trust between the agency and the community that it's serving. And in this, verification and validation is a really important aspect. Before you can deploy these more advanced data analytics tools, it's really important that verification and validation be done. When they first deployed this technology and tool in Allegheny County, there were errors, and fortunately the humans made up for the errors. So this is a good example of human-in-the-loop decision support. The data that the user was shown was not actually right, and in the evaluation work that we did, we actually determined what the problems were. So verification and validation is a really important component of building trust. The second point is that here it's not about learning or building a tool that is going to make an automated decision, though one hears about the use of these kinds of technologies in criminal justice and in other settings. Here it's really about supporting a human by learning a risk model. The details of this I don't have the time to get into. But trust in the system doesn't mean adhering blindly or delegating to the system and following through on what it's proposing; we need a broader concept of trust. And agencies need to have both the expertise and the capability to establish trust for use within policy implementation. But equally important is how an agency can establish trust with the community that it's serving. So that's example one. This is both an opportunity and a challenge. And I think this quote from Shaw's book on algorithmic accountability really makes this point very well, which is that these tools that are going to be deployed require 'a license to operate from the public.' And therefore this is an important point about trust. The second example goes to the second part of the Foundations for Evidence-Based Policymaking Act, which is about open data. And here the particular project that I wanted to highlight for you is work being done by a colleague and by a not-for-profit organization called Coleridge, which worked with a collection of federal government agencies to establish the value of the data that they're making available. So for instance, the United States Department of Agriculture, the Department of Commerce, the Geological Survey, and NOAA: these were the four agencies that were making data available and wanted to understand how these data were being made use of by researchers and by the research community.
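To make that open-data usage question concrete, the sketch below shows, in a much-simplified and purely hypothetical form, how mentions of named datasets might be detected in publication text. The dataset names, publication snippets, and matching logic are invented for illustration; as described next, the actual Coleridge effort ran a Kaggle competition that used far more sophisticated natural language processing.

```python
# Hypothetical sketch only: naive phrase matching as a stand-in for the NLP
# approaches used in the Coleridge/Kaggle work described in this panel.
import re
from collections import Counter

DATASETS = ["Census of Agriculture", "National Water Information System"]

publications = {
    "paper_001": "We analyze farm consolidation using the Census of Agriculture.",
    "paper_002": "Streamflow records come from the National Water Information System.",
    "paper_003": "This study relies on an original field survey.",
}

mention_counts = Counter()
for pub_id, text in publications.items():
    for name in DATASETS:
        # Case-insensitive whole-phrase match; real systems must also handle
        # aliases, abbreviations, and noisy citations.
        if re.search(re.escape(name), text, flags=re.IGNORECASE):
            mention_counts[name] += 1

# Aggregated over a large corpus, counts like these could feed an agency
# dashboard showing how its open data are actually being used.
print(mention_counts)
```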
And with support from NSF, NIH, and some philanthropic organizations, what Coleridge did was to establish a competition on Kaggle, which allowed individuals to compete. And this is about expanding the capability available to the agency. So in this particular case, by virtue of harnessing or tapping into the community and getting them to build out models using natural language processing, they were able to identify which datasets from which agencies showed up in which publications. That then allowed the agencies to effectively build dashboards to show GAO and Congress how their datasets, which were being made public and in machine-readable form, were actually being made use of. And it led to this kind of a dashboard, for instance, about the number of publications and how the data was actually being used. This one is for agriculture; the first one was USDA. And the idea of creating these dashboards is a key idea. And the main takeaway is the opportunity of expanding the scope of who can actually help assist with building out the evidence, in this case, that open data actually had value. A parenthetical point I want to make here is that the National Security Commission on AI published a report recently that called for, just as the Army has an Army Reserve, creating a data science reserve corps in support of the DoD. We have made proposals, as part of the National AI Act, which calls for federal government deployment of AI across all public sectors, to create something like this data science reserve corps to support the federal government, state government, as well as local government, so that you have individuals being able to provide their expertise and grow the capacity of government. The last example that I have, and I'm going to go through this quickly, is about supporting policymaking in a crisis. You know, I led the task force supporting Governor Wolf and his administration, and following the executive order on April 1st, Governor Wolf's administration pursued a data-driven approach, which looked to balance public health with trying to understand what the impacts were on economic activity, as well as to try and identify the social services that were required. So if you think of example one as policy implementation and the second one as actually a kind of evaluation, here it was really about trying to formulate policy under very tight time constraints, with incomplete information. So one of the things that we created, and this is a notional dashboard that I'm presenting to you to preserve confidentiality, effectively allowed the governor and his secretaries to gain situational awareness about public health, about economic activity, and about what the likely impacts would be, especially when they were thinking about how to bring back a particular industry. In this example, what if we wanted to open up construction in a given county, or statewide? If we did that, what are the implications; do we have enough hospital capacity and ICU beds, and PPE, etc.? Or, if we did this, what are the implications for unemployment? What are the implications for supplemental nutrition, like the SNAP program, or assistance to needy families? And as you can imagine, to do this, you needed access to administrative data. So I think Robert made the point about data sharing. This was a very crucial component of what had to be pulled together to support and create this kind of dashboard to provide situational awareness.
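A minimal, hypothetical sketch of the kind of data integration behind such a situational-awareness view appears below: joining public-sector case counts with private-sector mobility and spending proxies at the county level. All column names, figures, and the composite risk formula are invented for illustration and are not the task force's actual measures.

```python
# Illustrative only: toy county-level tables standing in for the public
# health, mobility, and spending data feeds of the kind described above.
import pandas as pd

cases = pd.DataFrame({"county": ["Allegheny", "Erie"], "new_cases": [120, 35]})
mobility = pd.DataFrame({"county": ["Allegheny", "Erie"], "visits_index": [0.62, 0.81]})
spending = pd.DataFrame({"county": ["Allegheny", "Erie"], "grocery_spend_index": [0.95, 1.02]})

# Join the three feeds into one situational-awareness table per county.
dashboard = cases.merge(mobility, on="county").merge(spending, on="county")

# A crude composite: more cases and more movement imply higher exposure risk.
dashboard["exposure_risk"] = dashboard["new_cases"] * dashboard["visits_index"]
print(dashboard.sort_values("exposure_risk", ascending=False))
```

The point of the sketch is the join itself: the feeds have to be curated and linkable by a common key ahead of time, which is the data-sharing groundwork the speaker argues cannot be cobbled together mid-crisis.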
And many of our initial risk measures that we developed were based on and derived purely from government data. But we found that government data, particularly state administrative data as well as federal administrative data, was not really designed to support policymaking in a crisis. We had to cobble together public sector data with private sector data so that we could get granular and quick feedback from the private sector. There are issues with private sector data; it's not as representative or as complete as public sector data. But the combination of the two is particularly valuable. This is actually data from SafeGraph, which provides mobility data on where people were assembling, and this is a proxy for public health exposure risk, which we were able to provide very quickly. And similarly, spending data to give a sense for what the economic activity was in different NAICS codes, in this case, grocery stores, and fast food restaurants, and the like. The main takeaway here is that there's a need, in support of this kind of what I'm calling resilient evidence-based policymaking, for work to be done ahead of time on data sharing and integration to support situational awareness and this kind of policymaking. We can't afford to cobble these things together when a crisis demands them. And we can talk about this in more detail if there are questions, but I think the concept of a data lake versus a data warehouse is particularly relevant here, because data warehouses predetermine the use cases that you want to create data to support, while data lakes allow for more unanticipated cases, which often was the situation in this type of support for policymaking in a crisis. I know I went through a lot using these three examples. I'm happy to take any questions during the Q&A period. So thank you once again for the opportunity. [Amanda Prichard:] Thank you so much. Our final presenter today, to close us out, is Dr. Lawrence Evans, the managing director of GAO's Applied Research and Methods team. Dr. Evans holds a bachelor's in economics from Colgate University and earned his PhD in economics from the University of Massachusetts Amherst. Here at GAO, he oversees a team that designs and executes methodologies that help inform and improve government operations, including GAO's Center for Program Evaluation. His presentation today will provide his insights on realizing the promise of evidence-based decision making. And I'll share your slides on the screen. [Lawrence Evans:] Great. And I'm going to pivot. I'm going to be very quick. I'm going to try to go through each slide in 30 seconds so we can have some robust Q&A. This was an impressive set of presentations thus far, and I think I can speak on net here. As the slides come up, you know, my story is simple here with respect to the promise of evidence-based decision-making and evaluation: it's important to have an infrastructure in place that supports quality evaluation and decision-making. No technology, no technique, no skill set will save us if the organizational culture and the people conducting the work are not dedicated to and positioned for objective analysis that puts the public interest first. Okay, so that is essentially my message here. Slide two is just an expansive definition of evidence. We heard about this one just moments ago. The thing I want to emphasize here is that there is an evidence hierarchy, right. Not all evidence is equal.
Anecdotal information is not of the same order as a quasi-experiment, a natural experiment, or a well-designed randomized controlled trial, right? And importantly, very rarely is evidence definitive. So the strongest evidence, and Dr. Newcomer made this point clear, comes from a portfolio of high-quality, credible sources rather than a single source, okay? That's the point there, and I make it because it comes back in a moment. Okay. So you want to rely on quality evidence to evaluate a program or policy, or to make decisions. But you immediately confront a litany of issues, right, ranging from complexity to more difficult issues like how you handle undue external influence, where the very stakeholders that you need to engage bring conflicts of interest and other things that might make it difficult to sort through the evidence. Truth decay, right, which is incredibly important. That speaks to a general environment where opinion and fact are on equal terms, where weak and strong evidence are on equal terms for a considerable portion of the population. So those varying degrees of credibility that should be attached to evidence, based on how it's produced, disappear, right? The meta-analysis is trumped by one study that can't be replicated. The natural experiment is trumped by a non-random survey with a 12% response rate, right? So it's the enemy of evaluation, it can lead to distrust in traditional sources of quality information, and it can impact the willingness of program administrators to conduct evaluations. I'll point out just quickly here that the people doing the work can be a significant issue. Motivated reasoning, ideology, conflicts of interest, cognitive biases: all of these things can lead us to unintentionally discount evidence, interpret things erroneously, and not provide the kind of transparency around assumptions that we need. Okay. So how do you navigate such an environment? It's organizational culture. We always say culture eats strategy for breakfast, right? It's the professional standards. It's the control environment, the atmosphere in which people conduct their activities and carry out their control responsibilities, right? And you see here at GAO, this is how we mitigate some of those challenges, right? We've got our core values. We've got the Yellow Book that Michelle talked about earlier. We've got a rigorous quality assurance framework, a dedication to continuous learning. You know, work that is independent, non-partisan, non-ideological, fact-based. And that robust quality assurance framework covers everything from staffing engagements, to ensure that we collectively have the competence to execute the work, all the way to engaging external experts, some of whom are here. And that capacity and competence issue is critical, right? We've got about 15 mission teams, and we combine their subject matter expertise with those who have the technical expertise to execute. Okay, so again, those toughest problems are made infinitely more problematic by us as individuals. This makes the Yellow Book more important. And these are the standards that we apply to individuals who do work at GAO. Namely, we want our folks to be professionally skeptical. We want that skepticism to be visited upon things that confirm your beliefs, as well as those that counter what you initially believe, right? And it's critical. Professional behavior, I should say, is important too, because you don't want to do anything that would give a reasonable third party a reason to question your objectivity.
So there you have it. And that's why we talk a lot about organizational culture, a culture of learning. I want to end just by plugging, on the next slide, an important glossary that we put out recently that could really help agency officials better understand fundamental concepts related to evaluation and enhance their evidence-building capacity. This was something produced by our Center for Evaluation Methods and Issues in conjunction with our Strategic Issues team that Michelle represents here today. Okay, so I don't want to hold up the questions. Let's get to it. [Amanda Prichard:] All right. Well, thank you all so much. This has been wonderful. And it looks like we've got about 10 minutes left of our panel today for question and answer. So I'll start us off with a question of my own, and then if there are questions from the audience, please feel free to put them in the chat function, and I will relay them to our presenters. So with the theme of this panel in mind, of using multiple sources of evidence for decision-making, where do you see the most potential for growth in the coming years? And I'll let whoever feels like they've got a great answer just go ahead and start. [Robert Shea:] Are you asking which method, Amanda? [Amanda Prichard:] I think more like what areas of evidence-based policymaking: where are there opportunities for us to better build that evidence-based culture of decision-making at all levels of government? Whatever aspect of evidence-based policymaking you are most excited about. [Robert Shea:] So I'll chime in. I think those of us who work in this arena sometimes are under the misapprehension that the concepts are more widely known, and I think behavioral science is a perfect example. Those who are in that field see its adoption accelerating, but I think it should accelerate much more rapidly. I think there are many, many more opportunities to leverage behavioral science, behavioral insights, to improve the way we serve the American people, and especially in an effort to reduce inequity. I think if we did a lot better job examining how behavior impacted access to the government's benefits and services, we'd go a long way toward diminishing those inequities, those barriers. [Kathryn Newcomer:] I'm just going to put in a plug for the new evaluation officers. I'm very excited to see more of them, because a lot of us think of evaluation as sort of a transdiscipline that can bring together the economists and the computer science folks, the data folks in other words, as well as the program management folks, to think about the theories of change of what they're trying to accomplish, and where the data need to be, and, you know, plug the holes in our knowledge. And so I'm excited about the fact that OMB is saying, hey, we need evaluation officers at the bureau level, we need them in the smaller agencies. Because if you get people with those kinds of skills, I think they can really help the other folks, for example, frame good evaluative questions and dig in to figure out how we get to where we want in terms of equitable outcomes, for example. [Michelle Sager:] And this is Michelle. I'll echo all of that. I just have two quick points. Picking up on the evaluation officers, in addition to the chief data officers and the statistical officials: the more that you bring those together through the councils that exist as a result of the Evidence Act, I think that helps infuse all of that thinking throughout the government.
And then the second point I would make is that as the learning agendas go out into the public domain next February, as that becomes connected to the budget process, and as resources are attached to thinking about what are the big questions we're trying to ask and answer and how we're going to get there, I think connecting the dots among those areas is really exciting. There's been a lot of talk about that for many decades, but this has the potential to really make it happen. [Ramayya Krishnan:] One quick item there: just as the Clinger-Cohen Act funded the creation of CIOs throughout the government, and building on what Michelle said, I think there's a real opportunity for education and growth, both in the federal government and, as we are equally seeing, in state government as well as in local government. [Amanda Prichard:] Absolutely. So we've got time for one more question. I know, Dr. Evans, you need to jump off. But one of the questions that we usually ask experts like yourselves, and a lot of you mentioned this in your presentations as well, is that there are a lot of obstacles to adopting this evidence-based culture widely across the government. So if you had a magic wand and could implement one policy change to better infuse evidence-based policymaking and data-driven decision-making throughout all levels of government, what would it be? What would your instantaneous change be? [Kathryn Newcomer:] I would, and this is perhaps not exactly a policy change, actually fund it: actually provide the money and the resources, and ensure you get people with the right skills into the evaluation office. It was interesting that you just said that Clinger-Cohen gave the funding. Did they really give the funding in the bill? Or did they require that the CIOs be established? See, there's a difference in government. And I'm looking at Robert Shea's picture because he's been there, done this. He was with OMB during part of that time, and so he knows. There's a difference between saying thou shalt have, whether it's a learning agenda or a CIO or an evaluation officer, and actually funding it. There's a difference there. [Robert Shea:] And you mentioned, though, Kathy, in your slides that the memo that we fanned out on calls for agencies to request of OMB investments in evaluation, in evaluation capacity. It's not normal for OMB to hold its hand up and say, hey, ask us for more money. So that's a really important development. But if I had a magic wand, I would eliminate silos across OMB's management functions, to get them working better together, focused on evidence building and use. [Michelle Sager:] And that all sounds great, so if I could have an additional magic wand, I would add truly infusing respect for a learning culture: a culture where evaluation is really appreciated, and where it's understood that we may not always get it right the first time, but that by doing that we're learning what does work, and sometimes what doesn't, and that that's really important. [Amanda Prichard:] I've got one audience question here. What is driving evidence-based policy approaches more: innovation in research methods and institutional reforms, or technological change in terms of emerging tools for data gathering, processing, and dissemination? [Ramayya Krishnan:] So I don't know whether there's any one; it's not an either-or between those two. I think it's both. You know, there's a general sense among the public, because they're used to this in their other interactions with the private sector.
You know, you think you can do this with Amazon, or you've come through the pandemic where there's been a lot of this work with data and analytics, and so people feel like this is something that could be used to support government services being delivered more equitably and more effectively. So, for instance, there's a large number of people who are eligible but not enrolled for a number of government services, and I think government agencies would like to have the capability to identify those households, identify the root cause of why they are eligible but not enrolled, and then try and allocate resources to address that. But in the absence of these kinds of capabilities, they're finding it hard to do. So I think these are examples where it's a combination of both of these things coming together, plus a greater willingness among agency staff, the public, and leadership to actually use these resources. [Amanda Prichard:] All right. Well, I guess we've got one minute left. So any final comments that anyone wants to close us out with? [Kathryn Newcomer:] I was just going to say that although all of the technology, the data availability, the IT, the laws, and the OMB guidance, that's all fabulous, it really boils down to leadership. And you know, you can provide those data, you can provide the techniques, but it's all going to come down to political will. And I don't mean leadership like a president or a governor, but getting down to, you know, the executives within organizations. You need the political will. It's a little p, a little p. Not R or D, but little-p political will. [Amanda Prichard:] Well, it looks like we're at time. I wanted to say thank you again to all of our presenters. You've been wonderful. Thank you so much for sharing your time and your expertise to celebrate GAO's centennial. Again, we will be posting the recording of this session online for those who joined late or would like to see it again. And with that, thank you all so much. Have a great rest of your day. [Ramayya Krishnan:] Thank you. [Robert Shea:] Thank you. [Kathryn Newcomer:] Thank you. [Michelle Sager:] Thank you. [Kathryn Newcomer:] Happy birthday, GAO.