Building Responsible Artificial Intelligence
Exploring the state of responsible technology in the age of AI
M.A. in Political Science, European Union Policy Studies
Graduate Assistant Francesca Ragonese (EUPS 2023) sat down with Evi Fuelle (EUPS 2014) to discuss the ever-changing field of technology policy, particularly in the age of Artificial Intelligence. Below is a summary of their conversation:
The rapid expansion of artificial intelligence (AI) and the recent emergence of ChatGPT have opened up conversations about the benefits and potential consequences of these tools. As the influence of AI grows, so do concerns about the implications of these technological innovations for digital, security, and other policy areas. EUPS alumna Evi Fuelle works at the forefront of global technology policy, navigating how to ensure responsible AI that safeguards fundamental human rights and puts users at the center of technological innovation. In an interview with current EUPS student Francesca Ragonese, Evi shares her path to working in technology policy and provides insights on emerging policy considerations surrounding AI governance.
Prior to the EUPS program, Evi received her B.A. in International Affairs with a minor in Political Science from James Madison University. Having always been interested in diplomacy, foreign affairs, and engaging with people from different cultures, Evi decided to pursue the EUPS program. Upon returning to the United States, she worked in government consulting as a media analyst, but she soon found that this was not her passion and decided to take her career in a new direction. She joined Women in International Trade, where she connected with a mentor in the U.S. Department of Commerce (DoC). From this connection, the opportunity arose for Evi to pursue a fellowship at the International Trade Administration, working with the Trade Promotion Coordinating Committee. During this time, Evi capitalized on the EUPS alumni network to pursue an internship at the Transatlantic Business Council, allowing her to continue engaging specifically on transatlantic relations.
Reflecting on this transition in her career, Evi emphasized that “in my experience, it is more important to be passionate about who you are working for, rather than what personal title you have,” highlighting the importance of working in an environment that encourages growth. Using the analogy of a lily pad, Evi noted that “everyone’s career path is different and rarely linear; it’s more like hopping between lily pads, each time hopping a little closer to your broader goals.”
After continuing to pursue mentorship opportunities and being vocal about her passion for a full-time career in global policy, Evi’s next career shift led her to a position at the Information Technology Industry Council (ITI), where she worked for four years and transitioned from focusing on the trade of physical goods to the trade of digital goods. Observing the re-negotiation of the U.S.-EU Safe Harbor Framework (later replaced by the Privacy Shield) and witnessing the aftermath of the Apple-San Bernardino case, Evi found a passion in digital and technology policy. At ITI, she was once again inspired by a mentor, former CEO Dean Garfield. Empowered to work with ITI’s various tech member companies, Evi founded ITI’s first Artificial Intelligence working group, which published the first industry-wide AI policy principles in October 2017.
Having found a passion in transatlantic tech policy, Evi hopped to her next “lily pad”: a position at the European Union Delegation to the United States (EUDEL) in Washington, D.C., where she worked for three years as a Digital Economy Policy Advisor, finding mentorship in her former boss, Peter Fatelnig, and colleagues on the EUDEL Trade team. Working for the European Commission on critical pieces of regulation such as the Digital Services Act (DSA), Digital Markets Act (DMA), and Artificial Intelligence Act (AIA) deepened her passion for responsible, trustworthy AI that serves humanity.
Finally, eleven months ago, Evi joined Credo AI, a responsible AI governance platform, as Director of Global Policy. To enhance responsible AI governance, Credo AI uses “Credo AI Lens,” an open-source Python library, to conduct dataset and AI model analyses, which are then explained, mapped, measured, and managed in the Credo AI Responsible AI Governance Platform. With Credo AI’s contextual AI governance software platform, organizations can maximize their return on AI investment and minimize AI-related risks by ensuring their AI is fair, compliant, safe, secure, auditable, and human-centered across the entire AI lifecycle.
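To make this concrete, below is a minimal, hypothetical sketch of the kind of dataset and model analysis a library like Credo AI Lens automates: training a simple classifier, then computing both a performance metric and a basic fairness metric (demographic parity difference) across a protected group. This is not the Credo AI Lens API; the data, model, and metric choices are illustrative assumptions built only on scikit-learn and NumPy.

    # Illustrative sketch, NOT the Credo AI Lens API: the dataset, model,
    # and metrics below are assumptions showing the kind of analysis such
    # a governance library performs.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(42)

    # Hypothetical tabular dataset: two features plus a binary "group"
    # attribute standing in for a protected characteristic.
    n = 2000
    X = rng.normal(size=(n, 2))
    group = rng.integers(0, 2, size=n)
    y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

    X_train, X_test, g_train, g_test, y_train, y_test = train_test_split(
        X, group, y, test_size=0.3, random_state=0
    )

    model = LogisticRegression().fit(X_train, y_train)
    pred = model.predict(X_test)

    # Performance metric: overall accuracy on held-out data.
    print(f"accuracy: {accuracy_score(y_test, pred):.3f}")

    # Fairness metric: demographic parity difference, i.e. the gap in
    # positive-prediction rates between the two groups.
    rate_g0 = pred[g_test == 0].mean()
    rate_g1 = pred[g_test == 1].mean()
    print(f"demographic parity difference: {abs(rate_g1 - rate_g0):.3f}")

In a governance platform, metric results like these would then be mapped against policy requirements (for example, thresholds set by internal policy or a regulation such as the EU AI Act) and tracked across the AI lifecycle.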
Evi credited her time at ITI and EUDEL for her robust interest in technology policy, particularly in AI, and noted that working with experienced colleagues on the AI working group at ITI and at EUDEL inspired her to further pursue her curiosities in the field. Now, at Credo AI, she continues to pursue the question: “how can we think about human reactions and human interactions in the digital realm?” Evi’s work focuses on collaborating with her team to interpret and translate national policies and regulations, helping to determine ways to identify, measure, and manage AI-related risks. Defining what terms like “trustworthy” and “transparent” mean for AI-enabled applications requires a cross-cultural conversation centered on values and ways to mitigate risks, at both the domestic and global level.
Responsible AI is an emerging field of research that is still growing, and policymakers are still learning “what good looks like” when it comes to AI. I was especially interested in Evi’s thoughts on the emergence of ChatGPT and the future of AI. She emphasized that this is an ever-evolving, active field of research, and that policymakers are eager to set guardrails for large language models and foundation models like ChatGPT, which have been widely adopted and touted as part of an “AI Gold Rush.” Evi noted: “One of the key questions in AI policy at the moment is, ‘how are foundation models (FMs) and large language models (LLMs) distinct from “traditional” machine learning, and what are the unique risks associated with FMs/LLMs?’ How are they defined? Stanford has a relatively widely accepted definition of (and research surrounding) foundation models, but it is not yet universally accepted. Secondly, what are the new risks that FMs and LLMs pose? Some of the risks are similar to those of ‘traditional’ machine learning, but FMs/LLMs pose new risks to both inputs (i.e., risks of bias and manipulation of training data, intellectual property and copyright issues, transparency, and privacy issues related to unorganized data used to develop the models) and outputs (i.e., misalignment, false content, toxicity, hateful/abusive content, disinformation, content generation without disclosure, and confabulations). IBM recently published a paper on policymaker considerations specific to FMs/LLMs that is a great resource.
“Policymakers have consistently grappled with the question of liability across the AI value chain, and different regions have approached AI regulation differently (the EU AI Act, the UK National AI Strategy, and the various regulatory options the United States is considering as a follow-up to the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights). Post-ChatGPT, policymakers are now asking about responsibility for stakeholders throughout the ‘Generative AI stack,’ including the responsibilities of LLM/FM providers (i.e., companies like Anthropic, Baidu, Google, Meta, NVIDIA, or OpenAI) versus developers of applications built on an LLM/FM (i.e., companies like Copy.ai, HyperWrite, Jasper, Mem, or Writer).”
In her current position, Evi noted the value of working with amazing teams of engineers and data scientists who are real experts in this space and are thinking deeply about questions like these. Going back to her initial point, she highlighted: “I’ve learned so much from and been empowered by my team, including our CEO, Navrina Singh. It’s really important to respect, enjoy, and learn from the people you are working with; to me, that is key to empowering yourself to grow in your career over time.”
Evi also utilizes the knowledge and skills she gained throughout the EUPS program to solve these key problems. She explained that employers, especially in the field of policy analysis, value the ability to “comprehend and succinctly but accurately summarize complex forms of governance and networks of legislation.” Knowledge of the EU institutions themselves and the policymaking process, as well as the ability to analyze and synthesize a great deal of complicated legislation, are marketable skills that students of the EUPS program can continue to use to their advantage in the professional world. In her current position, these skills have aided Evi’s ability to thoroughly understand key regulatory proposals such as the EU AI Act (she shared her recent Credo AI blog on the EU AI Act as an example). The curriculum of the EUPS program also helps alumni build the necessary skills to solve complex issues in a cross-cultural setting.
Thank you, Evi, for sharing your experience and insights with me. I appreciate your continued connection with the EUPS program and willingness to act as a guide for current students aspiring to start professional careers in public policy. It will be interesting to see how Responsible AI companies like Credo AI and FM/LLM tools like ChatGPT develop over the next few years, and I look forward to seeing you at the forefront of global technology governance efforts!