WASHINGTON, D.C. [Brown University] — To maximize the benefits of artificial intelligence while cutting down on its potential harm to society, Brown University computer scientist Suresh Venkatasubramanian urged members of Congress to be proactive about the technology and establish guidelines and protections so AI-based systems are developed and deployed responsibly.
“The truth is AI systems are not magic: AI is technology,” Venkatasubramanian said at a March 8 U.S. Senate hearing. “Like any other piece of technology that has benefited us — drugs, cars, planes — AI needs guardrails so we can be protected from the worst failures, while still benefiting from the progress AI offers.”
The hearing on Capitol Hill was organized by the Committee on Homeland Security and Governmental Affairs, which is the Senate’s primary oversight committee, and focused on the risks and opportunities of AI. Members of the committee invited Venkatasubramanian to testify as a witness given his extensive background on the issue.
Venkatasubramanian is a researcher and educator immersed in studying the development and impact of technology and AI. He’s spent the last decade researching the impact these systems have on people’s rights, opportunities and ability to access services. He recently served as an advisor to the White House Office of Science and Technology Policy, where he helped develop the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.”
During his testimony, Venkatasubramanian outlined key terms surrounding AI, explained how the systems work, and described the many ways they can, and already do, fail, such as by exhibiting discriminatory behavior. He stressed to lawmakers that machine learning algorithms, which crunch massive amounts of historical data to make predictions about the future, are already widely incorporated into everyday life, affecting people in a variety of real-world situations.
“Virtually every sector of society is now touched by machine learning, and the most consequential decisions and experiences in our lives are mediated by algorithms,” Venkatasubramanian said. “Where we go to school, how we learn, how we get jobs, whether we can buy a house, what kind of loan we get, whether we get credit to start a small business, whether we are surveilled by law enforcement or incarcerated before trial, how long a sentence for a convicted individual is and whether we can get parole — the list goes on and on and keeps expanding.”
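To make that point concrete, here is a minimal, hypothetical sketch in Python, not drawn from the testimony or the hearing: a toy “model” that learns only from historical loan decisions and, as a result, carries any past disparity between groups straight into its future predictions. The group names and numbers are invented purely for illustration.

```python
# Hypothetical illustration (not from the testimony): a model trained only on
# historical decisions reproduces whatever bias those decisions contained.
from collections import defaultdict

# Invented historical loan decisions: (applicant_group, approved)
historical_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# "Training": record the historical approval rate for each group.
approvals = defaultdict(list)
for group, approved in historical_decisions:
    approvals[group].append(approved)
approval_rate = {g: sum(v) / len(v) for g, v in approvals.items()}

# "Prediction": approve a new applicant if their group's historical rate exceeds 0.5.
def predict(group: str) -> bool:
    return approval_rate[group] > 0.5

print(approval_rate)        # {'group_a': 0.75, 'group_b': 0.25}
print(predict("group_a"))   # True  -- past advantage carried forward
print(predict("group_b"))   # False -- past disadvantage carried forward
```

Real systems are far more complex, but the underlying dynamic Venkatasubramanian described is the same: predictions built from historical data inherit the patterns, including the inequities, of that history.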
Because AI already affects such a wide swath of society, Venkatasubramanian believes protections should be established now, before it’s too late. He shared a list of recommendations that legislators could use to govern AI, including rigorous and independent testing, full transparency from developers about the algorithms they use, and limits on AI’s use of personal data.
“Congress should enshrine these ideas in legislation not just for government use of AI, but for private sector uses of AI that have people-facing impact,” he said. “My work is to imagine technological futures. There is a future in which automated technology is an assistant: It enables human freedom, liberty and flourishing — where the technology we build is inclusive and helps us all achieve our dreams and maximize our potential. But there’s another future in which we are at the mercy of technology, where the world is shaped by algorithms and we are forced to conform — in which those who have access to resources and power control the world and the rest of us are left behind.”
Venkatasubramanian knows which future he wants to see and what it will take to work toward it. “I believe that it is our job to lay down the rules of the road — the guardrails and protections — so that we can achieve that future,” he said.
The hearing also included experts from the RAND Corporation and the Center for Democracy and Technology discussing the impacts of AI and the need for governance. Venkatasubramanian’s testimony and a full video of the proceedings are available on the Senate Committee on Homeland Security and Governmental Affairs website.