Fund seeks to advance public awareness of AI

By Julien Happich

Rather than focusing on common AI applications, the fund will break down silos among disciplines and take an informative role for society as a whole.

AI’s rapid development brings with it a host of tough challenges. To help address them, the Ethics and Governance of Artificial Intelligence Fund has been created with the aim of advancing public understanding of AI. The fund launched with initial funding of $27 million from the Knight Foundation, and counts LinkedIn co-founder Reid Hoffman, the Omidyar Network, the William and Flora Hewlett Foundation and Jim Pallotta, founder of the Raptor Group, among its members.

The MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University will serve as the founding anchor institutions and are expected to reinforce cross-disciplinary work and encourage intersectional peer dialogue and collaboration.

Rather than focusing on common AI applications, the fund aims to break down silos among disciplines and take an informative role for society as a whole, complementing and collaborating with existing efforts and communities, such as the upcoming public symposium “AI Now,” which is scheduled for July 10 at the MIT Media Lab.

It will also oversee an AI fellowship program, identify and provide support for collaborative projects, build networks out of the people and organisations currently working to steer AI in directions that help society, and also convene a “brain trust” of experts in the field.

The Media Lab and the Berkman Klein Center for Internet and Society will leverage a network of faculty, fellows, staff and affiliates to address society’s ethical expectations of AI, using machine learning to learn ethical and legal norms from data and using data-driven techniques to quantify the potential impact of AI.

“The thread running through these otherwise disparate phenomena is a shift of reasoning and judgment away from people,” said Jonathan Zittrain, co-founder of the Berkman Klein Center and professor of law and computer science at Harvard University. “Sometimes that’s good, as it can free us up for other pursuits and for deeper undertakings. And sometimes it’s profoundly worrisome, as it decouples big decisions from human understanding and accountability. A lot of our work in this area will be to identify and cultivate technologies and practices that promote human autonomy and dignity rather than diminish it.”

First seen on EE Times Europe.