Don’t let industry write the rules for AI

Technology companies are running a campaign to bend research and regulation for their benefit; society must fight back

Yochai Benkler

Industry has mobilized to shape the science, morality and laws of artificial intelligence. On 10 May, letters of intent are due to the US National Science Foundation (NSF) for a new funding programme for projects on Fairness in Artificial Intelligence, in collaboration with Amazon. In April, after the European Commission released the Ethics Guidelines for Trustworthy AI, an academic member of the expert group that produced them described their creation as industry-dominated “ethics washing”. In March, Google formed an AI ethics board, which was dissolved a week later amid controversy. In January, Facebook invested US$7.5 million in a centre on ethics and AI at the Technical University of Munich, Germany.

Companies’ input in shaping the future of AI is essential, but they cannot retain the power they have gained to frame research on how their systems affect society or on how we evaluate those effects morally. Governments and publicly accountable entities must support independent research, and insist that industry shares enough data for it to be held accountable.

Algorithmic decision-making systems touch every corner of our lives: medical treatments and insurance; mortgages and transportation; policing, bail and parole; newsfeeds and political and commercial advertising. Because algorithms are trained on existing data that reflect social inequalities, they risk perpetuating systemic injustice unless people consciously design countervailing measures. For example, AI systems to predict recidivism might incorporate differential policing of black and white communities, or those to rate the likely success of job candidates might build on a history of gender-biased promotions.
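
This mechanism is concrete enough to demonstrate in a few lines of code. Below is a minimal, hypothetical sketch in Python (using numpy and scikit-learn); the synthetic data, feature names and “bias penalty” are illustrative assumptions, not figures from any real hiring system. A screening model trained on promotion records that encode a penalty against women goes on to rate equally skilled women as less likely to succeed.

    # Hypothetical sketch: a model trained on gender-biased promotion
    # records reproduces that bias. All data is synthetic and illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Two groups with identical underlying skill (0 = men, 1 = women).
    gender = rng.integers(0, 2, n)
    skill = rng.normal(0.0, 1.0, n)

    # Historical labels: promotion depended on skill, but women also faced
    # an arbitrary penalty (the "history of gender-biased promotions").
    bias_penalty = 1.0
    promoted = (skill - bias_penalty * gender + rng.normal(0.0, 0.5, n)) > 0

    # Train a screening model on that history. Gender is included explicitly
    # to make the mechanism visible; in practice, a feature merely correlated
    # with gender would leak the same signal.
    model = LogisticRegression().fit(np.column_stack([skill, gender]), promoted)

    # Score a fresh, equally skilled applicant pool for each group.
    test_skill = rng.normal(0.0, 1.0, n)
    for g, label in [(0, "men"), (1, "women")]:
        rate = model.predict(np.column_stack([test_skill, np.full(n, g)])).mean()
        print(f"predicted 'likely to succeed' rate for {label}: {rate:.1%}")

Note that simply dropping the gender column would not fix this: any feature correlated with gender lets a model reconstruct the historical penalty by proxy, which is why countervailing measures must be designed deliberately rather than assumed away.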

Inside an algorithmic black box, societal biases are rendered invisible and unaccountable. When designed for profit-making alone, algorithms necessarily diverge from the public interest: information asymmetries, bargaining power and externalities pervade these markets. For example, Facebook and YouTube profit from people staying on their sites and from offering advertisers technology to deliver precisely targeted messages. Those practices can turn out to be illegal or dangerous. The US Department of Housing and Urban Development has charged Facebook with enabling discrimination in housing adverts (correlates of race and religion could be used to affect who sees a listing). YouTube’s recommendation algorithm has been implicated in stoking anti-vaccine conspiracies. I see these sorts of services as the emissions of high-tech industry: they bring profits, but the costs are borne by society. (The companies have stated that they work to ensure their products are socially responsible.)

From mobile phones to medical care, governments, academics and civil-society organizations endeavour to study how technologies affect society and to provide a check on market-driven organizations. Industry players intervene strategically in those efforts.

When the NSF lends Amazon the legitimacy of its process for a $7.6-million programme (0.03% of Amazon’s 2018 research and development spending), it undermines the role of public research as a counterweight to industry-funded research. A university abdicates its central role when it accepts funding from a firm to study the moral, political and legal implications of practices that are core to the business model of that firm. So too do governments that delegate policy frameworks to industry-dominated panels. Yes, institutions have erected some safeguards. The NSF will award research grants through its normal peer-review process, without Amazon’s input, but Amazon retains the contractual, technical and organizational means to promote the projects that suit its goals. The Technical University of Munich reports that the funds from Facebook come without obligations or conditions, and that the company will not have a place on the centre’s advisory board. In my opinion, the risk and perception of undue influence are still too great, given the magnitude of this sole-source gift and how directly it bears on the donor’s interests.

Today’s leading technology companies were born at a time of high faith in market-based mechanisms. In the 1990s, regulation was restricted, and public facilities such as railways and utilities were privatized. Initially hailed for bringing democracy and growth, pre-eminent tech companies came under suspicion after the Great Recession of the late 2000s. Germany, Australia and the United Kingdom have all passed, or are planning, laws that impose large fines on firms or personal liability on executives for the ills for which the companies are now blamed.

This new-found regulatory zeal might be an overreaction. (Tech anxiety without reliable research will be no better as a guide to policy than was tech utopianism.) Still, it creates incentives for industry to cooperate.

Governments should use that leverage to demand that companies share data in properly protected databases, with access granted to appropriately insulated, publicly funded researchers. Industry participation in policy panels should be strictly limited.

Industry has the data and expertise necessary to design fairness into AI systems. It cannot be excluded from the processes by which we investigate which worries are real and which safeguards work, but it must not be allowed to direct them. Organizations working to ensure that AI is fair and beneficial must be publicly funded, subject to peer review and transparent to civil society. And society must demand increased public investment in independent research rather than hoping that industry funding will fill the gap without corrupting the process.

Nature 569, 161 (2019)

doi: 10.1038/d41586-019-01413-1
