Tech companies just need to adapt the bug bounty system they already use to detect vulnerabilities in code.
In May 2021, Twitter user @capohai shared a screenshot of a Google search for “what do terrorists wear on their head,” which returned, as the first result, the Palestinian keffiyeh scarf. At the same time, the French Senate had just voted to prohibit women under 18 from wearing hijabs in public, and President Macron’s LREM political party had pulled support from candidate Sarah Zemmahi for wearing a hijab in a campaign ad. How many others asked Google the same question and took its response as validation of their own prejudices, or as an objective statement of fact? How many were hurt by the results?
The outrage over Google’s tacit equation between Palestinians, headscarves, and terrorists spilled from social media into the news, but when the same search is performed today, the keffiyeh is still the top result.
It makes sense that @capohai turned to Twitter, because for most people who notice tech companies’ unethical behavior—privacy breaches, the spread of hate speech and disinformation, biased results, and more—posting about it on social media is one of the only options available. But as this example shows, this retribution model does little to actually correct ethical violations.
Looking at the bigger picture, there have been calls for greater regulation of the tech industry, and such regulation is deeply necessary. But legislation may take a long time to pass and implement, and it is generally insufficient to stop the unforeseen ethics failures endemic to technology. Because algorithms tend to express our (bad) values in unexpected ways that require constant updating and fine-tuning to correct, regulations, no matter how deftly and broadly written, cannot foresee and stop every future issue.
But there is an option that does not rely on either social media outrage or new regulation. Tech companies are actually already configured for handling ethics issues at scale. They just need to adapt their existing bounty system.
Right now, hundreds of companies and organizations, great and small, offer bounties ranging from thousands to millions of dollars to those who find vulnerabilities in their code that bad actors could exploit. Google’s bounty program even covers applications sold through its Play store. Apple, which only recently began a bounty program (with compensation of up to a million dollars for the most serious types of exploits), takes a similar approach. In its program notes, the company states that it will “reward researchers who share with us critical issues and the techniques used to exploit them,” providing public recognition and matching donations of the bounty payment to charities.
Imagine how much better the products and services of Silicon Valley could be if these companies grouped ethics violations under “critical issues and the techniques used to exploit them” and began offering corresponding bounties. After all, ethical violations can cause just as many problems for a company and its users as a bit of leaked code. The above language wouldn’t even need to be changed. And an ethics bounty program could adopt the rest of Apple’s rules, which include: 1) you must be the first to report the issue; 2) you must clearly explain and show evidence of what happened; 3) you can’t disclose the issue publicly before Apple has had a chance to patch it; and 4) you can earn a bonus if the company inadvertently reintroduces a known problem in a later patch.
For users, a bounty system would encourage people to search for ethics violations and report them more quickly. For companies, this system could help them locate and address problems before they cause harm to more customers, generate negative press, and potentially destabilize governments. Granted, some companies may be unfazed by negative press, the loss of customers, and the furthering of prejudices, but they are still likely to be motivated by the long-term stability and goodwill such a program could create. Having a public record of responding thoughtfully to ethics issues in the past can also help a company if it wants to recruit talented workers and grow into other markets and industries.
There are many situations in which ethical concerns will prove challenging to fix holistically, but short-term modifications—like manually changing a search result and explaining why—are often possible while companies work on long-term solutions.
This bounty system is necessary on a global scale, as far too little attention is paid to the vast number of online ethics violations that are committed in languages other than English and that do not cause direct harm in the United States. In addition, a bounty system that makes it easier for average users to contact companies and see results from their report can change the public’s understanding of these technologies and the role people play in continuously maintaining, fixing, updating, and transforming them. The internet is not a static and stable object, and imagining it as such only makes it harder to transform it into something that works better for everyone.
An ethics bounty system may sound unfeasible, but it’s far more realistic and easier to quickly implement than many other ideas being discussed right now, including the governmental regulations that we also need. After all, users will continue to find ethics issues and post about them, regardless of what Silicon Valley does. A bounty system would just provide a less damaging and more beneficial process. And while regulations are generally national in scope, this system could help communities around the world. (A consortium of AI researchers has already recommended paying developers who find bias in their systems, and Twitter held an “algorithmic bias bounty” contest last year, but such a program should be far more sweeping and open to all contributors.)
Ideally, ethics bounties would be just as lucrative as current bug bounties, with payouts tied to the level and type of harm a violation could cause. To prevent cover-ups and blatant lying, the system could be governed by a nonprofit center staffed by a large, globally diverse group of ethicists charged with taking their advice and cues from the communities most likely to be affected by a potential ethics violation.
This center would be charged with determining the extent of ethical harms, setting the price of bounties, and informing the individual companies what should be done. While ethics panels inside companies are not typically permitted to release information on their investigations or enforce their findings, this independent center would need to be responsible for releasing details about ethics violations after individual companies have had a chance to respond on their own. It would be important to make these ethics problems public to help others avoid making the same mistakes.
An ethics bounty system could help Silicon Valley make amends and repair the communities and individuals its products have hurt. In the process, it could help make these companies more directly accountable to the communities they serve.
All Rights Reserved for Jonathan Cohn