Until the collection of data is regulated, having power over your own information is more of a nightmarish responsibility than an appealing right.
The testimony of Facebook whistleblower Frances Haugen sparked the latest flare-up in a never-ending series of revelations on how companies and governments mine and commercialize our personal data. In an attempt to put consumers back in the driver’s seat, recent updates to data protection regulations such as the GDPR in the European Union and the CCPA in California have mandated transparency and control as critical pillars of privacy protection. In the words of the European Commission: “It’s your data—take control!”
Empowering consumers by giving them a say is a noble goal that certainly has a lot of appeal. Yet, in the current data ecosystem, control is far less of a right than it is a responsibility—one that most of us are not equipped to take on. Even if our brains were to magically catch up with the rapidly changing technology landscape, protecting and managing one’s personal data would still be a full-time job.
Think of it this way: Being in charge of your sailing boat is absolutely wonderful if you are drifting along the Mediterranean coast on a beautiful day. You can decide which of the many cute little towns to steer toward, and there are really no wrong choices. Now let’s imagine being in charge of the same sailing boat in the middle of a raging thunderstorm. You have no idea which direction to go in, and none of your options seem particularly promising. Having the “right” to control your own ship under these circumstances might not be very appealing, and could very easily end in disaster.
And yet, that’s exactly what we do: Current regulations drop people in the middle of a raging technology sea and bless them with the right to control their personal data. Instead of forcing the tech industry to make systemic changes that would create a safer and more amenable ecosystem, we put the burden of safeguarding personal data on consumers. This approach protects the creators of the storm more than the sailors.
For users to be able to exercise control over their personal data successfully, regulators need to first create the right environment that guarantees basic protection, in the same way the Securities and Exchange Commission regulates the investment world and protects individuals from making bad decisions. Under the proper conditions, individuals can choose among a series of desirable outcomes, rather than a mix of undesirable ones. In other words, we first need to tame the sea before handing individuals more control over their boats. There are a few steps that regulators can take immediately to calm the waters.
First, we need to make it costly for companies to collect and use personal data by taxing companies for the data they collect. If they have to pay a price for every piece of data they gather, they will think twice about whether they really need it.
Regulators also need to mandate that defaults are set to sufficient levels of protection. Users’ data should be guarded unless they choose otherwise, a concept termed “privacy by design.” Nobody has time to make privacy protection their full-time job. Safeguarding information needs to be easy. Privacy by design reduces the friction on the path to privacy and guarantees that basic rights are automatically protected.
Finally, we need to push for a widespread implementation of existing technological advances that allow consumers to “have it all.” Because of the way our brains are wired—most of us favor concrete and certain rewards in the now over abstract and uncertain rewards in the future—our privacy concerns don’t stand a chance when pitted against the desire for immediate insights, convenience, and service. The shift toward client-side processing of algorithms and user-based computer models, a technology known as federated learning, could help users accomplish both: benefit from their data without sacrificing their privacy. The fact is, you don’t need to upload all your data to a central server to get personalized recommendations and convenient service that’s tailored to you. We all have mini supercomputers in our pockets, and with the help of new technological advances, it’s possible to run and improve recommendation algorithms locally on individual phones without any data ever leaving its safe harbor.
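The core idea of federated learning can be sketched in a few lines of Python. This is a toy illustration of federated averaging, not any company’s actual system: the devices, data, model, and learning rate are all invented for the example. Each simulated phone improves a shared model on its own private data, and only the resulting model weights (never the raw data) are sent back and averaged by the server:

```python
def local_update(weights, data, lr=0.1, epochs=5):
    """One device improves the shared model on its private data
    (simple 1-D linear regression via gradient descent)."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return (w, b)  # only the weights leave the device, never the raw data

def federated_round(global_weights, device_datasets):
    """Server step: collect locally trained weights and average them."""
    updates = [local_update(global_weights, d) for d in device_datasets]
    n = len(updates)
    avg_w = sum(u[0] for u in updates) / n
    avg_b = sum(u[1] for u in updates) / n
    return (avg_w, avg_b)

# Three hypothetical phones, each holding private (x, y) samples
# drawn from the same underlying relationship y = 2x + 1.
devices = [
    [(0.0, 1.0), (1.0, 3.0)],
    [(2.0, 5.0), (3.0, 7.0)],
    [(1.5, 4.0), (2.5, 6.0)],
]

weights = (0.0, 0.0)
for _ in range(50):
    weights = federated_round(weights, devices)
# weights drift toward roughly (2, 1) without any raw sample
# ever being uploaded to the server.
```

Real deployments add refinements (weighting by dataset size, secure aggregation, differential privacy), but the privacy property is the same one the paragraph above describes: the server learns a useful model while the data stays on the phone.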
The main argument against these common-sense steps is that stricter regulations would ruin all the amazing benefits that come from companies using our data. But innovations like Google’s GPS-based navigation system, or Alexa’s voice recognition are merely the brightest lights in a dark cloud of data-mining. You don’t benefit from your weather app accessing your photo gallery and tapping into your microphone. You don’t benefit from Facebook saving every keystroke of yours (including those that you decided to delete). And in most cases, you also don’t benefit from your data being sold to third parties, either directly or indirectly, in the form of advertising. Taxing companies for the collection of personal data, shifting the default to sufficiently high levels of protection, and using technologies to process personal data locally would force companies to find ways to create real value for their customers. Lip service and abstract promises would no longer do the trick. If companies don’t create value, they don’t get data. And if they don’t find ways to allow their customers to enjoy their product without compromising their personal information, another more progressive competitor will.
Creating a more user-friendly, human-centered, protective data ecosystem is critical for consumer empowerment. And it’s a step that must be taken before we can possibly live up to the regulatory promises of taking control of our own data. Systemic changes like the ones I suggested don’t come easy. They require courage and persistence. But it is only when the sea is calmed that having control over one’s data becomes a “right” again, and not just a responsibility we are bound to fail at on our own.
All Rights Reserved for Sandra Matz