One of the most important data privacy regulations introduced in the last two decades may be fundamentally wrong in how it approaches the privacy and security of data.
Data Is the New Oil
A century ago, oil companies dictated the course of history, their power fueled by the most valuable commodity of the day. Governments all over the world were quick to regulate the oil market and rein in the giant companies that were rapidly expanding.
Nowadays, companies such as Amazon, Facebook or Google are often compared to oil companies since they currently control what seems to have become the most precious thing in the 21st century — data. Being in control of data means being in control of people, and history seems to be repeating itself as some governments are doing their best to tame this beast that many believe could become dangerous for everyone once unleashed.
In other words, data plays a huge role in our society, and whoever owns the largest amounts of crude data is most likely to extract valuable information from it and monetize it. Once extracted, refined, and piped, the final product is ready to fuel the information economy, but deep down among the zeros and ones lie pieces of information about me, you, and every other person who, consciously or not, decided to share personal information with the world.
If you’re not paying for something, you’re not the customer — you’re the product being sold. Today, many business models rely solely on using your data to create monetary value without providing true compensation.
The crucial questions all of us should be asking are: Who owns the pieces of information about us? Is it still us, or is it the company that extracts and refines the new oil? And how can the manipulation of that data affect individuals or society as a whole?
Regulation as Defense
Although there is no single answer everybody agrees on regarding the questions above, we can all agree that things will get out of control if we don’t take some steps to protect private data as much as possible.
There are two approaches we should consider when it comes to data protection:
- The “Don’t be Evil” approach
- The “Can’t be Evil” approach
“Don’t be Evil” is essentially the approach taken by the European Union and its lawmakers, who came up with the GDPR. The phrase was actually coined by Google employee Paul Buchheit (or Amit Patel, according to some sources), who was aware of the power his company would one day obtain.
He advocated for a code of conduct that wouldn’t be based on exploiting users, which was a common practice among Google’s competitors around the year 2000. In the case of the GDPR, the phrase “don’t be evil” is a pretty accurate description of what this regulation is aiming at.
This was a somewhat expected move, since the “Don’t be Evil” approach is how virtually every legal system in the world solves problems. But is it the right way to approach data privacy?
However well-intentioned, it creates more problems than it solves. Although the primary goal of the GDPR is to protect citizens from corporations that are powered by their data, things haven’t been going well since the introduction of the regulation.
Generally speaking, laws are quite effective at regulating behavior, but for a law to succeed, its proposed standards need to be implemented on a large scale. The main issue here is that proposing standards is simply not enough if there is no effective way to enforce them and police possible breaches.
One of the major problems the GDPR created is an even bigger disparity between giants such as Facebook and Google and smaller companies. According to this Bloomberg article, EU technology firms saw a 17.6% reduction in weekly venture deals. Start-ups and new companies were the first to receive the blows struck by the GDPR.
On the other hand, giant companies that hold almost all the data only further profited from it due to the fact that they have enormous legal firepower to fight all kinds of regulations. Furthermore, brands such as Google, Amazon, and Facebook are trusted by the majority of people who visit those platforms, whereas many people would prefer not to share their personal data with sites representing small companies.
Another fear is that the sheer number of restrictions will stifle innovation. Innovation has been a powerful force in business for a very long time. That force is synonymous with progress in every possible way, but heavily regulated industries such as healthcare have often been denied this progress because new companies had their wings clipped by regulators. The IT industry is still unmatched in the passion, high spirits, and overall vigor with which it moves forward. Regulation is bound to slow it down, but what the GDPR prescribes is not the only possible way out.
The GDPR assumes there is always a central entity that owns the data being processed. However, it is sometimes better for internet users that no such entity exists at all, and that they remain the sole owners of their data. This type of approach is almost entirely absent from the GDPR, which leaves no room for a solution that works through technology rather than through law. The technological aspect is overlooked because the regulation assumes the legal one is more practical.
The GDPR assumes that the majority of people will act in good faith and that those who do not will be punished. But when someone is punished, the regulators are only dealing with the consequences, since the damage has already been done. Punishing people for misconduct does not repair the damage, and the consequences of their actions linger until someone repairs them. Simply put, regulators end up with twice as much work. Plus, nobody likes being punished.
That cannot be a legitimate approach to the data protection issue that is, after all, created as a result of technological advancement. But what if technology can jump in to save the day and offer a valid solution that would not create problems such as the ones discussed above?
There is a great movie directed by and starring Ricky Gervais called The Invention of Lying, the plot of which is set in a fictional world where all people tell the truth all the time. Long story short, the step in evolution where people learned to lie was obviously skipped in this made-up universe, and all of its residents are programmed to speak only of “what is”. The plot follows an office wage slave whose neurons somehow get reconnected and he becomes the single person in the world who is able to talk about “what isn’t.”
If you like romantic comedies with sci-fi elements, this is a great watch for a Sunday afternoon, but that’s not our point here. The point is that the movie is a great introduction to the “Can’t be Evil” approach, except that it’s not humans who will be the subject of such an experiment, but machines.
Although it would be disastrous for humankind to be deprived of lying, it wouldn’t be for technology, and that’s what the “Can’t be Evil” approach is all about.
In other words, people cannot “be evil” if the technology they use simply doesn’t let them. Therefore, the only logical solution to the data privacy problem is to address it at the technological level, as that is the only option that rules out every possibility except the one where users are the sole owners of their data and in total control of it. This approach would put an end to the “moral issue” at the heart of the “Don’t be Evil” approach.
Moreover, the “Can’t be Evil” approach is the only scalable solution. It would eradicate problems that are connected to scalability, such as untrustworthiness of news sources, an increasing number of successful cyberattacks on important institutions, loss of personal data, centralization of power by a couple of “big players,” and more. Simply put, the amount of malpractice related to data has long surpassed the capacity of the “Don’t be Evil” approach to monitor such wrongdoings.
When it comes to globally adopted products, there are simply too many challenges to overcome for laws to be applied effectively on a global scale.
Privacy and Security by Design
Instead of coming up with regulations that would include monitoring the new Internet and using third-party tools for protection, it is better to build privacy and security in the core protocol and make them an integral part. This approach is often referred to as “Privacy by Design” or “Security by Design.”
Many new technologies and systems are adopting the “Privacy by Design” and “Security by Design” approaches, which means that when a product is being designed, privacy and security are treated as top priorities, virtually equal to the product’s primary purpose. Adopting these approaches is made easier by a set of Security by Design principles to stick to when building such products.
One of the areas where the “Security by Design” practice is becoming truly essential is the Internet of Things. Now that the things around us have access to the Internet and can be subject to cyberattacks, manufacturers have to pay attention to security from the moment they start working on their products.
Let’s take a car as an example. When manufacturers build standard cars, they already incorporate a kind of “Security by Design”: alarms, automatic locks, airbags, remote keys, and more.
Cars, just like every other product nowadays, are keeping up with the latest technological advancements. In fact, many would argue that cars are running ahead of technology, as the automotive industry is often a pioneer of technology-related breakthroughs. Automobiles are rapidly connecting to the Internet of Things these days, with many automotive giants planning for their future models to be online as well.
Now that many cars are part of the IoT, they can be attacked by hackers, malware, and whatnot, and that is yet another perspective that needs to be taken into account as soon as the manufacturing process starts. In other words, manufacturers of connected cars need to keep the cybersecurity of their cars in mind during all stages of construction, just as they keep standard security measures in mind.
One piece of technology that was built with both privacy and security in mind is blockchain. Since it is a decentralized technology, privacy and security are already in its nature. Blockchain doesn’t include a single point of attack that would endanger data since data is stored in a decentralized manner across all the nodes that are part of the blockchain network. In order to explain how blockchain can jump in and offer better solutions than the GDPR, we need to dive deeper into this technology. Read on!
CRUD vs. CRAB
The “Security and Privacy by Design” approach, which follows naturally from the three pillars of blockchain (transparency, immutability, and decentralization), creates an innovative perspective on how we interact with software and databases.
Before we discuss that perspective, let’s take a step back and put our current mindset into words. CRUD, or Create–Read–Update–Delete, has been the epitome of how we’ve thought about software and data storage since the dawn of time. These basic operations of any persistent storage system are what we learned to account for when designing software, as well as security and privacy policies. We take CRUD for granted.
Blockchain operations, on the other hand, can be described as CRAB, or Create–Retrieve–Append–Burn. A blockchain cannot Update a transaction; it can only Append a new value by adding a new transaction.
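To make the contrast concrete, here is a minimal sketch of CRAB semantics in Python. The class and field names are hypothetical, and this toy ledger is not a real blockchain; it only illustrates that an “update” is really a new appended entry and that history is never overwritten.

```python
import hashlib
import json

class AppendOnlyLedger:
    """Toy append-only ledger illustrating CRAB semantics (illustrative only)."""

    def __init__(self):
        self._entries = []  # Create/Append only; nothing is ever updated or deleted

    def append(self, key, value):
        entry = {
            "key": key,
            "value": value,
            "prev_hash": self._entries[-1]["hash"] if self._entries else None,
        }
        # Each entry is hash-linked to the previous one, like blocks in a chain.
        entry["hash"] = hashlib.sha256(
            json.dumps({k: entry[k] for k in ("key", "value", "prev_hash")},
                       sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)
        return entry["hash"]

    def retrieve(self, key):
        # "Retrieve" walks history backwards; the latest appended value wins.
        for entry in reversed(self._entries):
            if entry["key"] == key:
                return entry["value"]
        return None

    def history(self, key):
        # Unlike CRUD's Update, every previous value is still on the ledger.
        return [e["value"] for e in self._entries if e["key"] == key]

ledger = AppendOnlyLedger()
ledger.append("alice_balance", 100)
ledger.append("alice_balance", 75)       # an "update" is really a new entry
print(ledger.retrieve("alice_balance"))  # 75
print(ledger.history("alice_balance"))   # [100, 75]
```

Note that the old value 100 is still readable via `history`: nothing a CRAB system has ever recorded goes away, which is exactly the property the Burn concept below has to work around.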
The most crucial aspect of CRAB is the Burn concept. Let’s take a look at an excerpt from the original CRAB blog post describing Burn:
Burn: Deleting something from a blockchain conflicts with the immutability. We can stop the ability to TRANSFER the asset by transferring it to an un-spendable public key. We generated an artificial public key that looks like this: BurnBurnBurnBurnBurnBurnBurnBurnBurnBurnBurn. The likelihood to generate a vanity address (and know the private key) that is 11 times “Burn” is extremely low.
It is very clear that the data itself is never deleted from a blockchain. If something were deleted, it would be in direct opposition to one of the three pillars (immutability), and the chain would pretty much stop being a blockchain. That is why nothing can be deleted, only denied the ability to be transferred, as described in the excerpt above.
The only thing that is really lost in the Burn operation is the key that controls the transfer of the data. So when we Burn a data asset, we only lose the ability to transfer it any further; the asset itself stays in the same place.
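A short sketch can show what Burn does and does not do. The class, the `owner` field, and the address string are all hypothetical, but the mechanism mirrors the excerpt above: transfer control to an address nobody holds a key for, while the data itself stays put.

```python
# Toy illustration of "Burn": transfer control of an asset to an address
# for which no private key is known. Names and structure are hypothetical.
BURN_ADDRESS = "Burn" * 11  # vanity-style address with no known private key

class Asset:
    def __init__(self, data, owner):
        self.data = data    # the data itself never leaves the chain
        self.owner = owner  # only the right to transfer can change

    def transfer(self, new_owner):
        if self.owner == BURN_ADDRESS:
            raise PermissionError("asset is burned: no key can authorize a transfer")
        self.owner = new_owner

    def burn(self):
        # "Deleting" on a blockchain: hand the asset to an unspendable key.
        self.transfer(BURN_ADDRESS)

doc = Asset(data="some personal record", owner="alice_pubkey")
doc.burn()
print(doc.data)  # the record is still there...
try:
    doc.transfer("bob_pubkey")
except PermissionError as err:
    print(err)   # ...but it can never be transferred again
```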
Moreover, by using the Burn operation, we make the data pseudonymous or completely anonymous, depending on the use case and the architecture behind the product. Let’s take a look at how pseudonymization and anonymization are described on the official EU site:
The GDPR and the Data Protection Act 2018 define pseudonymization as the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that (a) such additional information is kept separately, and (b) it is subject to technical and organizational measures to ensure that the personal data are not attributed to an identified or identifiable individual.
Although pseudonymization has many uses, it should be distinguished from anonymization, as pseudonymization only provides limited protection for the identity of data subjects in many cases as it still allows identification using indirect means. Where a pseudonym is used, it is often possible to identify the data subject by analyzing the underlying or related data.
Data that has been irreversibly anonymized ceases to be “personal data”, and processing of such data does not require compliance with the Data Protection regulation. It means that organizations could use it for purposes beyond those for which it was originally obtained and can even keep it indefinitely.
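The distinction in the quoted definitions can be sketched in code. Under the assumption that the “additional information” is a secret key kept separately, pseudonymization is a keyed transformation that the key holder can still link back, while anonymization destroys that link entirely. All function and field names here are hypothetical.

```python
import hmac
import hashlib
import secrets

def pseudonymize(record, key):
    # HMAC of the identifier: the same person always maps to the same
    # pseudonym, but only a holder of `key` can confirm the linkage.
    pseudonym = hmac.new(key, record["name"].encode(), hashlib.sha256).hexdigest()
    return {"id": pseudonym, "purchase": record["purchase"]}

# The key is the "additional information" that must be kept separately
# and protected by technical and organizational measures.
key = secrets.token_bytes(32)
record = {"name": "Alice Example", "purchase": "book"}

p = pseudonymize(record, key)
# Still personal data: anyone holding `key` can re-compute and link it.
assert p["id"] == pseudonymize(record, key)["id"]

# Irreversible anonymization: discard the key and the linkable identifier,
# so nothing remains that could lead back to Alice.
anonymous = {"purchase": p["purchase"]}
```

The pseudonymized record remains personal data under the GDPR because the key re-establishes the link; only after the key and all quasi-identifiers are gone does the remaining data fall outside the regulation.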
Clash or Coexistence?
You can always choose to keep all user data off-chain and never clash with the GDPR in any way. The GDPR allows you to use blockchain that way, like any other technology; it just fails to maximize blockchain’s effectiveness.
Moreover, the GDPR does raise a valid point: a user’s identity and personal details should not be part of any product.
The concept of “identifiability” is closely linked with the process of anonymization. Even if all of the direct identifiers are stripped out of a data set, meaning that individuals are not “identified” in the data, the data will still be personal data if it is possible to link any data subjects to information in the data set relating to them.
Data can be considered “anonymized” from a data protection perspective when data subjects are no longer identifiable, having regard to any methods reasonably likely to be used by the data controller — or any other person to identify the data subject. Data controllers need to take full account of the latter condition when assessing the effectiveness of their anonymization technique.
This means that you can, in theory, keep user data on a blockchain, even a public one, and still have a product that stores nothing that could directly or indirectly lead to the user being identified.
You can also keep your user data off the chain, but that alone does not guarantee compliance with GDPR standards. Having a database that supports CRUD is not always enough.
There are two ways to approach this problem.
One way is to keep user data completely off-chain and deal with disputes as they come along. The storage containing the data should be able to perform CRUD operations and the architecture of the system should also be in line with the GDPR practices.
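The first way is often sketched as the “hash pointer” pattern: personal data lives in an ordinary CRUD store that can honor erasure requests, while the chain holds only an integrity anchor. The function names, the opaque user id, and the in-memory stores below are all hypothetical; a real system would also make sure the on-chain identifier itself is not linkable to a person.

```python
import hashlib

chain = []      # append-only; holds only hashes, never personal data
off_chain = {}  # ordinary CRUD store that can honor erasure requests

def create_record(user_id, data):
    # user_id should be an opaque, non-linkable identifier in practice.
    off_chain[user_id] = data
    digest = hashlib.sha256(data.encode()).hexdigest()
    chain.append({"user": user_id, "hash": digest})  # integrity anchor only

def erase_user(user_id):
    # GDPR "right to erasure": delete the off-chain copy. The bare hash
    # left on the chain no longer resolves to any personal data.
    off_chain.pop(user_id, None)

create_record("u1", "Alice Example, alice@example.com")
erase_user("u1")
print("u1" in off_chain)  # False: data gone, chain history intact
```

The trade-off is visible in the last line: the erasure request is satisfied off-chain while the chain’s immutability is never violated.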
The second, and probably better, way is to completely divorce the user’s identity from the application itself, making all the data you collect anonymous by nature.
The Future Is Self-Sovereign
To separate the identity of the user from their data, the concept of self-sovereign identity (SSI) was introduced. According to this Metadium article, SSI is a concept where users are the sole owners of both their digital and analog identities, and are thus able to control how their personal data is used and shared. Holders of self-sovereign identities can therefore reveal only the bits and pieces of data necessary for each interaction or transaction related to their identity.
This model enables identity holders to present claims to identifiers with no intermediary meddling in the process, and that is the perfect spot for blockchain to come in and save the day. Naturally, this system comes with a set of principles as devised by Christopher Allen.
Therefore, the solution to data privacy that also adheres to the GDPR lies at the level of technology once again. Companies can build their products on private blockchains that don’t actually own user data. Instead, the data would be part of an external self-sovereign identity that is essentially not part of the product.
At the moment, the team at uPort is exploring the possibility of creating a self-sovereign identity on a decentralized identity platform based on blockchain technology. With the “Privacy by Design” approach, uPort will enable users to control their identity and how it is “shaped, shared, and sustained” by using its identity manager.
For example, users will be able to create a sort of “proxy identity” for every app they use. The proxy identity is not part of their central identity, as it bears only the data requested by the app. This approach is both compliant with the GDPR and enables businesses to collect data they need.
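The per-app proxy idea can be sketched in a few lines. This is not uPort’s API; the class and claim names are invented for illustration. The point is simply that the app receives only the claims it asks for, never the full identity.

```python
# Hypothetical sketch of a per-app "proxy identity": the application
# receives only the claims it requests, never the full identity.
class SelfSovereignIdentity:
    def __init__(self, claims):
        self._claims = dict(claims)  # held by the user, not by any product

    def proxy_for(self, app_name, requested_claims):
        # Disclose only the requested claims that actually exist.
        disclosed = {c: self._claims[c] for c in requested_claims
                     if c in self._claims}
        return {"app": app_name, "claims": disclosed}

me = SelfSovereignIdentity({
    "name": "Alice Example",
    "date_of_birth": "1990-01-01",
    "over_18": True,
    "email": "alice@example.com",
})

# A streaming app needs only an age attestation, not a name or birthday.
proxy = me.proxy_for("acme-video", ["over_18"])
print(proxy)  # {'app': 'acme-video', 'claims': {'over_18': True}}
```

The business still collects the data it needs (the age attestation), while the name, birthday, and email never leave the user’s identity manager.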
Public and private keys are used extensively on blockchains, and by design there is no inherent linkage between a key pair and any personally identifying information. That works well with the GDPR, but it makes auditing difficult and enforcement almost impossible on public blockchains.
In order for blockchain to be more friendly to our current way of doing things, there needs to be a way to manage identities on the chain and link them to other pieces of identifying information for multiple reasons, including auditability.
In the EU, there is a law that covers this: not the GDPR, but eIDAS. The eIDAS regulation is legally binding and allows for the use of electronic signatures across borders in the internal EU market.
eIDAS states that everyone creating an account, posting content, writing a review, or buying an ad would be required to identify themselves electronically in order to demonstrate that they are indeed who they claim to be. This identification can be done by video or in another manner that can securely associate the signer with the online account or the information shared using a secure private key.
That would put an end to fake accounts and bots and create more accountability for the people who share information online.
As a law, eIDAS also leaves a lot of room for misuse, as it can conflict with one of the GDPR’s main principles: online anonymity.
Another piece of technology that is integral to blockchain and can help us comply with both the GDPR and eIDAS when dealing with online identity is the zero-knowledge proof, and it may play an important role in the GDPR’s future. According to Zooko Wilcox, the creator of the Zcash (ZEC) cryptocurrency, it may be something regulators will actually be after in the future, despite the fact that it is often frowned upon and feared.
Wilcox used SSL encryption as the main argument while talking about how governments will probably accept technologies such as zero-knowledge in the future and make them an industry standard. He claimed that both internet privacy advocates and regulators had the same goals, as both sides regard privacy as a basic human right.
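To give a feel for what a zero-knowledge proof is, here is one round of the classic Schnorr identification protocol: the prover convinces a verifier that they know a secret x with y = g^x mod p, without revealing x. The parameters below are deliberately tiny and illustrative; real deployments (including Zcash, which uses a different construction, zk-SNARKs) work in vastly larger groups.

```python
import secrets

# Toy Schnorr identification protocol (one round). Parameters are tiny
# and for illustration only; g has prime order q in the group mod p.
p, q, g = 23, 11, 2

x = 7             # prover's secret ("the knowledge")
y = pow(g, x, p)  # public key; safe to publish

r = secrets.randbelow(q)  # prover's fresh random nonce
t = pow(g, r, p)          # commitment sent to the verifier
c = secrets.randbelow(q)  # verifier's random challenge
s = (r + c * x) % q       # prover's response; x itself is never revealed

# Verifier's check: g^s == t * y^c (mod p), since g^(r+cx) = g^r * (g^x)^c.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```

The verifier learns nothing about x beyond the fact that the prover knows it, which is exactly the property that lets identity claims be checked without disclosing the underlying personal data.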
Another piece of technology that has become part of Zcash and improved with the introduction of Sapling is selective disclosure. It enables users to disclose information related to transactions for auditing purposes. Apart from that particular case, all data remains anonymous. That could prove beneficial, especially for regulators who fear that anonymity could lead to lawlessness.
Conclusion — It’s up to You!
Many people still don’t realize how the GDPR is going to affect data and privacy. The lawmakers had honest and noble intentions to protect people’s privacy when they created the GDPR, but their failure to consider possible technological solutions made the regulation an obstacle rather than an aid.
Those regulations will have an impact on individuals and businesses alike, and it will echo in the years to come. You can choose a short-term solution that is compliant with the GDPR and lose a lot in the process, or explore the blockchain-backed long-term solution. If you opt for the latter, you will not only avoid unpleasant clashes with the regulations but also align with them without slowing the whole process down.
To sum up, the GDPR is not one of the “bad guys.” The regulations and blockchain can coexist peacefully and even become best friends. The two are well off at the moment since blockchain is as useful as any other database, but there are opportunities for further enhancements that the GDPR needs to seize as soon as possible in order to make its friendship with blockchain even more fruitful and, above all, long-lasting.
In other words, the use of blockchain under the GDPR could be significantly improved if the regulation were updated and its focus shifted toward an approach that also takes technical solutions into account.
It is up to you to decide how you want to use blockchain with your product as long as you design its system in a way that is compliant with current regulations. On the other hand, the legislation is expected to be adapted to the latest technological advancements in order to make room for further progress.
This article is part of our blockchain awareness posts where we try to help newcomers and people interested in blockchain use cases enter the space more easily. Follow us and subscribe for more upcoming articles such as this one, and feel free to join the conversation on Twitter and LinkedIn.