“Digital freedom stops where that of users begins… Nowadays, digital evolution must no longer be offered to a customer in trade-off between privacy and security. Privacy is not for sale, it’s a valuable asset to protect.”
Stephane Nappo


“Ultimately, saying that you don’t care about privacy because you have nothing to hide is no different from saying you don’t care about freedom of speech because you have nothing to say.”

Edward Snowden


Data privacy is a hot issue for a couple of reasons, both based on the foundational thought that we, the people, keep getting fucked by our own data. Basically, other people make money off our data by using it to get us to spend money. How fucked up is that? Beyond that, there are some real pragmatic issues: with all of our data floating out there beyond our control, it can be used in some quite nefarious ways. All that said. Some people with integrity are attempting to change it all. The majority of this discussion is focused on regulation, i.e., regulating how companies gather and use our data.

Here is the problem. While most of us care, we don’t act (and, more importantly, a bunch of the big tech boys who gather and use the data don’t actually care). For a variety of reasons, any time people are offered opportunities to try and manage their data, they don’t (or certainly not to the extent that would enable some larger structural changes to how much data is accumulated and used).

I speculate this mostly happens because our personal data is one big black box of shit that most of us feel we have zero control over, we sense we may actually benefit from sharing it – on occasion gaining some convenience – and the penalties for sharing don’t seem to be so bad.

Basically, we’ve given up believing we can control our data, we aren’t clear on the benefits of limiting it, and we don’t really understand, pragmatically, what occurs when we tighten our data privacy. That trifecta encourages most of us to just keep doing what we are doing and sit around and bitch that evil technology companies don’t have any ethics.

To be honest. Even I, who know enough to be dangerous about privacy and data, very rarely reject the terms and conditions when I see them, because I don’t know what it will cost me. What I mean by that is that by limiting something, I assume it will limit something else. It’s kind of like limiting cookies or certain website features: you get fewer popups (sometimes), but ease of website use seems to get a little wonky. Just to be clear. It is quite possible I am either conflating causation and correlation, or even that because I actually took an action, my mind is creating an expected response (which isn’t really there). But. Maybe that makes my larger point. Because I don’t exactly know the consequences, my mind will create them, they will most likely be negative, and that doesn’t exactly encourage me to do it more often in the future.

So. How do we get people to care? How do we get people to believe that what they accept or reject matters, that it will deter the powerful unseen “tech” from doing whatever it wants in the black box?


“Here in your mind you have complete privacy. Here there’s no difference between what is and what could be.”
Chuck Palahniuk


Let’s think about an indirect approach.

People have to see the value before caring, before doing anything. So, I have some thoughts on how to nudge the mindset. I believe there are two indirect things that could affect people’s mindsets.

  1. Origin ID.

It’s not flashy, nor does it appear to be directly related to communicating the importance of data privacy, but in my mind, it begins to establish some validity for one’s own data within the black box. In other words, if all of a sudden I know for sure the data I am receiving (information is data) is valid and not from some bot, well, then all of a sudden I start thinking “hey, mine has been evaluated and is valid too.” It’s an indirect way of establishing some value proposition. I would be remiss if I didn’t point out that, collectively, it also begins establishing some shared sense of truth (or truthiness), and that is an excellent foundation for establishing my personal value within that shared sense of truth.

The internet world needs to clean up its origin-validation act, and this seems like a good place to start the data privacy ballgame.

To be clear. I am relatively sure that the moment we validate origin ID (proof everyone is human, with verified ID), someone with less than good intentions will subvert the system, but anything has to be better than the current anarchic system. In fact. Maybe that is my last point in this idea. Creating a system that doesn’t feel like anarchy, alone, furthers the concept of data privacy by suggesting it could possibly be controllable. I think this idea is attainable in a reasonable amount of time. The trick to this one is deciding who is trustworthy enough to not only validate but keep some record of everyone’s origin ID. My gut tells me blockchain technology offers a viable solution, but someone smarter than I would have to noodle that.
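For the technically curious, the core of an origin ID scheme can be sketched in a few lines. This is a minimal illustration, not anyone’s actual system: it assumes a hypothetical trusted registry that holds a secret key and issues signed tokens after verifying a person is real. (A real deployment would use public-key signatures so verifiers never hold the secret; HMAC is used here only to keep the sketch self-contained.)

```python
import hashlib
import hmac

# Assumption: a single trusted registry holds this secret and only signs
# IDs after verifying the person behind them is a real, verified human.
REGISTRY_SECRET = b"registry-secret-key"  # illustrative placeholder

def issue_origin_id(user_id: str) -> str:
    """Registry signs a verified user's ID, producing an origin-ID token."""
    sig = hmac.new(REGISTRY_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{sig}"

def verify_origin_id(token: str) -> bool:
    """Anyone trusting the registry can check a token wasn't forged by a bot."""
    user_id, _, sig = token.partition(":")
    expected = hmac.new(REGISTRY_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# A token from the registry verifies; a bot's made-up token does not.
genuine = issue_origin_id("bruce")
print(verify_origin_id(genuine))        # True
print(verify_origin_id("bot:fakesig"))  # False
```

The design choice that matters is exactly the one raised above: whoever holds the signing key is the trust anchor, which is why the “who keeps the records” question (and the blockchain speculation) is the hard part, not the cryptography.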

  2. Business use (with employees).

This one is trickier than Origin ID but, in my mind, if businesses can show an employee that their data has actual value in their own personal progress/growth within a business, it becomes a platform for understanding how their data has value in a grander narrative. Maybe think about this as ‘if I can show it in a smaller, more personal case study, then communicating a larger case study is the logical next step.’ Personally, I think it’s nuts that a business does nothing with its employees’ data to better understand, better develop and better assess each employee’s potential growth opportunities. I think it’s even nuttier that it doesn’t happen in organizations talking about “digital transformation.” I mean, if you are going to create a digital infrastructure, wouldn’t you want to connect it with, uhm, the digital imprints of the people/humans who will be optimizing that infrastructure? And, yet, while I think it’s nuts, I also think this particular idea is years away from being implemented.

** note: I have actually outlined a Knowledge Distribution business model where AI-driven information flow is customized and tied to the personal data of each employee, suggesting it is something like a Human Nanofactory, and offered something called ‘technology-distributed information.’

In the end.

While the world desperately needs a globally coherent plan of action for data privacy, it will not happen until the people of the world actually care, want it, and will actually do something when given an opportunity. On that point, I bet if you asked 100 random people they would (a) probably think data privacy is a good thing and, simultaneously, (b) have it at number 101 on their list of 100 things they need to be thinking about.

I say it that way to make the point that this is actually a good equation for the future of data privacy. The ability to make something more important to someone increases exponentially if their existing mindset is that the something is most likely a good idea. All we have to do is make them care, and, in my mind, that should begin by making their own data more tangible to them. Like origin ID, and the business they work at showing how their data can be used. Look. I am sure there are hundreds of better ideas than the ones I offered, but I feel relatively confident my ideas would go a long way toward getting people to care. So just think about it.

Written by Bruce