=====
“Digital freedom stops where that of users begins… Nowadays, digital evolution must no longer be offered to a customer in trade-off between privacy and security. Privacy is not for sale, it’s a valuable asset to protect.”
Stephane Nappo
=====
“Ultimately, saying that you don’t care about privacy because you have nothing to hide is no different from saying you don’t care about freedom of speech because you have nothing to say.”
Edward Snowden
=====
Data privacy is a hot issue for a couple of reasons, both based on the foundational thought that we, the people, keep getting fucked by our own data. Basically, other people make money off our data by using it to get us to spend money. How fucked up is that? Beyond that, there is a real pragmatic issue: with all of our data floating out there beyond our control, it can be used in some quite nefarious ways. All that said. Some people with integrity are attempting to change it all. The majority of this discussion is focused on regulation, i.e., regulating how companies gather and use our data.
Here is the problem. While most of us say we care, we don’t act <and, more importantly, a bunch of the big tech boys who gather & use the data don’t actually care>. For a variety of reasons, any time people are offered opportunities to try and manage their data, they don’t take them (or certainly not to the extent that would enable some larger structural changes to how much data is accumulated and used).
I speculate this mostly happens because our personal data is one big black box of shit that most of us feel like we have zero control over, we sense we may actually benefit from sharing it – on occasion gaining some convenience – and the penalties for sharing don’t seem to be so bad.
Basically, we’ve given up believing we can control our data, we aren’t clear on the benefits of limiting it, and we don’t even understand, pragmatically, what occurs when we tighten our data privacy. That trifecta encourages most of us to just keep doing what we are doing and sit around and bitch that evil technology companies don’t have any ethics.
To be honest. Even I, who know enough to be dangerous about privacy and data, very rarely reject the terms and conditions when I see them, because I don’t know what it will cost me. What I mean by that is that by limiting something, I assume it will limit something else. It’s kind of like limiting cookies or certain website features: you get fewer popups (sometimes), but ease of website use seems to get a little wonky. Just to be clear. It is quite possible I am either conflating causation and correlation, or that because I actually took an action, my mind is creating an expected response (which isn’t really there). But. Maybe that makes my larger point. Because I don’t exactly know the consequences, my mind will create them, they will most likely be negative, and that doesn’t exactly encourage me to do it more often in the future.
So. How do we get people to care? How do we get people to believe that what they accept or reject will matter – that it will deter the powerful unseen “tech” from doing whatever it wants in the black box?
===
“Here in your mind you have complete privacy. Here there’s no difference between what is and what could be.”
Chuck Palahniuk
===
Let’s think about an indirect approach.
People have to see the value, before caring, before doing anything. So, I have some thoughts on how to nudge the mindset. I believe there are two indirect things that could affect people’s mindsets.
- Origin ID.
It’s not flashy, nor does it appear to be directly related to communicating the importance of data privacy, but in my mind it begins to establish some validity to one’s own data within the black box. In other words, if all of a sudden I know for sure the data I am receiving (information is data) is valid and not from some bot, well, then all of a sudden I start thinking “hey, mine has been evaluated and is valid too.” It’s an indirect way of establishing some value proposition. I would be remiss if I didn’t point out that, collectively, it also begins establishing some shared sense of truth (or truthiness), and that is an excellent foundation for establishing my personal value within that shared sense of truth.
The internet world needs to clean up its origin-validation act, and this seems like a good place to start the data privacy ballgame.
To be clear. I am relatively sure that the moment we validate origin ID (proof everyone is human, with a verified ID), someone with less than good intentions will subvert the system, but anything has to be better than the current anarchic system. In fact. Maybe that is my last point on this idea. Creating a system that doesn’t feel like anarchy, alone, furthers the concept of data privacy by suggesting it could possibly be controllable. I think this idea is attainable in a reasonable amount of time. The trick to this one is who is trustworthy enough to not only validate but keep records of everyone’s origin ID. My gut tells me blockchain technology offers a viable solution, but someone smarter than I would have to noodle that.
- Business use (with employees).
This one is trickier than Origin ID but, in my mind, if businesses can show an employee that their data has actual value in their own personal progress/growth within a business, it becomes a platform for understanding how their data has value in a grander narrative. Maybe think about this as “if I can show it in a smaller, more personal case study, then communicating a larger case study is the logical next step.” Personally, I think it’s nuts that a business does nothing with its employees’ data to better understand, better develop and better assess each employee’s potential growth opportunities. I think it’s even nuttier that it doesn’t happen in organizations talking about “digital transformation.” I mean, if you are going to create a digital infrastructure, wouldn’t you want to connect it with, uhm, the digital imprints of the people/humans who will be optimizing that infrastructure? And yet, while I think it’s nuts, I also think this particular idea is years away from being implemented.
** note: I actually have outlined a Knowledge Distribution business model where AI-driven information flow is customized and tied to the personal data of each employee, suggesting it is something like a Human Nanofactory, and offered something called ‘technology-distributed information’.
In the end.
While the world desperately needs a globally coherent plan of action for data privacy, it will not happen until the people of the world actually care, want it, and will actually do something when given an opportunity. On that point, I bet if you asked 100 random people they would (a) probably think data privacy is a good thing and, simultaneously, (b) have it number 101 on their list of 100 things they need to be thinking about.
I say it that way to make the point that this is actually a good equation for the future of data privacy. The ability to make something more important to someone increases exponentially if their existing mindset is that the something is most likely a good idea. All we have to do is make them care and, in my mind, that should begin by making their own data more tangible to them – like origin ID, and the business they work at showing how their data can be used. Look. I am sure there are hundreds of better ideas than the ones I offered, but I feel relatively confident my ideas would go a long way toward getting people to care. So just think about it.




I am not a past guy, and I believe “authentic” is one of those words that is currently being abused in a variety of definition-type ways, but I would offer a reminder to everyone that if you want something authentic, you actually find it in the past <I will expound on that in a minute>.



bring it to life? I would suggest more often than not this is exactly what we do. So, then we go about fixing the system, or fine tuning it to match the strategy, only to find the obstacles we foresaw were not really the inhibitors we thought (or by fixing them we created some unintended consequence instead).
I just said that.
related to business value provided and in this case that translates into “we are paying him because he contributes to the likeability in our culture” (maybe suggesting he contributes in some way to social cohesion). Which leads me to bad. Bad in that everyone else in the company senses that if you don’t really have anything to contribute, but figure out how to be likeable you can pull down a sweet salary and get healthcare.
===



This is about Geronimo and it’s not. Geronimo was a Chiricahua Apache who, after his family was murdered by Mexican troops, pretty much dedicated himself to revenge as a warrior. Ok. At the same time, he dedicated himself to what we would call “anti-establishment” in today’s world. He just wanted to be left alone on lands he believed were his tribe’s, to live with people he loved, and to live a life he loved. My point is that it is difficult to talk about Geronimo and some fairly heinous actions without at the same time acknowledging the context, the environment, within which he did those things.

In ‘the experience economy’ or ‘experience as value’ world, far too many people are simply laying out ‘experience’ as some amorphous wonderful blob of ‘do it well’. Sure. Sometimes it is “customer experience”, sometimes user experience, but more often than not someone stands up in front of a big screen and suggests “experience is the new value.”
good way. Conceptually this is adding dimension to a linear, or horizontal, time continuum. I bring that up because many businesses map out ‘customer journeys’ <which can be a helpful tool> and, yet, that linearity can make you miss the experience within, which is expandable and reflects essential parts of value. The best example I have of this is when I speak with UI/UX people and suggest that ‘frictionless’ can actually diminish value and that purposeful friction moments can actually expand it.
But in order to continuously improve or, even more importantly, exploit opportunities, those people who have been optimized as a “part” need to have a free exchange of ideas with the “whole” if you desire to optimize the system itself. And should a business desire to attain the next level of its potential simply using the employees it has, this free exchange includes a free exchange of mistakes and unrealistic imagination. The latter is important because what may appear to be unrealistic in one individual’s imagination may be attainable and realistic when the ‘inspired idea’ is confronted by the whole. This means even the most ‘doer’ organization, one focused on execution, can become a collection of ideas that not only incorporates the innovation necessary for continuous improvement but also has the ability to incorporate non-innovation ideas – a different configuration of existing resources and abilities – which is equally effective in terms of profitability and usefulness (using what exists is always more applicable than something new because no one has to learn something new).
Evolution is always in search of a weakness, and systems are always evolving. This means they are dynamic in and of themselves, with components working, and failing, and being replaced, and improved, continuously. The constraints are typically the infrastructure (the capital expenditures the institution seeks to optimize) and leadership mindset. So, while people, humans, may manage to probabilities, the reality is constricted, or constrained, by the institution itself (which actually increases the likelihood of missed opportunity and/or catastrophes). Evolution, left to its own devices, tends to enhance an organization – efficiently and effectively. Should a business solely focus on execution, evolution is stifled and growth and progress have a ‘cap’.




unite the complexity into one seemingly unsolvable issue. Counterintuitively, the latter is actually the path to meaningful progress even though I suggest it is ‘unsolvable’. You do not ‘solve’ complexity; you use ‘ands’ to navigate and unlock complexity’s potential.
Systems are persistent buggers. In fact, it is not unusual that the persistence of a system is due solely to the existing mindsets, the language, the accepted ‘terms of agreement’ of how it works and should be worked – basically, what people consistently (almost as a default) think about it. These power/construct dynamics persist as long as the terms of that agreement appear and feel favorable and the system thrives <or ‘works’>. As soon as the terms falter, it begins to affect how people think about it, and the system can become dysfunctional <or less functional than it was>. This persistency is also self-induced by the relationship of the system, people and productivity. Systems naturally revert to the mean, constantly dampening any deviations. In basic terms, what this means is that systems naturally arc to existing productivity and discourage changes people may make to the system. Yes. Once a system is in place, and works, it is 
While principles provide some boundaries, the natural temptation within any system (as noted in my first points) is to maintain the system if ‘it works’ <even if ‘works’ is suboptimal>. So, part of the criteria people need to assume is the ability to identify the parameters that matter (every business has things that make them successful) and blow the rest of the shit up. It’s an ongoing version of creative destruction in which you destruct something to create, and create to grow in terms of impact. To be clear. Anyone can blow shit up; the true test of blowing shit up is destroying, or destruction, TO create. In other words. Destruct paradigms, attitudes, beliefs, behaviors, mindsets, even the way things have always been done, in order to effectively set yourself apart from where you were before.
to claim people are lazy or complacent or ‘sheep’ <following the crowd>, but more often than not people are constantly sifting through everything they are seeing and hearing and encountering — slowly but surely building up their own self <or, in a negative sense, tearing their own self down>. From a business perspective, it is a sense of productivity, i.e., attitudes and behaviors that create the productivity that contributes to the system and is of the system.