
==
“Hatred is not the norm. Prejudice is not the norm. Suspicion, dislike, jealousy, scapegoating ― none of those are the transcendent facets of the human personality. They’re diseases. They are the cancers of the soul. They are the infectious and contagious viruses that have been bleeding humanity for years. And because they have been and because they are, is it necessary that they shall be? I think not.”
Rod Serling
==
“If you wanted to dismantle an entire generation from the inside out, it wouldn’t take much. Forget bombs or economic sabotage—too messy, too obvious. No, if you really wanted to destabilize a generation, you’d start with something far more subtle: you’d convince them that the world revolves around their feelings. You’d make them believe that every thought, every impulse, every fleeting emotion is sacred, and that the universe is obligated to rearrange itself to accommodate whatever’s going on in their heads.”
==
Hatred. Prejudice. Suspicion. Dislike. Jealousy. Scapegoating. All are feelings. Just as one tautophrase suggests “the heart wants what it wants,” another exists: “the mind wants what it wants.” This becomes important when you are faced with a world run on feelings. What I mean by that is that ‘feelings’ are a fragile construct, so if you live in a 24/7 world (cable news/talk radio/social media/search engines/algorithms) constructed to game and use your feelings, well, you/we are fucked. In fact, in a ‘feelings world’ individual purpose is replaced by individual validation. Or let’s just say even the most well-intended Purpose construct gets deconstructed in a feelings-validation world. And when feelings are the compass, we become emotionally fragile and hyper-focused on self, and ‘making it’ becomes less an objective than a construction for which you have no hammers, nails or architectural plan. This gets compounded because the worldwide web is a 24/7 sharing machine. We barely notice this constant input and output of information, this ceaseless sharing, yet it has psychological, social, and emotional consequences. Our feelings become our stories, and we instantly share our thoughts, feelings, experiences and stories with hundreds or even thousands of people. This ceaseless sharing subtly shapeshifts your behavior (even if you think it doesn’t). Multiply that by the millions of strangers who, day by day, offer you unsolicited input and you have entered the ‘feelings overload’ zone. Simplistically, you have dematerialized.
Which leads me to concrete dematerialization.
It was Annie le Brun who coined this term. It constitutes the progressive loss of all tangible relationship with the world. Basically, it encourages us to accept feelings as THE impression of living. It’s kind of a subversion of a world of experiences where everything of value is in the experience, not in any fundamental, pragmatic, tangible value received. It becomes a substitute for reality (and makes it easier to substitute value in what we may see as a failing world, i.e., a failing world of feelings). A ‘world’ can become a failing world if it ‘feels’ like it is failing (even if progress is occurring, prosperity is increasing and by any tangible measure living and life is improving). It just ‘feels’ wrong. Oddly, ironically, this feelings world deprives us of real emotion inspired by originality and reality. It is a more banal view of reality and the world because there is no personal relationship to time and space – other than feelings. And because feelings are so nebulous and intangible, we begin shaping ourselves based on fragmented, crowd-sourced feedback, adjusting in real time like a social marionette. Ultimately our feelings aren’t really ours, but rather a conglomeration of strangers’ expectations. This has always been true, but as Toffler pointed out in Future Shock, in the past the audience feeding our feelings input system was finite. In today’s social media world that input machine is infinitely larger in scope. The entire world offers every impulsive input into your world. Visibility-on-steroids leads to vulnerability-on-steroids, which leads to defensiveness-on-steroids, which leads to feelings overload (feelings-on-steroids). We are not completely unaware all this is happening and, yet, we end up living semi-aware that all our ‘feelings’ are a little skewed, a little infected, a little warped by strangers, and, generally speaking, fucked up. We grow into gnarled versions of ourselves in this fucked-up algorithmic soil.

Which leads me to powerlessness.
Just like we are powerless against cancer, we are powerless against the manipulations of algorithms. It takes great power, and objective responsibility, to fight back against algorithms. Unfortunately, it’s a responsibility that most of us are poorly equipped to handle. We tear ourselves apart over and over again, not exactly knowing how or when to stop engaging with algorithms. Our privacy, and private thoughts, are now public. Yeah. In today’s world it takes an exorbitant effort to maintain privacy. We have to actively choose not to share, to resist the temptation to engage online, to keep our thoughts and experiences to ourselves. The truth is most of us are not willing to put in that effort and, in fact, almost believe sharing is the status quo. And here is where it becomes a danger in a ‘feelings world.’ When sharing is the default, we share without thinking, providing a constant stream of outputs to a world waiting to offer unsolicited input. From there it begins feeling like, well, every feeling, in every moment, is a guiding star. Minor feelings feel like existential crises. This is where true powerlessness sneaks in. When people believe that their feelings define reality, they’re incredibly easy to manipulate. Outrage and fear are powerful tools, and when you feel them deeply, you will ‘feel’ like you deserve to take some extreme measures to alleviate those feelings. This obsession with feelings is psychological quicksand. The more you try to control the environment to protect your feelings, the more vulnerable you become to any shift in the public context. In fact, you get pulled further and further away from reality. You stop looking at hard facts, inconvenient truths, or anything that might challenge the feelings bubble you exist in. In addition, you become an easier target for the algorithms and the dubious actors who reinforce your feelings, no matter how disconnected from reality they might be. Uhm.
But it ‘feels’ right and, well, making it gets tied to those feelings. Making it, in this world, elevates feelings above logic, where validation is found through input, not tangible output. In fact, in this feelings world if your output doesn’t match up with what you believe you deserve, or ‘feel’ you deserve, it is no longer your problem – it’s because of someone or something else impeding what you feel is yours. You just become a pawn in the feelings game and ‘making it’ is nothing real; just a feeling. While ‘making it’ is a different post for a different day, suffice it to say if your idea of ‘making it’ is based on feelings and external definition, you are fucked.
“Human attempts to construct moral order are always precarious: If righteousness too often leads to self-righteousness, the demand for justice can lead to one guillotine or another.”
Susan Neiman, Moral Clarity: A Guide for Grown-up Idealists
Which leads me to civic responsibility.
Civic responsibility is a way of ensuring that even if you are self-focused, you don’t fuck over the collective ‘we’ (the community or society writ large). Civic responsibility is about caring for others, for everyone, not just oneself. I would be remiss if I didn’t point out the easiest way to defeat societal norms is to limit access to civic education. This doesn’t mean someone is less smart, but they are less well educated in basic civic principles, which, inevitably, makes it possible to abandon the morality of a civil society. Without that basic education, civic responsibility is often replaced by nostalgia or its partner – history. Ah. History is a tricky thing. History provides important context, but history also exerts a dangerous narrative gravity. If you expect the present to be a continuation of the past, you aren’t actually looking at the present through clear eyes. The past suggests you can reasonably expect certain things, that certain things will obviously occur. The reality is the past only offers some patterns, and patterns are only good until they’re broken. The past never predicts the future; it simply offers possibilities and probabilities. What I would say is that civic responsibility is a drug to combat the cancer of the soul of society. Civic education diminishes hatred, prejudice, suspicion, dislike, jealousy, scapegoating. Civic education tamps down, well, personal feelings and enhances feelings of community. And it’s quite possible that a combination of civic education and civic responsibility increases one’s ability to fight back against the “feelings algorithms.” Ponder.


Which leads me to the superficial surface of human instincts.
Ah. Technocrats. This is a companion piece to “circulation of elites,” except this time I am focusing on technology and monopolistic technology autocrats who think of themselves as the elites to guide us all to a better future. Now. The bible of how technocrats have hijacked capitalism is Technopoly by Neil Postman, but the concept of technocracy can be traced to William Henry Smyth, a California engineer, who introduced the term “technocracy” in his 1919 article titled “Technocracy – Ways and Means to Gain Industrial Democracy,” published in the Journal of Industrial Management. Anyway. Technocrats not only have an odd view of capitalism, but of society in general. Generally speaking, they have an inordinate dislike of constraints, which encourages technological advancement without any limits, or any real moral boundaries, as they believe the market will naturally sort itself out. That is a dangerously naive thought. As Leo Marx said:
They are reengineering society, but just to be clear, they do so in pursuit of scale and profit over safety concerns and public accountability. But maybe what should concern us most is HOW they seek to reengineer. It’s not just that their objectives are a bit naïvely dangerous, but they also are crafting a structure of numbers and measurements (it’s kind of an extension of a belief that machines are the future, so the way the future should be shaped is in machine terms). They are shaking the entire etch-a-sketch of society, and they don’t care if any of the sand is lost while shaking. They have a structured ideology of mathematical measurement to justify anything and everything. It’s a warped vision of even the warped Taylorism that got us to where we are today. It is almost like we are being led by an illiterate group of people, in that their only literacy is in numbers, scale and machine technology wrapped in a loose, but extreme, libertarian etch-a-sketch. They weave webs of ‘progress’ under the guise of numbers – usage, engagement, and views – while ignoring whether those individual usages, engagements and views are actually good for human beings. “It will all sort out” and
“scale flattens negative effects” represent the technocrat mantra. They ignore the relationship between morality (what is good) and the power of the tools they are crafting. Once again, if you squint hard enough what they say seems reasonable, but stop squinting and it all becomes a blur of dangerously unconstrained technological innovations and systems. They will claim they are not doing this, but they are sacrificing progress for humanity and society in exchange for a hollowed-out, but profitable, progress in business (because they see their ‘business’ as not really a business but a societal infrastructure development – for which they get paid). I say all that so we can stop being surprised by the horrible existing leadership. They have no vision for what they are creating. They are just creating things, assuming a vision will emerge from their technological wizardry. To them the past is unworthy of consideration (even the important things of the past); they believe the future will be shaped not through some grand strategy but rather by chasing things – at exponential scale/pace – that seem beneficial, and numbers become proof of performance (note: numbers are not people). All of this is done rather than creating and sustaining a substantive, healthy society.
optimizing the body’s potential and any number of human-enhancing features, but they do so with the objective to increase productivity and wealth. Technology is certainly only increasing its role in the weave of lives, personal and professional, and the general infrastructure of how life is lived. There is no lack of technology ‘plenty.’ But one should reflect upon the harsh truth that nothing cuts deeper into a society, or any business in fact, than the unhealthy relationship between ‘want’ amid ‘plenty.’ Technocrats cannot envision anyone ‘wanting’ in the future when the reality is, unless constrained in some key ways, while technology will be plenty, plenty of people will end up ‘wanting’ for something other than technology. I will say that the one thing the technocrats are clearly right on is that a ‘technology-driven future is inevitable.’ Technology will only become more and more ubiquitous in our lives; often in some fairly invasive ways. Some of the issues this creates cannot be ignored and, I would argue, should be resolved before we go off to the technological races (this is an 
blockchain and more. The trouble is that while we get dazzled by some imaginative, fantastical, futuristic headlines, the reality is most technology business models are driven by lack of imagination and building toward immediate returns and feeding immediate consequences. What I mean by that is every innovative widget they develop is paid for with algorithms that drive immediate returns and immediate consequences. This is the underbelly of an instant-gratification economy in that it is fed by (a) driving immediate engagement and (b) exploiting human tendencies to engage. In other words,
detracting from completeness <or the best>, fault points to things that actively impair progress toward completeness or the best it can be. It is something that exists and is inherent to the nature of who they are, what they do and how they think. This fault means technology, and technology businesses, will continuously only offer us empty innovations – empty of real meaningful long-term returns – which will continue to empty, and mar, society as a whole.
In the end, I would suggest the technology elite’s shortsighted view of immediate returns/immediate consequences is a constraint, not just a fault.
When technology first arrived on the scene, particularly in terms of ubiquitous networks, social media, email, anything internet-based, I felt like many problems would be solved, civilization would just get smarter, we would make better societal decisions, and the world would just become a better place. I never believed that everything would be solved and we would attain some utopia, but like many of us, I was envisioning a better world because of this ubiquitous technology. And while many things have improved, and foundationally we certainly still have the opportunity to improve significantly, globally and societally, some things have certainly gone wrong. In many cases very wrong. I’m not sure I got the following things wrong, but I certainly overlooked what could affect the arc of the goodness. So, to paraphrase Marshall McLuhan, let’s now take a quick tour of the walls knocked over by technology.
And while I’m a student of Alvin Toffler, and I clearly understood his point of view with regard to cognitive overstimulation, I imagine I did not see his point with true clarity until reality struck. The reality that the ubiquitous information machine was simply too overwhelming for almost everyone’s brain to cognitively assimilate in any useful way, in addition to the fact technology wasn’t going to help us along the way. I never envisioned technology would step in and amplify an incredible amount of crazy stuff, which clouded over the non-crazy, smarter stuff that would have made a better society.


I imagine this metaphor summarizes much of our current public narrative. As we all argue over what we deem the practical questions of the day, it is like we are the house dog munching peacefully on the meat while the entire house is looted. Much of this discussion centers around the role that the computer and technology play in our lives. That is the wrong discussion. We need to know in what ways technology is altering our conception of learning, the reality of how we think, our values and, ultimately, how we shape our views of reality. The truth is technology alters the structure of all of it – our interests as well as the things that we think about. But maybe most important is that it alters the structure of how we actually think and what we value.

Our neural investment is grounded in our formed images of reality, and our formed images of reality are typically grounded in experiences. This does not necessarily mean things that we have actually encountered and done, although those things do create deeper memories, but it could be words we’ve encountered, pictures we’ve encountered and opinions we’ve encountered. All of those things subconsciously gather together in certain parts of our brains and create some memories. Many of those memories are not causal, but actually a concoction of disparate experiences which coalesce into some memory. I worded it that way to suggest that sometimes memories are not exactly the most precise things. And they absolutely are not true reflections of reality, but they are the best that we have. And from those memories we find value. What I mean by that is those memories are valued by our brain; therefore, when we bring them to the forefront, either when we’re ready to make a decision or to reflect upon the present situation, they represent value. That value drives our attention. The sobering thought to end that discussion is, and this may sound odd when discussing very personal experiences and memories, garbage in & garbage out. Effective neural investment is an attempt to manage the garbage in so that what comes out is just a bit less garbagey.
All durable dynamic systems, including our brains, have this sort of structure; it is what makes them adaptable and robust. Navigating the long now leverages longevity to optimize value in the Now through what we pay attention to and, as a result, what experiences we have. I suggest all of that because if your ‘experiences’ are shaped by the short term, your memories are a bit more fragile and brittle (and certainly of less deep value), and therefore the memories you lean on when making choices are, well, more brittle and fragile. If we become a bit more thoughtful with regard to what we pay attention to, and to how we engage with our experiences, the memories created will exhibit more robust value which, when tapped, will make future decisions more robust and resilient.
collective intellectual there’s a solution. We may not have it today but it exists. But, once again, people need to be purposeful because, stated or not, technology is pretty purposeful. It purposefully exploits, often degrades, or even destroys, how someone builds their reality, their mental self-confidence, their trust in processes and systems, and the approaches required for the efficient and effective functioning of communities and societies.
The reality is that the rate of technology change is no longer speeding up; what is speeding up is the social impact of all the technologies as they not only connect with each other, but become increasingly embedded in everyone’s social connectivity AND in how things get done. The embedded aspect is a bit important because technology has become a tool we use to shape our lives and livelihoods.

I almost called this ‘refinding technological optimism.’ Okay. Maybe it is more about anti-technological dystopia. I am not suggesting we be utopian, just that I question why we should ditch optimism. I thought about this in a conversation with Faris Yakob as we pondered our possibly naïve optimism about technology in the early 2000s. Anyway. Two of the books I consistently pluck off my shelf to remind myself that technology dystopia has not always been the norm: 
To be optimistic is to believe in human ingenuity with no foreseeable limit. Technology can be crafted as an unending cascade of advancement. What this means is a belief that each advancement can not only eliminate present technological issues, but also stretch the limits of what is currently possible. In its constant stretching, both good and bad can occur, but the good is constantly erasing the bad. Yeah. It’s an understanding that technology is both empowering/enabling and oppressive/constraining; often simultaneously. But within optimism is a rejection of the conclusion that the world is ugly and the people are bad. Optimism rejects dystopia as well as the status quo. I believe the status quo never invents the future; vision, creativity and innovation craft the future. Yeats is correct, 
I believe there are about the same number of neurons in the brain as there are stars in the universe. We have used technology to explore space, to explore the brain, to explore the body, to explore the capacity of humans. Technology is the ship which can carry us to the farthest parts of the universe. My real fear and pessimism reside not with technology, but with ourselves – human beings. We have met the enemy and it is us. The truth is technology simply amplifies all the worst things of human beings. I’m not speaking of evil, although it lurks in the depths of the Internet; I’m speaking more about conformity. The internet defines what things should look like, and scores of people line up to conform to that likeness. The same thing occurs with ideas and, well, everything. We are imitation machines. More access to all this information and imagery and words simply encourages us all to become average, i.e., to all become the same. At some point we will all look like each other, speak like each other, and even use all the same words. That is my fear. My optimism resides in the belief that people are not average, they do like to be distinct, and they like to progress toward something new and better. Yeah. All progress is grounded in some spectacular risk, some spectacular mistake, or some spectacular idea that encourages everybody to zig while everyone else is zagging. And my optimism also encompasses technology because technology mirrors humanness.

A major difference between humans and computers is that at any given moment people are not choosing among all possible steps. What this means is when humans think of possibilities or ‘desired futures’ they are not even close to making the optimal choices. Instead, we, people, typically lean in on a deeply nested hierarchy of ‘knowns’ in our minds which we recognize as ‘better’ paths, or stepping stones, toward the future we desire. These nested ‘knowns’ are typically bounded biases (self-sealing beliefs). The 
Which leads me to the fact that good decision-making is not always the optimal decision.