
===
“If the world isn’t sane, technology will not solve it.”
Jaron Lanier
===
“The new technology mimes the prime procedure of human learning and knowing.”
Marshall McLuhan 1968
===
The title of this piece is a play on Kevin Kelly’s book “What Technology Wants.” The alternative title I considered was “I thought a lot of wrong things about good things.”
When technology first arrived on the scene, particularly in terms of ubiquitous networks, social media, email, anything internet-based, I felt like many problems would be solved, civilization would just get smarter, we would make better societal decisions, and the world would just become a better place. I never believed that everything would be solved and we would attain some utopia, but, like many of us, I was envisioning a better world because of this ubiquitous technology. And while many things have improved, and foundationally we certainly still have the opportunity to improve significantly, globally and societally, some things have certainly gone wrong. In many cases very wrong. I’m not sure I got the following things wrong, but I certainly overlooked what could affect the arc of the goodness. So, to paraphrase Marshall McLuhan, let’s now take a quick tour of the walls knocked over by technology.
The first thing I can point to is money. Somehow I seemed to forget that, well, while ideas are usually beneficiary-agnostic, the people who use them typically do so with a specific benefit in mind and will seek to make money from those ideas. So, inevitably, what happens is that the sincerely good ideas for society, the ones that cannot make somebody significant wealth, may not completely go away, but they certainly do not end up at the forefront. Therefore, the other ideas, regardless of whether they’re good for society or not, are wielded as tools for gaining wealth. And as we all know, wealth is amoral. It does not care how you attain it. I’m not sure whether I am, or was, naïve; I think I felt like technology itself, and maybe even ideas, had more power than humans. I think this point alone should remind all of us that the future really is not about technology; the future is in the heads and minds of humans.
The second thing I got wrong was most likely driven by the fact that I am not a technology person, despite the fact I talk about how technology affects society, humans, progress, and economics. I did know technology would be subverted into economic endeavors. It made sense to me mostly because I felt like many of the tools being developed could navigate the efficacy path and actually improve communication, work and information flow. But, as I pointed out in the first thing, as soon as technology enters the economic sphere, money subverts its use toward, well, money. But that’s not the point of the second thing I was wrong about. I thought that knowledge, evenly distributed, and information, evenly distributed, would make the intellectual tide rise higher. And as the intellectual tide rose we would, generally speaking, have better sensemaking skills, we would make better choices, and some of the better things for society would just become more obvious to more people. I’m not suggesting that I felt or believed technology would make democracy ubiquitous across the globe, but I did feel that, regardless of an individual country’s ideology, within those constructs better decisions and better choices would be made. The world would not be perfect, but it would certainly be better. My lack of understanding of technology itself, and of algorithms and what we now call artificial intelligence, bit me in the ass on this one. Information was never evenly distributed and, in fact, incorrect information tended to be more amplified and more widely distributed. And, worse, I didn’t recognize that incorrect information, once absorbed, would become an attractor for technology to feed even more incorrect information to anyone who had absorbed it in the first place.
And while I’m a student of Alvin Toffler, and I clearly understood his point of view with regard to cognitive overstimulation, I imagine I did not see his point with true clarity until reality struck. The reality that the ubiquitous information machine was simply too overwhelming for almost everyone’s brain to cognitively assimilate in any useful way, in addition to the fact that technology wasn’t going to help us along the way. I never envisioned technology would step in and amplify a significant amount of incredibly crazy stuff, which created a cloud over the incredibly non-crazy, smarter stuff that would have made a better society.
The third thing I missed was how innovation relentlessly pursued incremental consumer quasi-improvements and wants instead of actual needs. Yeah. At the core of this is, of course, making money. That said. I imagine I focused too much on the potential good things rather than the sheer quantity of the less-than-useful versus the not-the-best-but-good. Technology innovations tended to be incremental, asymmetrical, and relentless. Their work was not done in giant leaps, but in relentless, small, almost unnoticeable increments. Platforms were developed to absorb the onslaught of new technological widgets, but even the platforms became overwhelmed. Maybe that is one of my points. If the technologists were incapable of figuring out how to manage technological innovations, what chance did the everyday human have? A subset of this was how I didn’t envision the speed of obsolescence. What I mean by that is, in the race to get things to market, even the really good things technology did and offered became obsolete in the blink of an eye. Because most of the technology was actually incremental, it became incredibly easy to make it obsolete with other incremental improvements. This had cascading consequences for both business and people. For businesses, once they had made a financial commitment, that commitment was for the most part unmoving. Their commitments, for the most part, couldn’t easily absorb the new innovations, so they simply, pragmatically, accepted where they were as an improvement from where they had been. They had to accept better, but not the best, all the while wallowing in the anxiety that someone, somewhere, was better. As for people, once they learned something, there was something else they had to learn – at speed. There was no time to develop an integrated routine. I would be remiss if I didn’t point out that all of this is good things happening, i.e., truly better technology – with some bad consequences. Asymmetrical adoption, inconsistent application, shifting foundational knowledge: it was almost like a quicksand of general cognitive stress and the anxiety that we were always behind. My point is that while technology has wrought a lot of wrong, the underpinnings are still good, i.e., there is a lot of wheat amongst the chaff.
Still. But. I thought a lot wrong about good things happening.
Still. But. I am actually suggesting my thinking wasn’t wrong about technology; it was more about size and speed. What I mean by that is small things are different than when they are large. A tribe is, well, a tribe, but a mega-city is a mega-city – made up of multiple tribes interacting, cultures within cultures and communities within communities. I forgot social dynamics. This isn’t about complexity, or complicated, or even chaos; this is simple social dynamics. Platforms started for small, like-minded groups. They had some set of shared expectations, but when the ‘small community’ grew past its original audience into wider audiences, often at an exponential pace, money took over some of the internal ‘scale’ aspects, expectations were subverted, and no one had any time to say “hold on a second.” Regardless. Any smaller community will be affected by interaction with the larger community – especially if that larger community spans the globe. Any smaller community will be affected even if it rages against the larger machine because, well, the machine has learned to feed off that rage (Liv Boeree).
===
“The universe has no single secret. It does not even hold a single nest of secrets to which some one study holds the key.”
Mary Midgley
===
Which leads me to what I want from technology (and, hopefully, what we all want).
The only way for “us” to bring about a better future is through some sort of technological humanism.
To make that case – ultimately a case for restraint and more considered progress – we should build an optimistic story of how humanity would use technology to solve problems and thrive, and of how it can be integral in bringing about visions of a fairer, more equitable future. I don’t believe technology is “the savior”, but I still believe technology can be integral to a better future.
“Survival now would seem to depend upon the extension of consciousness itself as environment.” Marshall McLuhan 1972
Yeah. I’m writing this and I’m not a technologist, but I am certainly of a generation that was technologically optimistic, curious and ready to implement, with best intentions and some pretty good tactical objectives, some amazing things.
I tend to believe that, at that time, most people thought technology was created to solve problems and that in solving those problems society and the world would just become better. That didn’t exactly happen, and I would argue that somewhere along the way society’s natural inclination for dystopian thinking overcame any quasi-utopian futuristic objectives, i.e., we focused on the bad consequences rather than the good consequences, so that all technology was seen as having a negative effect. As usual, a healthy future is found somewhere in between.
To be clear (part 1). I always, unequivocally, thought ‘move fast and break things’ was only the harbinger of some unforeseen, but inevitable, doom. To be clear (part 2). I don’t think we are residing in a world of doom, but we certainly haven’t attained the drastically improved society of people living in harmony that many of us envisioned would occur. Part of the issue, as I have noted a number of times in the past, is that many of us were confident a better society would simply emerge. We were wrong. Not that the good parts of society do not exist, but maybe because technology had a nasty habit of over-empowering the nastier parts of society. Or maybe we never really defined what we thought a better society was, nor even offered up a vision to everyone, so that (a) technology could overtly create things that would nudge us toward it and (b) society at large could see a scenario that they would incrementally nudge themselves toward somewhere within the 30,000 decisions they make daily. The mistake was that we treated everything as positively emergent. And in that space, I was wrong. Horribly wrong (a pragmatic business lesson lurks within there).
All that said. I imagine one of my larger points would be that rather than dwelling in despair and the dystopian view of technology, maybe more of us should take a step back and remember that technology can offer all the positive benefits that we dreamed of at the very, very beginning. Technology has never lost its ability to create a better society. Technology has never lost its ability to ensure a smarter, more intelligent, more critically thinking public. Technology remains an incredibly useful and powerful tool to enable a better society. I say all that despite all the negative things that technology has wrought.
Let’s be clear though. The future does not really reside in technology. The future resides in humans. Deciding what we want society to be, what we want civilization to be, and how to make people better thinkers, better sensemakers, better choicemakers, is a human choice. I am not suggesting one worldview. I am not suggesting some global government. All I’m suggesting is that humans can decide what a good human should be. And then we design technology to enable that.
Even with all I have said today, while some technology has created some problems, we clearly are very dependent on the advantages and benefits it has provided. I imagine tucked within all the issues is a truth that the solutions technology provides outweigh the problems it has created. That doesn’t mean we should focus only on the benefits, nor does it mean we should focus only on the problems, just that we should maintain some perspective.
Personally, I believe we – society, people and technology – are in a liminal space, possibly the space defined by Alvin Toffler in Future Shock. What I mean by that is all the shock aspects that he outlined are exploding in this time and space that we live in. Or as Czech president Vaclav Havel said:
“It is as if something were crumbling, decaying and exhausting itself – while something else still indistinct were rising from the rubble.”
With that I will end with Marshall McLuhan.
“Perhaps the terrifying thing about the new media for most of us is their inevitable evocation of irrational response. The irrational has become the major dimension of experience in our world. And yet this is a mere byproduct of the instantaneous character in communication. It can be brought under rational control. It is the perfection of the means which has so far defeated the end, and removed the time necessary for assimilation and reflection. We are now compelled to develop new techniques of perception and judgment, new ways of reading the languages of our environment with its multiplicity of cultures and disciplines. In these needs are not just desperate remedies but roads to unimagined cultural enrichment.”
Ah. “Roads to unimagined cultural enrichment.”
I imagine I purposefully end on an optimistic note because in the end I believe technology has the means to offer more good than bad. Ponder.



Let me begin by saying Jane Fonda has been irrelevant to me my entire life. Okay. Maybe better said she has been on the periphery of what I truly care about.
Jane has always been a lightning rod for issues.
And maybe that is where the line ‘home is where you hang your hat’ comes into play. In its simplicity it is actually suggesting that it really isn’t your hat that matters; it is when you accept that you can be who you are, and that ‘who’ is all you can be, that you have found home. And while Thérèse was really suggesting that the material world was simply your journey and heaven, or God, is your destination, the overall thought is truer than true.

are worth half a shit as a troublemaker, you will most likely reside in an 80/20 world. 80% of the trouble you make won’t give you any satisfaction if you are seeking a ‘win.’ So the 20% wins need to be enjoyed.
losses. Let’s call those ‘nudge opportunities.’ But your Maslow self will be defined by the big wins and losses. If you want to survive, you have to get good at 2 things:

Uhm. Is that a reach goal … or a settling goal?
We don’t reach far enough to access the true colors to cover our achievements in, to make them worth looking at over and over again.
while the last one I wrote sounds exactly like what everyone wants, there are no guarantees in Life.
===
Human-ness: what it meant to be human and how to intentionally be human. It didn’t start with technology, but then again it did. Technology has introduced all the distractions necessary to forget we are human. To be clear. This is different than a ‘different than when I was growing up’ discussion (past); this is a discussion about our future and our intentions with regard to being human – individually, societally and in business. The debate, the discussion, should ignore the definitions of technology and focus on the definitions of humans – not generational mumbo jumbo – because there is no contrast between generations (in any meaningful way); the contrast resides in the liminal space we currently stand in –
Technology is first and foremost used for educational purposes. Now. We can debate the definition of education (beyond the institutional aspects), but for the most part people interact with technology to learn and do. [Ponder. This makes technology a transformation tool, but to what? There is certainly a role for undirected education/learning, but inevitably, if we seek to have a better system, the system should have an identified strategic objective. Far too often we reduce technology’s benefit to some simplistic ‘convenience’ tool. Why shouldn’t we expect technology to enable a learning revolution? This will demand a different type of leadership – one that is not passive but rather one that leads a revolution into the future. Since the preservation of the status quo tends to be equated with either protecting traditional values or principles, most leaders have learned (from experience) that ensuring a transformation unfolds slowly permits them the luxury of maintaining positions of power longer. A learning revolution demands a new type of leadership, one that is active, enlightened and engaged. Any revolution is part push and part pull, but technology offers a new dynamic environment in which opportunities can be exploited, in pursuit of a grander vision or strategic objective, if one is willing to actively engage with them. I have said this before, but this new type of leadership is not about charisma, but rather about framing and thinking conceptually. The revolution only occurs if someone can frame the issues in terms that are directly relevant to the communities. The concepts are framed in a way that is easily articulated, understood and assimilated into individual (and collective) objectives. This is a bit grander than alignment (although alignment is certainly a key aspect); rather, it is about finding the coherence necessary for an energy gravity to grab hold and increase progress.]
Design carefully.
Within these intentions the people IN the organization have a variety of paths they can choose to walk on – and clearly see where paths do not lie. I hesitate to call these principles because, well, they seem simply like intentions. With intentions understood, a business can have a community of people interested in working coherently (some people may call this culture) and in pursuing quests to fulfill those intentions. Intentions put some boundaries on the unevenness while actually encouraging unevenness, which increases velocity toward some vision. Intentions put some boundaries on technology.
Intentions matter. What I mean by that is if we do not embrace a human-centric world, intentionally, technology will be increasingly less likely to (a) be optimally effective and (b) be optimally useful to the betterment of humans. Establishing the future is not about technology. It is about humans, society, culture and institutional tradition. The decisions for our future are both top down and bottom up, simultaneously, in which vision and pragmatism are aligned (and resources are equitably dispersed).
manufacturing ideas of which the benefits were easily recognized and the barriers to their adoption also easily recognized. It’s the latter that I speak of today.
people energy being generated from automation).


of Covid on, well, everything, I say one word: amplify. It has simply amplified everything – uncertainty, change, technology (some people call it ‘digital transformation’), existing business vulnerabilities, Life vulnerabilities, as well as business strengths, opportunities & risks, etc., as well as certainty.
biggest lesson of 2020. The quest goes on, despite the fear, whether you are staring at certainty or uncertainty.
some way 2020 encouraged us to find our inner Tinker Bell.
She wasn’t always nice. She was feisty. She was willing to break rules. She had an imagination. I am not suggesting you shouldn’t be nice, but freedom does mean some shaking up of things, some discomfort and some conflict. Look. I am suggesting we need to ‘break some norms’, break some of the Life rules of emergent living (and, no, I do not mean ignoring social distancing or masks, etc.; rather, break some of our expectations of how Life is supposed to be lived), and break out with fear in hand.
I think more of us need to seek our inner Tinker Bell in 2021. And in doing so we have a chance to re-find the magic in Life, embrace fear, and guide ourselves to new and better adventures, not alone but together. Maybe in 2021 we stop valuing certainty over uncertainty, start embracing uncertainty, and simply grab our fears, place them in pocket, and get up and go.


Change in business scares the shit out of any manager & leader.



Well. There is no lack of articles on generational gaps in business and, yet, almost every one of them focuses on simplistic “generational characteristics”, “old versus young”, “what millennials want” and shit like that. Sure. Useful, but I would argue all young people have always wanted a version of the same thing: “do good meaningful shit without all the old people bullshit.”
Please note … I am not suggesting these 50-somethings have to be as good as the young at technology or whatever new innovative techniques are out there yet to be discovered; in fact, it may benefit them not to be, or even to try. Their value is in their heads and experience and the nudging of ‘what can be’ using selected knowledge from ‘what was.’
exponentially challenged with change and are not dealing with it very well <i.e., not letting go very well>. I believe it was a French sociologist, Emile Durkheim, who developed a psychographic method to establish different socio-cultural groupings <I believe it is called the Sinus Milieu>. Anyway. Basically it is a model that challenges us to think about behavior, preferences and cultural practices. The main premise behind the model is called ‘the lock-in principle.’ The principle simply states that if we get used to something, we do not want to change our habits <or attitudes and beliefs> even if we are presented with something new or different that might be better. Simplistically, it consistently shows <to the point that it is almost an unequivocal behavioral truth> that habit is stronger than the desire for improvement.

replaced with complicated constructs that leave most people in the dark.