===
“It is my thesis that machines can and do transcend some of the limitations of their designers, and that in doing so they may be both effective and dangerous.”
Norbert Wiener, 1960
===
“As machines learn they may develop unforeseen strategies at rates that baffle their programmers.”
Norbert Wiener, 1960
===
It’s not very insightful to note that algorithms inherently increase efficiency. Similarly, it doesn’t seem that insightful to note that life, people and business are inherently inefficient <despite all their efforts to be efficient>. I think the insight resides in the fact that putting the two together creates a recipe for disaster. Disaster in that what is easy, or even useful, is not necessarily good for us.
Anyone who has researched AI even slightly would see that even if AI systems learn from accurate, representative data, there will still be issues if that data reflects the underlying biases and inequalities of the system it comes from.
Oh. And it does.
Oh. And, yes, I am going to suggest efficiency is a bias and an inequality driver.
Look. We idolize efficiency, and we increasingly have tools that offer some sense of efficiency, yet the attempted ‘flattening’ of natural inefficiency, generally speaking, only creates, well, dysfunction <and, generally speaking, personal dissatisfaction>.
How the hell does that happen? Simplistically, an algorithm offers a binary choice – A or B. Yeah. To do so it evaluates a variety of inputs but, inevitably, it has to choose a 0 or a 1, this or that. Not weighted, a choice. And sometimes that choice is a guess. Yeah. A guess. My favorite line in Brian Christian’s fabulous book, The Alignment Problem, is “the computer is alarmingly confident when it guesses.” When you simplify everything down to a black & white ‘this or that’ you can sometimes simply flip a coin. Now. Say you are an algorithm and you not only flip a coin, but you flip a coin 6 straight times. Yeah. You can see the possible problem there. Circling back, let’s assume each of those 6 coin flips is driven by efficiency. Yeah. You can see the possible problem there. Let me stretch the efficiency issue out a bit more. Efficiency demands a division of labor, resources and energy. So, if the algorithm is driving all those things toward the ‘most efficient’, well, there are always consequences to a choice.
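To put some rough numbers on that <a minimal sketch in Python, purely illustrative, with invented stakes rather than anything pulled from a real system>: if each of those six binary ‘choices’ is really a confident coin flip, the odds the whole chain lands right collapse fast.

import random

# Toy illustration (invented numbers, not from any real system): a chain of six
# binary decisions where each confident "choice" is actually a 50/50 guess.
FLIPS = 6
TRIALS = 100_000

def chain_all_correct(flips: int) -> bool:
    """True only if every guess in the chain happens to come out right."""
    return all(random.random() < 0.5 for _ in range(flips))

correct = sum(chain_all_correct(FLIPS) for _ in range(TRIALS))

print(f"Analytical odds all {FLIPS} guesses are right: {0.5 ** FLIPS:.2%}")  # ~1.56%
print(f"Simulated odds over {TRIALS:,} trials: {correct / TRIALS:.2%}")

Roughly one chain in sixty-four comes out right, and yet every single flip along the way was delivered as a confident ‘this or that’.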
As a counterpoint, effectiveness, or efficacy, is found in things that are often fairly inefficient <at least in the short term or present>. Effectiveness is the natural enemy of the incrementalization of standardization.
From Range (David Epstein):
Development is not linear, and diversions that set you back in the short term frequently become powerful tools in the long term.
More importantly, effectiveness is the natural enemy of efficiency. How? Well. Efficiency arcs toward improvement of structural character, not improvement of market dynamics. Effectiveness is grounded in external inputs; efficiency in internal outputs. Basically, efficiency strips away system dynamics and flattens organizational principles.
Risk is quantifiable; uncertainty is not. It’s one thing to say there is a 50-50 chance of something going wrong. It’s another to not know the chances at all.
How does it do that? Algorithms grounded in efficiency minimize an organization’s ability to expand and contract. The tendency of everyone in business is to contract (under the guise of ‘focus’). The contracting oversimplifies, but it makes things mentally manageable and certainly drives an efficiency objective. Yet. Navigating complexity <which is the standard state of business> demands that one actually expand one’s situational awareness of the moment.
I imagine I am also discussing the human aspect of algorithms. What I mean by that is algorithms are meant to be a tool for people, not a means to absolve responsibility.
And what do I mean by that?
A collection of people can be stupider than an individual (often even stupider), and an individual can be stupider than a collection of people. The trick is always to find out when one is smarter than the other.
This gets compounded by the fact that technology, well crafted, actually makes people stupider. How? It lessens our critical thinking by offering easier solutions (knowledge gifted is not the same as knowledge earned). We need to be sure to break the stupid loop. Because the smarter technology gets (algorithms actually iterate based on interactions), the less smart people get (we do the hard work of thinking less), so we need to actively use technology to make people smarter. We need to amalgamate company wisdom and institutional learning into a critical mass that cannot be ignored and demands some interaction. It seems counterintuitive because we are activating algorithms to do the hard work of ‘thinking’ <assessing an overwhelming amount of data> and, yet, we should actually be thinking harder ourselves <people>.
Look. Algorithms can be used for the forces of human behavior on team bad (defaults, easy answers, letting tech decide) or the forces of human behavior on team good (curiosity, expanding imagination, innovating).
All I know is algorithms are going to play a big role in navigating business efficacy and shaping business.
—
“The future of companies, regardless of size, will be shaped by algorithms.”
Mike Walsh
—
The key word is “shaped.” Ultimately it will be humans who use the shapes.
Anyway. Here is my wish.
I hope we will be “directed to act” by algorithms, but not managed by them. The latter demands we accept the algorithm as qualified to make us do something behaviorally; the former demands we accept algorithms as something that ‘informs’ our doing. Somewhere in between is the decision of how much we, people, are accountable for thinking. Algorithms inherently encourage us to believe business is not best when it is random. Yet. The best businesses resist the urge to suppress randomness.
Which leads me to temporal convergence and temporal divergence.
Temporal convergence is progressive and affirming. By contrast, “temporal divergence” is where we cannot see a path from where we are “now” to the goal-state. Temporal divergence may result from insufficient capability/capacity to navigate with any confidence toward any goal. Also, one may not know if one is in a state of temporal divergence. The difference between these two things is the difference between survival and death in a business. I bring this up because algorithms, driven by efficiency, are temporal, but you cannot actually see whether they are converging or diverging. Well. At least until it’s too late.
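Here is a minimal sketch of that blindness <a toy Python illustration with invented numbers, not a model of any actual business>: the only thing the efficiency loop can report on is its own proxy metric, which keeps improving even while the distance to the goal-state grows.

# Toy illustration (invented numbers): the efficiency dashboard improves every
# quarter while the gap between "now" and the goal-state quietly widens.
cost_per_unit = 10.0      # the proxy metric the algorithm optimizes
distance_to_goal = 100.0  # how far the business actually is from its goal-state

for quarter in range(1, 9):
    cost_per_unit *= 0.95      # the proxy looks 5% better each quarter...
    distance_to_goal *= 1.08   # ...while the real gap grows 8% each quarter
    print(f"Q{quarter}: cost/unit {cost_per_unit:5.2f} | distance to goal-state {distance_to_goal:6.1f}")

# Nothing on this "dashboard" distinguishes convergence from divergence, because
# the only thing being measured is the thing being optimized.

The numbers are made up; the blindness is not – a loop that measures only its own efficiency cannot tell you whether it is converging on the goal-state or drifting away from it.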
All businesses will exist, in some form or fashion, grounded in algorithms. I am fairly sure that’s a given. What is not a given is what role efficiency will play in the future of business. Now, I am not naive; an efficiency focus will not go away overnight, but efficiency, relentlessly pursued, is simply a path to commoditization. The challenge will be to not get consumed by efficiency-driven algorithms and to realize algorithms do not give answers, but outline options. To realize algorithms don’t define redundancies, but rather define the outlines of systems. And, in the end, to realize efficiency is the lazy way out and to understand algorithms can make us even lazier, so the double hit from the dull axe of efficiency will bludgeon us, individuals and businesses, into a bloody pulp even as the dashboards scream success. Ponder.




On November 18th, 2009, I wrote these words:
Today I will talk a little about what I perceive as a unique time, at least in our lifetime. It is a point at which general uncertainty is colliding with personal uncertainty, which is colliding with leadership uncertainty.
To help me, I pulled out my battered copy of Hayakawa’s “Use the Right Word,” trying to find the right words, and on page 550 there it was – significant. Synonyms for significant are listed as consequential, grave, important, momentous, serious, vital and weighty.
certainty.
things in our lives.
And I am not saying this because I believe “fear is a great motivator” <because I would suggest that it is really survival that is a great motivator … not fear>. I suggest these things because fear, more often than not, actually freezes us … makes us do nothing. If we face it, name it, say ‘fuck you’ to it, we will do what it takes and what needs to be done —






====
Human-ness: what it means to be human and how to intentionally be human. It didn’t start with technology, but then again it did. Technology has introduced all the distractions necessary to forget we are human. To be clear. This is different than a ‘different than when I was growing up’ discussion (past); this is a discussion about our future and our intentions with regard to being human – individually, societally and in business. The debate, the discussion, should ignore the definitions of technology and focus on the definitions of humans – not generational mumbo jumbo – because there is no contrast between generations (in any meaningful way); the contrast resides in the liminal space we currently stand in –
Technology is first and foremost used for educational purposes. Now. We can debate the definition of education (beyond the institutional aspects), but for the most part people interact with technology to learn and do. [Ponder. This makes technology a transformation tool, but a transformation to what? There is certainly a role for undirected education/learning, but inevitably, if we seek to have a better system, the system should have an identified strategic objective. Far too often we reduce technology’s benefit to some simplistic ‘convenience’ tool. Why shouldn’t we expect technology to enable a learning revolution?]

This will demand a different type of leadership – one that is not passive but rather one that leads a revolution into the future. Since the preservation of the status quo tends to be equated with protecting traditional values or principles, most leaders have learned (from experience) that ensuring a transformation unfolds slowly permits them the luxury of maintaining positions of power longer. A learning revolution demands a new type of leadership, one that is active, enlightened and engaged. Any revolution is part push and part pull, but technology offers a new dynamic environment in which opportunities can be exploited, in pursuit of a grander vision or strategic objective, if one is willing to actively engage with them.

I have said this before, but this new type of leadership is not about charisma, but rather about framing and thinking conceptually. The revolution only occurs if someone can frame the issues in terms that are directly relevant to the communities. The concepts are framed in a way that is easily articulated, understood and assimilated into individual (and collective) objectives. This is a bit grander than alignment (although alignment is certainly a key aspect); it is about finding the coherence necessary for an energy gravity to grab hold and increase progress.
Design carefully.
Within these intentions, the people IN the organization have a variety of paths they can choose to walk on – and can clearly see where paths do not lie. I hesitate to call these principles because, well, they seem simply like intentions. With intentions understood, a business can have a community of people interested in working coherently (some people may call this culture) who pursue quests to fulfill those intentions. Intentions put some boundaries on the unevenness while actually encouraging the unevenness that increases velocity toward some vision. Intentions put some boundaries on technology.
Intentions matter. What I mean by that is if we do not embrace a human-centric world, intentionally, technology will be increasingly less likely to (a) be optimally effective and (b) be optimally useful to the betterment of humans. Establishing the future is not about technology. It is about humans, society, culture and institutional tradition. The decisions for our future are made both top down and bottom up, simultaneously, with vision and pragmatism aligned (and resources equitably dispersed).

tangible, and, excepting some few cases, cannot be transacted INTO something of value (Tesla is attempting to change that formula). It is currently solely wealth creation, and mainly speculative wealth creation (wealth invested to create wealth). Value creation is grounded in labor, or work, in creating some value for the marketplace. Or, as Mariana Mazzucato would suggest, something that contributes to GDP versus something that does not (Bitcoin does not). This is an important discussion because Bitcoin evangelists talk about becoming THE global currency, and what they are today is light years away from what they say they want to be.
It is quite possible I am missing something, but I will also say that I hear a fairly naïve view of the transition from a speculative investment tool to a value exchange tool. That transition will not be a simple liminal space, but rather, more likely, a jarring event for the existing value of Bitcoin. Why? Because the existing marketplace value, existing country currencies (fiat), existing marketplace values (prices) and existing mental values (people’s attitudes about what to pay) have a significant grip on Value.
“You’re more likely to act yourself into feeling than feel yourself into action.”
I don’t understand how, when people talk about purpose or meaning, they do not talk about doing.

Conclusion

There has to be some reality to ground some imagination.











This may begin to sound selfish, but it is not.









