algorithms and efficiency


===

“It is my thesis that machines can and do transcend some of the limitations of their designers, and that in doing so they may be both effective and dangerous.”

Norbert Wiener, 1960

===

“As machines learn they may develop unforeseen strategies at rates that baffle their programmers.”

Norbert Wiener, 1960

===

It’s not very insightful to note that algorithms inherently increase efficiency. Similarly, it doesn’t seem that insightful to note that life, people and business are inherently inefficient <despite all their efforts to be efficient>. I think the insight resides in the fact that this combination creates a recipe for disaster. Disaster in that what is easy, or even useful, is not necessarily good for us.

Anyone who has researched AI even slightly would see that even if AI systems learn from accurate, representative data, there will still be issues if that data reflects the underlying biases and inequalities of the system that produced it.

Oh. And it does.

Oh. And, yes, I am going to suggest efficiency is a bias and an inequality driver.
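
To make the biased-data point concrete, here is a toy sketch in Python <made-up names and numbers, not anyone’s real system>. The record below is perfectly ‘accurate’ in that it faithfully logs past decisions, but those decisions applied a higher bar to one group, so anything rewarded for fitting the record learns the bias right along with it.

```python
import random

# A toy sketch, not real data: the record faithfully logs past hiring
# decisions, but those decisions used a higher bar for group Y. A model
# that fits this record perfectly reproduces the bias perfectly.
random.seed(0)

def historical_decision(group: str, skill: float) -> bool:
    # The (biased) rule that generated the historical record.
    return skill > (0.5 if group == "X" else 0.7)

data = [
    (group, skill, historical_decision(group, skill))
    for group, skill in (
        (random.choice("XY"), random.random()) for _ in range(10_000)
    )
]

for g in ("X", "Y"):
    hires = [hired for group, _, hired in data if group == g]
    print(g, f"hire rate: {sum(hires) / len(hires):.2f}")
# X ~ 0.50, Y ~ 0.30: representative data, biased outcome. A model graded
# purely on accuracy has every incentive to learn the 0.7 bar for group Y.
```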

Look. We idolize efficiency and we increasingly have tools that offer some sense of efficiency, yet the attempted ‘flattening’ of natural inefficiency, generally speaking, only creates, well, dysfunction <and, generally speaking, personal dissatisfaction>.

How the hell does that happen? Simplistically, an algorithm offers a binary choice – A or B. Yeah. To do so it evaluates a variety of inputs but, inevitably, it has to choose a 0 or a 1, this or that. Not weighted, a choice. And sometimes that choice is a guess. Yeah. A guess. My favorite line in Brian Christian’s fabulous book, The Alignment Problem, is “the computer is alarmingly confident when it guesses.” When you simplify everything down to a black & white ‘this or that’ you can sometimes simply flip a coin. Now. Say you are an algorithm and you not only flip a coin, but you flip a coin 6 straight times. Yeah. You can see the possible problem there. Circling back, let’s assume each of those 6 coin flips is driven by efficiency. Yeah. You can see the possible problem there. Let me stretch the efficiency issue out a bit more. Efficiency demands a division of labor, resources and energy. So, if the algorithm is driving all those things toward the ‘most efficient’, well, there are always consequences to a choice.
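
And if the coin-flip math feels abstract, here is a minimal sketch <purely illustrative; the ‘correct path’ is made up> of six chained this-or-that choices. When each choice is a guess, the whole chain lands right about 1.6% of the time, and the result gets reported just as confidently either way.

```python
import random

# A minimal sketch, purely illustrative: six chained this-or-that decisions,
# each one secretly a coin flip.

def binary_choice() -> str:
    # A forced 0-or-1 decision; here it is literally a coin flip.
    return random.choice("AB")

correct_path = list("ABBAAB")  # the (hypothetical) right sequence of choices

trials = 100_000
all_six_right = sum(
    all(binary_choice() == want for want in correct_path) for _ in range(trials)
)
# The odds the whole chain lands "right" are (1/2)**6 = 1/64, about 1.6%.
print(f"all 6 guesses right: {all_six_right / trials:.4f}")  # ~ 0.0156
```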

As a counterpoint, effectiveness, or efficacy, is found in things that are often fairly inefficient <at least in the short term or the present>. Effectiveness is the natural enemy of the incrementalization of standardization.

“Development is not linear, and diversions that set you back in the short term frequently become powerful tools in the long term.”

Range

More importantly, effectiveness is the natural enemy of efficiency. How? Well. Efficiency arcs toward improvement of structural character, not improvement of market dynamics. Effectiveness is grounded in external inputs; efficiency in internal outputs. Basically, efficiency strips away systems dynamics and flattens organizational principles.

Risk is quantifiable; uncertainty is not. It’s one thing to say there is a 50-50 chance of something going wrong. It’s another to not know the chances at all.

How does it do that? Algorithms grounded in efficiency minimize an organization’s ability to expand and contract. The tendency of everyone in business is to contract (under the guise of ‘focus’). Contracting oversimplifies, but it makes things mentally manageable and certainly drives an efficiency objective. Yet. Navigating complexity <which is the standard state of business> demands one actually expand their situational awareness of the moment.

I imagine I am also discussing the human aspect of algorithms. What I mean by that is algorithms are meant to be a tool for people, not a means to absolve responsibility.

And what do I mean by that?

A collection of people can be stupider than an individual (often far stupider), and an individual can be stupider than a collection of people. The trick is always to find when one is smarter than the other.

This gets compounded by the fact that technology, well crafted, actually makes people stupider. How? It lessens our critical thinking by offering easier solutions (knowledge gifted is not the same as knowledge earned). We need to be sure to break the stupid loop. The smarter technology gets (algorithms iterate based on interactions), the less smart people get (we do the hard work of thinking less), so we need to actively use technology to make people smarter. We need to amalgamate company wisdom and institutional learning into a critical mass that cannot be ignored and demands some interaction. It seems counterintuitive because we are activating algorithms to do the hard work of ‘thinking’ <assessing an overwhelming amount of data> and, yet, we should actually be thinking harder ourselves <people>.
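
Here is a minimal sketch of that loop <hypothetical numbers and a deliberately dumb greedy recommender, not any particular product>: serve the current best guess, let most users accept the default, reinforce on every acceptance, and watch whichever option got lucky early run away with it.

```python
import random

# A minimal sketch of the "stupid loop": the system serves its current best
# guess, most users accept the default, and every acceptance reinforces the
# guess, so an early lucky option gets locked in.

options = ["A", "B", "C", "D"]
clicks = {o: 0 for o in options}

def recommend() -> str:
    # Greedy: serve whatever has the most clicks so far (ties broken at random).
    best = max(clicks.values())
    return random.choice([o for o in options if clicks[o] == best])

def lazy_user(shown: str) -> str:
    # 90% of the time the user just accepts what is offered.
    return shown if random.random() < 0.9 else random.choice(options)

for _ in range(10_000):
    clicks[lazy_user(recommend())] += 1  # the algorithm iterates off interactions

print(sorted(clicks.items(), key=lambda kv: -kv[1]))
# One early winner dominates: the loop amplified a default, not a judgment.
```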

Look. Algorithms can be used by the forces of human behavior on team bad (defaults, easy answers, letting tech decide) or on team good (curiosity, expanding imagination, innovating).

All I know is algorithms are going to play a big role in navigating business efficacy and shaping business.

“The future of companies, regardless of size, will be shaped by algorithms.”

Mike Walsh

The key word is “shaped.” Ultimately it will be humans who use the shapes.

Anyway. Here is my wish.

I hope we will be “directed to act” by algorithms, but not managed by them. The latter demands we accept the algorithm as qualified to make us do something behaviorally; the former demands we accept algorithms as something that ‘informs’ our doing. Somewhere in between is the decision of how much we, people, are accountable for thinking. Algorithms inherently encourage us to believe business is at its best when randomness is removed. Yet. The best businesses resist the urge to suppress randomness.

Which leads me to temporal convergence and temporal divergence.

Temporal convergence is progressive and affirming: we can see a path from where we are “now” to the goal-state. By contrast, temporal divergence is where we cannot see such a path. Temporal divergence may result from insufficient capability/capacity to navigate with any confidence toward any goal. Also, one may not know if one is in a state of temporal divergence. The difference between these two things is the difference between survival and death in a business. I bring this up because algorithms, driven by efficiency, are temporal, but you cannot actually see whether they are converging or diverging. Well. At least until it’s too late.

All businesses will exist, in some form or fashion, grounded in algorithms. I am fairly sure that’s a given. What is not given is what role efficiency will play in the future of business. Now, I am not naive; an efficiency focus will not go away overnight. But efficiency, relentlessly pursued, is simply a path to commoditization. The challenge will be to not get consumed by efficiency-driven algorithms and to realize algorithms do not give answers, but outline options. To realize algorithms don’t define redundancies, but rather define the outlines of systems. And, in the end, to realize efficiency is the lazy way out and understand that algorithms can make us even lazier, so the double hit from the dull axe of efficiency will bludgeon us, individuals and businesses alike, into a bloody pulp even as the dashboards scream success. Ponder.

Written by Bruce