
“Since corrupt people unite amongst themselves to constitute a force, then honest people must do the same.”

Leo Tolstoy

Today’s piece began with a stack of books and an idea.

The books were The Economic Singularity (Calum Chace), A Human Algorithm (Flynn Coleman), The Dictionary of Dangerous Ideas (Mike Walsh), The Alignment Problem (Brian Christian) and, well, Who Owns the Future (Jaron Lanier). In general, they are all filled with dangerously good ideas with regard to thinking about the future.

My idea is “the future will always arc toward humans and humanness (not necessarily humanity)”.

With all the dangerous ideas and my idea, which is probably dangerous in and of itself, I set off pondering what the future would look like and, maybe more importantly, how we would get there.

All that said.

I have altered my views on the effects of automation and robots over time. Suffice it to say I may have been a bit too ‘human-optimistic.’ I still believe we will inevitably get to a good space, just that it is going to be pretty rough for humans until we get there. That doesn’t mean good things will not happen and there won’t be some benefits, just that the cost of running the gauntlet from here to there is gonna be high.

For some it will be about toys and enhancements but for many, many, others it will be about lost jobs and lost ‘meaning’.

** note: by ‘meaning’ I mean that society, and business, will be relatively flippant with regard to what humans actually do as they discuss what robots/automation can do, making people feel like whatever skills they may offer don’t really have any meaning, and swaths of the population will feel either marginalized or villainized. As I have said before, “the ‘meanings of lives’ are at stake whenever we discuss the future of work, yet, far too often we speak of retraining/reskilling as if it’s changing clothes all the while ignoring the fact you are naked in between for all the world to see. Psychologically not something most of us desire.”

But when it is all said and done, I believe we will look back and:

  • See we underestimated how important efficiency, and profit, were to business in an increasingly commoditized world
  • See we overestimated people’s ability to exercise personal choice against the onslaught of AI and internet bots
  • See we were far too quick to adopt ‘new & shiny’ and be guinea pigs in real in-market trial & error (of things that had consequences far beyond the use-case)
  • See that technologists, the ones developing the AI/automation/robot innovations, consistently overestimated the value of what they brought to humans and underestimated the value of humans

I believe what bothers many of us, even those of us who view much of technology as having massive benefits for the human race and humanity, is that, well, a tech-apocalypse isn’t off the table for the future. This nagging feeling nudges at us again and again. And maybe that is our conscience reminding us of where I began today – corrupt people gather and, therefore, honest people need to also.

All that said.

The reality is ‘robots’ have already joined the human world – smart coffee makers, in-car navigation, retail locator apps, drones (for shipping as well as war), self-driving trucks, “if you like this, you may like this” algorithms – seen and unseen – and even the ‘metaverse.’ It’s here and part of our lives. I imagine the only question from here on out is how far we permit it to go.

Given my opening, here are some scenarios:

Toddler-education.

I’ll leave older education to someone else. In my view we should focus on ‘pre-traditional school age’ children. Research shows time and time again that the earlier you begin education the better it is for the child (and brain). At some point someone, some technologist, is going to develop a stuffed animal robot which will be able to interact with a child, answering questions and offering contextual knowledge/information. The more successful it becomes, the more technologists will view it as an opportunity to offer “more,” and they will attempt to make this animal companion responsible for all education. This means some well-meaning technologist will suggest it shouldn’t just be informational, but also ‘an emotional intelligence companion’ (attempting to help build the emotional underpinnings, EQ, of a successful human being). It will become a cross between Simon Sinek and Encyclopedia Britannica with a twist of Aristotle. Farther down this rabbit hole some entrepreneur will suggest customizing the programming to match up with the parents’ ideology <religious beliefs, patriotic views, in other words, ingrained existing biases> making this a business idea focused on niche sales. Instead of broadening early education this likeable furry animal robot will actually begin constricting and indoctrinating. Here is the good news. After running this wildly unhealthy technology gauntlet, humanity will circle back and remind itself that a curious young child learning basic knowledge creates the foundation for a happier, healthier, more productive citizen – of a country and the world. Children will have furry animal robots more like a librarian than some life companion or cult leader.

Where did the jobs go?

I firmly believe AI and robots will augment people working, to the benefit of the worker, the consumer (whoever receives the outputs of the human-tech partnership) and humanity. But the path to that is not going to be easy. Business, which for years has been neglecting value creation in its pursuit of efficiency (profit maximization), ended up relentlessly pursuing AI/automation/robots as a means not to purposefully replace humans, but to purposefully optimize margins within an existing cost structure. Sure, they gave a head nod, initially, to human creativity and imagination, but eventually hunkered down and pursued AI/programming to replicate it well enough to feed incremental innovations and maintain their place in industry (because everyone else was doing it). Business, as business does, embraced a zero-sum, tragedy-of-the-commons attitude and it was a race to see how much AI/automation/robotics they could use before the other guy used more. People lost jobs simultaneously in droves and by trickle effect, depending on the industry. Uhm. But then we hit peak ‘where the hell did my job go’ status when boards of directors realized they didn’t need an Elon Musk or Jeff Bezos-type running a globalized business, they just needed a well-programmed robot. Even the so-called business titans were out on the street. Capitalism crashed into robot commoditization and not only did business suffer, innovation suffered and, well, consumers suffered as a consequence because nothing truly got better. Oh. And entrepreneurs got the life sucked out of them because new only mattered if it could be quickly scaled or quickly replicated.

What I am saying is business will do what business does: go too far in its pursuit of making a business reliably and replicably profitable.

But here is the good news from the future. At some point business will realize that what actually generates value and enables progressive value creation is, well, humans. The robots will be redesigned, again but better, not to replace humans but to keep the labor force the same size as today’s, with everyone optimized because they work in partnership with some technology. Robots, automation and AI will simply make people better at what they do, businesses will be better in general and everyone will be happier because productivity is enhanced BY humans simply being the best they can be.

Find the liars.


I have said for years that we should be looking to the rising generation of people to solve, and resolve, the idiocracy occurring in the social media online world. I stand by that. But it is going to be a painful road getting there. In the future we will look back and laugh at the naivete of the general human population. We will wonder how humans accepted lying, and confused a lie with the truth, or just accepted ‘truthiness.’ But mostly we will look back and wonder how humans believed technology, left unaccountable, was good for truth. Heck. We will look back and wonder how humans believed humans were good for truth. As the years went by, slowly but surely even the people who said “no algorithm can tell me what to think and do” began to realize that, well, yes, algorithms can shape what I think and do. And as they realized this, and how idiotic they had been, they got angry. They got angry because if politicians perpetuate Big Lies with no consequences and bots share fake news/stories with no consequences, then there is no trust. And no trust sucks. So, what ended up happening is a group of people, mostly younger people who craved some honesty and truths, said “dicks will be dicks and I cannot control what dick-like lies/half-truths they will spin and say, but I can develop an app that filters everything it takes in and tells us when it is a lie or a half-truth or even the good ole truth.” Now. It took years to work out the kinks, but the app immediately filtered out the quacks, extremists, nutjobs, conspiracy theorists and trolls. That was a big win in and of itself. While everything was still overwhelmingly interconnected, this “find the liars” software made truth-connectedness far more visible. While there could be an argument over whether automation or humans should craft the news, in the future we found that all people mostly cared about was the truth and being able to trust what they learned. The interesting thing about the future is that nutjobs and conspiracy theorists still exist, they just keep getting bludgeoned to death by the ‘find the liars’ software instead of some idiotic twitter feed argument. Humans saved humanity through, well, an app. Go figure.
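For fun, here is a minimal sketch of what the skeleton of such an app might look like. Everything here is hypothetical – the `credibility_score` function is a crude stand-in for the trained models and fact-check lookups that would do the real work – but it shows the basic idea: attach a verdict to every claim before a human ever acts on it.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    TRUTH = "truth"
    HALF_TRUTH = "half truth"
    LIE = "lie"

@dataclass
class Claim:
    source: str
    text: str

def credibility_score(claim: Claim) -> float:
    """Stand-in for the real work: a trained model plus fact-check
    lookups would return a 0.0-1.0 credibility estimate here. This
    toy version just penalizes a few classic red-flag phrases."""
    red_flags = ("everyone knows", "they don't want you to know", "100% proven")
    score = 0.9
    for phrase in red_flags:
        if phrase in claim.text.lower():
            score -= 0.4
    return max(score, 0.0)

def find_the_liars(feed: list[Claim]) -> list[tuple[Claim, Verdict]]:
    """Filter a feed: label every claim before a human ever sees it."""
    labeled = []
    for claim in feed:
        s = credibility_score(claim)
        if s >= 0.7:
            verdict = Verdict.TRUTH
        elif s >= 0.4:
            verdict = Verdict.HALF_TRUTH
        else:
            verdict = Verdict.LIE
        labeled.append((claim, verdict))
    return labeled

if __name__ == "__main__":
    feed = [
        Claim("@somebot", "Everyone knows it was 100% proven rigged."),
        Claim("@reporter", "The committee voted 7-2 to advance the bill."),
    ]
    for claim, verdict in find_the_liars(feed):
        print(f"[{verdict.value}] {claim.source}: {claim.text}")
```

The point isn’t the scoring (a keyword list is laughably crude); it’s the architecture: the filter sits between the feed and the human, so every claim arrives with a verdict attached before anyone can share it.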

Dude, where’s my car?

Self-driving, autonomous, cars/vehicles. I have changed my views on this over time. I used to think they were not only inevitable but that people would actually be happy handing over the driving responsibility to their car. I’ve rethought that. Anyway. Let’s look back from the future. Technologists, sitting at their desks envisioning a better world, collectively and relentlessly pursued a non-human vehicular world. They said, “what the fuck, we have cars, we have people, why are people driving them instead of doing something else with their time?”** and, instead of replacing cars, they simply thought:

  • “I bet there is a way to make some money off this”
  • “we can make some cool cutting-edge 5-dimensional metaverse AI systems” (which may make me famous)
  • “people are always bitching about not having enough time; this gives people more time to use the other social media/technology widgets we have developed if they don’t have to actually drive while in a car”
  • oh, and what about “safety”

** note: I would note that someone also thought the same thing about mass transit, railway transportation, shipping and planes, all of which are mostly self-whatevering, yet we still have humans in the control sections/cockpits and, in key segments, full crews.

Well. Setting aside that this whole idea takes an existing world (everyone owning cars) and simply looks at it in a fairly myopic way, technologists went to work. They quickly found simplistic, effective solutions for getting cars from here to there. And the first ones to pay the price were the drivers who got paid for driving – long haulers, Uber drivers, delivery vans, etc. Businesses, eyeing autonomous vehicles through their own self-interest lens, saw it as “let’s make some more money” and, well, human jobs bit the dust. But it got a little weird for everyone when really well-designed autonomous vehicles were 99% incredibly successful – and that 1% was actually killing or injuring people at about the same rate as human drivers had been doing in their past job, i.e., driving. This is where technologists went back to the drawing board and flipped the equation. They said “well, if safety is the primary objective, won’t we make money by doing that?” Ok. They didn’t say that. Ethel, at the front security desk, one of the few humans still working at the company, said it to one of the technologists as their security key card wasn’t working and she needed to re-key it for them. Anyway. This new drawing board developed smart cars, with people driving, which decreased human errors in driving situations and, well, driving became safer, traffic flows improved slightly <which in urban areas drove value through the roof> and people started enjoying driving a bit more. Humans will eventually take back control of getting from here to there but along the way a bunch of people are gonna lose their jobs and their lives.

Let’s make everyone in the company a decisionmaker.

The advent of the IoT ecosystem, forming a web of interconnected systems where objects and gateways interact using open standard protocols, enabled edge computing. Basically, it meant you could distribute effective decision making, not just decision making, because you could efficiently distribute data/knowledge/institutional wisdom to whomever was closest to the business decision/situation at hand. From there it began going a bit sideways. Initially, innovative leaders bought into edge computing. This was a business decision, not a human decision. The business world had been convinced that time was money and there was less time, and shorter attention spans (neither true), and that, through technology, decisions could be made faster and better and they would make more money. So, they told everyone they were now a decisionmaker. Oddly, this wasn’t very effective. Most people on the edges of the organization had never really been enabled to make many decisions and ended up being slower, less willing to take risks and, well, actually bad at a lot of decisions.

Technologists seized this as the “Edge Computing 3.25” challenge and went to work.

They set up cloud computing to become a ‘super brain’ and designed individual terminals to actually crunch data and issue dashboards personalized to each specific role. All that wasn’t enough. Some genius designed an ‘eliminate risk algorithm’ which appeared as a button on each distributed decisionmaker’s screen. Eureka. I get to make a decision, but eliminate risk. This algorithm calculated all possible outcomes given all the data it scooped up and offered up a guide to making the right choice. Leadership lauded the innovation and bought it up like children in a candy store.

Several years later internal surveys suggested employees weren’t happy, nor were the choices being made particularly good. The employees were suggesting they weren’t really enabled to make decisions, the computer was, and they were simply implementing. Leaders took note and, well, fired everyone. They thought “well, if they really aren’t making decisions, who needs them (I gave them a job, they didn’t do it, fire them for cause).” Accountants applauded because all the edge computing was a sunk cost and employees just seemed like an inefficient expense. Several years later leadership noticed not only were many of the choices not so good, but some of them were extraordinarily bad. They brought in the now-ancient technology futurist, Brian Christian, who chuckled and said “When a computer guesses, it is alarmingly confident.” At some point people started realizing risk had actually increased because the algorithm simply picked yes or no even if all the data it crunched from the data ocean it had access to suggested one choice was better by only 2 percentage points (51 to 49).

Business, never one to quit when a dollar is to be made, said “Wired Magazine still tells me bringing decisions closer to the actionable moments is good business, so let’s bring back humans but make them smarter.” So, technologists sharpened some pencils and came up with a Brain Robot. It looked like a helmet but technologists convinced everyone it was a robot enhancing the brain. All employees wore these helmets all day long and the sensors in them collected any data anyone could ever imagine with regard to the brain, physiology and neuroscience. Facebook came along and claimed “this is the metaverse.” Humans just thought they looked weird and felt weird but also felt better about making decisions. Problems began occurring when (a) employees started head-butting each other with the helmets, damaging sensors, and (b) technologists started noticing that the sensors, when sensing the human being overwhelmed with anxiety (common with decisions) or overstimulated with too much information, crafted the most simplistic output possible. The robots, programmed to help make choices, designed output to ensure a choice was made. Similar to the ‘eliminate risk algorithm,’ business leaders found, well, computers suck at making choices and decisions.

Here is the good news. After all this investment in technology and firing/hiring people, business finally decided that enabling people to make some decisions, not all, is good and people are actually good at it (and get better with practice). They also found it was cheaper and better to make people’s brains smarter by making them more data literate and by crafting dashboard summaries which offered an array of “approved by system” acceptable choices. They also found that value creation decisions are more often an accumulation of “art” decisions, not science decisions.
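That 51-to-49 failure mode is easy to show in a toy sketch (everything below is hypothetical, just illustrating the logic): a hard yes/no recommender throws away the margin, and the margin is exactly where the risk lives.

```python
def eliminate_risk(p_yes: float) -> str:
    """The 'eliminate risk algorithm' as described above: always emits
    a definitive choice, no matter how thin the margin."""
    return "YES" if p_yes >= 0.5 else "NO"

def honest_recommendation(p_yes: float, tie_band: float = 0.05) -> str:
    """A human-in-the-loop variant: refuses to dress a coin flip up as
    certainty, and hands near-ties back to the human decisionmaker."""
    if abs(p_yes - 0.5) <= tie_band:
        return f"TOO CLOSE TO CALL ({p_yes:.0%} yes / {1 - p_yes:.0%} no) -- your call"
    return "YES" if p_yes >= 0.5 else "NO"

for p in (0.51, 0.49, 0.73):
    print(f"p={p:.2f}  algorithm: {eliminate_risk(p):<3}  honest: {honest_recommendation(p)}")
```

Both functions see the same number; only one of them admits what the number actually says.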

Nothing will ever be safe.

In the future business ideas actually become safer. Yeah. But, as business does, it didn’t reach that conclusion easily. This is actually about quantum computing, but I will begin in an odd place – security. As I said opening this piece, there are good guys and there are bad guys, and the bad guys will use whatever is available to do bad things if the good guys don’t get their shit together and do something. The issue here? Algorithms running on quantum computers make nothing safer, let alone safe.

As one expert suggested: “quantum computing forms an unprecedented threat to one of these traditional strongest links in cybersecurity: asymmetric encryption”. Suffice it to say that quantum algorithms running on quantum computers make what used to be difficult calculations easy and impossible calculations possible. When the first quantum computers became commercially available, and with security standard changes being time consuming, we (humans) of course fucked it up because a bunch of humans, technologists prompted by greedy business leaders, raced to do things with this new opportunity with their focus not on security, but on making money. And while the good people went running after a shitload of esoteric, fascinating, possibly meaningful questions, the bad guys used this opportunity to, well, make themselves money. And make money in some not-so-ethical ways. I imagine this all falls under the ‘cybersecurity’ heading and I know just enough about cybersecurity to be dangerous. That said. Viewing this scenario from the future one can clearly start quoting Frost, “two roads diverged,” and watch how quantum computing simultaneously enabled businesses to do some things they had never done before and made them even more vulnerable. We can see how business technologists gleefully embraced quantum computing because, well, “let’s make money.” The problem we will also see is that the bad guys are also sitting there going “let’s make money,” and some of them will make money not just by stealing your money, but by stealing what you are thinking and saying. And they will not advertise they did so, yet will start using YOUR ideas in ways that you didn’t envision, faster and with the knowledge of what YOU are/were going to do <insert a business ’yikes’ here>. Technologists, huddled in their own little worlds, ignored the fact that what makes businesses successful isn’t technology – it’s knowledge. Over time, given that learning is truly the only sustainable advantage, the quantum thieves began to inherit the business world. Well. That is until they didn’t. We saw that not all the heads of CPG companies, retail companies and the IBMs of the world lost their jobs to some robot, and those that didn’t finally got their shit together and said “hey, this isn’t sustainable, having someone steal our ideas makes us losers, not technology winners.” Those humans then went out and out-quantumed quantum and wrangled back ownership and security. I bring this one up because humans are, well, human. We don’t care about shit until it affects us. Quantum computing is one of those “sounds interesting but doesn’t affect me” things. In the future we will look back and remind ourselves of a couple things that Norbert Wiener said in 1960, before quantum computing arrived: “It is my thesis that machines can and do transcend some of the limitations of their designers and in doing so they may be both effective and dangerous” and “As machines learn they may develop unforeseen strategies at rates that baffle their programmers.”
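To make the asymmetric encryption point concrete: RSA, the workhorse of asymmetric encryption, is only safe because factoring large numbers is classically hard – and factoring is precisely what Shor’s algorithm does efficiently on a large enough quantum computer. A toy sketch (with laughably small primes, purely illustrative) shows how completely the private key depends on that one assumption:

```python
# Toy RSA with tiny primes, purely to show the dependency:
# the private key falls out the moment anyone can factor n.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)   # n = 3233 is public; p and q are secret
e = 17                              # public exponent
d = pow(e, -1, phi)                 # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)             # anyone can encrypt with the public key (e, n)

# The attacker's job reduces to factoring n. Brute force works here only
# because n is tiny; Shor's algorithm does this efficiently at real key
# sizes on a sufficiently large quantum computer.
for candidate in range(2, n):
    if n % candidate == 0:
        p2, q2 = candidate, n // candidate
        break

phi2 = (p2 - 1) * (q2 - 1)
d2 = pow(e, -1, phi2)               # attacker rebuilds the private key
print(pow(cipher, d2, n))           # prints 42 -- plaintext recovered
```

At real key sizes the brute-force loop is hopeless for a classical machine; swap in a capable quantum computer running Shor’s algorithm and the whole scheme collapses, which is why post-quantum encryption standards matter.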

Business will get its shit together at some point because this isn’t really about security, it’s about humans. There is little that humans, at least in business, care more about than their ideas. This is because (a) being smart is one of the few feel-good aspects of business and (b) ideas mean money. But it is going to suck for quite some time because of quantum computing.

I need to call my doctor.

In the future we have resolved most of the health data privacy issues, largely because of two things:

  • Early on there were initiatives like education partnerships to improve AI practices, which embedded ethical, useful practices in the medical industry
  • Over time people recognized that businesses would actually pay more to the people who showed up more often healthy and productive. Incentivized by that, people began wearing more health wearables, shared more information and actually, well, felt better and were healthier.

So, today, I’ll focus on robot physicians. Yeah. As in physicians that ARE robots. The slippery slope to robot physicians found its origins in, well, the healthcare system. While article after article was being written about how empathy was the key to great healthcare, people waited days if not months for appointments, then hours in waiting rooms, reading Highlights Magazine or months-old Good Housekeeping, only to see a nurse practitioner who handled the nasty details and then a doctor who seemed to try and care but actually looked tired and wanted to cut to the chase. In addition, we were more often walking into these appointments having spent hours of personal research online and more than willing to debate any solution offered us. Insurance cost an arm and a leg and the actual bills were undecipherable except for the total, which made us sick all over again. Against that backdrop the medical community diverged – one path offered the same shit and another path became high-end, one-to-one, television-doctor-recognizable service. The latter path began developing systems in which algorithms sifted through reams of health data so that the doctor could simply share the best-of-the-best estimate of what was wrong and the appropriate solutions. As time went on the doctor became less a doctor and simply a chatbot for the AI. It was at this point that some entrepreneurial genius said “why can’t I just have everyone – the people still languishing on the other path – stop visiting real doctors and just interact with the AI/robot?” It is at this point where the half-life of a doctor not only halved, but was snuffed out. In a matter of years, simply due to the sheer volume of human/robot interactions (because this path had 80% of the human population), the robots could actually do the one thing doctors could still do – have conversations, recognize cues and offer the ‘care responses.’ Well. They could do it well enough that humans started going to robots in droves. Heck. The robots even stopped by homes. The last nail in the coffin for doctors was the accountants. Let us not forget healthcare is a business and, well, making money and profit is good. No doctors, no/less malpractice, no human error; well, doctors are out on the street. Sure. Some of the humans raised some concerns but, if you remember the origin story I offered, going back wasn’t that appealing to most people. This is where I offer up the word that became the savior for real human doctors – trust. I bring that up because at some point in our future, as doctor robots were healing the shit out of the bulk of the human population, we began to find out that the manufacturers, who of course “want to make money,” were accepting huge amounts of money from pharmaceutical companies to program these robots to dispense advice that benefited that pharmaceutical company. Oh. Then we found out programs had been hacked and data was being used by, well, it didn’t matter. Anyone who hacks to get that information isn’t hacking for your benefit. This was all annoying until one election when, all of a sudden, the leading candidate had to deal with fake news about their health (or was it fake?). Nothing like a loud prominent voice blasting out all the niggling concerns we people were already having about this robot doctor world. Robot doctors, once the solution to all healthcare woes, all of a sudden were no longer trusted. Here is the good news.
We got our doctors back, the human ones, and they worked with all the AI and robots and all the good stuff that technology can do for healthcare; they assessed it and talked with us. Yeah. Real conversations and real issues with better solutions (because of technology). Now. One would think we would have gotten to this point faster or even simply eliminated all the in-between stuff but, remember, humans will always err on going too far before they find the right equilibrium. But. If I’m right. There is a chance for better healthcare at some point.

So, who owns the future?

As I finish this, I am reminded of the fabulous Theo Priestley TED talk, “Would You Be Happy to Follow a Robot Leader?”, and think about what we humans will accept everywhere in the robot/automation world. What are our limits? In the end I will suggest in the short term we will almost always get it wrong. In today’s worldview, with uncertainty abounding and the world seemingly becoming more and more complex, it is incredibly easy to imagine a computer, or some well-designed technology, making better decisions than a human. And where we will consistently get this wrong, for a while, is thinking about this in a binary way – a computer versus a human. It will take us a while to get to the level of understanding and consciousness in which we grasp that a smart, well-designed robot/AI augmenting a smart human decisionmaker – a partnership – offers us the best of all worlds to enable a better world.

To answer my own question: of course humans own the future. We will just offload far too much of the future, and present, onto technology along the way to finally understanding, and accepting, this. Please note I am predicting nothing, only positing what will be, given what I know about business. And maybe that is my larger point. Business is the fulcrum for our relationship with technology. Businesses will be the ones thinking, innovating and producing the things which will impact us, as humans, and humanity. And given what I know about business, well, their idea of what benefits people and what actually benefits people don’t align until we make them align. In other words, we know the corrupt people will unite amongst themselves and the fate of our future depends on the non-corrupt uniting together. Oh. And let’s do our best to make sure the robots do not unite and own the future.


Written by Bruce