
===
“However, even if we accept that citizens are not primarily causally responsible for our poor information environments, it could be argued that they nonetheless have a remedial responsibility to mend them.”
Solmu Anttila
===
This is about technology, maybe partially about generative AI, but more about technology's impact on culture and how we think. Let me begin with technology's improbable intelligence. Yeah. I believe we should feel comfortable suggesting AI is intelligent. I say that recognizing that the really smart people get caught up in the wrong discussions. As I've noted in the past, the whole argument about whether artificial intelligence is actually intelligent or not is dancing on the head of a pin. For the everyday schmuck like me, or anyone who uses a computer daily just for everyday work, the computer is pretty intelligent, artificial intelligence is anything but artificial, and we're pretty glad it's a little bit more intelligent than we are because it elevates us. That said. I've always liked a thought Norbert Wiener offered us in 1960. He briefly mentioned that we should reject a certain attitude people hold toward technology. What is that attitude? It is the assumption that machines cannot possess any degree of originality. And, yes, I associate intelligence and originality. Regardless. The assumption is shaped around the understanding that nothing can come out of a machine which has not been put into it. Beyond originality, the output, this also suggests technology must always be open to human intervention WITHIN a process (not just post hoc). But going back to originality, the reason Wiener emphasized his thoughts on originality is that, at the time, many people dismissed the idea of machines, or technology, outrunning human control, and in doing so dismissed the idea that machines could gain control. Yeah. Improbable intelligence has two interesting aspects: originality and control.
Let me discuss control first. Going back to Wiener: he believed that machines can, and do, transcend some of the limitations of their designers, and that in doing so they may be simultaneously effective and dangerous. He made this statement with a couple of thoughts in mind:
- a concern that, sooner or later, technology would take on behaviors we cannot comprehend.
- a recognition that technology often acts far more rapidly than humans are capable of and is far more precise in performing the details of its operations.
And while this may not be true intelligence, what it does do is transcend humans in the performance of tasks – both doing tasks and thinking tasks. Which makes me circle back to the thought that technology could theoretically take control over mankind. I don't believe that in a whole-cloth way, but rather because technology moves so fast, so precisely, and in such an overwhelming way in terms of information flow that people's attempts to adjust technology as they recognize issues will be irrelevant and ineffective, because technology never stops moving and the old issue has already been replaced by a newer, more dangerous one. In other words, technology will always be ahead in this race. I imagine the grander point is that the only way to avoid complete disaster is if our understanding of the technology equals the performance of the technology. I would be remiss if I didn't point out this is true of intelligence: human intelligence must be equal to technology intelligence (improbable as that may sound). Yeah. Improbable, no? Human speed makes effective control of technology a pipe dream. By the time we can act on our cognitive conclusions we may have already run off the cliff. Yeah. It is highly likely that, given the fast pace at which technology assimilates information and generates output, we would either (a) find it impossible to foresee the undesirable consequences, (b) be incapable of assessing the onslaught of output effectively against any strategic objective, or (c) find our 'control' decisions obsolete by the time they are implemented. This ends my point on control, or, better said, our lack of control of our improbably intelligent technological partners.
Which leads me to originality.
Generally speaking, we mangle the concept of originality and totally misuse it with regard to technology's output. How do we mangle the concept? Everyone should just assume that nothing is truly original, that almost everything is a derivative of something that already exists, and that what really matters is what you do with what exists and whether it can be used in an original way. But, with regard to technology, Norbert Wiener points to a misconception that technology's output will never be original because it is dependent upon the information that has been put into it. If we apply the true definition of originality, the one I just pointed out about using what exists in imaginative ways, technology can certainly meet that definition. In addition, it can reconfigure data at a faster pace than humans can. I state that to suggest that technology can offer originality faster. Ponder that. Maybe the time has arrived to acknowledge a bit more liveliness and creativity, if not originality, in our technologies. And maybe if we do so we, people, can become a bit less mechanical (think less in machine terms). I say that because maybe, just maybe, if we concede a bit of originality to technology, and if we stop thinking of ourselves in machine-like terms, we may begin to view people as having the ability to evolve in tandem with technology.
Ponder that because there are absolutely some good consequences as well as some bad consequences.
Which leads me to the improbable.
It seems we have entered an era in which we have become significantly more accepting of the improbable. That doesn't mean that in the past, and I mean centuries in the past, people's beliefs didn't dance around the edges of some improbable things. But typically, traditional thinking, let's call it reason-based thinking, was grounded in some basic societal agreements within which the improbable things stood out distinctly enough that the majority of people, well, recognized they were improbable. Technology has changed that. With all the information bludgeoning us, the reality is that the improbable has, perception-wise, gained a degree of the possible. It's not that everybody agrees it's probable; they just don't completely dismiss it. And technology appears to be the megaphone of the improbable (intelligent and unintelligent). In the cacophony of information available to everyone there will always be a voice shouting that some fact makes all other facts irrelevant and that you should be paying attention to this 'fact', and if its source has some semblance of credibility, it starts edging its way into the possibility zone. I believe it was H.L. Mencken who said there is no idea so stupid that you can't find a professor who will believe it. That has a grain of truth in today's world. Seemingly, experts mingle with ordinary people in accepting some fairly improbable things despite having access to more good information than ever before. Technology elevates the improbable voice while also making some significant truths and thoughts invisible, and therefore less relevant.
Which leads me to improbable intelligence becoming an authority.
Since the dawn of time people have believed in some authority. The only thing that has changed is what that authority is, and in today's world it has become a revolving wheel from which we pick and choose the authority we want to believe. And while much of early authority intelligence was of dubious intelligence – royalty and religion – science pricked that improbable intelligence with probable intelligence. And that is where technology has assumed some authority: by dismantling the entire authority system to such an extent that we can't discern who to believe and, therefore, what to believe. The world we live in is fairly incomprehensible to most of us, and discerning the improbable from the probable, with some certainty, is beyond most cognitive abilities. Technology's improbable intelligence plays a significant role in that: almost no fact, actual or imagined, surprises us for long, because what counts as an unacceptable contradiction of reality has become blurred. Technology has encouraged us to think there is no reason not to believe – in anything. Technology's improbable intelligence has taken on the role of probable intelligence. I cannot remember who shared this metaphor, but let me share it.
Prior to the technological world we live in today (a world which certainly had technology in the industrial sense), life for most of us was like an unopened deck of cards. You could open the deck and, flipping the cards over one by one, be relatively confident of what the next card was going to be. But what today's technology world has done is shuffle the deck, so while every single one of us is familiar with the deck, and familiar with the cards within it, just because we see a card we can't be confident of what the next card will be when it's flipped over.
Anyway.
I imagine my point today is that arguing whether AI/technology is intelligent or not, or whether it is original or not, seems like wasted energy. While we argue, it assumes the role of the expert in original thought and reason and, well, intelligent dialogue. It has already outpaced even the best minds in society. So maybe we should accept the fact that it is quite probably intelligent, and quite probably going to run over many of the societal good things we embrace, and start being a bit intelligent ourselves in solving what is quickly becoming an existential societal issue. Ponder.



This is about social intellect and, I imagine, the question of whether the majority of the general public is, well, stupid. This is written with the United States in mind, but I imagine many countries have a version of this issue. The issue is one of an ideological split that isn't really an ideological split. What I mean is that it is ideological in name only: Democrats/Republicans, liberal/conservative. And I say in name only because the labels confuse the issue, which is actually about social intellect, or, differently said, 'the mental ability to engage in civic issues intelligently.' That said. Beyond the labels, one ideology embraces policies, thoughts, and some ideas, while the other simply embraces the belief that it is against everything I just said, because the other 'side' is evil, the enemy, or because all those things will destroy some mythical vague outline of what the country is (or isn't). Consequently, the first group sees the second group as incapable of, well, thinking. Consequently, in thinking they are incapable of thinking through the seemingly obvious foibles in their mythical vague narratives, they call them stupid. So. We end up with a country where a large segment of the population sees one side as having evil ideas and the other side as incapable of thinking, or stupid. To put it mildly, that simplistic framing is not very helpful.

The less-thoughtful, the ones who for good reasons or bad do not pursue some observational critical thinking, end up crafting a simplistic ideology, one crafted in such a way that it is more an attitude with loose behaviors attached to it than ideas with actions. The easiest one to point to is patriotism, or some version of elevating country over everything else. Regardless. It becomes a loose ideology within which, intellectually, a lot of shit can be placed. Once again, this is not stupidity; it is more just intellectual laziness. While it may seem like ideology is the intellectual basis of societies, ideological expressions tend to represent distorted perceptions of realities which, in turn, can produce some real distorting effects. In today's world the distorted realities have taken on some concreteness through measurement and productivity and actual production. Through these somewhat dubious concrete 'numbered' things, social reality and identity definitions get molded into, well, a 'reality' shaped in the form of the ideological attitude. Yeah. Once ideology, the abstract, becomes concrete, it is legitimized as an effective illusion for a society, even though that illusion is of some alternative society rather than real reality. At that point ideologies take on a sort of flat preciseness in that they no longer represent choices but declarations of undeniable facts.
While both ends of the social-intellect spectrum despair for greater society, they do so in different ways. There is intellectual passivity and intellectual activity. And in our upside-down world today, the passive often believe they are the active, and believe that the intellectually active are the, well, passive sheep. They see passivity in following science, data, logic, reason, and real knowledge-based experience, and locate their own intellectual activity in nebulous common sense. What this creates is an upside-down world of logic and realities in which, ultimately, cultural despair is produced in every corner of society.

For some it is 6.
To those people I suggest you sit back and think a moment. Think about 
to ignore the implicit backlash against ‘intellectualism’ or ‘the out-of-touch elite.’
but it shouldn’t diminish experiential wisdom <and vice versa>.
Survival in corporate America is significantly different than survival in … well … let’s call it basic survival.
doesn’t make you a loser.





I have often expressed my belief about the strong link between acceptance and the need to control. In other words, the more we accept things as they are the less we need to control. And, conversely, the more controlling we are, the less accepting we are.
'So much' versus 'so little' with regard to change are simply words we attach to the inevitable.






















