Pathways to General AI, the unimagined

The unimagined: unimagined 


From its title, you will already have assumed that this chapter presents unimagined outcomes, a counter to overly optimistic viewpoints. It is reasonable that anyone reading a book on Pathways to General AI will have interpreted the word "unimagined" in this way. Before you skip this chapter because you believe AI is the future, or because you don't need another comfort-blanket chapter speaking to those fearful of AI: it is neither. It is not a wild fantasy of the possible, nor a negative chapter on unknown risks and a likely machine-run apocalypse. This chapter presents a matrix that enables holders of either viewpoint to debate and discuss the unimagined together.

However, before we reach the matrix we first have to build a framework, which gives the reader a clear and coherent communication tool for positioning imagined AI initiatives.

THE SIMPLEST ECONOMIC MODEL

Figure 1 presents a straightforward process. Raw materials are inputs to a process called gain, which creates an output. This simple model describes any process of adding value and sits at the core of any business. Indeed, the same model applies to our daily human life, where the inputs are food and water, the gain is our process of living, and the output is our activity. (A one-line code sketch follows Figure 1.)



Figure 1. A simple model
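
For readers who prefer code, the whole model is a single multiplication. This is a minimal sketch in Python; the function name open_loop and the gain value are illustrative assumptions, not taken from the figure.

def open_loop(input_value, gain=4.0):
    # Figure 1 with nothing fed back: whatever the gain produces is the output
    return gain * input_value

output = open_loop(1.0)   # 4.0: value is added, but nothing controls it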

This simple model is an open-loop system, and it is unstable. It cannot exist in isolation: it sits in a complex world where single processes continually interact with other processes, all affecting each other. To create stability, systems depend on feedback loops, where the output is fed back to become part of the input.

The most straightforward stable control system is termed a negative feedback loop. Figure 2 presents how you take the output after a process of gain and SUBTRACT it (take it away) from the input (the green loop). If we fed the whole output back, the loop would in effect cancel its own gain and create no value [output = gain × (input − output), so for a large gain the output merely matches the input], so we attenuate the output (reduce it by a factor) before closing the loop. This negative feedback loop allows the output to track the input. Imagine the input increases: after the gain process, the output increases; however, as the output is fed back and subtracted from the now higher input, the actual input into the process is corrected and controlled. Significant changes are avoided and small modifications are tracked. The loop is stable, but it still enables innovation, agile changes and minor adjustments.


Figure 2. A simple negative feedback loop

This is a control system. The negative feedback loop (subtracting the output from the input) delivers stability and control. It has stood the test of time. Technically, the system's order of response [1st, 2nd, 3rd or higher] will depend on internal dependencies and external complexities; different orders give rise to different response times and different accuracy in tracking changes.
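
To make the loop concrete, here is a minimal sketch in Python under stated assumptions: the function name negative_feedback, the gain, attenuation (beta) and step-size (dt) values, and the first-order update are all illustrative choices, not taken from the text.

def negative_feedback(inputs, gain=4.0, beta=0.5, dt=0.1):
    # Figure 2's green loop: subtract the attenuated output from the input,
    # apply the gain, and let the output respond as a first-order system.
    output, trace = 0.0, []
    for x in inputs:
        corrected = x - beta * output      # negative feedback: take the output away
        target = gain * corrected          # the gain process acts on the corrected input
        output += dt * (target - output)   # first-order response toward the target
        trace.append(output)
    return trace

# A step change in the input: the output settles near gain*x/(1 + gain*beta)
# and then tracks the new level rather than running away.
trace = negative_feedback([1.0] * 40 + [1.5] * 40)

The first-order update is where the order of response mentioned above would live; a higher-order model changes how quickly and smoothly the output settles, not whether it tracks.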

As a business function, we like this negative feedback loop, using planning and controls to keep businesses stable while iterating and growing in a controlled manner; innovation is incremental. In psychology, think Nudge. In human terms, this loop of control is happening all the time in our cells and is called homeostasis: our body temperature remains at c. 37°C irrespective of environment, food, clothing and activity. Society, politics and economics thrive on this stability cycle, continually making small adjustments to maintain order. For perspective: our lives, societies, businesses and environment remain in this stable loop for over 90% of the time.

Another control loop is presented in Figure 3: the positive feedback loop. In this case, you take the output after the process of gain, amplify it (make it bigger), and ADD it to the input (the yellow loop). In audio terms, this type of feedback loop creates that loud ringing sound; in business planning, it is the magical hockey-stick curve, or exponential growth for a unicorn. It establishes fundamental, accelerating change.



Figure 3. A simple positive feedback loop


Positive feedback loops create instability: at the end of each cycle round the loop, a new level is reached. This is the loop of disruptive innovation and change. The control loop's order (how quickly something changes and by how much) will depend on complexity, time and the resources available.
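
The same sketch style shows the compounding; again, the function name positive_feedback and a growth of 20% per cycle are illustrative assumptions.

def positive_feedback(start=1.0, gain=0.2, cycles=10):
    # Figure 3's yellow loop: amplify the output and add it back to the input,
    # so each cycle compounds on the last.
    output, trace = start, []
    for _ in range(cycles):
        output = output + gain * output   # add the amplified output back in
        trace.append(output)
    return trace

# Each cycle multiplies the output by (1 + gain): 1.2, 1.44, 1.73, ...
# roughly 6.2x after ten cycles: the hockey-stick curve.
trace = positive_feedback()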

In human terms, the negative feedback cycle delivers the stability we see as homeostasis; the positive feedback loop delivers mitosis (cell division and growth) and meiosis (having children), or death when it gets out of control (cancer and viruses).

In business terms, negative feedback loops are times of stability and slow growth; positive feedback loops exist in times of rapid change, disruption and high growth. For perspective: our lives, societies and environment can only remain in unstable change loops for a brief period, less than 10% of the time, as the loop is destructive as well as creative.

It is also possible to create change by attenuating the output and adding it to the input, rather than amplifying it; however, in practice you often cannot control the amplification or attenuation.

THE CONNECTED MODEL

Combining these two models (positive and negative feedback loops) is the obvious next step: we need both to understand our world of stability and change. We see this in our bodies every day at a cell level, and in the disruption when we are ill or a child arrives. We see it in how societies change across the passing of centuries, and at an environmental and ecological level across millennia.

In business, the negative feedback loop allows us to become increasingly smart at one thing in the short term, driven through continuous improvement, yielding efficiency and reaching the best or optimal solution. In contrast, the positive feedback loop allows us to change: to pursue radical innovation that creates new products or markets, buy companies, and sustain high growth over a long period.

There is much evidence that our behaviours swap between these different loops. In his book "The Righteous Mind", Jonathan Haidt looks into the psychology of human beings and why they believe what they believe. In his studies, he found that humans generally think of themselves first and are selfish; they mostly behave like their primate relatives, chimpanzees. They take care of their own needs first, then think of others: stability. However, Haidt also explains that humans, in certain instances, behave like bees with a hive mind. At times a switch can be flipped, which causes a human to put the group before him or herself. When this mental state is achieved, one will even die for the group one cares for, becoming part of something much larger than oneself.

COMPLICATED COLLABORATIVE MODELS

To bring the two feedback loop models together, we need to introduce a third idea: the choice selector (Figure 4). The choice selector picks which feedback loop, stable or unstable, we should currently be in. How we choose which loop we are in now, and how we move from stable to change and back to stable at a new level, is "complex." The choice selector is either where intelligence (learning and/or experience) is applied to change the cycle, meaning you are in control of the choice, or where you are reactive and forced to change. For humans, we can enter the positive feedback loop in two ways: we can choose to have a child (intelligence) or become ill (forced). In business, we can choose to be disruptive (intelligence) or react to the disruption (forced). A minimal sketch of this selector follows Figure 4.



Figure 4. The choice selector
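
Here is that sketch, under assumptions: the names run_with_selector and needs_change, and the example numbers, are invented for illustration, and the "intelligence or forced" question is reduced to a single predicate.

def run_with_selector(state, cycles, stable_step, change_step, needs_change):
    # Figure 4: each cycle, the selector picks the stable (negative) loop
    # or the change (positive) loop. needs_change stands in for the choice,
    # whether it comes from intelligence (deliberate) or is forced (reactive).
    for _ in range(cycles):
        step = change_step if needs_change(state) else stable_step
        state = step(state)
    return state

# Example: use the change loop to escape a low level, then let the stable
# loop hold the new level near a set point of 1.0.
final = run_with_selector(
    state=0.1,
    cycles=30,
    stable_step=lambda s: s + 0.1 * (1.0 - s),  # negative feedback toward 1.0
    change_step=lambda s: 1.2 * s,              # positive feedback growth
    needs_change=lambda s: s < 0.2,             # a deliberately crude trigger
)

In real systems the trigger is the hard part; here it is a crude threshold, which is exactly the point made above about the choice being "complex."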

THE APPLICATION TO AI

Building on the feedback loops of stability and change, with a choice selector, it is possible to explore where the machine (AI) is best, where the human is best, and whether there are scenarios where human + machine (AI) would produce a better outcome. This becomes the AI matrix, Figure 5.


Figure 5. The AI matrix

EXPLAINING THE AI MATRIX

Walking down the stability column in Figure 5: in the stable loop (negative feedback, green), AI can without a doubt deliver. AI will be transformative in getting to the best solution, fast. Why? Because in stable loops there is lots of data, and the environment has been, and will likely remain, stable. Past data is an excellent indicator of future outcomes; training data works. Be careful, though, as it is not that easy: the data can be biased or corrupt. Still, applying AI in this loop is a winning strategy. If the human alone is in this loop, we will probably get it wrong (motivations, bias and politics). With human + AI, the human will still likely get things wrong; however, as shown in gaming (chess and Go), human + machine does work well and can lead to better outcomes.

Walking down the change column in Figure 5: in the change loop (positive feedback, yellow), AI is just not up to it. The base reason is that we don't know where we are starting from or where we are going, and there is little or no data. What the machine decides to do, and why, will be random and unknown. If a project is presented to you asking for funding to use AI to change anything, walk away. Putting the human as the lead for change is what we do today, but we know it is sub-optimal, as humans have issues and limits. Human + machine (AI) is a fascinating option, as we can eliminate the worst human characteristics that lead to bad decision-making by having the AI support the decisions but not take them.

The final column is the choice selector: picking which loop to follow. Allowing the AI/machine alone to choose for us produces fear, uncertainty and doubt.

What this means is that if someone presents you with an AI that makes the choice of loop for you, probably the best action right now is to AVOID it and run for the hills. The human leads on choice: we have spent the past 5,000 years getting to rules, regulations, governance and government. We and our systems have a few problems, but we are (or should be) in control. Transparency is increasingly essential in choice.

Human and AI making choices together highlights a critical issue: who decides, and "who decides who decides." Who polices the police? How do you ever know what someone else has put into an AI that is helping you decide? Right now, asking a machine to help you choose between stability and change does not feel right.

THE UNIMAGINED: UNIMAGINED 

The previous framework gets us to a point where someone, somewhere, has imagined something to do with AI, but without us all understanding how it fits into the systems of life, which means we struggle to make a judgment call on whether the outcome is favourable or not. This last section expands to the unimagined: unimagined, and how we can discuss and debate what AI means for humanity and where AI may take us.

Figure 6. The unimagined: unimagined

Expanding on Figure 6, starting from the top right corner: two highly experienced professionals are problem-solving. They share a common understanding of the problem, as it is imagined to them both. It is easy to grasp that they can quickly get to a common language, agreement and a shared sense of what to do next. The majority of day-to-day problem-solving is reached through this shared, collective understanding and communication.

In terms of AI, the imagined: imagined space is Machine Learning, Deep Learning, algorithms and data. Variance is acceptable, but there is an appreciation of what the methods do and how they work. The space embraces the idea that new methodologies will be created.

More difficult is when something is imagined by one expert as a clear route ahead, based on deep insight and analysis. The expert has clarity based on their framing, bias, time to think and data, but it is the first time the other professional has seen it. In this situation, perfect clarity can look like madness to someone else. Similarly, if the expert and professional swapped positions, an imagined solution could be obvious to the professional and foolish to the expert. In both cases, there is no substitute for time and the sharing of facts, insight and knowledge. Over time and with compromise, the two should be able to agree on the route ahead as it becomes imagined to both of them.

Imagined by one expert and rejected by other experts covers a wide range of topics around AI, depending on bias, life expectations and experience. Elon Musk and Bill Gates speak to their fears of a world where we allow the AI/machine to choose which loop (stable or change) for us. To them it is clear; to others it is madness. With time, advancement and more data points, positions will change, and we will come to agree on how to deliver the governance, oversight and stewardship that allow for better outcomes.

Finally, unimagined: unimagined, the bottom right quadrant. This is where we can come together to consider beauty and systemic risk. We can park everything imagined by any party and explore the unimagined. In this case, it is no longer about a decision based on interpretation of available data or facts, but about the requirement for complex judgment. We have to work together to determine the bias that we each bring, and to find a way to articulate what scenarios look like so others can also imagine them. The unimagined: unimagined is not about the solution, but about what the problem is and how to determine what an impact will be. It is here that we realise our words fail us even in our ability to share what is already imagined.

Undoubtedly, the first step would be to accept that parts of all imagined scenarios will play out. We should stop arguing over, or rejecting, models in the belief that we have foresight into highly dependent, emergent, complex and adaptive ecosystems.

A second step is to create new names, words and language that do not carry an immediate bias, with which we can explore what the ontology of our latest thinking would mean. Otherwise, we distort what we imagine to create outcomes that we favour in the short term.

The unimagined is long term, which is where we need to go for general AI: not the superficial imagined, not the next 1 or 10 years, but the forthcoming century and millennium. Let's have fun creating new words such as "Onama", meaning "leave all the imagined ideas about sci-fi, cyborgs, robots and augmented reality outside."

With new names or words that don't carry an immediate bias or expectation, we can turn our thinking to governance, stewardship and oversight, exploring who has power, agency or influence in previously unimagined scenarios.