Chaos and the abyss

This read explores the space between chaos and the abyss: where we find ourselves when we allow machines to make decisions without safeguarding collective criticism, or when we realise that they can change our minds.

-----

There is a reality that, without others, we are never forced to recognise our collective ethical biases or our own moral ones. Yet these biases are the basis of our decision-making, so asking a machine to "take an unelected position of trust" and make a decision on our collective behalf creates a space we should explore as we move from human criticism to machine control.


Machines are making decisions.   

Automation is incredibly powerful and useful, and we continue to learn how to reduce bias in automated decision-making by exploring data sets and testing outcomes for bias.  As we keep testing, iterating and learning about using past data for future decisions, we expose many of our human frailties and faults.
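
A minimal sketch of what such outcome testing can look like, in Python with an invented two-group dataset (the groups, decisions and parity measure here are illustrative assumptions, not a method from this article): it compares the rate of favourable outcomes across groups, a common first check for bias.

```python
# Hypothetical sketch: test an automated system's past decisions for bias
# by comparing the rate of favourable outcomes across groups
# (a "demographic parity" style check).
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs produced by the system."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Invented decision log, purely for illustration.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")  # a large gap flags possible bias
```

A single number like this proves nothing on its own, of course; it is only the start of the testing, iterating and learning described above.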


The decisions we ask machines to make today are easily compared to where we think we are going.  However, I cannot confirm or write down my own cognitive biases today (out of the 180-plus catalogued), and if asked which biases would be the same tomorrow, I would be unable to tell you.  Therefore, I am even less convinced that, as a team, we can agree on our team biases, as these will change as a new sun rises: we have each eaten our own choice of food, have different biology, chemistry and bacteria, and have had divergent experiences since the last sunrise.

AI and Hypocrisy


Hypocrisy is the practice of engaging in the same behaviour or activity for which one criticises another. Our past and present actions can differ: because of our past, we have learnt and change has happened. But that does not mean we should be unable to call out someone who is making the same mistakes.


A defence used by those called out is to cry "hypocrisy".  Human rights issues and football spring to mind. How can you judge when you did the same? As Brits, we are responsible for some of the worst abuses of power and wrong thinking, but we are changing; I agree that it is neither fast nor far enough. However, the point here is that humans learn and can call something out when other humans are making the same mistakes.  I accept we are not very good at either.


However, contemporary discourse holds that if your past is flawed, you are not empowered to be critical of others.  Yet if we ever believe we are beyond criticism, fault or learning, surely we become delusional, unable to see the wrong we are doing, believing ourselves more moral or ethical.  But what about machines? When a machine makes a biased decision, who is there to be critical, or will the AI cry hypocrisy?


I struggle with the idea that company values, purpose and culture are good predictors of the decision-making processes we have in place, precisely because of bias.  A good human culture can exist, and it is one of learning, but that does not mean the machine that powers the organisation is learning in the same direction.


This thinking about hypocrisy and culture creates gaps, voids and chasms filled with chaos: between an individual's integrity, the integrity of the wider team/company, and the decisions we ask machines (automation) to make.   This is not new; such gaps have been studied by philosophers and political scientists since Aristotle.


So how do we enable a machine to make a decision based on data, but then allow other machines to see the inconsistency and cry "hypocrisy" in its defence?  This is the space between chaos and the abyss.



Being explainable is not the problem.


Explainability is in fashion in AI; however, the events of 2020 to 2022 offer rich pickings (COVID lockdowns, the cost-of-living crisis, football World Cup hosting and COP28) for the argument that being explainable is not much use when decisions impact humans.  Equally, making an algorithm or the code behind it explainable does not solve the problem.  Neural networks are accurate but uninterpretable, whereas decision trees are interpretable but typically less accurate.  I can explain an outcome, but that does not mean I can predict it.  We can explain mass shootings, but that is of little value or comfort to those who lost a loved one.
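
As a rough illustration of that trade-off, here is a sketch using scikit-learn on a synthetic dataset (the models, settings and data are assumptions for illustration, not from this article): a shallow decision tree can print its rules for a human to read, while a neural network usually scores higher yet offers no such readable account of itself.

```python
# Illustrative sketch of the interpretability/accuracy trade-off,
# using scikit-learn on a synthetic classification dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A shallow tree: every decision path reads as plain if/else rules...
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(export_text(tree))                      # human-readable rules
print("tree accuracy:", tree.score(X_te, y_te))

# ...while a neural network often scores higher but exposes no rules at all.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print("net accuracy:", net.score(X_te, y_te))
```

Even where the tree's rules are legible, they explain only how the model decided, not whether the decision was right for the human on the receiving end.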

Jumping into the abyss.


Bias in machine decision-making is not new, nor is thinking about explainable AI.  However, when we (humans) are criticised or called out, we often become defensive and don't change. Will machines be different?  Calling out that someone is wrong does not persuade them to follow a different path, and calling out a decision made by a machine is not going to change the machine's decision-making process.


Here is the final jump. "How to change someone's mind" is a superb article from Manfred F. R. Kets de Vries at INSEAD.   It sets down the Scheherazade method: seven steps to change a person's mind.  Now, when a machine learns that it is easier to change a human mind by following these steps, are we in danger of ceding the last part of independent thinking to the machine? We will not see the problems, as our minds will have been aligned to someone else's decision-making methods (calling out loyalty).  It is why we need a void between morals and ethics, and why we should celebrate unethical morals and immoral ethics, as they show us the tensions and gaps in our thinking and allow us to learn.

 


This is a nod to Brian Cork, based on his comment on a previous article on Fear. Thank you.