
Showing posts with the label AI

“Possible Minds” 25 ways of looking at AI edited by John Brockman

One of the best books on AI right now in 2019, not because of its deep technical views but because it presents many (25) arguments about different aspects of AI and why there cannot be one unified vision or view. It is one to read because at the end you have more questions than you have answers to, which is a Richard Feynman quote in parts. It does allow you to explore your own views about AI: you will align with different parts of the viewpoints presented, and indeed you will create a mashup of them all where you feel comfortable. The takeaway from the book is the framework rather than the content; the challenge laid down is to keep up with the 25 themes as they develop, morph, link, combine, fracture and fork. Especially the ones you find are in conflict with your own views and beliefs will be the most difficult, and there are lots. Below is, for me, some personal interpretation and thinking

Status update on automated decisions and algorithms

The problem defined by responses:

- We don't know how automated decisions are being made
- We don't know what the impact of the automated decisions is
- We don't have a complete map of where the automated decisions are
- We have little to no qualified insight into the effects on the business or our customers
- There is no reporting to management when automated decisions have problems, because we don't know when they do
- There is a sense of trust that the CTO, CIO or COO will be on top of it
- We have no idea about bias in the original algorithm or data set, what updates and changes have been made, or their effect
- How our automated decisions affect people in the organisation and their behaviour has not been qualified

Excellent writing and thinking on automated decisions and algorithms:
https://fpf.org/2017/12/11/unfairness-by-algorithm-distilling-the-harms-of-automated-decision-making/
https://www.infoq.com/articles/Can-People-Trust-Algorithm-Decisions/
htt
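One practical way to start closing the "no reporting" and "no complete map" gaps above is to wrap every automated decision in a thin audit layer that records what was decided, from which inputs, and by which version of the algorithm. The sketch below is a minimal illustration of that idea only; the decision function, field names and file-based storage are my own assumptions, not drawn from the sources linked above.

import json
import time
import uuid

def audited(decision_fn, model_version, log_path="decision_audit.jsonl"):
    """Wrap an automated decision function so every call leaves an audit record."""
    def wrapper(**inputs):
        outcome = decision_fn(**inputs)
        record = {
            "decision_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": model_version,  # which algorithm/data version made the call
            "inputs": inputs,                # what the decision was based on
            "outcome": outcome,              # what was decided
        }
        with open(log_path, "a") as f:       # append-only log for later review and reporting
            f.write(json.dumps(record) + "\n")
        return outcome
    return wrapper

# Hypothetical example: an automated credit-limit decision
def credit_limit(income, missed_payments):
    return 5000 if income > 30000 and missed_payments == 0 else 500

decide = audited(credit_limit, model_version="rules-v1")
print(decide(income=42000, missed_payments=0))  # 5000, and the decision is now logged

A register built from logs like this at least gives management a map of where automated decisions are made and a trail to inspect when they go wrong.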

If Man can create General #AI, will it solve the infinite regress problem? Exploring how a creator's mind would need to work

The Infinite Regress problem is explained in far more detail in the two sources below. The basic idea is that each newly presumed creator of a creator is itself presumed to have its own creator! In one form, adults create babies who themselves can create babies, and so it goes on. However, the purpose of this post is to think about this problem in light of humans creating something (General AI) that will itself create something better than its creator (humans).

Source 1: https://en.wikipedia.org/wiki/Infinite_regress
Source 2: http://www.informationphilosopher.com/knowledge/infinite_regress.html

-----

Starting from the new concept (April 2019) of "Machine Behaviour", the thinking from MIT that says we should observe AI and record its actions to understand what is going on inside it. This is the same thinking as observing animal behaviour to work out what an animal may be thinking and what its unique personality is. As an observational technique it certainly

How “nested Else” creates #bias and the impact on automated decision making

Just read "We Are Data": Algorithms and the Making of Our Digital Selves by John Cheney-Lippold. On page 191 John explores the Else Test.

----

At a simple level, a nested Python if/elif/else statement can look like the code below. This is beautiful in its simplicity and offers a repeatable and deterministic way to match a grade to the mark obtained. In each case there is one output, based on the actual input mark. Happy days.

if grade >= 90:
    print("A grade")
elif grade >= 80:
    print("B grade")
elif grade >= 70:
    print("C grade")
elif grade >= 65:
    print("D grade")
else:
    print("Failing grade")

Let's change the case slightly to something that is more difficult to answer: "Are you a good parent?" We can approach the problem in two ways. The simple way that hides the complexity and is based on a score which deter
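To make the point concrete, here is a minimal sketch of my own (an illustration under assumed criteria, not the code from the book) of how a score-based nested else on a fuzzy question like "Are you a good parent?" behaves: the criteria, weights and thresholds are design choices, and everyone the designer did not anticipate falls through to the else branch. That is where the bias lives.

# Hypothetical illustration: the criteria, weights and thresholds are arbitrary design choices
def good_parent_score(reads_bedtime_stories, attends_school_events, hours_worked_per_week):
    score = 0
    if reads_bedtime_stories:
        score += 40              # why 40? a designer decided
    if attends_school_events:
        score += 40
    if hours_worked_per_week < 50:
        score += 20              # penalises long hours regardless of circumstances
    return score

score = good_parent_score(True, False, 60)
if score >= 80:
    print("Good parent")
elif score >= 50:
    print("Adequate parent")
else:
    print("Not a good parent")   # anyone who does not fit the chosen criteria lands here

With these made-up inputs the score is 40, so the person lands in the else bucket; nothing in the code is wrong, but the judgement is baked into the thresholds and weights.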

Artificial Unintelligence by @merbroussard explores the really important topic of "algorithmic accountability reporting"

Follow Meredith Broussard on Twitter @merbroussard https://www.linkedin.com/in/meredithbroussard/

Highly recommended reading, and if interested also pick up Weapons of Math Destruction by Cathy O'Neil.

How Computers Misunderstand the World: a great and very accessible book on why understanding the inner workings and outer limits of technology helps us appreciate that we should never assume computers will always get it right. It explores the limits of artificial intelligence (AI) and techno-solutionism, and shows how easily we can replicate existing structural inequalities, which is not an achievement.

---

This beautifully written book by Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally (hiring, driving, paying bills, even choosing romantic partners) that we have stopped demanding that our

Using AI to understand AI @thinkmariya

"NO TIME TO READ AI RESEARCH? WE SUMMARIZED TOP 2018 PAPERS FOR YOU" is an excellent reference. Source: https://www.topbots.com/most-important-ai-research-papers-2018/

Value Alignment Research - stunning visualisation

Source: https://futureoflife.org/valuealignmentmap/

The project of creating value-aligned AI is perhaps one of the most important things we will ever do. However, there are open and often neglected questions regarding what exactly is entailed by 'beneficial AI'. Value alignment is the project of one day creating beneficial AI, and it has been expanded outside of its usual technical context to reflect and model its truly interdisciplinary nature. For value-aligned AI to become a reality, we need to solve not only intelligence, but also the ends to which intelligence is aimed and the social/political context, rules, and policies in and through which this all happens. This landscape synthesizes a variety of AI safety research agendas along with other papers in AI, machine learning, ethics, governance, and AI safety, robustness, and beneficence research. It lays out what technical research threads can help us to create beneficial AI

Hello World by Hannah Fry. Read this book.

Hello World by Hannah Fry. Follow Hannah on Twitter.

My take: if you are having any issues explaining algorithms, data or their impact to anyone from a student to the CEO, READ this book! It is brilliant. I highlighted more in this book than in most.

- Hannah brings out that algorithm and data bias is part of society; however, the scale is now something we cannot hide from. Past controls were hidden but are now out in the open, yet sometimes we don't have a clue how it happens.
- We have a choice every time we use a service: be lazy or be in control. Neither is better; however, to become dependent on the algorithm is not about the loss of control but about the loss of identity.
- Control of the algorithm is not limited to the code, but includes the weaknesses of the machine, the design and the data.

Love Hannah's take on AI; I will be writing up another book on AI soon, so will keep thinking on AI for that. When people are unaware that they are being manipulated, they tend to believe that they

The Mind is Flat and other insights into how we think

The Mind is Flat (book) by Nick Chater. A rare find, as the quotes live up to the content, and I have to say READ IT. If you are thinking, like me, about ethics and AI, this is essential reading.

-----

'A radical reinterpretation of how your mind works - and why it could change your life'
'An astonishing achievement. Nick Chater has blown my mind'
'A total assault on all lingering psychiatric and psychoanalytic notions of mental depths ... Light the touchpaper and stand well back'

We all like to think we have a hidden inner life. Most of us assume that our beliefs and desires arise from the murky depths of our minds, and, if only we could work out how to access this mysterious world, we could truly understand ourselves. For more than a century, psychologists and psychiatrists have struggled to discover what lies below our mental surface. In The Mind Is Flat, pre-eminent behavioural scientist Nick Chater reveals that this entire enterprise is utterly mis