Follow me, follow you. Following what you have actually agreed to is a tad broken.


Facial recognition's 'dirty little secret': Millions of online photos scraped without consent

Source: https://www.nbcnews.com/tech/internet/facial-recognition-s-dirty-little-secret-millions-online-photos-scraped-n981921?cid=par-aff-nbc-knbc_20190312


“Facial recognition's 'dirty little secret': Millions of online photos scraped without consent.” So runs the headline on this #NBC story. A classic clickbait headline.

IBM released a data set of one million face images, intended to help develop fairer facial recognition algorithms. Bias in these systems is not a new issue; it features in a few very good TED talks and is very real. However, the story here is that your face was scraped directly from #Flickr. Now the question is all about the permission of the subjects, rather than about this being data we need in order to remove bias.

Data researchers scrape data from the internet (it is public) all the time to train algorithms. Photos are often a fantastic source of image data as the hashtags conveniently correspond to the content of the photos, making it extra easy to generate labeled data. #happydays
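To make concrete how cheap that labelling shortcut is, here is a minimal sketch using Flickr's public photo search API, where the tag you search for simply becomes the label. The API key placeholder, the example tag "portrait" and the helper name fetch_tagged_photos are assumptions for illustration, not anything taken from the IBM work.

```python
import requests

API_URL = "https://api.flickr.com/services/rest/"
API_KEY = "YOUR_FLICKR_API_KEY"  # placeholder, assumed for illustration


def fetch_tagged_photos(tag, per_page=50):
    """Search public Flickr photos by tag and return (image_url, label) pairs.

    The tag itself is reused as the label, which is exactly the
    "hashtags as free labels" shortcut described above.
    """
    params = {
        "method": "flickr.photos.search",
        "api_key": API_KEY,
        "tags": tag,
        "per_page": per_page,
        "format": "json",
        "nojsoncallback": 1,
    }
    resp = requests.get(API_URL, params=params, timeout=30)
    resp.raise_for_status()
    photos = resp.json()["photos"]["photo"]

    labelled = []
    for p in photos:
        # Static image URL pattern documented by Flickr
        url = f"https://live.staticflickr.com/{p['server']}/{p['id']}_{p['secret']}.jpg"
        labelled.append((url, tag))
    return labelled


if __name__ == "__main__":
    for url, label in fetch_tagged_photos("portrait", per_page=5):
        print(label, url)
```

Note that nothing in this loop ever asks the person in the photo anything, which is the whole point of the story.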

Is scraping data right or wrong? Well, that depends on an individual's interpretation of the context (complex). However, when you gave consent to a web service's Terms and Conditions, say, five years ago, were you aware that your image would be fed into a training algorithm? Given that it was not a capability then and the harm is small, would you have stopped?

However, since data has value and informed consent was not really given, we have a duty to work out a better way to engage all parties, provide value back, manage consent, honour privacy and provide better consistency - now that is a tech stack, and it will come.
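As a thought experiment for what the "manage consent" piece of that stack might look like, here is a minimal sketch of a purpose-scoped, revocable consent record. The class names, fields and example purpose strings are hypothetical assumptions, not an existing standard or product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentGrant:
    """One purpose-specific, revocable grant attached to a piece of data.

    Timestamps are assumed to be timezone-aware; all names are illustrative.
    """
    subject_id: str                    # who the data is about
    asset_id: str                      # e.g. a photo identifier
    purpose: str                       # e.g. "train-face-recognition-model"
    granted_at: datetime
    expires_at: datetime | None = None
    revoked_at: datetime | None = None

    def is_active(self, at: datetime | None = None) -> bool:
        """Consent only counts while granted, unexpired and unrevoked."""
        at = at or datetime.now(timezone.utc)
        if self.revoked_at and at >= self.revoked_at:
            return False
        if self.expires_at and at >= self.expires_at:
            return False
        return at >= self.granted_at


@dataclass
class ConsentLedger:
    """A per-subject record of every grant, so any use can be audited."""
    grants: list[ConsentGrant] = field(default_factory=list)

    def allowed(self, asset_id: str, purpose: str) -> bool:
        return any(
            g.asset_id == asset_id and g.purpose == purpose and g.is_active()
            for g in self.grants
        )
```

The detail that matters is the purpose field: consent given for "share with friends" five years ago would simply not match "train-face-recognition-model" today.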

How we work with individuals to explain what we are doing and why is a deeply human activity, and one we appear to have forgotten.