What level of consent is reasonable when considering how your personal digital data should be used by others?
The purpose of this post is to explore, in a digital context, when forgiveness or approval is the appropriate consent model: specifically, when a third party needs permission to use your data, or has already used it.
Consent: opening Pandora's box with a familiar scenario. Imagine a friend needs to borrow your car: would you expect them to ask for your approval or your forgiveness, and would that change in an emergency? Imagine someone asks to borrow your credit card but then spends more than agreed, or twice as much. Which consent method seems appropriate, forgiveness or approval? Would the amount spent cloud your opinion, or the closeness of the relationship? In human-to-human relationships we believe we have a clear set of boundaries for consent, irrespective of approval and forgiveness. In reality, outside of a very few extreme cases, we make a personal judgement call, forming our own consent framework that balances trust, level of harm, time, experience, perception, mood, relationship, value, urgency and culture, to name a few of the variables.
Politics, religion, science and medical research all have a history of swapping between the two models of approval and forgiveness, often to their own convenience. Some of the highly effective drugs we have today came about through approval, but more came about through forgiveness. It is evident we need both, but what do these models create in a digital consent context?
What is consent management?
In its broadest sense, consent management is a process in which an individual is clearly asked to approve or reject an activity or action. In a digital world the question and the response are stored and, depending on the level of harm or value involved, consent may be re-sought over time. Best-practice guidelines protect us before consent is even requested, as many activities are bound by codes of conduct or ethics; in medical research these include the following protections for subjects:
Protected from physical or psychological harm (including loss of dignity, loss of autonomy, and loss of self-esteem)
Protection of privacy and confidentiality
Protection against unjustifiable deception
The subject must give voluntary informed consent to participate in research. Guardians must give consent for minors to participate. In addition to guardian consent, minors over age 7 (the age may vary) must also give their consent to participate.
These guidelines exist because forgiveness, as a model, is easier but does not always offer the same level of protection as approval. Because in some cases harm was done under the forgiveness model, consent now has to be explicit and agreed ahead of any research.
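The storage and re-acquisition flow described above can be sketched in code. This is a minimal illustration only, not a real consent-management API: the names (`ConsentRecord`, `needs_reacquisition`) and the review intervals are hypothetical, standing in for the idea that higher-harm activities should have consent re-sought more often.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical consent record: who was asked, what they were asked,
# what they answered, and when.
@dataclass
class ConsentRecord:
    subject_id: str
    activity: str
    approved: bool
    asked_at: datetime
    harm_level: str  # "low", "medium" or "high"

# Illustrative review intervals: the higher the potential harm or value,
# the more often consent is re-acquired.
REVIEW_INTERVAL = {
    "low": timedelta(days=365),
    "medium": timedelta(days=90),
    "high": timedelta(days=30),
}

def needs_reacquisition(record: ConsentRecord, now: datetime) -> bool:
    """Consent must be asked again if it was refused, or if the
    review interval for its harm level has elapsed."""
    if not record.approved:
        return True
    return now - record.asked_at > REVIEW_INTERVAL[record.harm_level]
```

For example, approval for a high-harm activity given on 1 January would need to be re-sought by 1 March, while a low-harm approval would still be valid.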
It is not so easy in a digital world, where consent starts when you accept the terms and conditions. In many cases you cannot use the device or service unless you agree to terms which you confirm you have read; there is no real choice. You may absent-mindedly click the annoying cookie popup on a new website without realising what it means. The question facing digital consent is this: a user cannot really understand what they are consenting to, so is there an obligation on companies to behave better, through a new layered consent model that shows promise in rectifying this situation, or face being regulated?
Approval and forgiveness: why we need both
Forgiveness as a model for consent has enormous value. It is fast, responsive and agile, which is needed in a complex, uncertain, volatile and ambiguous market. It enables innovation by allowing boundaries to be pushed, and it can lead to a first-mover advantage by capturing a market. The risk is that the act requiring forgiveness is too great, bringing down the company, its brand and its reputation; and if everyone elects for forgiveness, there would be a fast erosion of morals and ethics as we race down to the lowest possible models of exploitation.
Approval is the ethical high ground. It is slow to respond, as it requires long-term planning and structure. However, these bring value and trust to the company, the outcomes and the process. There are established boundaries and codes which protect users.
The assumption being made is that forgiveness and approval are mutually exclusive. This is not the case: they can and must co-exist, and both are able to build trust or destroy value. A company that first asks for approval may find it is in its interest to improve its services through forgiveness when losing ground to a new competitor (FinTech). Or a company may go for forgiveness, build a user base, and then seek approval as it needs to grow up (the social media giants). At the one-to-one level, human nature appears able to mix forgiveness and approval equally; it is how our children grow up and learn. At the mass-adoption level, however, businesses tend to stop thinking of individuals as individuals and more as a market, and move the ethical slider as a result. This is the fundamental difference between the one-to-one interaction level and the one-to-many interaction level.
It makes sense, then, that there should be an underpinning of both models (forgiveness and approval) baked into an agreed code of ethics. But what do these ethics look like? Plato was thinking and writing on just this topic of ethics over four centuries before Jesus, and left us a reasonable structure to start from:
Does it make sense?
Is it the right thing to do?
Is it the right time to do this?
Is it relevant?
Does it feel stale?
We are at a crossroads: either translate Plato's questions into relevant modern questions about digital consent, or create rules and guarantees based on them, such as a promise not to abuse your data or a guarantee to protect and secure it. The question we face as a digital industry is which is the right model for the near term. Given that history has shown that a rush of income, investment or users tends to erode a start-up's high moral principles, are we left with any real choice?
Maverick thought for debate over Christmas dinner! In a truly digital world where every action is recorded, is it not the case that services should know what you will do next better than you do? If there is any truth in this digital assertion, can companies determine your own personal consent model from your data, and would it not be better to use that as the model rather than assume?