The emergence of the internet, combined with developments in technologies such as sensors and the Internet of Things (IoT), smartphones and social media, has seen society rush headlong into a hyper-connected world and an explosion of data, much of it personal. Alongside those forces we have seen an ongoing march of automation, with technology increasingly making decisions on behalf of people. In recent years we have also seen a wide range of unintended consequences. People feel they have lost their privacy; they are increasingly concerned not only with who has access to data relating to them but also with what is being done with it. A common story is of people having a conversation in the kitchen and, five minutes later, being served an ad about the very topic they were discussing when they sit down in the lounge room. There is even a term for this: “surveillance capitalism”.
There is an emerging field of research on ethics and AI, with scientists looking to put ethics at the forefront of AI projects rather than dealing with unintended consequences after the event. CSIRO has established a responsible innovation research program, and the Gradient Institute has been established to champion the cause of ethics in AI. One of the key reasons for these initiatives is a growing sense among scientists that many projects have focused on what can be done with the technology without asking what should be done, or without taking the time to consider what could go wrong.
We are also seeing strong rearguard action from regulators responding to the concerns of their citizens. In particular, in the EU, the General Data Protection Regulation (GDPR) imposes the strongest regulations and penalties yet, applying to any business dealing with the EU. The regulation came into effect in May 2018, and we have already seen major penalties imposed where an organisation violated transparency and information rules around user data and lacked valid consent for ad personalisation.
The Deloitte Privacy Index for 2018 shows that people are willing to share personal data with organisations as long as there are clear and relevant benefits to them. They lose trust where there is a lack of transparency, where data is traded or used without their explicit consent, or where there has been a data breach and the organisation has not handled it in a timely manner.
What is AI?
Artificial Intelligence can be defined as:
- Any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.
- Typically demonstrates at least some of the following behaviours associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion and manipulation, and, to a lesser extent, social intelligence and creativity.
Many people think AI is a phenomenon of the future in which machines or computers will be more intelligent than humans. The reality is that we are already surrounded by AI and it has become part of our daily lives. Predictive text when sending an SMS, recommendations on Netflix, Siri on your phone, an Amazon Alexa or Google Home, Google Maps presenting an optimal route that takes current traffic conditions into account, and sensors on cars that sound warnings when you are too close to an object or brake automatically to avoid a collision are all examples of AI.
Machine learning is a method of data analysis that automates analytical model building. It is a branch of AI based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. While this may seem like a great leap forward, problems occur because it is often not clear why the AI is making the decisions it makes, and because machine learning inevitably creates bias. When using machine learning, bias arises from what you are seeking to optimise: if you optimise for profit, you may exclude people from accessing a service, or ignore ethical considerations around fitness for purpose for a certain market. The data used to “train” the machine may be incomplete or carry in-built biases, and choices about which data to include or exclude can also create bias. If a problem does arise, regulators and courts generally do not take kindly to answers along the lines of “I don’t know why the machine made the decision it did”. When AI is used to make decisions in fields such as employment, law and order or health, it is easy to understand how things can get messy.
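As a rough illustration of how in-built bias in training data carries through to a model, consider the sketch below. It uses entirely synthetic, hypothetical loan-approval data and scikit-learn; none of the names or numbers reflect any real system.

```python
# Minimal sketch: synthetic "historical" loan approvals in which human decision
# makers favoured one group. A model trained only to match those labels learns
# the same preference; the bias comes from the data, not the algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                 # two demographic groups, 0 and 1
income = rng.normal(50 + 5 * group, 10, n)    # mild income difference by group

# Historical labels carry in-built bias: group 1 was favoured beyond income.
approved = (income + 10 * group + rng.normal(0, 5, n)) > 55

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)
predicted = model.predict(X)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: historical approval rate {approved[mask].mean():.2f}, "
          f"model approval rate {predicted[mask].mean():.2f}")
# The model scores well against past decisions while carrying the same skew
# forward into future decisions.
```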
What to do?
Firstly, organisations need to think hard about what data they collect or acquire, and then about what they do with it. They need to be totally transparent about what data they have and how they use it. A good guiding principle is to ask: am I doing this to benefit myself, or does this truly benefit my customer, employee or user?
If you are looking to trade data you hold, you need to be sure that the affected stakeholders know who you are trading it to, what the benefits are to them, and how you are controlling what happens to the data once it is in the hands of others.
If there is a breach or a problem, own it and be open and honest immediately, or your brand can be badly damaged.
If you are using machine learning to make or augment decisions, test it beyond the context you optimised for. If you are optimising for profit, also test against fairness, equality, social acceptance and ethics.
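One simple way to start is a post-hoc check of outcome rates across sensitive groups, alongside whatever profit metric the model was optimised for. The sketch below is illustrative only: the column names and data are hypothetical, and a rate gap is a prompt for review rather than proof of unfairness.

```python
# Minimal sketch of a post-hoc fairness check: compare positive-outcome rates
# across groups (a crude demographic-parity test) before trusting the model.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes per group."""
    return df.groupby(group_col)[outcome_col].mean()

# Illustrative decision log; in practice this would come from the model's output.
decisions = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   1],
})

rates = selection_rates(decisions, "gender", "approved")
print(rates)
print("approval-rate gap:", rates.max() - rates.min())
# A large gap does not prove unfairness on its own, but it is a clear signal
# that the model needs review against criteria other than profit.
```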
Finally, ensure that you have humans in the loop. Machines have no in-built understanding of social norms, integrity or, put simply, what the right thing to do is from a human perspective.
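A sketch of what that can look like in practice is shown below; all thresholds, names and fields are illustrative assumptions rather than a prescribed design. The idea is to automate only confident, low-impact decisions and route everything else to a human reviewer.

```python
# Minimal human-in-the-loop sketch: the machine decides on its own only when it
# is confident and the stakes are low; uncertain or high-impact cases go to a person.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    model_confidence: float   # model's confidence in its proposed decision, 0..1
    high_impact: bool         # e.g. employment, credit, law and order, health

def route(case: Case, confidence_threshold: float = 0.9) -> str:
    """Return who should make the final call for this case."""
    if case.high_impact or case.model_confidence < confidence_threshold:
        return "refer to human reviewer"
    return "decide automatically"

print(route(Case("A-001", 0.97, high_impact=False)))  # decide automatically
print(route(Case("A-002", 0.97, high_impact=True)))   # refer to human reviewer
print(route(Case("A-003", 0.62, high_impact=False)))  # refer to human reviewer
```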
Perhaps this quote from AI researcher Pedro Domingos sums it up best: “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.” Tread carefully, my friends.