First published in The Drum

Enterprise adoption of AI needs to learn from Big Tech’s ethics failures

It’s long been the case that the pace of technological development has left government, policy and regulation trailing in its wake.

And as a result, it’s hard to review the news and not see some headline about the failures of Big Tech to better self-govern, the need for greater regulation and even the desire, from some quarters, to break these behemoths up.

Whether it be GDPR breaches, political manipulation, self-harm and extremism going uncensored or the ever-listening ears of voice assistants, it’s fair to say that ethics have taken a back seat to customer acquisition and active users.

As AI adoption continues to move, at its own pace, from pilots on the periphery of businesses to becoming a mainstay of business and growth strategy, it’s critical that companies take the time to explore not just what AI can do for the business but what it should do.

Some of the key implementers and solution providers in this space have taken a proactive approach, with the likes of Microsoft, IBM, Adobe and Salesforce creating frameworks and policies for the responsible development of AI, both for themselves and for their clients.

While we have noticed a direct correlation in some businesses between the maturity of their adoption of AI and their approach to ethics, others are implementing AI at a rate of knots but are not yet giving due consideration to the ethics framework and cultural considerations that should support it.


Towards an approach to AI ethics


There’s no one definitive answer as to what should be included within a business’s ethical policy towards AI, as every business will have nuanced differences. For example, in the extreme case of autonomous vehicle companies, consideration needs to be given to the long-debated, classic ethical conundrum known as the “Trolley Problem”: where an accident involving a vehicle is imminent, the vehicle must opt for one of two potentially fatal options, e.g. an accident that involves several bystanders versus one that involves only a couple.

A recent global study on this subject by MIT found largely global alignment on the conundrum, i.e. the option that involves the smallest possible loss of human life should be the outcome. However, cultural outlooks between East and West do start to affect decision-making when the conundrum is evolved to weigh an elderly bystander against a younger one.

Putting the more extreme examples of autonomous vehicles and the potential use of HoloLens 2, robots and drones in warfare to one side, there are some hard and fast areas that every business exploring the further use of AI should be considering.

1. A policy of transparency

This rings true for both internal and external audiences. 

Recent Forbes research showed the disparity between business leaders, who saw AI not as a threat to jobs but as a way of elevating existing jobs and creating new ones, versus the workforce as a whole, who believed the threat to be far higher.

The fact that the research also found nearly 20 percent identifying ‘resistance from employees due to concerns about job security’ as a challenge to their AI efforts accentuates the need for a clear and transparent policy that is socialised throughout the business. This should identify the benefits AI can bring, the proposed use cases and clear red lines as to where it won’t be utilised.

As important is the need for transparency with customers. Customers should never be in a situation whereby they are unsure if they are conversing with a human or with AI; this is especially important when dealing with sensitive subjects like health and finance. They should also be able to discover more about the businesses they are interacting with and how their personal data will be used in the context of machine learning.

2. Perpetuation of inequality

There’s been a huge push for diversity over the past 5 years across all walks of business, from boardroom representation to specific industries and roles and while some progress has been made, it’s been slow.

So it follows that, without careful internal governance, the lack of diversity among those creating the AI logic and programmes means that current biases could be perpetuated or, worse still, heightened.

The World Economic Forum has already warned of ‘racist robots’, referencing the case of software developed to predict future criminals being biased against black people. While not responsible for this failure, IBM recognised the potential for bias in AI and last year launched the Fairness 360 Kit. This open-source toolkit analyses how machine learning algorithms make decisions in real time and determines if they are inadvertently biased.
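To illustrate the kind of check such a toolkit automates, here is a minimal sketch of one common fairness measure, disparate impact, in plain Python. The decisions, group labels and the widely cited 0.8 (“four-fifths”) threshold below are illustrative assumptions; this is not the Fairness 360 Kit’s actual API.

```python
# Disparate impact: the ratio of favourable-outcome rates between an
# unprivileged and a privileged group. Values well below ~0.8 are
# commonly treated as a flag for potential bias.

def disparate_impact(outcomes, groups):
    """outcomes: list of 1 (favourable decision) or 0 (unfavourable).
    groups: aligned list of 'privileged' / 'unprivileged' labels."""
    def favourable_rate(label):
        selected = [o for o, g in zip(outcomes, groups) if g == label]
        return sum(selected) / len(selected)

    return favourable_rate("unprivileged") / favourable_rate("privileged")

# Hypothetical model decisions for eight applicants
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["privileged"] * 4 + ["unprivileged"] * 4

ratio = disparate_impact(outcomes, groups)
print(round(ratio, 2))  # 0.33 — well under 0.8, so this model would be flagged
```

In practice a toolkit would compute several such metrics across many protected attributes and monitor them continuously, but the underlying arithmetic is as simple as the comparison above.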

3. Artificial stupidity

The fact that AI here refers to machine or deep learning makes it abundantly clear that errors will be made while algorithms are optimised and machines train and get smarter.

This means that a robust approach to training, piloting and releasing is crucial, but even with this there will be edge cases that are untested and unexpected. As such, consideration needs to be given to AI management, ongoing analysis of outcomes and the development of learning patterns as the machine gets smarter. A clear protocol for reporting and escalating unexpected results should be defined, along with an expedient communications policy towards those affected.

4. The customer is always right

And the protection of their data, privacy and safety is paramount. In addition to these obvious requirements, it’s important that the customer maintains control and a business’s products and services remain accessible.

In the case of automation, it’s important that, if AI-enabled services are not able to fully support a customer’s needs, alternative human support can be easily sought. And if customers are not comfortable utilising conversational AI, for example, they should not be discriminated against or marginalised.

5. Ownership and security

Trust is critical for the future of AI’s success. Dystopian headlines about a robot future and current concerns with ‘Big Tech’ are already predisposing some people to feel cynical about engaging with AI-powered services and solutions. 

As such, breaches in security, exposure of personal data and potential hacks to ‘re-educate’ machine learning patterns all have to be treated with the utmost importance, and robust security measures and contingency plans put in place.

This is to protect both customers and the business. The latter needs fallback plans, and potentially short-term human solutions to take over and support the business, while breaches are fixed and normal automated services are resumed.

Conclusions


Encouragingly, a recent piece of global research conducted by Forbes found 63 percent of respondents stating that they “have an ethics committee that reviews the use of AI,” and 70 percent indicating they “conduct ethics training for their technologists.”

That said, only time will tell if the late majority, now approaching AI adoption, will be as enlightened in their approach to ethics as the earlier adopters appear to have been.

For more information or to discuss your own Intelligence Audit please contact Paul Vallois, managing director.
paul.vallois@nimbletank.com
020 3828 6440
© 2019 Nimbletank. All Rights Reserved.