Technology is the most significant force for change in society today, but it is essential that it works for us, improving outcomes for people, rather than against us.
Discussions around AI are increasingly prevalent, ranging from remarkable demonstrations of its potential to change how we interact with the natural world and tackle climate change, to fears of the existential threat that an out-of-control algorithm may pose to human life. This technology is already delivering both social and economic benefits in industries such as healthcare and agriculture, and 94% of business leaders agree that AI will be critical to success over the next five years.
Its rapid growth has raised concerns over potential risks to human rights, safety, and privacy as well as its ability to make fair, unbiased decisions. As business leaders, it is difficult to stay on top of these debates and understand how to respond when faced with decisions around the use of AI. The question of where the responsibility lies for managing these risks is an urgent issue for organisations to address as existing processes are often inconsistent and disconnected.
Organisations must be proactive rather than reactive in the management of AI risks and adopt a responsible approach, ensuring AI is designed, developed and used to empower employees and fairly impact consumers and society. AI presents many opportunities – it is predicted to contribute $15.7 trillion to the global economy by 2030 – but organisations that fail to manage the risks will be unable to exploit the opportunities and will be left behind.
Why are businesses adopting AI?
The adoption of AI tools has more than doubled in the last five years, with over half of all businesses globally now using it in their operations to revolutionise data management, reduce inefficiencies and produce significant cost savings. Three key areas in which organisations can utilise this technology are:
- Automation & productivity – over 40% of businesses that have adopted AI have done so to automate tasks, driving efficiency and enabling better decisions.
- Innovation – companies and governments are discovering the potential of machine learning to strengthen innovation and product service offerings for greater competitive advantage.
- Business value – businesses that use AI effectively report an average revenue increase of around 6% as a direct result, as well as benefits to other business functions including cyber security, sustainability, and customer service.
What are the risks associated with AI?
AI presents a range of new challenges, from threats to critical infrastructure to concerns that it will exacerbate wider systemic and societal issues by influencing public opinion and democracy through synthetic media such as deepfakes. Several companies, including Amazon, Apple and Samsung, have restricted the use of ChatGPT over concerns that employees may share sensitive documents with the tool. Four key areas of risk that business leaders should be aware of are:
- Human rights – bias can be built into the data on which AI systems are trained, leading to unfair decision-making where sufficient governance systems are not in place.
- Reliability – the information produced by AI systems can be inaccurate: generative AI is trained to predict what comes next based on patterns it has previously detected, and those predictions can be misleading.
- Security threats – employees with insufficient training could expose sensitive information, and criminals are already exploiting AI for cyber-attacks, phishing, social engineering and fraud.
- Trust – the introduction of AI can lead to fears that certain roles will become redundant. Additionally, only 35% of global consumers trust how AI is being implemented in organisations.
The case for responsible AI
AI technologies are already in widespread use, and this is only set to expand in ways many businesses find difficult to imagine. The risks of unintended consequences presented by self-executing, automated systems require businesses to adopt a responsible approach to their deployment. This involves ensuring that AI is designed, developed, and used to drive fair, responsible and ethical decisions that comply with current regulation and law.
One example of this is Microsoft’s approach to responsible AI, which is intended to ensure that AI systems are human-centric. Their approach is centred on six core principles and is guided by two perspectives:
- Ethical – AI must be fair, inclusive and non-discriminatory, and must be held accountable for its decisions. Microsoft advises organisations to consider establishing an internal review body to provide recommendations on best practice.
- Explainable – AI should be transparent in how decisions and conclusions have been reached, to build trust and ensure compliance. Microsoft has developed an open-source toolkit to help organisations to achieve explainability.
Actions for businesses to take now
Whether or not your organisation is responsible for developing and deploying AI, it is important to consider what this technology will help you to achieve and what pitfalls to avoid. If you are wondering where to start, the following actions will increase your preparedness:
- Help decision-makers at every level understand the risks and opportunities, and ensure they are given operational guardrails so there is deliberate consideration of risk appetite.
- Enable decision-makers to think critically about AI, to be aware of their own power and responsibility, and identify the questions they need to ask such as who the affected stakeholders might be.
Implement AI Governance
- Develop a framework to mitigate the risks of AI.
- Ensure policies are in place to measure the bias and accuracy of information and embed AI cyber threats into cyber security policies.
- Work with governments and regulators to develop AI regulation and legislation – for example the UK’s new Pro-innovation approach to AI.
- Share knowledge and insights with other organisations on how to address the challenges of AI.
Over the next few years, the impact of AI on business and society will become clearer – there are likely to be incredible positives, but at what cost? Now is the time for business leaders to engage with this technology, educate themselves about the risks and influence decision makers to ensure that we are releasing AI into the world responsibly.
Sancroft is working with KPI, consultants and the authors of The AI Dilemma: 7 Principles for Responsible Technology, and has developed service offerings to advise and support businesses on how to navigate the opportunities and risks of AI. If you would like to learn more about our new AI-related services, please contact email@example.com or firstname.lastname@example.org.