AI promises great opportunity, and with that comes great responsibility for government and enterprise leaders alike.
In the last year, there has been an ever-increasing velocity of articles, blogs, speeches and thinking raising ethical concerns about AI. People are (rightfully) sounding the alarm, raising questions such as:
What are the national or international economic policy changes we need to make to reduce the potential for disruption in specific geographies or economic sectors?
What are the risks for labor displacement?
What specific types of job training and counseling programs will be effective in helping people adapt to a new age of machines as co-workers and new jobs created as a result of AI?
What new levels of protection should be introduced to safeguard not just an individual’s data but also the data personas of individuals?
What education and awareness building does the public need to understand what is possible with AI, but also to raise their trust in the technology?
What policies and legal protections are necessary in order to persuade the public to adopt the technology?
However, what is absent in the current zeitgeist is a thoughtful and detailed discussion on how to move from theoretical ethics questions to real-life application.
It’s time to move from the “what-ifs” to the “do-nows.”
That’s why Accenture has developed a practical approach to what we call Responsible AI. A company’s AI deployments need to be aligned with its core values and ethical principles in order to benefit customers, employees, the business and society at large. When companies do so, they engender trust with their consumers and society.
We believe Responsible AI is both an opportunity and a responsibility for business, government and technology leaders around the world. That’s why we are proactively teaming with our clients, ecosystem partners, academia and corporate R&D organizations to define, design and drive Responsible AI.
We are helping our clients create enterprise governance frameworks to evaluate, deploy and monitor AI so that it creates new growth opportunities. Practically speaking, our Responsible AI methodology translates into architecting and implementing solutions that put people at the center. One way we do this is by using design-led thinking to help clients examine the core ethical questions in their context in light of their policies and programs—and then create a set of value-driven “requirements” under which AI will be deployed.
Our Responsible AI approach addresses the imperative to:
Govern – Create the right framework to enable AI to flourish, anchored to a company’s core values, ethical guardrails and accountability frameworks.
Design – Architect and deploy AI with trust built in by design (e.g., privacy, transparency and security), including building systems that lead to “explainable” AI.
Monitor – Monitor and audit the performance of AI against key value-driven metrics, including algorithmic accountability, bias and cybersecurity.
Reskill – Democratize AI learning across an enterprise’s stakeholders using Accenture’s myLearning technology to reduce barriers to entry for individuals impacted by AI.
Our starting point is that Accenture’s Responsible AI is rooted in key principles of accountability, fairness, honesty, human-centricity and transparency. As we evaluate our internal deployments of AI, we find ourselves returning to these principles and to how they relate to our Code of Business Ethics and Core Values.
Looking at what we’ve experienced at Accenture, we know that effective deployment of AI (or any emerging technology) requires a multi-disciplinary approach. It is done in conjunction with the internal Compliance organization (at Accenture, part of Legal), which is commonly responsible and accountable for deploying enterprise-wide compliance programs in response to government regulations, ethical violations, litigation and more. This multi-disciplinary approach must be proactive in order to address evolving AI technologies and anticipate compliance needs as government regulations and public sentiment evolve. Because cycles of innovation move faster than regulatory and legislative cycles, we navigate these “uncharted” territories with our Code of Business Ethics and Core Values as our compass.
Responsible AI is a collective effort
The only way that Responsible AI will become a reality is with full participation from every sector. Governments in both developed and emerging economies will need to design and implement a regulatory environment compatible with Responsible AI, while working to improve economic policy, innovation incentives, data privacy compliance and intellectual property protection. Enterprises will need to influence, and evolve with, government regulations and public sentiment on Responsible AI.
Most importantly, as discussed in our Technology Vision 2017, Accenture sees a trend emerging in which pioneering companies will take a leadership role in setting industry best practices and codifying technical standards. Having sensible standards will help to reduce the need for outmoded regulation, especially as these leading organizations create new digital industries with embedded AI components, such as connected health or precision agriculture. As just one example, Alphabet, Amazon, Facebook, IBM and Microsoft are working together to create a standard of ethics for advancements in the AI industry.1 Although these companies are competitors, they’re collaborating on ground rules for the entire ecosystem of AI pioneers. Collectively setting the rules for this rapidly evolving industry helps to mitigate the risks of complex external oversight, prevent harm to consumers, accelerate innovation and protect the reputations of every brand pushing the frontier of AI.
Other groups will also need to step up. Professional associations are already establishing standards, certifications and codes of conduct for organizations to follow. For example, the Institute of Electrical and Electronics Engineers (IEEE) launched a public consultation on ethical considerations in the design of AI and autonomous systems. Accenture has responded to this consultation, as we believe the proposal for ethically aligned design is critical and inherently linked to our clients’ adoption of the technology. Responsible AI is our answer to this challenge.
Where we go from here
It’s essential for business and government leaders to proactively address the critical issues that AI raises by inventing new models and approaches with a Responsible AI philosophy, or what Paul Daugherty, chief technology & innovation officer at Accenture, calls a “Technology by People, for People” approach.
To meet the Responsible AI imperative, Accenture encourages companies to:
Emphasize education and training, especially for people who are disproportionately affected in employment and income.
Reinvigorate a company’s code of ethics by adapting it for the many ways AI will impact how the company will operate and its people will interact with each other (and with AI).
Help create adaptive, self-improving regulation and standards to keep pace with technological change.
Establish sound cybersecurity practices.
Integrate human intelligence with machine intelligence by reconstructing work to take advantage of the respective strengths of each.
Accenture is leading the charge toward Responsible AI. With the goal of providing better outcomes for all people, we invite our Global 2000 clients to actively design and direct AI to augment and amplify human capabilities, allowing people to achieve more for themselves as individuals and for the world around them. Will you join us? Reach out to us to discuss.

1 “How Tech Giants Are Devising Real Ethics for Artificial Intelligence,” The New York Times, September 1, 2016.