Our key assertion is that AI Governance is an ESG risk.

For a detailed exploration of this topic, we encourage you to read our 2020 Whitepaper, Ethical by Design, written by co-founders Charles Radclyffe and Richard Nodell, which is available to download here.

The summary of our argument is as follows:

Many organisations have found themselves subject to the ‘techlash’ that has come about through an erosion of trust in technology, technology companies, and non-technology companies that implement technology. Our view is that, in order to build trust, an organisation needs to demonstrate a commitment to building governance, to using that governance once built, and to ensuring the right conditions exist for its people to be incentivised to apply that governance consistently. We also hold that it is important for stakeholder feedback to be solicited and acted upon in order for trust to be assessed. It is this second element, stakeholder engagement, which is so often lacking even where the commitment to governance is robust.

If we accept, in the abstract, that trust issues can be resolved through a commitment to governance and stakeholder engagement, then what evidence is there that this approach works in analogous scenarios? While our 2020 Whitepaper focused on the parallels between the Climate Change lobby and Digital Ethics activists, our current thinking is that Simon Zadek’s 2004 paper The Path to Corporate Responsibility makes a stronger case: ESG issues (such as the human rights issues on which he centred his research) follow a predictable path to maturity, and organisations tend to follow an equally predictable path in their response to these issues.

Through this lens, the conclusion is clear: AI Governance is already in the 3rd Phase of Issue Maturity, that of ‘consolidation’. Why do we hold this? It is certainly at least in the 2nd Phase, as political and media awareness of the issues has been in evidence since at least 2016, when the Cambridge Analytica story first began to surface. For an issue to reach the 3rd Phase, there must be an emerging body of business practices as well as evidence of sector-wide and issue-based voluntary initiatives. We see this evidence everywhere we look, and the EthicsGrade research and scores show a clear difference between good actors, with grades of B and above, and those at an early stage of maturity, who receive an NR grade.

When do we feel that AI Governance will reach the 4th Phase? Certainly within the next 3–5 years, and perhaps sooner. In 2021 the European Commission will publish draft regulations on AI, which are likely to be passed into law and become enforceable around 2023 or 2024. By that point, we expect to see many examples of litigation on the issue of AI, as well as legislation to police the highest-risk use cases. Through our own efforts and those of others to draw attention to best practice, we expect many more companies to achieve B Grades and above, and we expect some organisations not just to conduct AI Audits but also to publish those Audit Reports, thereby achieving A+ Grades according to our model.

Finally, if this isn’t evidence enough, we are also seeing the first signs that the Asset Management community is considering the risks associated with technology, technology companies, and companies implementing technology. COVID-19 dramatically changed the composition of investment portfolios, so that the greatest ‘non-financial’ risks investment managers were exposed to shifted from climate-centric risks to technology-centric ones. Asset Managers we have spoken with were particularly rattled by the riot at the US Capitol in January 2021. How platforms police content, and how they respond to questions about their responsibility for that content, have suddenly become issues that can move share prices, and are therefore on the agenda for those in stewardship roles at Asset Managers.

Unfortunately, over the coming weeks, months and years we are likely to read more stories of technology failing to live up to expectations and damaging the fabric of society. As this trend continues, we expect more Asset Managers to start modelling these risks, highlighting the value of our data. Ultimately, failing to get ahead of Digital Ethics risks will raise the cost of capital for firms, and this will be the nudge that companies need in order to act, much as it has forced companies to respond to other ESG topics such as Human Rights and the Climate Emergency.

AI Governance, as part of the wider issue of Digital Ethics, is therefore an ESG risk today, and organisations urgently need to consider their rating and build out their reporting of current state and strategy in order to arm their Investor Relations teams with the right information to handle investor scrutiny.

For more information on the areas of AI Governance we currently have in scope, and our roadmap for coverage expansion, please visit www.ethicsgrade.io/research