
Methodology

Ratings Key

EthicsGrade categorises the subjects of its research into the following seven tiers. Six of these are rated, and five of those receive an EthicsGrade.

Here is an explanation of what these tiers mean:

NR

Not Rated

Companies which receive an ‘NR’ do so because our assessment of them is insufficient to draw conclusions about the quality of their governance as it pertains to the aspects we cover in our model. There are various reasons for this: the organisation may be too small or too early-stage to have developed sufficient scale to warrant a sophisticated governance approach, or its use of AI and related technologies may itself be so nascent that it has not yet developed and communicated its governance. Where neither of these is true, our assessment is that the organisation has chosen not to communicate much of what we expect of good governance for technology – and it should therefore be treated with caution.

R

Rated, no Grade

Companies which receive an ‘R’ rating do so because our research does not give us sufficient confidence that their governance around technology deserves to be graded. As with ‘NR’ above, this could be because the organisation is early-stage, or because its use of AI and related technologies is itself early in development. Where neither is true, it indicates a lack of sophistication in the governance components of their strategy, and we therefore urge caution when interacting with their autonomous systems. Organisations that receive an ‘R’ rating tend to be characterised as ‘defensive’ in their response to challenges concerning their regard for questions of technological governance.

D

D Grade

A ‘D’ grade is the lowest EthicsGrade awarded and signifies a weak level of maturity in addressing the challenge of ensuring appropriate governance of technology. Organisations that receive a ‘D’ grade tend to be characterised as ‘compliant’ in their response to external challenge, with the typical retort that they “follow the laws and norms where they conduct business”.

Such organisations carry the risk of regulatory intervention and, owing to the relative immaturity of their governance, also face specific reputational risks from their systems failing, malfunctioning, or having unintended consequences.

C

C Grade

A ‘C’ grade signifies that an organisation has taken definitive steps towards ensuring a basic level of governance is in place that is appropriate for the firm’s approach to AI and related technologies.

Organisations that receive ‘C’ grades tend to be in ‘managerial’ mode, which translates into ‘doing the things that make them look good’.

Such organisations carry heightened risk relative to higher-performing peers. While firms that receive ‘C’ grades for their governance tend to have stronger technical competence, they still face risks of unintended consequences stemming from their application of technology to solving problems or as part of their business model.

B

B Grade

EthicsGrade awards ‘B’ grades to organisations that have developed an appropriate response to questions of governance. It is the lowest positive grade that we offer.

Such organisations have taken active steps to minimise the risk surfaces stemming from their deployment of emerging technologies and are mitigating what remains. Their response is largely ‘strategic’, meaning they are ‘doing the right things, because they’ve realised it’s in their interests to do so’. Organisations which receive B grades are trustworthy in the sense that they are clear about what they are doing and why they are doing it. While not necessarily leaders in pushing the envelope of best practice, they may well be leading their respective peer group and thus set the benchmark for others to emulate.

A

A Grade

Organisations which receive ‘A’ grades are at the top of their game. They have adopted a ‘civil responsibility’ for their actions, which means they are ‘doing the right things for the right reasons’. They are engaged with stakeholders and have developed the appropriate muscle to respond to external challenge.

Due to the nature of emerging technologies such as AI, residual risks always remain, but A-grade firms have sophisticated risk management in place, so their response to such risks goes beyond the basics and leads the field. Organisations which receive A grades are trustworthy in the fullest sense of the term, and ought to be studied and their example followed.

One caution for firms with A grades is to be on guard against complacency. As the subject matter of technological governance itself matures, so the demands of developing and maintaining appropriate governance deepen. It is quite conceivable (expected, even) that firms that are leaders today will be laggards tomorrow. The highest accolades should be reserved for those organisations which attain and retain their A grades, for they are keeping pace with what best practice demands of them and continuously rising to those challenges.

A+

A+ Grade

We reserve the highest EthicsGrade for organisations that achieve A grades but go the extra step of submitting to independent audit of their AI and autonomous systems. The highest marks are awarded to those organisations that not only conduct such an independent audit, but publish the auditors’ report. Transparency is a virtue that is often signalled, seldom shown. At the time of writing there are no examples of companies that have conducted independent audits of AI systems, but we expect this to change in the coming months.
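Purely for illustration, the full scale can be summarised as an ordered enumeration. This is our own sketch of the seven tiers described above, not an EthicsGrade artefact:

```python
# Illustrative only: the seven EthicsGrade tiers as an ordered scale,
# from least to most mature. Our own sketch, not an EthicsGrade artefact.
from enum import IntEnum

class Tier(IntEnum):
    NR = 0       # Not Rated: insufficient basis to assess governance
    R = 1        # Rated, no Grade: rated, but not confidently gradable
    D = 2        # lowest grade: 'compliant' posture
    C = 3        # basic governance in place: 'managerial' posture
    B = 4        # lowest positive grade: 'strategic' posture
    A = 5        # leaders: 'civil responsibility' posture
    A_PLUS = 6   # 'A' plus a published independent audit of AI systems

# Six tiers are rated (R and above); five receive a grade (D and above).
graded = [t.name for t in Tier if t >= Tier.D]
print(graded)  # ['D', 'C', 'B', 'A', 'A_PLUS']
```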


Advisory Panel

Led by co-founder Dick Nodell, the EthicsGrade Advisory Board is charged with two tasks:

First, the advisory board is the point of escalation for any challenge that is brought to EthicsGrade concerning the quality or validity of our data. All challenges can be understood as emerging from one of three potential causes:

  • we considered the wrong question,
  • evaluated the wrong evidence,
  • or weighted the data point wrongly.

Our panel is there to consider these questions and drive the iteration of our model where appropriate.

An essential requirement that we look for in the organisations we rate is that they appropriately move from ‘principles’ to ‘pronouncements’, i.e. actionable and fully articulated decisions derived consistently through the use of ‘protocols’. Accordingly, any challenge brought to our advisory board will be reasoned transparently, and the outcome will be published in the public domain. In our first two years of operation, we have had no such challenges.

Our process is there to ensure that the companies we rate have confidence that they have an opportunity to challenge where we have erred, and that customers of our data have confidence that we are consistent in our approach.

The second task charged to our advisory board is to expand the scope of our model. We currently look at a narrow set of questions that we deem important for Corporate Digital Responsibility, in particular those pertaining to a company’s ‘AI Governance’ and its consideration of matters of ‘AI Ethics’. Ultimately, however, this is unsatisfactory to us. We know there is a bigger debate around digital ethics that we need to address, and beyond that there are urgent social and environmental issues that need attention. Concern with how technology affects diversity and inclusion, and how it exacerbates environmental destruction, is thus a natural extension of our work.

Our advisory board consists of a core group of experts and designers whose work is then reviewed by a different and larger set of evaluators appropriate to our clients’ challenges. Please contact us if you would like to register your interest in serving as a reader or an evaluator.

The advisory board is operationally independent of EthicsGrade, and EthicsGrade commits to be bound by its decisions.


Six Biggest Challenges of Best Practice AI Governance

At EthicsGrade, we evaluate the quality of AI governance present at some of the world’s biggest companies. This matters as more and more big brands show how AI gives them a competitive edge and reshapes markets. But unless we hold them to account for the quality of the controls they are putting in place to safeguard us, how can we trust them and their systems?

The good news is that some organizations are already very good at this, while certain big names we know and trust are just getting started. We think best practice is for organizations to be transparent about their governance in these areas. For commercially sensitive cases, we have also created a mechanism for them to bring their best practice to our attention.

Here is an explanation of the six categories in our model as of Q2 2021, and why we think each is important:

1. Structure

Our firm belief is that organizations need to connect their AI governance efforts to their corporate governance policies. We don’t believe that the Boards of organizations are equipped to control the levers of risk unless they can be confident that there is a connection between the policies they set and the operational controls on the ground.

What concerns us is the proliferation over the last few years of statements of AI ethics ‘principles’. While harmless in themselves and certainly well-intentioned, they suggest that concern for these issues is being addressed by marketing or PR functions rather than by corporate governance and ESG, where it is best actioned.

We look for evidence of strong corporate governance and governance structures that extend from Boards to those on the front line building or selling AI systems. We give particular weight to semi-independent oversight boards or committees, especially those where the accountable executive is in a position of responsibility and high visibility, such as the Chief Digital Officer (or equivalent).

2. Public Policy

Whereas some ratings firms only measure the health of the fish, we’re looking at the health of the ocean. A healthy ecosystem is one in which an organization understands that it cannot sustain its business model if it operates in ways that undermine the efficacy of policymakers, the trust of consumers, or the functioning of market forces.

We are concerned by organizations that lobby more than they contribute productively to the debate on future regulatory initiatives. We look for organizations that are forward-thinking about the risks stemming from their activities, seek to understand a diverse set of viewpoints on those risks, and proactively look to mitigate them long before they manifest.

A good example is the set of risks pertaining to the impact of automation on employment. Some organizations shrug off their responsibilities regarding the long-term employability of their workforce and of their industry at large, while others are keen to ensure the risks to a healthy workforce are managed and mitigated. We see this as a vital issue that will grow in prominence over the coming months and years.

3. Technical Barriers to Trust

Machine Learning, Robotics and Automation systems need constant refinement in these pioneering early days to ensure that their operation is safe for users.

We are concerned with the controls that pertain to the use of data and the lifecycle of models. We know all too well that if these controls are not in place and the appropriate monitoring is not set up, biased and discriminatory outcomes too often result.

The good news here is that there is a burgeoning ecosystem of ‘ML Ops’ platforms that provide the tooling to ensure that ML systems are efficiently developed and deployed, and that the key attributes of their functioning are measured and monitored.

It is common for automotive manufacturers to publish data on their vehicles’ fuel efficiency, and for vehicles to be subjected to annual emissions testing. We expect it will soon be just as common for organizations to publish the key metrics on the functioning of their AI systems. We give additional credit to organizations that are early to market in providing such metrics alongside information on their governance.
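To make this concrete, here is a minimal, purely illustrative sketch of the kind of subgroup monitoring and metric publication described above. The metric, model name, and tolerance threshold are our own placeholder assumptions, not an EthicsGrade specification or the API of any particular ML Ops product:

```python
# Illustrative only: monitor a model's accuracy per demographic group and
# publish the result as a small, machine-readable scorecard.
import json
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute accuracy per group from (group, prediction, label) records."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, prediction, label in records:
        total[group] += 1
        if prediction == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data: (group, model prediction, true label).
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

accuracy = subgroup_accuracy(records)
gap = max(accuracy.values()) - min(accuracy.values())  # simple disparity check

scorecard = {
    "model": "loan-approval-v3",         # hypothetical model name
    "accuracy_by_group": accuracy,
    "max_accuracy_gap": round(gap, 3),
    "gap_within_tolerance": gap <= 0.1,  # hypothetical tolerance
}
print(json.dumps(scorecard, indent=2))
```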

4. Ethical Risk

The ethical risks arising from the use of technology are those with which we first concerned ourselves, and they are the focus of our original whitepaper. In essence, our view is that organizations are (largely) good at mitigating technical risks, such as ensuring products are designed and developed to the appropriate standards or with the correct engineering tolerances. Yet many organizations struggle to translate stakeholder concerns into actionable and operational guidance for colleagues to work with.

If we were to boil down what our model is tuned for in this section, it is to understand how an organization:

  • develops structures to solicit feedback from stakeholders,
  • evaluates that feedback with protocols that bring consistency to the outcomes, and
  • communicates the resulting decisions back to stakeholders, so they can judge whether the firm is a trustworthy partner and one that aligns with their values (a minimal sketch of this loop follows below).
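Assuming nothing about EthicsGrade’s internal tooling, and with every name hypothetical, the feedback-to-decision loop above might be recorded in a structure like this:

```python
# Illustrative only: one way to represent the feedback -> protocol -> decision
# loop. All field names, protocol names, and example values are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class StakeholderFeedback:
    stakeholder: str   # who raised the concern
    concern: str       # what was raised
    received: date

@dataclass
class Decision:
    feedback: StakeholderFeedback
    protocol: str            # the named protocol applied, for consistency
    outcome: str             # the actionable, operational decision
    rationale: str           # reasoning to communicate back to stakeholders
    published: bool = False  # has the outcome been communicated?

def resolve(fb, protocol, outcome, rationale):
    """Apply a named protocol so similar concerns get consistent outcomes."""
    return Decision(feedback=fb, protocol=protocol, outcome=outcome, rationale=rationale)

fb = StakeholderFeedback("customer panel", "chatbot does not disclose it is automated", date(2021, 5, 3))
decision = resolve(fb, protocol="transparency-review-v1",
                   outcome="add a disclosure banner to all chatbot sessions",
                   rationale="users must be able to tell they are talking to a machine")
decision.published = True  # close the loop back to stakeholders
print(decision.outcome)
```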

5. Data Privacy

We’ve seen attitudes towards data privacy mature significantly over the last few years, and the European General Data Protection Regulation (GDPR) has moved the needle extensively in organizations’ understanding of what best practice is and how to achieve it.

However, we’re still concerned that there is far more to be done on this subject. Over the coming months, we plan to deepen our coverage of this important topic and begin dissecting the privacy policies of the organizations we cover, so we can offer our users a sense of which organizations are the most exploitative in harvesting user data, and which respect the privacy, security and sovereignty of the data they collect and process on behalf of others.

6. Sustainability

This is our first new category since we started EthicsGrade. It reflects that we are starting to see evidence of organizations considering the carbon impact of activities such as ML model development, and developing governance to ensure that energy consumption is proportionate to the value delivered and has the minimum carbon impact wherever possible. For organizations that create products, such as consumer device manufacturers, we also look at the circularity of the product lifecycle, and particularly at policies and actions on questions such as planned obsolescence.
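For illustration, a governance process of this kind might start from a back-of-envelope estimate of the energy and carbon cost of a training run, using the standard energy-times-grid-intensity arithmetic. Every figure below is a placeholder, not a measured value:

```python
# Illustrative only: rough carbon estimate for one ML training run.
# Emissions = (power draw x hours x datacentre overhead) x grid carbon intensity.

gpu_power_kw = 0.3      # average draw per GPU, kW (placeholder)
num_gpus = 8
training_hours = 72
pue = 1.5               # power usage effectiveness of the data centre (placeholder)
grid_intensity = 0.4    # kg CO2e per kWh for the local grid (placeholder)

energy_kwh = gpu_power_kw * num_gpus * training_hours * pue
emissions_kg = energy_kwh * grid_intensity

print(f"Estimated energy: {energy_kwh:.0f} kWh")           # ~259 kWh
print(f"Estimated emissions: {emissions_kg:.0f} kg CO2e")  # ~104 kg CO2e
```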

Expanding our coverage

We’re always interested in expanding our coverage both in terms of breadth and depth of the questions we’re asking. If you have suggestions for us for themes which we are not yet covering, or areas where you think we could go deeper – please contact us, we’d love to chat.


Research

Our ESG ratings approach is a simple yet powerful method of helping organisations that carry risks in relation to their digital strategy. Our Research Analyst team leads this process, evaluating the extent to which organisations implement transparent and democratic values, ensure informed consent and risk-management protocols, and establish a positive environment for error and improvement. The model used to evaluate organisations is informed by open consultations on digital governance, our horizon scanning of the regulatory landscape, and dialogue with civil society organisations that are evaluating the potential environmental or societal impacts of digital technology and considering best practices for mitigating these risks.

[Methodology diagram]

Confidence in our data quality is our most important metric. This is how we achieve it:

Baseline Rating
The Baseline Rating is built from research into publicly available information. There is sufficient variation in the level of detail in the public domain around the risks associated with digital strategies to create a picture of where best practice is emerging and who the leaders and laggards are – as well as of how this picture is developing and maturing over time.

Our goal is always to encourage organisations to be more transparent in their reporting and disclosures to the public domain, and thereby to reduce reliance on proprietary survey platforms to elicit insights into the quality of the governance in place.

InsideView Rating
Beyond our Baseline Rating, the InsideView Rating is based on research into non-public information provided to EthicsGrade on request. Companies that submit data to us as part of an InsideView Survey have their scores re-rated according to the additional information provided, and the fact that they have submitted non-public information to us may be disclosed alongside their score and rating. We therefore have a higher level of confidence in InsideView Ratings than in those constrained to public data. As the proportion of covered organisations that engage with us increases, we will use these insights to adjust baseline scores to account for where we think an organisation is in its maturity, based on the submissions made by similar organisations or peers. InsideView Ratings and derivative data feeds are provided only to clients, whereas the publicly available Baseline Rating is updated only on the basis of confidence-level calculations and information in the public domain.
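As a purely illustrative sketch – emphatically not EthicsGrade’s actual methodology – one simple way to blend a public baseline score with a survey-informed score is a confidence-weighted average:

```python
# Illustrative only: NOT EthicsGrade's formula. A sketch of blending a
# baseline score with a survey-informed score by confidence in the latter.

def blended_score(baseline: float, inside_view: float, confidence: float) -> float:
    """Weight the survey-informed score by the confidence it adds (0..1)."""
    assert 0.0 <= confidence <= 1.0
    return (1 - confidence) * baseline + confidence * inside_view

# A firm scores 55/100 on public data alone; its InsideView submission
# supports 70, and the submission covers most of the model's questions.
print(blended_score(55, 70, confidence=0.8))  # -> 67.0
```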

Benchmark Rating
Our Benchmark Rating is provided to organisations that have submitted information via the InsideView Survey, giving them detailed insights into their performance on the topics in our coverage compared with their peers. The benchmark report is driven by our ‘data-driven materiality’ platform and can be used by organisations to help develop their roadmap strategy in response to EthicsGrade research on their stakeholder-centric ESG risks. We provide this service free of charge as a quid pro quo to organisations that complete the InsideView Survey, so that no conflict of interest arises from our relationship with them.

