Methodology
Advisory Panel

Led by co-founder Dick Nodell, the EthicsGrade Advisory Panel is charged with two tasks. First, it is the point of escalation for any challenge brought to EthicsGrade concerning the quality or validity of our data. Every such challenge can be understood as alleging one of three potential mistakes:

  • We considered the wrong question
  • We evaluated the wrong evidence
  • We weighted the data points incorrectly

Our panel is there to consider these questions and iterate our model where appropriate. An essential requirement we look for in the organizations we rate is that they move from ‘principles’ to ‘pronouncements’, i.e. actionable, fully articulated decisions derived consistently through the use of ‘protocols’. Any challenge brought to our Advisory Panel will be reasoned through transparently, and the outcome will be published in the public domain. This gives the companies we rate confidence that they have an opportunity to challenge us where we have erred, and gives customers of our data confidence that our approach is consistent.

The second task charged to our Advisory Panel is to expand our model’s scope. We currently look at a narrow set of questions that we deem important to ‘AI Governance’ and ‘AI Ethics’ in particular. Ultimately, this is unsatisfactory to us. We know there is a bigger debate around Corporate Digital Responsibility that we need to address. Beyond that, there are urgent social issues that need attention: concern with how technology affects diversity and inclusion is a natural extension of our work. Our Advisory Panel consists of a core group of experts and designers whose work is then reviewed by a different and larger set of evaluators appropriate to the model’s challenges. Please contact us if you would like to register your interest in serving as a reader or an evaluator.

The Advisory Panel is operationally independent of EthicsGrade, and EthicsGrade commits to being bound by its decisions.


Six Biggest Challenges of Best Practice AI Governance

At EthicsGrade, we evaluate the quality of AI governance at some of the world’s biggest companies. This matters as more and more big brands show how AI gives them a competitive edge and reshapes markets. But unless we hold them to account for the quality of the controls they put in place to safeguard us, how can we trust them and their systems?

The great news is that some organizations are already very good at this; others, including big names we know and trust, are just getting started. We think best practice is for organizations to be transparent about their governance in these areas. In commercially sensitive cases, we have also created a mechanism for them to bring their best practice to our attention.

Here is an explanation of the six categories in our model as of Q2 2021, and why we think each is important:

1. Structure

Our firm belief is that organizations need to connect their AI governance efforts to their corporate governance policies. We don’t feel that Boards are otherwise equipped to control the levers of risk unless they can be confident that there is a connection between the policies they set and the operational controls on the ground.

What concerns us is the proliferation over the last few years of statements of AI ethics ‘principles’. While harmless in themselves and certainly well-intentioned, they suggest that concern for these issues is being handled by marketing or PR functions rather than by corporate governance and ESG, where it is best actioned.

We look for evidence of strong corporate governance and of governance structures that extend from Boards to those on the front line building or selling AI systems. We give particular weight to semi-independent oversight boards or committees, especially those where the accountable executive holds a position of responsibility and high visibility, such as the Chief Digital Officer (or equivalent).

2. Public Policy

Whereas some ratings firms only measure the health of the fish, we’re looking at the health of the ocean. An ecosystem is healthy when an organization understands that it cannot sustain its business model while undermining the efficacy of policymakers, the trust of consumers, or the functioning of market forces.

We are concerned by organizations that lobby more than they contribute productively to the debate on future regulatory initiatives. We look for organizations that think ahead about the risks that stem from their activities, seek out a diverse set of viewpoints on those risks, and proactively work to mitigate them long before they manifest.

A good example is the set of risks pertaining to the impact of automation on employment. Some organizations shrug off their responsibilities for long-term employability, both within their workforce and across their industry, while others are keen to ensure that the risks to a healthy workforce are managed and mitigated. We see this as a vital issue that will grow in prominence over the coming months and years.

3. Technical Barriers to Trust

Machine Learning, Robotics and Automation systems need constant refinement in these pioneering early days to ensure that their operation is safe for users.

We are concerned with controls that govern the use of data and the lifecycle of models. We know all too well that if these controls are not in place, with appropriate monitoring set up, biased and discriminatory outcomes too often result.

The good news here is that there is a burgeoning ecosystem of ‘ML Ops’ platforms that provide the tooling to ensure that ML systems are efficiently developed and deployed, and that the key attributes of their functioning are measured and monitored.

It is common for automotive manufacturers to publish data on their vehicles’ fuel efficiency, and for vehicles to be subjected to annual emissions testing. We expect it will soon be just as common for organizations to publish the key metrics on the functioning of their AI systems. We give additional credit to organizations that are early to publish such metrics alongside information on their governance.
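
To make this concrete, below is a minimal sketch in Python of the kind of metric report we have in mind. It is illustrative only: the names, the choice of fairness metric (a demographic parity gap) and the figures are our own assumptions, not any particular ML Ops platform’s API or a rated company’s data.

    # Minimal sketch of a publishable model-metrics report (illustrative only;
    # names, metric choice and figures are hypothetical assumptions).
    from dataclasses import dataclass

    @dataclass
    class ModelReport:
        model_id: str
        accuracy: float
        parity_gap: float  # |P(pred=1 | group A) - P(pred=1 | group B)|

    def parity_gap(preds, groups):
        """Absolute difference in positive-prediction rates between two groups."""
        def rate(g):
            members = [p for p, grp in zip(preds, groups) if grp == g]
            return sum(members) / max(1, len(members))
        return abs(rate("A") - rate("B"))

    def build_report(model_id, preds, labels, groups):
        # Headline accuracy plus a simple fairness measure for publication.
        accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        return ModelReport(model_id, accuracy, parity_gap(preds, groups))

    # Toy example with four predictions across two demographic groups.
    report = build_report("credit-model-v3",
                          preds=[1, 0, 1, 1], labels=[1, 0, 0, 1],
                          groups=["A", "A", "B", "B"])
    print(report)  # ModelReport(model_id='credit-model-v3', accuracy=0.75, parity_gap=0.5)

A report like this, published per model release, would be the AI analogue of the fuel-efficiency figure above.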

4. Ethical Risk

The ethical risks arising from the use of technology are those with which we first concerned ourselves, and they are the focus of our original whitepaper. In essence, our view is that organizations are (largely) good at mitigating technical risks, such as ensuring products are designed and developed to the appropriate standards or with the correct engineering tolerances. Yet many organizations struggle to translate stakeholder concerns into actionable, operational guidance for colleagues to work with.

If we were to boil down what our model is tuned for in this section, it’s to understand whether an organization:

  • Has developed structures to solicit feedback from stakeholders
  • Evaluates that feedback with protocols that bring consistency to the outcomes
  • Communicates the resulting decisions back to stakeholders, so they can judge whether the firm is a trustworthy partner that aligns with their values

5. Data Privacy

We’ve seen attitudes towards data privacy mature significantly over the last few years, and the EU’s General Data Protection Regulation (GDPR) has moved the needle extensively in helping organizations understand what best practice is and how to achieve it.

However, we’re still concerned that there is far more to be done on this subject. Over the coming months, we plan to deepen our coverage of this important topic and begin dissecting the privacy policies of the organizations we cover. This is so we can offer our users a sense of which organizations are the most exploitative in harvesting user data, and which respect the privacy, security and sovereignty of the data they collect and process on behalf of others.

6. Sustainability

This is our first new category since we started EthicsGrade. It reflects that we are starting to see evidence of organizations considering the carbon impact of activities such as ML model development, and developing governance to ensure that energy consumption is proportionate to the value delivered and has the minimum carbon impact wherever possible. For organizations that create physical products, such as consumer device manufacturers, we also look at the circularity of the product lifecycle, and particularly at policies and actions on questions such as planned obsolescence.

Expanding our coverage

We’re always interested in expanding our coverage, both in the breadth and the depth of the questions we’re asking. If you have suggestions for themes we are not yet covering, or areas where you think we could go deeper, please contact us; we’d love to chat.


Research

Our ESG ratings approach is a simple yet powerful method of distinguishing organizations that carry risks from those adopting best practice. Our Research Analyst team heads this process, evaluating the extent to which organizations implement transparent and democratic values, ensure informed consent and risk management protocols, and establish a positive environment for error and improvement. The model used to evaluate organizations is informed by open consultations on upcoming AI governance regulation.

[Figure: EthicsGrade Methodology Chart]

Confidence in our data quality is our most important metric. This is how we achieve it:

Baseline Rating
An EthicsGrade Baseline Rating comprises research conducted into publicly available information. There is sufficient variation in the level of detail in the public domain around questions of AI governance to create a picture of where best practice is emerging.

InsideView Rating
Following our Baseline Rating, the InsideView Rating is based on research into non-public information provided to EthicsGrade on request. Companies that submit data to us as part of an InsideView Survey have their scores re-rated according to the additional information provided. The fact that they have submitted non-public information to us is disclosed alongside their score and rating. We therefore have a higher level of confidence in InsideView ratings than in those constrained to public data.

Benchmark Rating
Ratings based on research conducted on-site or virtually with clients. Any information we discover in a Benchmark exercise will affect our rating of a company, for better or worse. We can therefore stand by our Benchmark EthicsGrade ratings as an accurate picture of an organization’s controls, and we believe that organizations that commit to this level of transparency deserve recognition for it. Companies that have conducted a Benchmark with us have this fact disclosed alongside their score and rating, which we expect the community to regard as the highest standard of disclosure and transparency.
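
To illustrate how the three tiers fit together, here is a minimal sketch in Python of a rating record carrying its tier and the disclosure that accompanies it. The class and field names are assumptions made for illustration, not EthicsGrade’s internal schema.

    # Illustrative sketch of the three rating tiers and their disclosure rules
    # (class and field names are hypothetical, not an internal EthicsGrade schema).
    from dataclasses import dataclass
    from enum import Enum

    class Tier(Enum):
        BASELINE = "Baseline"      # public information only
        INSIDEVIEW = "InsideView"  # re-rated on non-public submissions
        BENCHMARK = "Benchmark"    # on-site or virtual research with the client

    @dataclass
    class Rating:
        company: str
        score: float
        tier: Tier

        def disclosure(self) -> str:
            # InsideView and Benchmark participation is disclosed alongside the rating.
            if self.tier is Tier.BASELINE:
                return "Rated on publicly available information only."
            return f"Participated in {self.tier.value}; disclosed alongside score and rating."

    print(Rating("ExampleCorp", 71.5, Tier.INSIDEVIEW).disclosure())
    # -> Participated in InsideView; disclosed alongside score and rating.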


Search and view our list of rated organizations. Ratings are added or updated every few days.