AI Impact Control Panel

Lachlan McCalman
Gradient Institute
Mar 15, 2022


New software for controlling the business and ethical impact of AI

In partnership with Minderoo Foundation, Gradient Institute has released the first version of our AI impact control panel software. This tool helps decision-makers balance and constrain their system’s objectives without having to be ML experts. You can see a demo of the tool here, and get the source code here.

AI’s impacts need controlling

Today, many AI systems make consequential decisions that affect people's lives and livelihoods. Banks run AI systems to decide who gets a loan, governments use AI systems to police citizens, job agencies use AI to choose who should be shortlisted for a role, and social media platforms use AI to filter and highlight the political and public-health messages their users see.

Moving decisions like these from people to AI creates new risks. Unlike humans, AI obeys the letter, not the intent, of its instructions. It views the world only through the data we give it. It has no baseline moral constraints and no understanding of context, and it can make millions of decisions every second.

The result is evident in a stream of incidents of AI systems causing serious harm to people's lives. Recruiting and job-ad tools showing bias against women, newsfeeds pushing misinformation and hate, and healthcare algorithms that wrongly cut users off from pain medication or systematically under-resource Black neighbourhoods are just a few of the high-profile examples.

Misdesigned AI harms people

AI systems are typically designed to achieve a single narrow objective: make accurate predictions given the available data. But there are many ways a system can be accurate and still do unacceptable harm. An ‘accurate’ system still makes mistakes, and the impact of those mistakes depends heavily on their context: compare showing someone an ad for the wrong product to falsely predicting they will commit a crime while on bail. The former might be an annoyance at worst; the latter might wrongly incarcerate them. What is accurate enough in one case may be completely unacceptable in another.

More importantly, accuracy itself is not sufficient to capture a system’s impacts. Some systems might be accurate on average but make most of their errors on disadvantaged parts of the community, exacerbating inequality. Or they might get simple cases right but miss things that really matter, like an AI content moderation system that catches swear words but misses posts inciting violence.
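To make this failure mode concrete, here is a minimal illustration with made-up numbers (not drawn from any real system): a model that is over 90% accurate overall while making errors six times as often on a smaller group.

```python
# Illustrative only: made-up counts, not from any real system.
# A model can be highly accurate overall while concentrating
# its errors on a smaller, disadvantaged group.

groups = {
    # group name: (number of decisions, number of errors)
    "majority group": (9000, 450),   # 5% error rate
    "minority group": (1000, 300),   # 30% error rate
}

total = sum(n for n, _ in groups.values())
errors = sum(e for _, e in groups.values())
print(f"overall accuracy: {1 - errors / total:.1%}")  # 92.5%

for name, (n, e) in groups.items():
    print(f"{name}: error rate {e / n:.1%}")
```

Averaged over everyone, this model looks strong; measured per group, it is doing most of its damage where people can least absorb it.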

Impact-driven design of AI

The best way to prevent AI from harming people is to design it with its impacts in mind: make design decisions based not on how much they improve accuracy, but on how much they improve the system's impacts, including its harms and benefits to the people it affects.

It may not always be clear, however, what the ‘best’ impacts actually are. Given two potential designs, one might make more revenue while the other has a lower error rate on a disadvantaged population. Deciding which of these designs should be deployed requires context-specific judgement and a clear understanding of the system’s broader objectives and constraints.

Such decisions need to be made with proper governance, involving people such as senior managers, risk and compliance officers, ethics committees, and representatives of impacted people and their communities. However, because the details of the design decisions are highly technical, they are often left to data scientists and engineers who lack a full view of their consequences.

AI impact control panel software

Our AI impact control panel helps to bridge the gap between technical design decisions and a system’s impacts. Without it, an AI system’s impacts are an unintended by-product of design decisions made by data scientists to maximise average accuracy, disconnected from the system’s overall business and ethical objectives. With it, the people in charge of the system’s operation can explicitly decide its impacts in terms of those objectives. They can:

  • decide what range of impacts is acceptable, and when the system might need to be deactivated or replaced with a fallback
  • decide on a design for the system that most closely aligns with the impacts they want (both steps are sketched in code after the figure below).
A diagram of the AI impact control panel’s location in the governance of an AI system.
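As a rough sketch of the first of these steps (not the control panel's actual implementation; the candidate designs, metric names, and bounds below are all hypothetical), narrowing candidate designs down to those whose measured impacts fall within acceptable ranges might look like this:

```python
# Hypothetical sketch of filtering candidate models by acceptable
# impact ranges -- not the control panel's actual implementation.

# Each candidate design is described by its measured impacts.
candidates = {
    "model_a": {"profit": 1.20, "false_positive_rate_disadvantaged": 0.08},
    "model_b": {"profit": 1.45, "false_positive_rate_disadvantaged": 0.21},
    "model_c": {"profit": 0.95, "false_positive_rate_disadvantaged": 0.04},
}

# Acceptable ranges set by the decision-makers (hypothetical values).
bounds = {
    "profit": (1.0, float("inf")),                     # must at least break even
    "false_positive_rate_disadvantaged": (0.0, 0.10),  # cap harm to the group
}

def acceptable(impacts: dict) -> bool:
    """A design is acceptable if every bounded impact lies within its range."""
    return all(lo <= impacts[m] <= hi for m, (lo, hi) in bounds.items())

shortlist = {name: imp for name, imp in candidates.items() if acceptable(imp)}
print(shortlist)  # only model_a satisfies both constraints
```

The second step, choosing among the designs that survive this filter, is where the trade-off judgements discussed next come in.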

Making these decisions involves understanding and balancing the trade-offs that exist between the different performance aspects of a system (such as how to balance a system’s fairness against its profitability).

There is no objectively ‘correct’ solution: the answer depends on the values and priorities of the decision-makers. The AI impact control panel is designed to elicit those values and priorities by iteratively asking users about acceptable ranges for different measures of performance, the relative importance of different objectives, and the relative desirability of different outcomes. It adapts the choices presented to users over time, efficiently discovering their preferences without overwhelming them.
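One simple way to elicit such preferences (a toy sketch of the general idea only; the control panel's real elicitation is adaptive and more sophisticated) is to repeatedly show decision-makers two candidate designs and keep whichever they prefer:

```python
# Toy pairwise-elicitation loop -- a sketch of the general idea,
# not the control panel's actual (adaptive) elicitation algorithm.

def elicit_preferred(candidates: dict, ask) -> str:
    """Repeatedly present two designs and keep the one the user prefers."""
    names = list(candidates)
    best = names[0]
    for challenger in names[1:]:
        # `ask` shows both designs' impacts and returns the preferred name.
        best = ask(best, candidates[best], challenger, candidates[challenger])
    return best

def console_ask(name_a, impacts_a, name_b, impacts_b):
    """Minimal console stand-in for the control panel's interface."""
    print(f"A: {name_a} {impacts_a}")
    print(f"B: {name_b} {impacts_b}")
    return name_a if input("Prefer A or B? ").strip().upper() == "A" else name_b

# e.g. winner = elicit_preferred(shortlist, console_ask)
```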

Because the users’ inputs are made explicitly through the control panel interface, they are also automatically documented by the software. This provides a clear record of what motivated the system’s design and how the system owners balanced and constrained its possible impacts.
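Because those inputs are structured, recording them is straightforward. A minimal sketch of what such a record might contain (the fields here are illustrative assumptions, not the software's actual log format):

```python
# Minimal sketch of an audit record for elicited decisions.
# The fields are illustrative assumptions, not the software's log format.

import json
from datetime import datetime, timezone

record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "decision_makers": ["risk officer", "ethics committee delegate"],
    "acceptable_ranges": {"false_positive_rate_disadvantaged": [0.0, 0.10]},
    "pairwise_choices": [{"shown": ["model_a", "model_c"], "chosen": "model_a"}],
    "selected_design": "model_a",
}

# Append one JSON record per decision session.
with open("impact_decisions.json", "a") as log:
    log.write(json.dumps(record) + "\n")
```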

A screenshot of the control panel’s interface for filtering unacceptable candidate models.

A step toward better AI governance

The AI impact control panel software helps the people in charge of an AI system understand its potential impacts and make informed decisions about its design. However, it is only one part of the broader work needed to prevent a system from doing serious harm.

How, when and to whom a system causes harm cannot be determined automatically. It requires careful examination of the system’s actions and consultation with the people it affects. If a potential harm escapes the notice of the designers of an AI system, then it cannot be taken into account in any decision-making process, our software included.

Even if the system’s important potential impacts can be identified, they need to be measured if they are going to form part of the design process. However, many impacts are difficult or impossible to quantify, especially when they involve physical or emotional harm to real people. System owners and regulators need to give careful consideration to whether AI is appropriate at all in these circumstances.

Finally, the AI impact control panel shows system owners their options; it does not force them to select one that is ethical or acceptable to society. They may, for instance, decide to ignore potential harms and deploy a harmful but profitable system. Our approach does not replace governance or regulation of AI systems but rather supports them by ensuring key design decisions are made by the right people and appropriately documented.

Continuing development

Ensuring AI systems make the world better, rather than worse, requires changing the way they’re designed, governed and regulated. The AI impact control panel aims to help encourage that change. These changes are not the same for every system, however: different use cases have different owners, impacts and risks. We have designed the software with this in mind, making it possible for organisations to add functionality and customise it to their needs, and we are releasing it as open-source software to make that easy.

Now that we have developed an initial version of the control panel, we are starting to trial it on real AI systems in production to validate our design, obtain feedback to improve it and, ultimately, give people more control over the impacts of their AI systems. Over time, we believe the approach to AI system design and governance embodied by our software will provide a practical way forward for building better AI and may inform AI regulation.
