Predictive AI: limits and considerations

Since my last post, I have been busy preparing for my third-year exams and writing a legal article on how different jurisdictions regulate the use of Artificial Intelligence in public sector decision-making. I am greatly drawn to administrative law, so I try my best to stay informed on this technology as it reshapes the practice. While I was writing the article, my professor, Dr. Raul Madden, recommended the book 'AI Snake Oil' by Arvind Narayanan and Sayash Kapoor. It is a very insightful overview of AI, and while it is not geared towards legal practitioners, it offers easy-to-understand explanations of the technology. The analysis and its implications are also supported by fascinating real-life case studies. (Stay tuned for a book review Dr. Madden and I are working on!)

In this post, I want to briefly share some concerns about AI as a decision-making tool discussed in the book. I will focus on predictive AI, a type of AI that uses data, statistical algorithms, and machine learning techniques to make predictions about future events or outcomes. Instead of responding based on pre-programmed rules, predictive AI identifies patterns in historical and real-time data to, well, predict the future. Predictive AI is the primary model used in Automated Decision-Making (ADM), and it holds the potential to displace human beings entirely from the decision-making process. Public and private sector players are investing heavily in this technology in pursuit of efficiency and cost-effectiveness. However, prediction is an extremely challenging task, and implementation of this technology must be supported by an informed consideration of its limitations.
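
For readers curious about the mechanics, here is a minimal sketch in Python of what "identifying patterns in historical data" looks like in practice, using the scikit-learn library. The feature names, the data, and the numbers are all invented for illustration; real systems are vastly more elaborate, but the basic shape is the same.

```python
# A minimal sketch of a predictive model: a logistic regression fitted to
# historical records so it can estimate the probability of a future outcome.
# The feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical records: [age, prior_incidents], plus whether the
# outcome of interest later occurred (1) or not (0).
X_train = rng.normal(loc=[35.0, 2.0], scale=[10.0, 1.5], size=(1000, 2))
y_train = (0.4 * X_train[:, 1] + rng.normal(0, 0.5, 1000) > 1.0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# The model produces a probability, not a certainty: everything it "knows"
# comes from patterns in the historical data it was fitted to.
new_case = np.array([[28.0, 1.0]])
print(model.predict_proba(new_case)[0, 1])  # estimated risk for a new case
```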

The key limitation is that predictive AI is only as good as the quality and range of the data it is trained on. Bias in that data, such as bias against a particular group, will be reflected in the system's outputs. And because the system is trained on past data, it cannot adjust in real time to new developments that are material to its assessments.
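
To see how bias propagates, consider a toy sketch with entirely synthetic data, in which past decisions were systematically harsher on one group for identical conduct. A model fitted to those historical labels faithfully reproduces the disparity:

```python
# A toy sketch of how bias in training data propagates to outputs.
# Here, historical decisions were harsher on "group B" for identical
# conduct; a model fitted to those labels learns the same disparity.
# All data is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 4000

group_b = rng.random(n) < 0.5   # 1 = group B, 0 = group A
conduct = rng.normal(size=n)    # identical distribution in both groups

# Historical labels: the same conduct was more likely to be recorded as
# "high risk" when the person belonged to group B.
labels = (conduct + 1.0 * group_b + rng.normal(0, 0.5, n) > 1.0).astype(int)

X = np.column_stack([conduct, group_b])
model = LogisticRegression().fit(X, labels)

# For identical conduct (0.0), the model predicts very different risks.
same_conduct = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(same_conduct)[:, 1])  # group A low, group B high
```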

As a consequence of this limitation, predictive AI can be problematic when applied outside the circumstances and data it was trained on. This can be particularly consequential in sectors such as immigration and refugee law, where any AI system would be engaging with extremely diverse subjects. An AI trained on data about decisions regarding one demographic can be "utterly useless"[1] when applied to another. The system is also likely to evaluate smaller demographics, such as minorities and factually anomalous cases, using data that is not representative of them.
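
A toy illustration of this out-of-distribution problem: below, a model is fitted to one synthetic population and then scored on a second population where the relationship between the feature and the outcome is reversed. The populations and effect sizes are invented; the point is only that accuracy can collapse when the training data does not represent the people being assessed.

```python
# A toy illustration of distribution shift: a model fitted to one
# population can be close to useless on another population whose
# feature-outcome relationship is different. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_population(n, effect):
    """Outcome depends on one feature, with a population-specific effect."""
    x = rng.normal(size=(n, 1))
    y = (effect * x[:, 0] + rng.normal(0, 0.5, n) > 0).astype(int)
    return x, y

# Train on population A, where the feature strongly predicts the outcome...
X_a, y_a = make_population(2000, effect=2.0)
model = LogisticRegression().fit(X_a, y_a)

# ...then score it on population B, where the relationship is reversed.
X_b, y_b = make_population(2000, effect=-2.0)
print("accuracy on A:", model.score(X_a, y_a))  # high
print("accuracy on B:", model.score(X_b, y_b))  # far below chance
```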

Consider the United States Public Safety Assessment, an AI tool designed to predict the risk of releasing defendants before trial. It was trained on data from 1.5 million people across 300 US jurisdictions. One might expect such a large data set to mitigate inaccuracies. Yet the tool did not account for the fact that one jurisdiction might have a significantly lower crime rate than another. When it was applied in one such low-crime county, its risk scores led to thousands of defendants being unnecessarily jailed for months before their trial.[2] Decision-making is often fact-sensitive, especially in the public sector, so discretion is essential. An AI must be trained very carefully if there is to be any confidence in its assessments and decisions. Indeed, this level of confidence may not even be achievable with current technology because of the '8 billion problem': the argument that training an AI to make consistently accurate predictions requires so much data that there are not enough humans on Earth to supply it.[3]
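
The base-rate problem the book describes can be sketched in a few lines. The numbers below are invented, and this is emphatically not the actual Public Safety Assessment; it simply shows how a risk score anchored to a pooled average, combined with a fixed detention threshold, ends up detaining nearly everyone in a county whose true rate of pretrial misconduct is far lower.

```python
# A synthetic illustration of the base-rate problem: a risk score anchored
# to a pooled average over-predicts in a county whose underlying rate is
# much lower. Invented numbers; not the actual Public Safety Assessment.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical rates: the pooled training data shows 30% pretrial
# misconduct, while one low-crime county's true rate is only 5%.
pooled_rate, county_rate = 0.30, 0.05
county_outcomes = rng.random(10_000) < county_rate  # True = misconduct

# A naive pooled score assigns everyone a risk near the pooled base rate,
# so a "detain if risk >= 25%" policy detains the entire county.
predicted_risk = np.full(10_000, pooled_rate)
detained = predicted_risk >= 0.25

needlessly_detained = detained & ~county_outcomes
print(f"share detained: {detained.mean():.0%}")                    # 100%
print(f"detained but harmless: {needlessly_detained.mean():.0%}")  # ~95%
```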

It is vital that, while we approach this technology with optimism in both the private and public sectors, we also maintain a healthy scepticism. Whether we are the subjects of AI decision-making, an organisation wishing to implement such a system, or counsel intending to challenge a decision assisted or made by AI, our approach must be framed by what the technology can actually achieve, rather than by what it is marketed as being capable of. Unsubstantiated claims about the function and quality of AI systems in the decision-making process should be treated with suspicion.

This has been a very brief look at the topic. Hopefully I will soon be able to share a link to a published article with my name on it that delves deeper. I am keenly interested in the impacts of AI technology and will continue to share my thoughts on the developments I track.


[1] Narayanan and Kapoor, AI Snake Oil, 51.

[2] Ibid, 51–53.

[3] Ibid, 97.
