Workplace AI: Regulators stepping in amid political distractions
With a new UK government, employers and employees might be forgiven for thinking that the National Artificial Intelligence Strategy of the Boris Johnson administration will be shelved. This assumption would be a mistake.
Kwasi Kwarteng, now Chancellor of the Exchequer, championed the AI Strategy as secretary of state for business under Johnson, and he sees AI as a driver of economic growth. With the rapid development and deployment of AI in the UK and abroad, the government’s focus will return to this area.
In the interim, AI has become increasingly embedded in people’s everyday lives and ever more visible in the workplace as a tool for recruiting and monitoring workers.
One of the key pillars of the government’s AI Strategy was governance, and with the distractions facing politicians, regulators have stepped into the gap. This has been driven by examples of employers and workers suffering harm as a result of the use of new forms of AI and algorithms.
Information Commissioner’s Office
In mid-July, the Information Commissioner launched a new strategic plan, ICO25. Its core objectives include a focus on AI-driven discrimination and a commitment to investigating the use of algorithms to sift job applications. This will sit alongside new guidance on ensuring algorithms treat people fairly.
The Information Commissioner’s Office (ICO) will look at biometric technologies such as facial recognition as well as new “emotion recognition” technologies. Such AI is being readily adopted by employers who want to monitor workers’ performance, notably that of home workers.
Companies should be aware that regulators, such as the ICO, will be focused on the use and impact of these technologies. Advice on avoiding breaches of existing discrimination and human rights laws should be sought before deployment. The ICO has produced an AI and data protection risk toolkit, but this will not encompass the full range of legal concerns that will arise for employers.
ICO25 specifically flagged the role of the ICO as a member of the Digital Regulation Cooperation Forum (DRCF) and its work with international counterparts on a cross-border basis. Those who operate in more than one country, including those who employ digital nomads or provide goods or services in other jurisdictions, should factor this cross-border approach into their plans. There may be several layers of regulation to comply with, and companies should expect agencies and regulators to cooperate and share information about a company’s use of AI or algorithms.
Equality and Human Rights Commission
The Equality and Human Rights Commission (EHRC) released its three-year strategic plan in the spring of this year, making the tackling of discrimination in AI a major strand. The design and use of automated decision-making, and algorithmic bias, are a core part of this and will affect employers who use AI even for discrete tasks. Alongside its strategic litigation role, the Commission may decide to exercise its inquiry and investigation powers.
A taste of how this might look comes from the Commission’s new Public Sector Equality Duty (PSED) and AI guidance. The focus includes obvious examples where a data set used to train AI might be biased, alongside cases where the AI might develop and accumulate biases over time, as occurs with machine-learning technology.
Even where companies are not directly subject to the PSED, businesses tendering for government work may have to explain how any AI component of their services operates and demonstrate that appropriate checks and oversight are in place.
Algorithmic auditing
Earlier in September, the UK’s Digital Regulation Cooperation Forum published its analysis of auditing algorithms and the role of regulators. It is evident from the examples used that the regulators intend to use existing powers to perform algorithmic audits. These include the Competition and Markets Authority’s information-gathering powers for market investigations or merger inquiries and the Financial Conduct Authority’s power to commission a “skilled person” or, in this instance, an algorithmic auditing expert.
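By way of illustration only, the sketch below shows one simple statistical check that an algorithmic audit of a CV-sifting tool might include: comparing selection rates across groups and flagging ratios below the conventional “four-fifths” threshold used in disparate-impact analysis. The data and group labels are hypothetical, and neither the DRCF paper nor any regulator prescribes this particular methodology.

```python
# Minimal illustrative sketch of a disparate-impact ("four-fifths") check.
# Hypothetical data; not a regulator's prescribed audit methodology.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, selected is a bool."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(outcomes, reference_group):
    """Each group's selection rate relative to the reference group.
    A ratio below 0.8 is the conventional flag for possible disparate impact."""
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical screening outcomes from a CV-sifting tool
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 35 + [("B", False)] * 65)
print(adverse_impact_ratios(outcomes, reference_group="A"))
# {'A': 1.0, 'B': 0.583...} -> group B falls below the 0.8 threshold
```

A check of this kind only surfaces a statistical disparity; whether that disparity amounts to unlawful discrimination is a legal question requiring expert advice.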
Companies that have not obtained proper expert input before using AI will be vulnerable to regulatory action and, ultimately, litigation. Litigation will increase as regulators force greater transparency around AI tools and their use, and as people become aware of the harms they have suffered. Employers are likely to be a focal point, as decisions made by AI and algorithms can have a fundamental impact on whether individuals obtain or retain employment, and on their pay levels where AI is used to monitor or evaluate performance.
EU AI Liability Directive
Meanwhile, the EU continues to move towards its objective of creating a legal framework for AI that seeks to balance innovation and rights. As part of this agenda, on 28 September, the European Commission proposed an AI Liability Directive, which is significant in two major respects.
First, the directive will introduce a “presumption of causality”, which seeks to ease the burden of proof that would otherwise fall on victims. Where a victim of AI can show that there was a failure to comply with an obligation and that there is a link with the harm suffered, a court can presume that the failure to comply caused the damage. Second, the directive will allow victims access to relevant evidence about high-risk AI systems.
This is particularly relevant for employers because any AI used to make decisions on recruiting, promoting or terminating workers is classified as high risk by the draft EU Artificial Intelligence Act (AIA). Crucially, it could plug the gaps in the EU AI framework highlighted by the Uber litigation in the Amsterdam District Court in March this year, where workers had to show they had been the subject of automated decision-making before they could access relevant data.
Lawyers should take steps to advise clients who use AI and algorithms in the recruitment or monitoring of their workers on potential breaches of discrimination and human rights law. Those who wait for further regulation will find that the amplificatory effects of AI can make any harm caused significant and, ultimately, costly.
Article by Sara Ibrahim and Matthew Hodson. First published in International Employment Lawyer.
Disclaimer
This content is provided free of charge for information purposes only. It does not constitute legal advice and should not be relied on as such. No responsibility for the accuracy and/or correctness of the information and commentary set out in the article, or for any consequences of relying on it, is assumed or accepted by any member of Chambers or by Chambers as a whole.