Tuesday, 1 August 2023

The legal and regulatory setting of Artificial Intelligence

You probably already know that artificial intelligence (AI) is rarely out of the news. Since the release of ChatGPT at the end of 2022, barely a week seems to pass without a story praising its benefits or warning of its dangers.

Whatever your view, it is clear that the fast-developing field of AI is here to stay. There is a growing need to consider AI risk management, particularly in sectors where AI can influence or inform decisions about individuals. The world of work is a prime example.

In this blog, we examine the UK's current (but evolving) legal and regulatory landscape for the use of AI in the workplace, and how employers can navigate it.

The regulatory environment

There is currently no international consensus on how AI should be regulated. The UK is contemplating "an innovative and iterative approach" to regulation, while the EU is preparing stringent regulation with strong limitations on the use of AI, and Italy has banned ChatGPT over privacy concerns.

In its recently published White Paper, A pro-innovation approach to AI regulation, the UK Government proposes a framework of non-statutory principles to be monitored and applied by existing regulators, rather than new legislation.

Of particular relevance to the employment sector, the Government would "encourage" the Equality and Human Rights Commission, the Information Commissioner, and the Employment Agency Standards Inspectorate to collaborate to produce joint guidance on the use of AI systems in recruitment and employment. The Government anticipates that the joint guidance will, in particular:

  • Explain what information employers should provide when putting AI systems in place.
  • Identify good practice for managing the supply chain, such as AI impact assessments or due diligence.
  • Set out appropriate measures for detecting, monitoring, and mitigating bias.
  • Advise on how to provide routes for contestability and redress.

However, following Rishi Sunak's remarks on his way to the G7 Summit, it is unclear whether the Government will in fact pursue this approach. He stressed the need for AI to be used "safely and securely, and with guardrails in place", striking what seemed a more cautious tone. Could this signal a shift towards a more regulated stance?

In his article Regulating Artificial Intelligence, Ian De Freitas (a partner in our Data, IP and Technology Disputes team) offers insightful commentary on the Government's recent White Paper. He examines the five core principles put forward by the Government and considers them against other recent developments.

Discrimination: Much has been made of the risk that bias in algorithms and AI will introduce new forms of discrimination or replicate existing ones. Amazon, for instance, notably had to scrap an AI recruitment tool that had taught itself to favour male candidates over female ones. The Equality Act 2010's existing anti-discrimination protections continue to apply to any form of AI used in employment, and employers should ensure that the AI they use does not breach them.

Data protection: Generative AI, such as ChatGPT, analyses input data to identify patterns and produce new, original content. Employers who use data in this way must ensure they comply with the UK GDPR and the Data Protection Act 2018. For further details, see the ICO's Guidance on AI and data protection.

Monitoring and surveillance: A reported one in three employees is subject to digital monitoring at work, for example through tracking software or remotely operated cameras. Royal Mail, for instance, has acknowledged using tracking technology to monitor the speed at which its postal workers complete deliveries. As noted above, employers should ensure that any surveillance of staff complies with data protection law and does not infringe employees' right to privacy under the Human Rights Act 1998.
