UK government quietly disbands AI ethics advisory board just months before hosting global conference

The UK government has disbanded the independent advisory board of its Centre for Data Ethics and Innovation (CDEI) without any announcement amid a wider push to position the UK as a global leader in AI governance.

When the CDEI launched in June 2018 to drive a collaborative, multi-stakeholder approach to the governance of artificial intelligence (AI) and other data-driven technologies, the original remit of its multi-disciplinary advisory board was to “anticipate gaps in the governance landscape, agree and set out best practice to guide ethical and innovative uses of data, and advise government on the need for specific policy or regulatory action”.

Since then, the centre has largely focused on developing practical guidance for how organisations in both the public and private sectors can manage their AI technologies ethically, publishing, for example, an algorithmic transparency standard for all public sector bodies in November 2021 and a portfolio of AI assurance techniques in June 2023.

The decision comes ahead of the global AI safety summit being held in the UK in November, and while the advisory board’s webpage notes it was officially closed on 9 September 2023, Recorded Future News reported that the government updated the page in such a way that no email alerts were sent to those subscribed to the topic.

Speaking anonymously with Recorded Future, a former advisory board member explained how the government’s attitude to the body shifted over time, as the government cycled through four different prime ministers and seven secretaries of state after board members were first appointed in November 2018.

“At our inception, there was a question over whether we would be moved out of government and put on a statutory footing, or be an arm’s-length body, and the assumption was that was where we were headed,” said the former board member, adding that the CDEI was instead brought entirely under the purview of the Department for Science, Innovation and Technology (DSIT) earlier in 2023.

“They weren’t invested in what we were doing. That was part of a wider malaise where the Office for AI was also struggling to gain any traction with the government, and it had whitepapers delayed and delayed and delayed.”

The former board member added that there was also very little political will to get public sector bodies to buy into the CDEI’s work, noting, for example, that the algorithmic transparency standard published in November 2021 has not been widely adopted and was not promoted by the government in its March 2023 AI whitepaper (which set out its governance proposals for the technology): “I was really quite surprised and disappointed by that.”

Speaking with Computer Weekly on condition of anonymity, the same former board member added that they were informed of the board’s disbanding in August: “The reason given was that DSIT had decided to take a more flexible approach to consulting advisers, picking from a pool of external people, rather than having a formal advisory board.

“There was certainly an option for the board to continue. In the current environment, with so much interest in the regulation and oversight of the use of AI and data, the existing expertise on the advisory board could have contributed much more.”

However, they were clear that CDEI staff “have always worked extremely professionally with the advisory board, taking account of its advice and ensuring that the board was kept apprised of ongoing projects”.

Neil Lawrence, a professor of machine learning at the University of Cambridge and interim chair of the advisory board, also told Recorded Future that while he had “strong suspicions” about the advisory board being disbanded, “there was no conversation with me” prior to the decision being made.

In early September 2023, for example, just before the advisory board webpage was quietly changed, the government announced it had appointed figures from industry, academia and national security to the advisory board of its rebranded Frontier AI Taskforce (previously it was the AI Foundation Model Taskforce).

The stated goal of the £100m Taskforce is to promote AI safety, and it will have a particular focus on assessing “frontier” systems that pose significant risks to public safety and global security.

Commenting on how the disbanding of the CDEI advisory board will affect UK AI governance going forward, the former advisory board member said: “The existential risks seem to be the current focus, at least in the PM’s office. You could say that it’s easy to focus on future ‘existential’ risks as it avoids having to consider the detail of what is happening now and take action.

“It’s hard to decide what to do about current uses of AI as this involves investigating the details of the technology and how it integrates with human decision-making. It also involves thinking about public sector policies and how AI is being used to implement them. This can raise tricky issues.

“I hope the CDEI will continue and that the expertise that they have built up will be made front and centre of ongoing efforts to identify the real potential and risks of AI, and what the appropriate governance responses should be.”

Responding to Computer Weekly’s request for comment, a DSIT spokesperson said: “The CDEI Advisory Board was appointed on a fixed term basis and with its work evolving to keep pace with rapid developments in data and AI, we are now tapping into a broader group of expertise from across the department beyond a formal board structure.

“This will ensure a diverse range of opinion and insight, including from former board members, can continue to inform its work and support government’s AI and innovation priorities.”

On 26 September, a number of former advisory board members – including Lawrence, Martin Hosken, Marion Oswald and Mimi Zou – published a blog with reflections on their time at the CDEI.

“During my time on the Advisory Board, CDEI has initiated world-leading, cutting-edge projects including AI Assurance, UK-US PETs prize challenges, Algorithmic Transparency Recording Standard, the Fairness Innovation Challenge, among many others,” said Zou.

“Moving forward, I have no doubt that CDEI will continue to be a leading actor in delivering the UK’s strategic priorities in the trustworthy use of data and AI and responsible innovation. I look forward to supporting this important mission for many years to come.”

The CDEI itself said: “The CDEI Advisory Board has played an important role in helping us to deliver this crucial agenda. Their expertise and insight have been invaluable in helping to set the direction of and deliver on our programmes of work around responsible data access, AI assurance and algorithmic transparency.

“As the board’s terms have now ended, we’d like to take this opportunity to thank the board for supporting some of our key projects during their time.”

Reflecting widespread interest in AI regulation and governance, a number of Parliamentary inquiries have been launched in the last year to investigate various aspects of the technology.

These include an inquiry into AI governance launched in October 2022; an inquiry into autonomous weapons systems launched in January 2023; another into generative AI launched in July 2023; and yet another into large language models launched in September 2023.

A Lords inquiry into the use of artificial intelligence and algorithmic technologies by UK police concluded in March 2022 that these technologies are being deployed by law enforcement bodies without a thorough examination of their efficacy or outcomes, and that those in charge of the deployments are essentially “making it up as they go along”.

Natalie Cramp, CEO of data company Profusion, said: “It’s deeply disappointing that the Government has taken the decision to disband the independent AI and data ethics advisory board. It’s another indication that the Government simply doesn’t have a coherent strategy towards data and AI, nor does it have strong stakeholder engagement on this topic. In November the UK is set to hold a global summit on AI safety, which is an ideal opportunity to position the UK in a leading role to define how AI can develop. However, when compared to the progress the US and Europe have made towards creating and debating AI legislation, the UK is far behind.

“The reality is that AI will become one of the defining technologies of the next few decades. Without well-thought-out rules that govern its development we risk, at best, wasting this opportunity to do a lot of good, and at worst, creating an environment where damaging and undesirable uses of AI thrive. This could lead to us regressing to a more unequal society.

“It’s particularly worrying that the Government has disbanded the advisory board because data ethics is a critical part of legislating a fast-moving technology like AI. When we put this move into the context of the failure to finalise a replacement of GDPR seven years after it was announced it would be scrapped, the ongoing issues around the Online Safety Bill, and the failure to introduce any wide-ranging AI regulations, it paints a picture of a Government that does not seem to have a strategy towards regulating and cultivating innovation.

“I hope that the AI safety summit does help to focus minds and results in the threat and opportunity of AI being taken more seriously. However, data ethics is broader than just AI, which leaves questions on how the Government is going to progress this without the advisory board. The establishment of DSIT was very positive, but we need real engagement and transparency with the sector, including SMEs, and the public. Concrete action is needed now before it is too late.”
