The proliferation of artificial intelligence (AI) has been rapid, with 9 in 10 leading businesses (across 112 countries) leveraging some form of the technology within their operations.
New use cases are constantly emerging, with businesses across diverse sectors worldwide benefitting from enhanced analytics, productivity and a reduction in manual tasks.
In a rapidly evolving environment, businesses want to implement AI to remain competitive and avoid falling behind. As with any innovation, companies want to know how to adopt AI while remaining compliant with upcoming regulations. With the landscape still largely unregulated, this question is especially pressing: the decisions companies make today will shape their operations for years to come.
Rules around AI will likely cover its general development and implementation, but they should also address specific use cases. This will be particularly relevant to content moderation: the practice of removing illegal, irrelevant or harmful material from online platforms.
We are already seeing a surge of AI-generated images, many of which are age-restricted or even illegal. AI-generated deepfakes have increased by 780% across Europe in the past year, with celebrities, politicians and law enforcement all highlighting the consequences of harmful AI-generated content. Meanwhile, it remains unclear how legislators will address content moderation in upcoming regulations. The changes are likely to be significant, and businesses will need to be aware of them. They will therefore need tools that can identify and remove illegal content, including material created using AI, and should work with like-minded businesses to tackle the issue.
Emerging AI Regulation
With the proliferation of AI, we can expect legislators across the globe to announce how they plan to regulate the technology in the coming months. The EU’s proposed Artificial Intelligence Act is a major step in this direction: it is the first piece of draft regulation from a major legislative body and is expected to set the global standard for legislation in this area. The law assigns AI uses to three categories: those posing unacceptable risk, which are prohibited; high-risk uses, which face strict obligations; and other uses, which are left largely unregulated.
Although the proposed AI Act is breaking new ground, EU businesses do not have to wait to prepare: the regulator has set up a tool that gives businesses insights into their prospective regulatory obligations.
This tool indicates who would be subject to the legislation, based on the Act's definition of AI, how the technology is used and the role of the organisation. As a result, companies can learn which obligations they would face if and when the regulation is approved, although the exact wording and implementation of the Act remain subject to debate.
EU businesses should monitor this legislation closely to understand its impact, and businesses operating in other jurisdictions should do the same. While it is not the final text, it will likely inform future legislation not just across the EU, but also in the UK, US and beyond.
Unlike the internet, where regulatory action is still being debated decades on, we can expect AI legislation to arrive far more swiftly. Regulators, the business community and society at large have learnt lessons from the internet, which was left largely unregulated to allow innovation and businesses to flourish; the online safety debate continues even now, with little solid guidance from regulators.
A broadly shared regulatory consensus is likely to emerge, built on the realities of AI implementation, the expertise of third parties and regulatory necessities.
Preparing Your Business
Businesses must keep an eye on the key documents, launch timings and implications of upcoming AI legislation. They must also assess how they are deploying, or expect to deploy, AI in their organisations in order to understand their compliance obligations. As with any regulatory consideration, compliance will likely come with a monetary cost that needs to be factored in. For high-risk organisations, especially small and medium-sized enterprises, outsourcing these solutions to experts may be more cost-effective than developing and implementing technology in-house. Sourcing solutions from vendors limits the need for in-house expertise and helps ensure that AI is used in an explainable way, which is crucial under regulatory scrutiny.
For businesses impacted by regulations, a thorough understanding is essential, as is staying informed on relevant regulations and guidelines in every market in which they operate. For some business leaders, this task may seem daunting, but there are simple steps to keep up.
Firstly, business leaders should engage with their peers, both in their industry and in the wider ecosystem, to keep up with new developments, sharing best practices in working groups and industry forums. Populated by subject-matter experts, these platforms can help address real issues in implementing regulation.
Impacted businesses should also perform regular audits and risk assessments of their AI systems to understand their compliance position and exposure. Throughout this process, businesses should maintain documentation of policies, procedures and decision-making processes; these documents can serve as evidence of compliance or provide transparency to regulators and partners. For a balanced view, risk assessments can be conducted by third parties with broader experience.
Leaders must also put training and development in place at all levels for employees involved in AI development and deployment. Robust training ensures these individuals understand their responsibilities regarding compliance and ethical AI use. With that groundwork laid, businesses can implement continuous improvement practices, proactively addressing emerging challenges as they arise, so that AI governance improves over time based on feedback, lessons learned and emerging best practices.
These practices will differ by industry, company size and function. In content moderation, however, it means aligning with like-minded businesses to roll out a solution that can identify age-restricted and illegal AI-generated content so that action can be taken accordingly.
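The moderation workflow described above can be sketched in code. This is a minimal, hypothetical illustration only: the detector functions are stand-in stubs (real deployments would call trained classifiers or a vendor API), and all names here are illustrative assumptions rather than any particular product's interface. The point is the escalating decision logic, and the audit trail that regulators are likely to expect.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    ALLOW = auto()
    AGE_RESTRICT = auto()
    REMOVE = auto()


@dataclass
class ModerationResult:
    action: Action
    reason: str  # retained as documentation for audits and transparency


# Hypothetical detector stubs -- in practice these would be trained
# classifiers or third-party vendor calls, not simple flag lookups.
def looks_ai_generated(item: dict) -> bool:
    return bool(item.get("ai_generated", False))


def is_illegal(item: dict) -> bool:
    return bool(item.get("illegal", False))


def is_age_restricted(item: dict) -> bool:
    return bool(item.get("age_restricted", False))


def moderate(item: dict) -> ModerationResult:
    """Apply escalating checks: illegal content is removed outright,
    age-restricted content is gated, and everything else is allowed.
    Provenance (AI-generated or not) is recorded in every reason string
    so decisions remain explainable under regulatory scrutiny."""
    provenance = "ai-generated" if looks_ai_generated(item) else "provenance unknown"
    if is_illegal(item):
        return ModerationResult(Action.REMOVE, f"illegal content ({provenance})")
    if is_age_restricted(item):
        return ModerationResult(Action.AGE_RESTRICT, f"age-restricted ({provenance})")
    return ModerationResult(Action.ALLOW, f"no issues found ({provenance})")
```

Keeping the detection stubs separate from the decision logic mirrors the outsourcing point made earlier: a vendor's classifiers can be swapped in behind the same interface, while the organisation retains an explainable, documented record of every decision.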