
Artificial Intelligence (AI) is transforming how businesses operate, but it’s also opening the door to new threats. Microsoft Copilot Studio and Azure Health Bot, for instance, were flagged for AI-related vulnerabilities in this year’s report.
Threat actors are already using AI to automate attacks, find weaknesses faster and even write malicious code. We haven’t yet seen a large-scale attack where an AI or large language model (LLM) serves as the main infection point, but that day is coming.
The biggest question on the horizon: can we trust the output from AI tools? What if the answers, code or insights we get from AI have been quietly manipulated by an attacker? Canadian companies need to think about how to secure not just their AI tools, but also the data and systems that feed them. AI security can’t be an afterthought; it must be built into every layer of your defence strategy.
The power of “least privilege” in a “zero-trust” world