Lots of thought leadership and energy
What a January - what will February bring?
January was full of energy and thought leadership for CCN and the Community. With two national reports released on the same day, new programs, and a wave of new partnerships, Canada looks ready to take on the world, although there is a lot of heavy lifting still to be done.
What's clear is that security in 2026 is about trust: trust in leadership, in governance, and in the signals we rely on.
Canada must build that trust through collaboration, uniting leadership, governance, and signals so that the country can prosper, stay safe, and continue to uphold the legacy and values we all love.
Article of the Month
2026 State of Cybersecurity in Canada Report is out!
Cybersecurity in Canada has crossed a line.
It is no longer a technical issue operating quietly in the background. In 2026, cyber risk is a leadership issue tied directly to trust, resilience, and business continuity.
The State of Cybersecurity in Canada 2026, released on January 28, makes this clear. The threat environment is no longer defined by isolated breaches, but by sustained pressure on identity, trust, and decision making.
Deepfakes, voice cloning, AI-driven fraud, ransomware, and supply chain disruption are no longer edge cases. They are now routine tactics. Attackers are not trying to break systems. They are exploiting trust, people, and process gaps. Identity has become the new perimeter.
For leaders, this changes the playbook.
The most damaging incidents outlined in the report did not start with sophisticated hacks. They started with ordinary decisions. A vendor relationship accepted without scrutiny. A help desk process built on implicit trust. A crisis plan that existed on paper but had never been tested.
What separates resilient organizations from exposed ones is not the absence of incidents. It is preparation. Organizations that rehearsed decision making, clarified authority, and coordinated across teams recovered faster and with less impact. Those that did not faced longer disruption, higher costs, and lasting reputational damage.
Artificial intelligence is accelerating this shift.
AI is making attacks faster and more convincing, especially in social engineering and impersonation. At the same time, it is becoming essential for detection and response. The report shows leaders walking a careful line. They recognize AI’s value but remain cautious about autonomy, governance, and data exposure.
That caution is healthy. The message is not to rush, but to act deliberately. Leaders who delay entirely risk falling behind adversaries already operating at machine speed.
The report also highlights a growing imbalance across the Canadian economy. Large organizations are advancing, while many small and mid-sized organizations struggle to keep pace, despite being deeply embedded in critical supply chains. Cyber resilience is now an ecosystem challenge, not an individual one.
Culture matters as much as technology. Employees remain the most targeted attack surface, not because of negligence, but because deception now outpaces human perception. Systems must assume deception and verify identity at moments that matter.
The conclusion for leaders is direct.
Cybersecurity in 2026 is about trust over tools. Preparation over reaction. Identity over perimeter. Leadership over delegation.
This is where the Canadian Cybersecurity Network plays its role.
CCN's role is to bring credible voices together, surface real world insight, and support informed decision making across sectors.
The State of Cybersecurity in Canada exists to help leaders cut through noise, understand what is changing, and act with clarity.
In a landscape defined by speed and uncertainty, shared insight and collaboration are strategic advantages. Canada’s strength has always been working together. Cybersecurity is no different. Download the report here.
CCN Insights
What Happens When AI Is Allowed to Act
In January, the Canadian Cybersecurity Network launched CCN Insights, a new report series designed to explore emerging cybersecurity and digital trust issues before they become headline incidents.
The first report, released at the NKST IAM Conference on January 28th, set the tone.
When AI Acts: Securing Autonomous Systems at Machine Speed looks at a shift already underway inside many organizations. Artificial intelligence is no longer limited to analysis or recommendations. Autonomous systems are now logging in, executing tasks, initiating transactions, and making decisions across enterprise environments. That transition changes risk.
One of the clearest findings from the report is that autonomy is scaling faster than trust. Organizations are moving quickly to deploy agentic AI for efficiency and productivity, yet identity assurance, governance, and accountability models are still built for human-paced decision making. When systems act at machine speed, the window for human intervention disappears.
Another key insight is that perception is no longer proof. Voice, video, and visual presence once served as trusted signals. Today, deepfakes and synthetic identities exploit those same signals to bypass controls. This affects people and autonomous systems equally. If decisions rely on appearance instead of cryptographic verification, both humans and AI can be misled.
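To make "cryptographic verification instead of appearance" concrete, here is a minimal sketch using a shared-secret HMAC: an instruction is trusted only if its signature checks out, no matter how plausible its apparent sender looks or sounds. The secret and payload are illustrative, not drawn from the report, and real deployments would use managed keys or public-key signatures.

```python
import hashlib
import hmac

# Illustrative shared secret; in practice this would be provisioned
# and rotated through a secrets manager, never hard-coded.
SECRET = b"provisioned-out-of-band"

def sign(payload: bytes, secret: bytes = SECRET) -> str:
    """Produce a verifiable signature for an instruction."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def is_authentic(payload: bytes, signature: str, secret: bytes = SECRET) -> bool:
    """Trust the cryptographic check, not how the request looks or sounds."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# A convincing-looking instruction without a valid signature is rejected,
# regardless of the voice or face that appears to deliver it.
payload = b"transfer $250,000 to account 8841"
assert is_authentic(payload, sign(payload))
assert not is_authentic(payload, "forged-or-missing-signature")
```

The same pattern applies whether the requester is a person on a video call or an autonomous agent: the decision keys off the verifiable artifact, not the presentation.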
The report also highlights a growing blind spot around non-human identities. AI agents authenticate using service accounts, API keys, and workload identities that often carry broad permissions and limited monitoring. When compromised, these agents do not look suspicious. They look like authorized infrastructure. That makes detection slower and consequences larger.
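A first practical step against this blind spot is simply inventorying non-human identities and flagging the risky ones. The sketch below, with entirely hypothetical account names and fields, flags wildcard permissions and stale credentials, the two traits that let a compromised agent pass as authorized infrastructure.

```python
from datetime import date

# Hypothetical inventory of non-human identities; names and fields
# are illustrative, not from any real environment.
service_accounts = [
    {"name": "ci-deploy", "scopes": ["repo:read", "deploy:prod"], "last_rotated": date(2025, 3, 1)},
    {"name": "ai-agent-1", "scopes": ["*"], "last_rotated": date(2024, 6, 15)},
    {"name": "billing-bot", "scopes": ["invoices:read"], "last_rotated": date(2025, 12, 2)},
]

def flag_risky(accounts, today=date(2026, 1, 31), max_age_days=90):
    """Flag wildcard permissions and credentials past their rotation window."""
    risky = []
    for acct in accounts:
        broad = "*" in acct["scopes"]
        stale = (today - acct["last_rotated"]).days > max_age_days
        if broad or stale:
            risky.append(acct["name"])
    return risky

print(flag_risky(service_accounts))  # ['ci-deploy', 'ai-agent-1']
```

Even this crude pass surfaces the accounts that deserve scoped permissions, short-lived credentials, and closer monitoring.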
Rather than calling for organizations to slow innovation, the report reframes the challenge as one of governance. Not all automation carries equal risk. High-consequence actions such as transferring funds, resetting credentials, or modifying access policies must require explicit verification, whether initiated by a human or an autonomous system acting on their behalf.
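That governance principle can be sketched as a simple policy gate: low-risk actions flow freely, while a named set of high-consequence actions is blocked unless an explicit out-of-band verification has occurred. The action names and function are illustrative assumptions, not an implementation from the report.

```python
# Actions that must never execute on a single signal,
# whether a human or an autonomous agent initiated them.
HIGH_CONSEQUENCE = {"transfer_funds", "reset_credentials", "modify_access_policy"}

def execute(action: str, initiator: str, verified_out_of_band: bool) -> str:
    """Gate high-consequence actions behind explicit verification."""
    if action in HIGH_CONSEQUENCE and not verified_out_of_band:
        return f"BLOCKED: {action} by {initiator} requires explicit verification"
    return f"EXECUTED: {action} by {initiator}"

# Routine automation proceeds; sensitive actions need a second factor of intent.
print(execute("summarize_report", "ai-agent-1", verified_out_of_band=False))
print(execute("transfer_funds", "ai-agent-1", verified_out_of_band=False))
print(execute("transfer_funds", "ai-agent-1", verified_out_of_band=True))
```

The point is that the gate keys off the consequence of the action, not the identity class of the initiator.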
What resonated most with leaders at the conference was that this is not a tooling problem. It is a leadership problem. Clarity of accountability, intent, and decision authority now matters more than adding new controls to old models.
For Canada, the implications are immediate. Deepfake-driven fraud is already measurable across financial services, telecommunications, and identity workflows. Growth rates are accelerating faster than policy, insurance, and organizational readiness. Organizations that cannot demonstrate strong verification and runtime governance will increasingly face operational, regulatory, and reputational pressure.
CCN Insights exists to surface these inflection points early. Each report is focused, practical, and vendor-agnostic. They are designed to help leaders understand what is changing and why it matters before risk becomes visible through failure.
Following the launch of its first report, CCN Insights will continue to publish targeted research on emerging cybersecurity and digital trust topics throughout the year.
Organizations interested in sponsoring a future CCN Insights report focused on a specific topic, sector, or emerging risk area are invited to reach out. CCN works with sponsors to ensure each report remains credible, balanced, and valuable to the community while providing meaningful visibility and engagement.
As AI evolves from assistant to actor, clarity and collaboration will matter more than speed alone. CCN Insights will continue to provide a neutral platform to support that conversation across Canada’s cybersecurity ecosystem.
CCN Contribution
Autonomous AI and the Moment Trust Fails (from CCN Insights)
No one working in IT or cybersecurity could have missed the recent surge of new products claiming to automate what were once expensive, time-consuming, human-dependent processes. Agentic AI has become the latest innovation hype cycle, with organizations actively exploring how to adopt its features and promised benefits.
Compared to the LLM wave of 2024-2025, which delivered limited business value beyond individual productivity, agentic AI holds far greater enterprise potential. In cybersecurity, agentic automation has already reshaped the security operations center. Tier-one responses are increasingly automated, enabling near-instantaneous action based on predefined runbooks, while human analysts focus on higher-value tier-two investigations.

The risk is that an autonomous system makes the wrong decision. A false signal could disconnect a critical system from the network. In healthcare, that could mean a ventilator or other life-sustaining device being taken offline, with potentially fatal consequences.

As with all software, coding errors are inevitable. With AI, risk extends beyond code to the data used for training. Training data may be mislabeled, poisoned, or deliberately fabricated. Adversarial machine learning and data poisoning are no longer rare events. Academic research is increasingly compromised by fake studies built on false premises and AI-generated content. Chinese paper mills churn out hundreds of fraudulent papers each month. In 2025, Russia reportedly spent more than 137 billion rubles, roughly 1.4 billion US dollars, on propaganda and troll farms designed to flood the internet with fabricated narratives. At scale, this creates a real risk that AI systems are trained on corrupted data. As geopolitical competition intensifies and the race for AI leadership accelerates, some nation states may see advantage in deliberately poisoning the data ecosystems their rivals rely on.
Beyond training data, the security of AI algorithms and the organizations that build and maintain them is equally critical. Power is increasingly concentrated in a small number of AI companies, raising concern among governments and enterprises alike. Verifying that employees are who they claim to be, and that they have legitimate reasons to modify AI systems, is essential. Zero trust, privileged access management, and multi-factor authentication are no longer optional. Employee vetting and continuous identity validation are now baseline requirements. The discovery of remote North Korean workers inside Amazon, identified only through a 110-millisecond keyboard delay, highlights the sophistication of modern infiltration tactics. Deepfakes further erode trust and complicate verification. These risks extend beyond AI vendors to the entire supply chain and anyone with access to critical systems.
Third-party security is now the single greatest exposure for most enterprises. Attackers increasingly compromise multiple organizations through a single vendor. In healthcare, where roughly 75 percent of connected endpoints are not managed by hospital IT, the danger escalates. Medical and IoT devices are often supplied and maintained by third parties and frequently remain unpatched despite known vulnerabilities. These systems connect both to enterprise networks and directly to patients. When AI and agentic capabilities are added, the risk increases sharply. Unless medical IoT devices are properly managed, segmented, and restricted, patient safety is directly at stake. Autonomous IoT systems amplify existing weaknesses, especially in environments where organizations have adopted a set-and-forget mindset and lack visibility into connected assets.
IoT vulnerabilities extend beyond patching and device management to identity and access control. Who should be authorized to access a life sustaining medical device? Who can modify the drug library of an infusion pump or the output of a CT scanner or radiotherapy system, and from which identities, IP addresses, and protocols? These questions are often unanswered, leaving dangerous gaps in control.
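One way to close those gaps is a deny-by-default policy that answers each of the questions above explicitly: which identities, which source networks, and which protocols may modify a given device. The sketch below uses Python's `ipaddress` module; the policy values, identities, and VLAN are all hypothetical.

```python
import ipaddress

# Illustrative policy: who may modify an infusion pump's drug library,
# from which network segment, over which protocol. All values are hypothetical.
PUMP_POLICY = {
    "identities": {"biomed-eng-01", "vendor-svc-pump"},
    "network": ipaddress.ip_network("10.20.30.0/24"),  # segmented clinical VLAN
    "protocols": {"https"},
}

def may_modify(identity: str, source_ip: str, protocol: str, policy=PUMP_POLICY) -> bool:
    """Deny by default; every dimension of the request must match policy."""
    return (
        identity in policy["identities"]
        and ipaddress.ip_address(source_ip) in policy["network"]
        and protocol in policy["protocols"]
    )

assert may_modify("biomed-eng-01", "10.20.30.7", "https")
assert not may_modify("biomed-eng-01", "192.168.1.5", "https")  # wrong segment
assert not may_modify("unknown-user", "10.20.30.7", "https")    # unknown identity
assert not may_modify("biomed-eng-01", "10.20.30.7", "telnet")  # wrong protocol
```

A request must satisfy every dimension at once, so a stolen credential alone, or network access alone, is not enough to reach the device.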
As organizations adopt increasingly autonomous technologies, security must advance at the same pace. Where systems cannot be patched or fully secured, compensating controls are essential. Multi-factor authentication, privileged access management, and effective identity and access management should already be baseline requirements. When autonomy increases risk, these controls are not optional. They are mandatory.
Richard Staynings is a globally renowned thought leader, author, public speaker, and advocate for improved cybersecurity across the Healthcare and Life Sciences industry.
Community
Only 11 days left until the Great Canadian CTF!
Don’t miss the Great Canadian CTF Tournament, Canada’s nationwide virtual cybersecurity showdown hosted on Hack The Box. Thirty-two teams from coast to coast will compete in a March-Madness-style bracket starting February 14, 2026, testing skills, solving real challenges, and earning bragging rights and prizes along the way. Whether you’re a student, a professional, or just levelling up, this is a chance to learn, sharpen your skills, and see community talent in action.
Community Voices
“Deepfake-as-a-service will scale deception the way phishing-as-a-service scaled email attacks. The real impact is trust decay: organizations will need to operationalize verification, not just awareness.” Cary Johnson
“As synthetic identity and AI manipulation accelerate, the challenge is no longer just detection, it’s whether organizations have built the governance, literacy, and accountability structures to respond with confidence rather than confusion.” Sandi Jones
“Autonomous AI agents move at machine speed, making static credentials and permanent access a silent risk most organizations underestimate. Runtime authentication and authorization are becoming the only way to keep access accountable, auditable, and governable as autonomy scales.” Ketan Kapadia
Signals
Interesting insights from our When AI Acts CCN Insights report, presented on January 28, and their impact on businesses and organizations. Download the report here.
CCN Podcast
Our podcast is off and running, with four episodes already out. We had conversations with Enza Alexander, James Cairns, Steve Waterhouse, and Neumann Lim, and a new episode drops each week. Check them out and subscribe to the podcast.
Upcoming Report
This report will come out in April and be presented in Toronto.
If you are an AI thought leader, get in touch as soon as possible, as we will be making decisions on authors shortly. For sponsorships, see the sponsorship package and contact us.
CCN Event of the Month
Join us for an exclusive evening designed for Canada’s top security leaders.
Like all of our other events, it has no sales pitches, no moderators, and no endless slide decks. Instead, it’s an open space where CISOs and senior security leaders can connect, share insights, and speak freely.
What to expect:
- Exclusive networking with peers at one of Toronto’s premier private clubs
- Drinks & hors d’oeuvres in a relaxed setting
- Open mic sessions & prizes designed to spark real conversations
BSides
BSides… is made out of people!
BSides Ottawa is a dynamic, grassroots-driven cybersecurity unconference powered entirely by passionate volunteers. It offers an open and inclusive platform for experts, industry professionals, and cybersecurity enthusiasts to share innovative ideas, tackle pressing questions, showcase research, and spark meaningful discussions. By bringing together the public and private sectors, BSides Ottawa strengthens trusted connections within the National Capital Region’s vibrant cybersecurity community.
"BSides Ottawa is a huge difference maker, as it brings government, private businesses, and everyone in the community together to learn and share about cybersecurity. This is rare currently in this country." Francois Guay, Founder, CCN
Coming Soon
February - CCN Circle Community Launch
February - Training Navigator
February - New Mentoring App and Program
April - The State of AI, Cybersecurity and Digital Trust in Canada
Get in touch for advertising, sponsoring or contributing to the various sections of our newsletter.