Why one industry veteran says it is time to replace vendor promises with scientific proof
For more than a decade, security awareness training has been marketed as the antidote to human error. It has been packaged as the cultural reset every company needs: gamified, engaging content that will turn ordinary employees into the first line of cyber defense. The demand is real: global cyber losses are rising, attacks are accelerating, and phishing remains the single most common cause of compromise. Yet beneath the polished marketing campaigns and upbeat dashboards lies an uncomfortable truth. According to industry veteran Cary Johnson, founder of Phishbusters and one of the most experienced phishing simulation specialists in North America, the cyber awareness industry is built on a measurement model that no longer deserves the trust of security leaders.
It is a bold claim, and one that challenges the foundation of a sector now worth billions. But Johnson argues that the problem is simple: the people selling cyber awareness solutions are the same people defining how success is measured, and then self-assessing their own performance. No one would accept that in any other corner of cybersecurity. You would never ask your MSSP to perform your SOC 2 audit. You would never let your endpoint vendor run your penetration test. Yet for nearly twenty years, organizations have accepted vendor-supplied metrics as proof of success in human risk reduction. The result is a credibility crisis that is quietly shaping boardroom conversations across Canada. Security leaders say it out loud more often than vendors care to admit: but does it work? Johnson believes the question keeps coming back because the industry has never built a scientific framework for answering it.
A model built on activity rather than impact
At the core of Johnson's critique is what he calls the activity bias that drives most awareness programs. For years, vendors have promoted engagement as the north-star metric. More games. More training modules. More quizzes. More touchpoints that create an illusion of progress. "When people are busy, they feel productive," says Johnson, "but busyness is not evidence of reduced risk." He argues that the industry has succeeded in selling entertainment but has failed to produce objective measurements that prove behavior change.
This activity-heavy model has side effects. It overloads users with well-intentioned but often outdated content, sometimes described as hack lore. Warnings about public Wi-Fi hotspots or airport charging stations may feel familiar, but they do little to protect against the sophisticated phishing campaigns now driving Canadian breaches. The cognitive load placed on employees is not proportionate to the risks they face. Without accurate measurement, organizations cannot calibrate their training to focus on real-world threats.
Why vendor benchmarks are a dead end
To solve this, Johnson says the industry must abandon its attachment to vendor-supplied benchmarks. Benchmarks are seductive because they offer a sense of competition: where do we stand compared to peers? But they are only point-in-time snapshots and do nothing to help an organization understand its own progress.
Instead, Johnson advocates for a scientific approach rooted in baselining. Rather than comparing one organization to another, he proposes a model that compares an organization to itself over time. It begins with a baseline established through an initial assessment. That baseline is then reset on a consistent rhythm, monthly or quarterly. Each reset serves two purposes: it defines a new starting point and reveals the incremental impact of everything that happened since the last measurement.
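The arithmetic behind this baselining model is simple enough to sketch. The snippet below is an illustration of the idea, under assumed data, not Phishbusters' actual methodology: each period's phishing-simulation click rate becomes the new baseline, and the delta against the previous baseline is the incremental impact of the intervening training.

```python
# Hypothetical sketch of baselining: each period's click rate becomes
# the new baseline, and the delta against the previous baseline shows
# the incremental impact since the last measurement. All numbers and
# names here are illustrative assumptions, not real program data.

def baseline_deltas(click_rates):
    """Given click rates from consecutive simulation periods,
    return each period's change relative to the preceding baseline."""
    deltas = []
    for previous, current in zip(click_rates, click_rates[1:]):
        deltas.append(round(current - previous, 3))
    return deltas

# Quarterly click rates: an 18% baseline improving over the year.
quarterly = [0.18, 0.14, 0.11, 0.10]
print(baseline_deltas(quarterly))  # → [-0.04, -0.03, -0.01]
```

A shrinking negative delta (as in the last quarter above) is itself a signal: it can indicate a program that is strengthening but beginning to plateau, which is exactly the stagnation question the self-comparison model is designed to answer.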
This allows security leaders to track whether their program is strengthening or stagnating. Over time it builds undeniable evidence of what works and what does not. The transformation is profound. Security awareness stops being a compliance chore and becomes a measurable component of risk reduction.
Focus on the few who drive most of the risk
One of Johnson's most pragmatic insights comes from segmentation analysis based on the Pareto principle. In every organization, three groups emerge from phishing simulations: a majority who never click; a large middle group he calls learners, who occasionally fail but improve; and a very small subset of movers, repeat offenders whose decisions repeatedly expose the organization to risk. With proper measurement, a company can direct its most focused interventions to the group that drives a disproportionate percentage of incidents. Meanwhile, the top performers can be leveled up into a strength, more likely to report real attacks faster than security teams can detect them.
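The three-way split described above can be sketched in a few lines. This is a minimal illustration under assumed thresholds and made-up data; the cutoff for a repeat offender is an assumption for the sketch, not Johnson's actual criterion.

```python
# Illustrative segmentation of phishing-simulation results into the
# three groups described above: never-clickers, learners, and movers
# (repeat offenders). The threshold and data are assumptions.

def segment_users(click_counts, repeat_threshold=3):
    """Map each user to a segment based on how many simulations
    they clicked on across a measurement period."""
    segments = {"never_click": [], "learners": [], "movers": []}
    for user, clicks in click_counts.items():
        if clicks == 0:
            segments["never_click"].append(user)
        elif clicks < repeat_threshold:
            segments["learners"].append(user)
        else:  # the small subset driving a disproportionate share of risk
            segments["movers"].append(user)
    return segments

results = {"ana": 0, "ben": 1, "cal": 4, "dia": 0, "eli": 2}
print(segment_users(results))
```

Once segmented, interventions can be budgeted per group: intensive coaching for the movers, periodic reinforcement for the learners, and reporting-focused enablement for those who never click.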
The benefit is operational as much as it is behavioral. Security teams overwhelmed by training obligations can finally allocate their resources based on evidence rather than one-size-fits-all mandates.
The threat landscape is evolving faster than training models
Johnson notes that phishing attacks themselves have transformed dramatically. The earliest phishing attempts were crude plain-text messages asking users to reset a BlackBerry ID. Today they are fully functioning replicas of brand sites, with professional design, tight copywriting, and sophisticated obfuscation of malicious elements. Many assume that artificial intelligence is responsible for this leap. Johnson says the real turning point arrived earlier, with the rise of phishing as a service. The PhaaS (phishing-as-a-service) ecosystem supplies attackers with ready-made kits that include templates, hosting, credential capture infrastructure, and automated delivery. It democratized cybercrime and reduced both the technical and financial barriers to entry. AI has since accelerated this evolution even further.
The takeaway is stark. If the threat landscape has evolved dramatically, training models built on legacy assumptions cannot hope to keep pace without measurement that reflects current realities.
The skill cyber professionals need most
Although the briefing focuses heavily on measurement, Johnson also offers advice for professionals working across security disciplines. The single most important skill, he says, is communication. Cyber professionals must learn to adjust their language to their audience.
With leaders, the conversation must connect directly to business outcomes. With peers, it can be technical and acronym-rich. With end users, it must be stripped of jargon and shaped around clear guidance. People do not need to understand the mechanics of a buffer overflow or a credential-harvesting page. They need to know how to avoid a mistake.
A vision for reforming a broken model
Across the industry, Johnson is known for his scientific approach to human risk. But he also has a broader ambition: he hopes to be remembered as the person who pushed the industry toward objective evidence and away from faith-based metrics. He believes the sector has its priorities reversed, the cart pushing the horse: activity driving the illusion of results, rather than measurement guiding strategy.
Fixing the model will not be easy. It requires vendors to relinquish the comfort of self-assessment. It requires security leaders to demand scientific validation. And it requires a cultural shift from compliance reporting to continuous improvement. But as cyber threats escalate and phishing kits become more accessible than ever, reform is no longer optional. Organizations across Canada are looking for clarity and accountability. Awareness training can reduce real risk, but only if the industry builds the measurement discipline to prove it.
Cary Johnson believes that future is possible. And he says it starts with one simple principle: measure what matters, and measure it scientifically. Everything else is noise.