The modern cybersecurity analyst is drowning in data. A relentless stream of alerts, logs, and threat intelligence reports washes over them daily. The attack surface keeps expanding and adversaries grow more sophisticated. Security professionals have noted an increase in attacks fueled by bad actors using generative artificial intelligence (AI). Is this the breaking point for human analysts? Are they about to be swept away by the relentless tide of automation and AI, replaced by bots and agents?
Generative AI: Hype vs. Reality
Analyst firm Gartner placed generative AI between the "Peak of Inflated Expectations" and the "Trough of Disillusionment" in its 2024 hype cycle for emerging technologies, indicating that the technology is still far from proven, given its unpredictable, inconsistent outputs and penchant for hallucinations.
Allie Mellen, a prominent security industry analyst, recently observed that there was noticeably less excitement over generative AI at Black Hat USA than the significant buzz surrounding it at RSA earlier this year. This decline in enthusiasm was likely because many attendees, particularly potential clients, felt let down by generative AI demos that frequently overstated their capabilities. Nevertheless, Mellen emphasized that generative AI does offer valuable applications in security operations, anticipating that the diminished hype would pave the way for discovering more practical and effective uses.
The Automation Imperative
It's undeniable that automation and AI are rapidly transforming security operations. The sheer volume of alerts, which often exceeds human capacity to process, makes this shift inevitable. According to a 2024 SANS survey on the state of automation in security operations, organizations are already automating between 29% and 51% of incident response processes. Phishing response, vulnerability management, and data enrichment were the top use cases for security automation.
The impact of AI and automation is particularly pronounced in the Managed Security Service Provider (MSSP) landscape. A survey we conducted this year revealed a resoundingly positive sentiment towards automation among MSSPs. An impressive 82% of the MSSP professionals surveyed reported high or medium utilization of automation, and 60% reported high-to-moderate usage of AI capabilities. Two-thirds (67%) attributed revenue growth to automation, and 87% of respondents said automation had a positive impact on their job satisfaction.
Augmentation, Not Replacement: The Human Element Remains Crucial
We found that automation rarely equates to a pink slip for a human analyst. Only 4% of MSSPs surveyed reported using automation to replace workers. The reality is that most organizations are struggling to hire and retain qualified cybersecurity professionals.
The role of the cybersecurity analyst will evolve from a reactive, alert-driven model to a proactive, threat-hunting mindset. Critical thinking, creativity, and strategic decision-making, quintessential human skills, will be more valuable than ever. Instead of a robot takeover of security operations, the future is more likely to be that of an augmented, cyborg-like cybersecurity analyst. Picture AI that makes human practitioners more effective instead of replacing them.
By embracing and communicating the "cyborg" model, organizations can not only empower their analysts to work more efficiently, but also gain crucial buy-in from their security teams for automation initiatives. This approach can help foster a collaborative environment where analysts are eager to leverage new technologies because they are not afraid of being replaced by them. The result is a more effective, efficient, and engaged security team focused on tackling the most critical cybersecurity challenges.
The author works for Vancouver-based cybersecurity firm D3 Security. Views are his own.