Unlocking the Potential: Large Language Models in Security Operations Centres
Part 1: Understanding the Landscape
In the first instalment of our blog series, we dive into the landscape of Large Language Models (LLMs) in Security Operations Centres (SOCs). We explore the potential these models hold for enhancing threat detection, investigation, and response, and offer insights into the risks and opportunities that come with this revolutionary approach to cybersecurity.
Innovation is not a choice but a necessity. A new vulnerability is discovered every 20 minutes, and within the next year there will be around 10 devices per person on the planet: over 80 billion IT assets that attackers can target. Enter Large Language Models (LLMs), a revolutionary development with the potential to fundamentally reshape Security Operations Centres (SOCs). In this first part of our series, we embark on a journey to comprehend the rapidly expanding landscape of LLMs, exploring both the promises they hold and the challenges they pose.
The Promise of LLMs:
Large Language Models, powered by advanced machine learning techniques, bring the promise of transforming the way we approach threat detection, investigation, and response. An LLM's ability to comprehend and contextualise vast amounts of textual data opens new avenues for proactive cybersecurity.
Opportunities Unveiled:
1. Enhanced Threat Detection: LLMs can analyse and correlate information across diverse sources in real time, enabling SOCs to detect sophisticated threats more efficiently. The last 3-5 years have seen machine learning implemented in technology platforms with the goal of reducing error rates; the next 3-5 years will see LLMs implemented to support the people and process aspects of security operations, helping SOC teams better integrate technology, people and process.
2. Contextualised Investigation and Response: By understanding the context behind security incidents, LLMs empower cybersecurity professionals to conduct more thorough and effective investigations. Automation powered, in part, by LLMs can also expedite response times, mitigating the impact of security incidents at a much higher velocity and with greater efficacy.
3. Enhanced Operators: Security operations staff should be curious to learn how LLMs will allow them to become more effective analysts in a shorter timeframe, with LLMs supporting SOC investigations and reducing the thinking time associated with determining threat type, severity and counter-measures.
With market demand for security analysts at an all-time high, the ability of LLMs to help junior SOC analysts become proficient quickly has huge value. Rather than replacing humans, the opinion of security leaders leans towards LLMs helping SOC teams create a better working environment and a more effective team that can handle a higher volume of incidents.
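As a rough illustration of the analyst-support idea above, the sketch below formats a raw security alert into a triage prompt that could be sent to an LLM for a severity assessment and suggested next step. Everything here is a hypothetical assumption for illustration: the alert fields, the `build_triage_prompt` helper, and the prompt wording are not any vendor's API, and a real deployment would pass the prompt to an LLM service with appropriate data-handling controls.

```python
# Illustrative sketch only: turning an alert into a triage prompt for
# an LLM. All field names and the helper are hypothetical, not any
# specific SOC platform's API.

def build_triage_prompt(alert: dict) -> str:
    """Format a raw alert into a prompt asking an LLM for a
    threat type, severity, and a recommended first step."""
    lines = [
        "You are assisting a SOC analyst. Assess the alert below.",
        f"Source: {alert.get('source', 'unknown')}",
        f"Rule: {alert.get('rule', 'unknown')}",
        f"Details: {alert.get('details', '')}",
        "Respond with: likely threat type, severity (low/medium/high),",
        "and the recommended first investigation step.",
    ]
    return "\n".join(lines)

example_alert = {
    "source": "EDR",
    "rule": "Suspicious PowerShell encoded command",
    "details": "powershell.exe spawned by winword.exe with encoded args",
}

prompt = build_triage_prompt(example_alert)
print(prompt)
```

The value for a junior analyst lies less in the formatting itself than in the consistent structure: every alert is assessed against the same questions, which shortens the thinking time the section above describes.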
Navigating Challenges:
1. Data Privacy Concerns: The use of large-scale language models requires careful consideration of data privacy and data security implications. Understanding where and how data is processed should be considered alongside assessing the risk of injection attacks against the model, whereby bad actors could manipulate the data set to force the LLM to misread or completely overlook certain events.
Security leaders cite requests from their executive teams to put guard rails around their organisation's use of LLMs, but are themselves unclear on the risk LLMs pose to the business and to cyber operations. IT leaders faced a similar challenge in the last decade with the advance of cloud: early adopters were seen as mavericks putting their data into the unknown, yet those concerns were eventually overcome. The same could be said of LLMs. Without guidance and knowledge of who has access and how the model repositories are protected, early adopters can be forgiven for being cautious.
With security technology and service providers leading the way, best practice is starting to emerge that mitigates early concerns. Active collaboration amongst security practitioners and ongoing consultation with subject matter experts will remain vital to staying informed and striking a balance between opportunity and risk.
2. Potential Biases: LLMs may be misinformed by the data they are trained on, so addressing and mitigating biases and false data is essential to ensure accuracy. The extent to which LLMs can be biased within the context of security investigations and response is unclear, but it is easy to imagine a scenario where threat actors attempt a data injection attack, influencing the training data to subvert security controls powered by an LLM.
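To make the poisoning scenario concrete, here is a deliberately minimal toy sketch, using invented data and a naive keyword-based detector rather than a real LLM: a handful of mislabelled samples injected into the training set is enough to make the learned detector stop treating a genuinely malicious pattern as a threat indicator.

```python
# Toy illustration with hypothetical data: how mislabelled injected
# samples can cause a naive learned detector to overlook a threat.
from collections import Counter

def train_keyword_detector(samples):
    """Learn which tokens appear more often in malicious samples
    than in benign ones; those tokens become 'threat indicators'."""
    mal, ben = Counter(), Counter()
    for text, label in samples:
        (mal if label == "malicious" else ben).update(text.split())
    return {token for token in mal if mal[token] > ben.get(token, 0)}

clean_data = [
    ("encoded powershell download", "malicious"),
    ("user login success", "benign"),
    ("encoded powershell payload", "malicious"),
]

# The attacker injects mislabelled copies of the malicious pattern.
poisoned_data = clean_data + [
    ("encoded powershell download", "benign"),
    ("encoded powershell download", "benign"),
    ("encoded powershell download", "benign"),
]

clean_model = train_keyword_detector(clean_data)
poisoned_model = train_keyword_detector(poisoned_data)

print("powershell" in clean_model)     # True: flagged as an indicator
print("powershell" in poisoned_model)  # False: poisoning hid it
```

Real LLM training pipelines are far more complex than this counting rule, but the failure mode is the same in spirit, which is why provenance and access controls around training data matter.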
3. Commercial Impact: As with all decisions on emerging technologies, the balance between viability and technical debt is a challenge. Typically, an LLM's commercial model is built around its output and not necessarily its effectiveness. Simply pushing the LLM agenda without first assessing the impact on your people and processes may mean the models never realise their full potential.
Stay tuned as we navigate the future of cybersecurity!
As we stand on the brink of this transformative journey, it’s vital to equip ourselves with a comprehensive understanding of the LLM landscape. In the upcoming parts of this series, we will delve into the critical decision-making process: whether organisations should build and train their own models or leverage the capabilities offered by security platform providers.
If you are reading this, you are someone who is fascinated by all things AI and LLMs. Exciting news: we have the perfect event for you!
Cybanetix is kicking off its event season with an AI community roundtable, where all guests can openly share their thoughts and queries in front of industry experts.
Head over to the sign-up page below to book your seat at the roundtable.

