Experts from Darktrace, a leading global cybersecurity AI firm, offered their cybersecurity predictions for the upcoming year, including the rise of generative AI, growing concerns over OT attacks, and the proliferation of malware-as-a-service.
Nicole Carignan, VP of Strategic Cyber AI, Darktrace
Bigger won’t equate to better when it comes to LLMs.
For LLMs to reach their full potential in enterprise use, several challenges will need to be overcome in 2024. AI researchers will turn to new techniques to improve the performance and accuracy of LLMs without simply adding more compute and data.
Specifically, vector databases – which store numerical representations of data, including numerical representations of the similarities between data – will become more widely used to improve the performance of LLMs without increasing compute. Vector databases will also be used to facilitate knowledge sharing between different LLMs, enabling organizations to take learnings from one model and apply them to refine another. This type of technology will be key as developers look for faster and more efficient ways to improve their LLMs without requiring additional compute or data center resources.
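The core mechanism described here – storing numerical embeddings and ranking them by similarity – can be sketched in a few lines. The snippet below is a minimal illustration, not a real vector database: the embedding values, the stored snippet names, and the `nearest` helper are all hypothetical.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two vectors: dot product divided by the
    # product of their magnitudes (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "index": text snippets mapped to hypothetical embedding vectors.
# A real system would produce these with an embedding model.
index = {
    "password reset procedure":  [0.90, 0.10, 0.20],
    "quarterly sales figures":   [0.10, 0.80, 0.30],
    "credential recovery steps": [0.85, 0.15, 0.25],
}

def nearest(query_vec, k=2):
    # Rank stored entries by similarity to the query; return the top k.
    ranked = sorted(index,
                    key=lambda key: cosine_similarity(index[key], query_vec),
                    reverse=True)
    return ranked[:k]

# Embedding of a hypothetical query such as "how do I reset a password?"
print(nearest([0.88, 0.12, 0.22]))
```

A production vector database replaces the linear scan above with an approximate nearest-neighbor index, which is what makes retrieval cheap relative to adding model compute.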
Reinforcement learning is currently a key component of how LLMs are trained and refined over time, but one challenge facing developers is that each time a model is taken “offline” for updates, the reinforcement learning is overridden and they must fine-tune the model again from scratch. AI researchers will focus on finding new ways to make reinforcement learning more persistent, so that existing refinements remain in place each time the model is updated.
AI will hunt for software vulnerabilities – for the good guys and the bad guys
As AI becomes more widely used to augment software development, defenders will use it to find vulnerabilities in their software. On the flip side, AI could also become an even more powerful tool for adversaries to find and exploit new vulnerabilities in software to launch attacks.
Autonomous agents will augment attackers and defenders
Adversaries will focus their efforts on improving and optimizing autonomous agents (e.g. AutoGPT) to augment on-demand attacks. As autonomous agents grow more sophisticated, they will be better able to pivot and improve their decision making about the best next step to advance an attack. Currently, autonomous agents have limited capacity to make complex decisions, but as adversaries focus on optimizing and training existing agents, those agents will become even more capable of targeted and sophisticated actions. This will be a particular focus for nation-state adversaries.
At the same time, as adversaries double down on the use and optimization of autonomous agents for attacks, human defenders will become increasingly reliant on and trusting of autonomous agents for defense. To build this trust, explainability and transparency of autonomous agents and their decision making is critical.
Marcus Fowler, CEO of Darktrace Federal
Disruptive ransomware attacks will target operational technology behind critical infrastructure.
As network and email security improves and cybersecurity becomes a higher priority in both the public and private sectors, non-state actors such as cyber-criminal groups will gravitate toward less secured, softer targets, potentially Operational Technology (OT) environments linked to manufacturing and critical infrastructure.
As OT security teams struggle to balance legacy systems against the expanding wave of IT and OT interconnectivity in their environments, they will very likely find their organizations increasingly targeted by for-profit attackers.
These non-state actors do not have the same risk calculus as state actors, since they lack the same diplomatic and national security concerns. Not only will non-state actors likely be more willing to compromise critical infrastructure; their growing capability to launch sophisticated threats, combined with the increasing vulnerability of critical infrastructure as IT and OT systems merge into ‘cyber-physical systems’, dramatically raises the likelihood of ransomware and other disruptive attacks against critical infrastructure.
Cyber-criminals in particular know that critical infrastructure organizations cannot afford to have their systems down for even a moment. Any cyber disruption, however minor, can therefore be leveraged to extort victims for financial gain.
Hanah Darley, Director of Threat Research, Darktrace
Multi-pronged attacks will proliferate as malware-as-a-service grows
With the increase of ‘as-a-Service’ marketplaces, organizations will likely face more multi-phase compromises, in which one strain of malware (an info-stealer) is observed harvesting information, and that data is then sold to additional threat actors or used to deploy second- and third-stage malware or ransomware.
This trend builds on the concept of initial access brokers but utilizes basic browser scraping and data harvesting to make as much profit throughout the compromise process as possible.
This will likely result in security teams observing multiple malicious tools and strains of malware during incident response, with attack cycles and kill chains morphing into less linear, more abstract chains of activity – making it more essential than ever for security teams to apply an anomaly-based approach to stay ahead of asymmetric threats.
An early example of this trend in 2023 was the prevalence of info-stealing malware and the volume of organizations it affected, with compromised data often leading to credential harvesting and secondary infections.
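The anomaly-based approach mentioned above can be sketched in its simplest statistical form: learn a baseline for a metric and flag sharp deviations from it, regardless of which malware strain produced them. This is an illustrative sketch only, not Darktrace's actual method; the baseline figures, the threshold, and the `is_anomalous` helper are hypothetical.

```python
import statistics

# Hypothetical baseline: daily outbound data volume (MB) for one host.
baseline = [120, 135, 110, 128, 122, 118, 131, 125, 119, 127]

def is_anomalous(observed_mb, history, threshold=3.0):
    # Flag the observation if it lies more than `threshold` standard
    # deviations from the historical mean (a simple z-score test).
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = abs(observed_mb - mean) / stdev
    return z > threshold

print(is_anomalous(124, baseline))   # ordinary daily traffic
print(is_anomalous(2400, baseline))  # exfiltration-sized transfer
```

The point of the approach is that it keys on deviation from normal behavior rather than on signatures of any particular tool, which is why it still applies when an intrusion chains several distinct malware strains together.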
Toby Lewis, Global Head of Threat Analysis, Darktrace
2024 elections will be targets of deepfake propaganda and cyber attempts to control the narrative.
2024 is a year of some potentially high-stakes elections, with Russia, Ukraine and the US going to the polls to elect their leaders. The use of cyber to attempt to manipulate voters is nothing new, going back to at least 2016 and 2017 with public statements and accusations around the US and French presidential elections, as well as the UK's Brexit referendum.
But this year could see a new weapon being exploited: AI-generated deepfakes and propaganda, used either to destabilize the vote or to amplify one candidate over another. This capability is now more accessible than ever, with popular YouTubers and other content creators such as MrBeast and FootDocDana being mimicked in video ads to promote products they have had nothing to do with.
Whilst more outlandish comments and claims may be relatively easy to refute, subtle alterations and manipulations of content may be all that is needed to sow an element of doubt. One thing is for sure: consumers will need to be mindful of the content they're consuming and the reputability of their sources.
Max Heinemeyer, Chief Product Officer, Darktrace
AI will be further adopted by cyber attackers and might see the first AI worm
2023 was the year attackers tested tools like WormGPT and FraudGPT and began adopting AI in their attack methodologies. 2024 will show how more advanced actors, such as APTs, nation-state attackers and advanced ransomware gangs, have started to adopt AI. The effect will be even faster, more scalable, more personalized and contextualized attacks with reduced dwell time.
It could also be the year of attackers combining traditional worming ransomware - like WannaCry or NotPetya - with more advanced, AI-driven automation to create an aggressive autonomous agent with sophisticated, context-based decision-making capabilities.