
2024: Navigating GenAI Fears, Ransomware Threats, and AI Risks in the Age of Cyber Resilience

With new cyber threats constantly emerging, cybersecurity predictions act as a compass, helping organizations navigate the ever-changing threat landscape. By staying informed and prepared, businesses can adapt quickly, minimize potential damage, and safeguard their sensitive data and digital assets in an increasingly hostile online environment. We heard from leaders at Immersive Labs about what the cybersecurity industry should be prepared to navigate in 2024.

Kev Breen, Director of Cyber Threat Research, Immersive Labs

FUD around GenAI will die down: GenAI hit the technology scene in a huge way this past year, and companies are either embracing it heavily or racing to adopt it so they don't fall behind. But alongside all this popularity, we're simultaneously seeing a significant amount of FUD (fear, uncertainty, and doubt) and misunderstanding. People still don't fully understand AI's risks and potential, which lends itself to paranoia and unfounded fears of massive AI-driven cyberattacks. In the year ahead, we'll hopefully see the hype around AI die down and the technology become more of the norm, so we can focus on the many benefits of using these tools to work more efficiently and effectively. A handful of organizations are already dedicating ample time and resources to the actual use cases of this technology, and we can expect more businesses to follow suit.

Too much time for exploitation: Despite government intervention to try to strengthen transparency and guidance around cybersecurity practices, many standard implementations still haven't kept pace. For example, FedRAMP guidelines give organizations 30 days to remediate high-risk vulnerabilities, yet attackers need only a single day to discover a vulnerability and exploit it, wreaking havoc on systems and causing costly damage. Cybercriminals will likely continue to enjoy first-mover advantage, so it is security teams' responsibility to assume compromise and remain cyber resilient, as it is unlikely that guidelines such as FedRAMP will be updated to match today's threat landscape.

Continued development of AI policies: We already began to see this toward the end of 2023, but in 2024, we can expect governments and AI service providers to continue to implement policies regulating the development of AI. The key differentiator will be whether these entities have moved beyond the shock and awe of AI to focus on its benefits. Risk assessment will continue to be part of the equation, as it should be with any advancement in technology, but policies that prioritize innovation rather than fear will set countries apart. In 2023, we focused on the potential risks of AI. In 2024, it will be essential to focus on the potential opportunities.

Ransomware isn’t going anywhere, so be prepared: One can hope that organizations have learned from the major data breaches of the last year, but unfortunately we continue to see many organizations that are simply not ready to handle the impact of a ransomware attack. Organizations still fall victim to the tried-and-true tactics cybercriminals use to gain access to their most sensitive information, and despite government advisories saying otherwise, they continue to pay the ransom, which is why this attack style remains popular. We should expect ransomware groups to leverage new techniques in Endpoint Detection and Response (EDR) evasion, quickly weaponizing zero days as well as newly patched vulnerabilities, making it easy for them to bypass common defense strategies. As a result, security teams can't rely on an old security playbook. Rather than worrying about how to detect everything, companies should assume that at some point an attack will succeed and have plans in place to respond effectively.

AI risks will largely stem from developers and application security: When talking about the risks of AI, many think of threat actors using it in nefarious ways. In actuality, in 2024 we should be most concerned about how our internal teams are using AI, specifically those in application security and software development. While AI can be a powerful tool for offensive and defensive teams and SOC analysts to parse and enrich information, without proper parameters and rules governing its use, it can create unexpected risks for CISOs and business executives, leaving holes in their cyber resilience and opening the door to exploitation.


Max Vetter, VP of Cyber, Immersive Labs

Cyber compliance still won’t mean cyber security: Compliance is a necessary evil for an organization’s security posture. Without it, organizations operate with little structure or accountability, yet a compliant organization often has a false sense of security. By contrast, organizations that prioritize resilience within their workforce’s cybersecurity efforts create a more secure environment. High-profile examples from the past year, like the MGM breach and the SolarWinds CISO lawsuit, should be a springboard for security and IT leaders to prioritize workforce cyber resilience in 2024 rather than compliance alone. The bottom line: the most compliant organizations are not necessarily the most secure organizations.


