
Impact of AI on Cybersecurity: Pros and Cons Analyzed By Experts

AI's impact on cybersecurity has brought both advantages and challenges. On the positive side, AI has significantly enhanced threat detection capabilities by swiftly analyzing vast datasets and identifying anomalies, enabling quicker responses to potential breaches. Additionally, it has improved incident response by automating processes, reducing human error, and expediting breach containment and recovery.
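To make the anomaly-detection idea concrete, here is a minimal sketch that flags unusual login events with an unsupervised model; the features, sample data, and threshold choices are invented for illustration.

```python
# Hypothetical sketch: flagging anomalous login events with an
# unsupervised model, one illustration of AI-assisted threat
# detection. Features and thresholds are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_of_day, failed_attempts, bytes_transferred_mb]
baseline = np.array([
    [9, 0, 12.4],
    [10, 1, 8.9],
    [11, 0, 15.2],
    [14, 0, 9.7],
    [16, 1, 11.1],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# A 3 a.m. login with many failures and a large transfer stands out.
new_events = np.array([[3, 7, 420.0], [10, 0, 10.5]])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY" if label == -1 else "normal"
    print(event, status)
```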

However, AI's adoption in cybersecurity has also introduced new challenges. Cybercriminals are now utilizing AI for more sophisticated attacks, automating the identification of vulnerabilities and crafting personalized phishing attempts. For Cybersecurity Awareness Month, we heard from experts across the industry on the impact of AI on cybersecurity - both the pros and the cons.


Joe Regensburger, Vice President of Research Engineering, Immuta

“AI and large language models (LLMs) have the potential to significantly impact data security initiatives. Organizations are already leveraging these technologies to build advanced solutions for fraud detection, sentiment analysis, next-best-offer, predictive maintenance, and more. At the same time, although AI offers many benefits, 71% of IT leaders feel generative AI will also introduce new data security risks. To fully realize the benefits of AI, it’s vital that organizations consider data security as a foundational component of any AI implementation. This means ensuring data is protected and in compliance with usage requirements. To do this, they need to consider four things: (1) “What” data gets used to train the AI model? (2) “How” does the AI model get trained? (3) “What” controls exist on deployed AI? and (4) “How” can we assess the accuracy of outputs? By prioritizing data security and access control, organizations can safely harness the power of AI and LLMs while safeguarding against potential risks and ensuring responsible usage.”
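As a minimal sketch of point (1) above, controlling what data reaches a training pipeline, the example below drops or pseudonymizes sensitive columns before training; the column names and redaction policy are assumptions for illustration, not a depiction of Immuta's product.

```python
# Hypothetical sketch: controlling "what" data is used to train a
# model by redacting sensitive columns first. Column names and the
# hashing policy are assumptions for illustration.
import hashlib
import pandas as pd

records = pd.DataFrame({
    "email": ["alice@example.com", "bob@example.com"],
    "ssn": ["123-45-6789", "987-65-4321"],
    "purchase_total": [102.50, 88.00],
})

SENSITIVE = ["ssn"]        # drop outright
PSEUDONYMIZE = ["email"]   # replace with a stable hash

def pseudonymize(value: str) -> str:
    return hashlib.sha256(value.encode()).hexdigest()[:12]

train_df = records.drop(columns=SENSITIVE)
for col in PSEUDONYMIZE:
    train_df[col] = train_df[col].map(pseudonymize)

print(train_df)  # the safer view handed to the training pipeline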


David Divitt, Senior Director, Fraud Prevention & Experience, Veriff

"We’ve all been taught to be on our guard about “suspicious” characters as a means to avoid getting scammed. But what if the criminal behind the scam looks, and sounds, exactly like someone you trust? Deepfakes, or lifelike manipulations of an assumed likeness or voice, have exploded in accessibility and sophistication, with deepfakes-as-a-service now allowing even less-advanced fraud actors to near-flawlessly impersonate a target. This progression makes all kinds of fraud, from individual blackmail to defrauding entire corporations, significantly harder to detect and defend against. With the help of General Adversarial Networks (GANs), even a single image of an individual can be enough for fraudsters to produce a convincing deepfake of them.


Certain forms of user authentication can be fooled by a competent deepfake fraudster, necessitating the use of specialized AI tools to identify the subtle but telltale signs of a manipulated image or voice. AI models can also be trained to identify patterns of fraud, enabling businesses to get ahead of an attack before it hits.


AI is now at the forefront of fraud threats, and organizations that fail to use AI tech to defend themselves will likely find themselves the victim of it."
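As a simple illustration of the pattern-recognition approach Divitt describes, here is a minimal sketch of a supervised fraud classifier; the transaction features, data, and labels are all invented for the example.

```python
# Hypothetical sketch: training a model on labeled transactions to
# flag likely fraud before an attack completes. Features and data
# are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [amount_usd, new_device (0/1), country_mismatch (0/1)]
X = [
    [25.0, 0, 0], [40.0, 0, 0], [12.5, 1, 0], [60.0, 0, 0],       # legit
    [950.0, 1, 1], [1200.0, 1, 1], [780.0, 0, 1], [999.0, 1, 0],  # fraud
]
y = [0, 0, 0, 0, 1, 1, 1, 1]

clf = LogisticRegression().fit(X, y)

# Score an incoming transaction before it completes.
risk = clf.predict_proba([[1100.0, 1, 1]])[0][1]
print(f"fraud probability: {risk:.2f}")
```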


Bala Kumar, Chief of Product at Jumio

“There are a number of commonly used verification tools out there today, like multi-factor authentication (MFA) and knowledge-based authentication. However, these tools aren’t secure enough on their own. With the rise of new technologies like generative AI, cybercriminals can develop newer and more complex attacks that organizations need to be prepared for. Fraudsters can leverage ChatGPT, for instance, to create more convincing and targeted phishing scams to increase their credibility and impact, victimizing more users than before.

This month’s emphasis on cybersecurity reminds us that organizations must build a strong foundation starting with user verification and authentication to efficiently protect customer and organizational data from all forms of fraud. Strong passwords and MFA are always beneficial to have, but with the increasing sophistication of cyberattacks, organizations must implement biometric-backed identity verification methods. By cross-referencing the biometric features of an onboarded user with those of the cybercriminal attempting to breach the company, organizations can prevent attacks and ensure that the user accessing or using an account is authorized and not a fraudster, keeping vital data out of criminals’ reach.”
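A rough sketch of the biometric cross-referencing step Kumar describes might compare an embedding captured at onboarding against one captured at login; the vectors and threshold below are placeholders, and a real system would derive embeddings from a trained face model rather than hard-coded numbers.

```python
# Hypothetical sketch: comparing the biometric embedding captured at
# onboarding with one captured at login. The embedding model is
# abstracted away; vectors and the threshold are assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

MATCH_THRESHOLD = 0.85  # tuned per deployment in practice

# In a real system these come from a face-embedding model applied
# to the onboarding image and the login capture.
enrolled = np.array([0.12, 0.88, 0.45, 0.31])
presented = np.array([0.10, 0.90, 0.43, 0.33])

score = cosine_similarity(enrolled, presented)
print("authorized" if score >= MATCH_THRESHOLD else "step-up / reject", score)
```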


Yariv Fishman, Chief Product Officer, Deep Instinct

“This Cybersecurity Awareness Month is unlike previous years, due to the rise of generative AI within enterprises. Recent research found that 75% of security professionals witnessed an increase in attacks over the past 12 months, with 85% attributing this rise to bad actors using generative AI.


The weaponization of AI is happening rapidly, with attackers using it to create new malware variants at an unprecedented pace. Current security mechanisms rooted in machine learning (ML) are ineffective against never-before-seen, unknown malware; they will break down in the face of AI-powered threats.


“The only way to protect yourself is with a more advanced form of AI: deep learning. Any other ML-based, legacy security solution is too reactive and too slow to adequately fight back. This is where EDR and NGAV fall short. What’s missing is a layer of deep learning-powered data security, sitting in front of your existing security controls, to predict and prevent threats before they cause damage. This Cybersecurity Awareness Month, organizations should know that prevention against cyber attacks is possible – but it requires a change to the “assume breach” status quo, especially in this new era of AI.”
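To give a concrete sense of what a byte-level deep learning classifier can look like, here is a minimal sketch in the spirit of published research models such as MalConv; it is not Deep Instinct's technology, and the architecture, sizes, and inputs are assumptions for illustration.

```python
# Hypothetical sketch: a deep learning malware classifier operating
# on raw file bytes, loosely inspired by research models like
# MalConv. Architecture and sizes are illustrative only.
import torch
import torch.nn as nn

class ByteClassifier(nn.Module):
    def __init__(self, embed_dim: int = 8):
        super().__init__()
        self.embed = nn.Embedding(256, embed_dim)        # one vector per byte value
        self.conv = nn.Conv1d(embed_dim, 32, kernel_size=16, stride=8)
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.embed(x).transpose(1, 2)                # (batch, embed, length)
        h = torch.relu(self.conv(e)).max(dim=2).values   # global max pooling
        return torch.sigmoid(self.head(h))               # P(malicious)

model = ByteClassifier()
fake_binary = torch.randint(0, 256, (1, 4096))           # stand-in for file bytes
print(model(fake_binary))                                # untrained score
```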


Olivier Gaudin, Co-CEO & Founder, Sonar

“This Cybersecurity Awareness Month (CAM), a message to business leaders and technical folks alike: Software is immensely pervasive and foundational to innovation and market leadership. And if software starts with code, then secure or insecure code starts in development, which means organizations should be looking critically at how their code is developed. Only when code is clean (i.e. consistent, intentional, adaptable, responsible) can security, reliability, and maintainability of software be ensured.


Yes, there has been increased attention to AppSec/software security and impressive developments in this arena. But these efforts come after the fact, i.e. after the code is produced. Failing to address security as part of the coding phase will not produce the radical change that our industry needs. Bad code is the biggest business liability that organizations face, whether they know it or not. And chances are they don't know it. Under their noses, technical debt is accumulating: developers waste time on remediation, pay a small interest on every change they make, and applications grow insecure and unreliable, making them a liability to the business. With AI-generated code increasing the volume and speed of output without an eye toward code quality, this problem will only worsen. The world needs Clean Code.


During CAM, we urge organizations to take the time to understand and adopt a ‘Clean as You Code’ approach. This will not only stop the technical debt leak, but also remediate existing debt whenever code is changed, drastically reducing cybersecurity risk - which is absolutely necessary for businesses to compete and win, especially in the age of AI.”
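One way to picture a ‘Clean as You Code’ workflow is a check that gates only new and changed code, so debt stops accumulating without a big-bang rewrite; the sketch below uses git to find changed files and a deliberately simple stand-in rule, not Sonar's analyzers, and the base branch name is an assumption.

```python
# Hypothetical sketch: gate quality checks on new and changed code
# only. The diff scope and the "check" are simplified assumptions
# for illustration, not Sonar's analysis engine.
import subprocess

def changed_python_files(base: str = "main") -> list[str]:
    # Assumes a git repo with a branch named "main".
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "--", "*.py"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line]

def check(path: str) -> list[str]:
    # Stand-in quality rule: flag TODOs left in changed files.
    issues = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if "TODO" in line:
                issues.append(f"{path}:{lineno}: unresolved TODO")
    return issues

if __name__ == "__main__":
    problems = [msg for f in changed_python_files() for msg in check(f)]
    for msg in problems:
        print(msg)
    raise SystemExit(1 if problems else 0)  # fail CI only on new issues
```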


David Menichello, Director, Security Product Management at Netrix

"Generative AI is creating an imbalance between offensive and defensive security teams. Generative AI is accelerating the development of exploits and payloads on the offensive side. Likewise, it is a good tool for the blue teams who defend their networks and applications for finding ways to automate and bridge gaps in a population of IT assets that could be vulnerable and not under one management program that’s easily patched, secured, or interrogated for susceptibility to attacks. There will always be an imbalance because the attack side can weaponize exploits quicker than the defense side and assess, test, and patch."


Doug Kersten, CISO, Appfire

“First and foremost, whether an employee has been at an organization for 20 days or 20 years, they should have a common understanding of how their company approaches cybersecurity and be able to report common threats to the security team.


It’s been refreshing to see security come to the forefront of conversation for most organizations. It was rare 20 years ago that cybersecurity awareness was even a training concern unless you were at a bank or regulated institution. Today, it is incredibly important that this heightened interest and attention to security best practices continues. With advancements in technology like AI, employees across industries will face threats they’ve never encountered before - and their foundational knowledge of cybersecurity will be vital.


Employees today should be well-trained on security standards and feel comfortable communicating honestly with their security teams. Even more important, security leaders should ensure their organizations have anonymous channels for employees to report concerns without fear of retaliation or consequence. By building education and awareness into the foundation of your organization’s security framework and empowering employees, the odds of a threat being realized decrease dramatically.”

###


