The Intersection of Artificial Intelligence (AI) and Cybersecurity: What Does This Really Mean?
Dr. Nathaniel J. Fuller
Senior Fellow – Artificial Intelligence (AI) and Innovation, Noblis
Adjunct Professor – Purdue University (Global)
10/8/24
Across industry sectors, the current state of cybersecurity and artificial intelligence (AI) can be summarized as the integration of machine learning (ML)
and deep learning (DL) into defensive and offensive strategies. For example, defensive
AI efforts consist of improving the security and resilience of information technology
(IT) systems against cyberattacks. These systems are designed with ML and DL to
identify unusual patterns in network traffic, behavior anomalies, and new forms
of malware. Automated response systems use learning algorithms to detect and
mitigate threats, isolate infected systems, block malicious traffic, or neutralize
attacks before human intervention is required. Predictive analytics are also employed to forecast
future vulnerabilities and attacks using real-time and historical data, while AI
endpoint protection provides device monitoring and identifies malicious
activity.
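The anomaly-detection idea described above can be made concrete with a simple statistical baseline. The following Python sketch is purely illustrative — the metric, baseline values, and threshold are assumptions, not drawn from any particular product — but it shows the core principle of scoring deviations from learned normal behavior:

```python
import statistics

def is_anomalous(value: float, baseline: list[float], threshold: float = 3.0) -> bool:
    """Flag a metric as anomalous when it lies more than `threshold`
    standard deviations from the baseline mean (a simple z-score test).
    Real AI-driven detectors learn far richer models of network behavior,
    but the idea of scoring distance from a learned norm is the same."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical requests-per-minute history for one host
baseline = [98, 102, 97, 105, 99, 101, 103, 96, 100, 104]
print(is_anomalous(101, baseline))   # typical traffic -> False
print(is_anomalous(5000, baseline))  # sudden burst -> True
```

In a production system, this scoring step would feed the automated response pipeline described above, which could then isolate the host or block the offending traffic.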
Offensive
AI, on the other hand, is used to develop new cyberattacks and automate the
exploitation of existing vulnerabilities. This emerging field combines ML and
DL with traditional attack vectors, creating sophisticated threats and attacks that
companies study and analyze to improve cyber defenses. However, ML and DL are both a blessing and a
curse for offensive and defensive AI. While they can improve and enhance
cybersecurity, they can also be used to launch attacks at unprecedented speed
and scale.
According
to a 2024 generative AI and cybersecurity report by Sapio Research and Deep
Instinct (Figure 1), 75 percent of cybersecurity experts and professionals
reported seeing an increase in cyberattacks over the past year, with 85%
attributing the rise to bad actors using generative AI. Nearly half (46%) of
all respondents believe generative AI will make businesses more vulnerable to
cyberattacks than they were before AI implementation. Thirty-nine percent
envision an increase in privacy concerns, while 37% said they believe an
increase in undetectable phishing attacks is possible. A third of respondents
also said they see an increased volume and velocity of attacks, as well as a
heightened presence of deepfakes used to orchestrate these attacks.
Forty-seven
percent of respondents indicated that their companies now have a policy to pay ransoms
associated with AI-driven cybersecurity attacks, an increase of 13% from last
year. Additionally, 42% of respondents reported paying for stolen data to be
returned, compared to 32% who did the same in 2022. This situation is
unprecedented because studies have shown that payments to hackers do not
guarantee remediation or mitigation of the attack. Forty-five percent of
those who paid cybercriminals still had their data exposed.
Figure 1. Concerns Around Implementation – Source: Sapio Research & Deep Instinct

The
future intersection of AI and cybersecurity will require the introduction of AI
Security (AISec) to mitigate AI-powered attacks or adversarial AI. AISec combines
human cognitive knowledge with AI to create a more adaptive and resilient Zero
Trust (ZT) model. Model access control and authentication identify and validate
actors, interfaces, and data inputs that can mislead ML and DL models into
making incorrect predictions or classifications. The goal is to reduce attempts to deceive ML and DL models with manipulated or malicious data, whether during inference or during the model's learning process.
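One building block of model access control — validating data inputs before they reach a model — can be sketched in a few lines. The feature names and ranges below are hypothetical examples, not part of any specific AISec standard:

```python
# Expected bounds for each feature a model accepts; values are illustrative.
ALLOWED_RANGES = {"packet_size": (0, 65_535), "ttl": (0, 255)}

def validate_input(features: dict) -> bool:
    """Admit an inference request only if every expected feature is present
    and within bounds -- a coarse first defense against manipulated inputs
    crafted to mislead a model at inference time."""
    for name, (lo, hi) in ALLOWED_RANGES.items():
        value = features.get(name)
        if not isinstance(value, (int, float)) or not lo <= value <= hi:
            return False
    return True

print(validate_input({"packet_size": 1500, "ttl": 64}))     # in range -> True
print(validate_input({"packet_size": 999_999, "ttl": 64}))  # out of bounds -> False
```

A gate like this complements — but does not replace — the authentication of actors and interfaces described above, since a properly authenticated actor can still submit adversarial data.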
Model
assessment and evaluations involve bias analysis, model and data drift, differential
privacy techniques, and homomorphic encryption to protect sensitive data used
for training ML and DL models, ensuring that models do not leak personal or
confidential information. Assessment and evaluation cards explain how the model
or models were statistically trained, what data or data sets were used, and
why. This is essential in ensuring that the model or models are trustworthy and
auditable. Lastly, assessment and evaluation cards include auditing diaries that
describe when and how the models were assessed, ensuring that they continue to
function securely and ethically over time.
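Of the privacy techniques mentioned above, differential privacy is the most readily illustrated. The sketch below applies the standard Laplace mechanism to a counting query, so an aggregate statistic can be released without exposing any single underlying record; the epsilon value is an illustrative choice, and real deployments tune it carefully:

```python
import math
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count perturbed with Laplace(0, 1/epsilon) noise, drawn via
    inverse-transform sampling. Smaller epsilon means more noise and a
    stronger privacy guarantee for the individuals behind the count."""
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

random.seed(7)  # fixed seed so the illustration is repeatable
print(dp_count(1_000, epsilon=0.5))  # a value near, but not exactly, 1000
```

Training pipelines use the same principle at larger scale (for example, by adding calibrated noise to gradients), so that a trained model cannot leak the personal or confidential records it was trained on.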
This article originally appeared in the Fall 2024 edition of Service Contractor magazine.