Artificial Intelligence at the Crossroads: The Need for Security

From IC Insider Thales Trusted Cyber Technologies

By Bill Becker, CTO, Thales Trusted Cyber Technologies

The rapid acceptance of artificial intelligence (AI) across a range of applications in both the public and private sectors carries with it the promise of unprecedented speed and operational efficiency. That same speed of adoption, however, begs the question of whether today’s AI has incorporated adequate security.

There is substance behind that question – which is why in 2023 the industry analyst group Gartner placed a number of AI technologies in its Hype Cycle report, with generative AI at the top of its Peak of Inflated Expectations.

This dubious distinction should not be surprising to any of us. You can’t read or watch the news without the topic of AI coming up as one of the greatest innovations for human achievement in modern memory. And yet, for the very reason that AI has the potential to do great things, it is also one of the most significant potential causes for harm if used for malicious purposes. We must be clear that with the advent of AI, demands on security have never been greater than they are at this moment.

It is the potential threat to security posed by AI that prompted the Senate Cybersecurity Caucus this past May to introduce the Secure Artificial Intelligence Act of 2024. The legislation is intended to “improve information sharing between the federal government and private companies by updating cybersecurity reporting systems to better incorporate AI systems.” Additionally, the Act would create a “voluntary database to record AI-related cybersecurity incidents including so-called ‘near miss’ events.”

To fully prepare for the transformational possibilities of AI, we must enter into this new age with our eyes wide open, understanding that there are security considerations to address when implementing AI. These considerations fall broadly into three categories:

  • Security of AI – How secure AI is from interference from bad actors,
  • Security for AI – Essential practices to address AI’s security concerns, and
  • Security from AI – Positive contributions to cybersecurity that come from or are enhanced by AI.

 

In this article, we’ll take a clear-eyed look at each of these aspects of AI security. Before we do that, however, let’s understand the potential malicious uses of AI.

Malicious Use of AI

As with practically every technological innovation before it, AI can be used – and in fact is being used – by bad actors with malicious intent such as criminals, terrorists, and hostile nation-states.

These bad actors deploy a surprisingly wide range of tactics made easier through AI to disrupt operations in both public and private sector. To name just a few current and potential abuses of this technology:

Deepfakes and Disinformation. For the uninitiated, deepfakes involve creating fake audio or video content that convincingly impersonates real individuals. This manipulated media can then be used to discredit public figures by spreading false information, or to influence public opinion through deceptive content.

In many instances, they can be used to extort funds by impersonating someone’s loved ones over video calls. Deepfakes by their very nature can be hard to detect and stop, making them among the most serious potential threats.

Misuse of Military Robots. Adversaries could exploit AI-powered military robots for malicious purposes. These robots might be reprogrammed to cause harm or disrupt security systems.

Autonomous Weapon Systems. Rogue states or non-state actors might deploy AI-enhanced lethal drones or other autonomous weapons. These systems could potentially be operated without human intervention, posing a grave threat to global security.

Social Engineering. AI can be used to craft persuasive messages that deceive individuals into revealing sensitive information or taking harmful actions. Cybercriminals already leverage social engineering to exploit human psychology; the added capabilities unlocked by AI make this type of malicious activity particularly concerning.

Hacking and Cyberattacks. Malicious actors can use AI to automate attacks on computer systems, networks, and critical infrastructure. Techniques like fuzzing and neural networks enable the creation of sophisticated computer viruses.

Membership Inference Attacks. AI models can be reverse-engineered to infer membership in specific datasets. Attackers could exploit this capability to reveal sensitive information about individuals.

Security of AI: Common Attack Vectors

The other side of the coin of malicious uses of AI described above is the equally troubling potential that a user’s own AI implementations can be manipulated or subverted in adversarial attacks. There is a significant challenge to preventing machine learning models from being deceived and having their vulnerabilities exploited without the knowledge of network administrators.

Some common adversarial attack techniques include the following:

Poisoning Attacks. In the testing or deployment phase of machine learning, models are susceptible to evasion attacks. Attackers may inject malicious data into the training set, with small, imperceptible perturbations added to the input. This manipulation of input data may deceive the model into misclassifying data and producing incorrect output. The model learns incorrect patterns and may therefore potentially make wrong decisions.

Transfer Attacks. These attacks exploit the transferability property of machine learning models. An adversarial example crafted for one model can also fool other models with similar architectures. Transfer attacks can even target black-box models, where the attacker has no knowledge of the model’s architecture or parameters.

Prompt Injection Attacks. In this example, attackers craft input prompts to mislead the model. These prompts are specifically designed to exploit vulnerabilities and produce harmful responses.

Backdoor Attacks on AI Models. Malicious actors in this instance insert backdoors during training. These hidden patterns allow them to trigger specific behaviors in the model when certain conditions are met.

We’ve seen some of the inherent potential security problems of AI. In the next section, let’s take a slightly deeper dive into best practices to address security for AI.

Security for AI: What to Do

Whether it is personal identity information, corporate intellectual property, or even computer network information, there is a vast amount of sensitive data utilized by AI systems that needs to be protected before, during, and after use. Uses of that data can include training of large language models, input data, or the output of an AI solution, among others.

It’s absolutely essential to protect data in AI systems to maintain privacy, security, and ethical standards. Here are some essential practices:

Access Control and Authentication. Implement access controls to restrict data access based on roles and permissions. Use authentication mechanisms (for example, OAuth or API keys) to verify user identities.

Anonymization and Pseudonymization. Anonymize personally identifiable information (PII) to prevent direct identification. Use pseudonyms (fake names) for individuals in datasets.

Audit Trails and Logging. Maintain audit logs to track data access, modifications, and system activities. Logs help detect unauthorized access or suspicious behavior.

Data Encryption. Encrypt sensitive data at rest and during transmission.

Use strong encryption algorithms and secure key management practices.

Data Minimization. Collect only necessary data for model training and inference.

Avoid storing excessive or irrelevant information.

Data Masking and Tokenization. Mask sensitive data such as credit card or Social Security numbers with placeholders. Use tokens to represent sensitive information.

Regular Security Assessments. Conduct penetration testing and vulnerability assessments. Identify and address security gaps.

Secure APIs and Endpoints. Protect APIs and endpoints used for data exchange.

Use HTTPS and validate input data to prevent injection attacks.

Secure Model Deployment. Ensure that AI models are deployed in a secure environment. Limit access to model APIs and endpoints.

Secure Model Training. Protect training data from unauthorized access. Use federated learning to train models without sharing raw data.

Security from AI – How AI Helps Security

So far, we’ve spent a lot of time discussing the threats and potential harm related to AI, now let’s consider the good side of AI. How is AI being used to enhance cybersecurity? Here are ten examples of how AI is being used in the field of cybersecurity:

Intrusion Detection and Prevention. AI can identify unusual activity on a network and alert security personnel to potential threats. It helps detect unauthorized access attempts or suspicious behavior.

Cyber Threat Intelligence. AI analyzes data from various sources, including social media and the dark web, to identify emerging threats. It provides valuable intelligence to businesses, helping them stay ahead of cybercriminals.

Phishing Protection. AI analyzes emails and detects patterns indicative of phishing attempts. By identifying malicious links or suspicious content, it helps prevent successful phishing attacks.

Vulnerability Management. AI scans software and systems to identify vulnerabilities. It prioritizes which vulnerabilities need immediate attention based on their potential impact, allowing organizations to focus on critical issues.

Network Security. AI monitors network traffic and identifies unusual patterns that may indicate an attack, such as denial-of-service (DoS) attacks or unauthorized access attempts.

Password Security. AI helps users choose stronger passwords by identifying weak ones. Strengthening password security reduces the risk of unauthorized access.

User Behavior Analytics. AI monitors user behavior and identifies patterns that may indicate security risks. For example, it can detect unusual login locations or abnormal data access.

Threat Detection and Prevention. AI algorithms continuously analyze data to detect potential threats. Whether it’s malware, ransomware, or other malicious activities, AI helps prevent security breaches.

Vulnerability Assessment. AI assesses the security posture of systems, applications, and networks. It identifies vulnerabilities and recommends remediation steps.

Password Management. AI assists in managing passwords securely. It can suggest password changes, enforce password policies, and detect compromised credentials.

All of these examples demonstrate how AI enhances cybersecurity by automating tasks, improving threat detection, and enhancing overall security posture.

The introduction of artificial intelligence and machine learning touches nearly every aspect of our everyday IT experience – from enhanced user interaction to operational speeds that are orders of magnitude beyond our current capabilities. The significance of these advancements is comparable to creation of computer networking itself.

To go confidently into these new, virtually uncharted areas, we must temper our enthusiasm for AI with an awareness of what bad actors are planning or actually doing now to use this technology to their own nefarious ends. The need for incorporating comprehensive security measures with AI, including securing data in motion and at rest, has never been greater.

About Thales TCT

Thales Trusted Cyber Technologies, a business area of Thales Defense & Security, Inc., protects the most vital data from the core to the cloud to the field. We serve as a trusted, U.S. based source for cyber security solutions for the U.S. Federal Government. Our solutions enable agencies to deploy a holistic data protection ecosystem where data and cryptographic keys are secured and managed, and access and distribution are controlled.

For more information, visit www.thalestct.com

About IC Insiders

IC Insiders is a special sponsored feature that provides deep-dive analysis, interviews with IC leaders, perspective from industry experts, and more. Learn how your company can become an IC Insider.