AI Security Risks and the Coming Quantum Threat

From IC Insider Thales Trusted Cyber Technologies

By Gina Scinta, Deputy CTO, Thales Trusted Cyber Technologies

Artificial intelligence (AI) is rapidly transforming our world, from the way we work to the way we interact with machines. As AI becomes more sophisticated, so too do the potential security risks.

Although AI and quantum computing can enable better security, they also threaten security at the same time. These converging technologies independently pose an existential risk to cybersecurity, and both have significant security ramifications.

Let’s look at some of the critical issues at the intersection of AI, quantum and security. In this article, we’ll make sense of the language surrounding quantum computing and AI. We’ll also look at the malicious use of AI, and outline a strategy for guarding AI systems and the coming quantum computing threat.

 Detangling quantum computing

 Security experts typically group quantum computing into four areas:

Quantum computing. Utilizing quantum computers to do complex computations that can’t be achieved by classical computing platform.

 Quantum Key Distribution (QKD). This is a distinct technology from quantum computing. It uses the inherent physics in quantum mechanics to securely distribute cryptographic keys across different endpoints.

Quantum Random Number Generation (QRNG). This refers to leveraging the physics of quantum not for computational efforts, but to generate randomness to derive random numbers through quantum entanglements of measuring how photons react to sensors.

Post-Quantum Cryptography (PQC). This refers to the concerns of how a cryptographically relevant quantum computer might break classic cryptography as we know it today.

All of this language provides context when we consider how AI is creating a higher-risk security environment for organizations.

Malicious uses of AI

There are several ways in which AI can be used maliciously by bad actors:

Deepfakes and Disinformation. Deepfakes involve creating fake audio or video content that convincingly impersonates real individuals. Deepfakes are difficult to detect and stop, which makes them a potent threat.

Disinformation campaigns. Use of deep fake content can discredit public figures by spreading false information, influence public opinion through deceptive content, and extort funds by impersonating someone’s loved ones over video calls.

Social engineering. AI can be used to create persuasive messages that can deceive individuals into revealing sensitive information or taking harmful actions. Cybercriminals leverage social engineering to exploit human psychology, which can lead to unwitting security breaches.

Hacking and cyberattacks. Malicious actors can use AI to automate attacks on computer systems, networks, and critical infrastructure. AI techniques like fuzzing and neural networks can enable the creation of sophisticated computer viruses.

The language of AI security attacks

How can AI security be attacked by rogue actors? There are several typical common means of exploiting vulnerabilities.

Poisoning attacks. During the training phase of an AI implementation, attackers inject malicious data into the training set. These small, nearly imperceptible changes to the model input can lead to wildly different expected output that is incorrect, inconsistent, or erratic.

Transfer attacks. These types of incursions exploit the transferability property of machine learning models, potentially using a successful attack on one AI system against another, similar AI system. Transfer attacks can even target black-box models, where the attacker has no knowledge of the model’s architecture or parameters.

Prompt injection attacks. Here, attackers create input prompts to mislead the model. Such attacks are specifically designed to exploit vulnerabilities and produce harmful responses.

Backdoor attacks on AI models. Malicious actors can insert backdoors during the training of AI large language models. These hidden patterns allow them to trigger specific behaviors in the model when certain conditions are met.

Non-traditional threats to AI models

AI can create a range of cybersecurity threats that cannot be addressed by traditional countermeasures.

Data poisoning. Data poisoning involves adversaries intentionally introducing false or misleading information into the training dataset. This can be achieved through various methods. Inaccurate information or false data can be added to skew model outputs. Existing data can be altered to mislead the model. And in some cases, critical information can be deleted, which can cause the model to function incorrectly.

Backdoor insertion. This refers to a type of attack where an adversary covertly manipulates an AI model during its training phase, embedding a hidden vulnerability known as a “backdoor.” This backdoor allows the attacker to control the model’s behavior under specific conditions, typically triggered by certain inputs. When the AI model encounters this trigger during its operation, it can lead to harmful or unintended outcomes. For example, an AI model used in healthcare could misdiagnose a condition when presented with a specific input.

Backdoor attacks can remain undetected during normal operation, so that the model appears to function correctly for benign inputs. This can make it challenging to identify the compromised behavior until the trigger is activated.

Training data extraction. In this case, specific data points used to train a machine learning model can be retrieved, enabling an attacker to access sensitive or proprietary information that the model was trained on. The implications of training data extraction include potential privacy violations, especially if the training data contains personally identifiable information (PII) or confidential business data.

Membership Inference. These attacks are characterized by an adversary attempting to determine whether a particular data point was included in the training dataset of a machine learning model. This is particularly concerning in scenarios where the training data contains sensitive information.

Adversarial/evasion attack. In adversarial attacks, an adversary deliberately manipulates the input data to mislead the model into making incorrect predictions or classifications. The goal is to cause the model to misclassify the adversarial example while keeping it perceptually similar to the original input. Adversarial examples generated for one model can often be transferred to fool other models, even if they have different architectures or were trained on different datasets.

Model theft. Also known as model extraction, this is a cybersecurity threat where an adversary aims to replicate or duplicate a machine learning model without having direct access to its internal parameters or training data. This is typically accomplished by querying the model through its public interface, such as an API, and analyzing the outputs based on various inputs.

Prompt injection. This type of attack targets machine learning models, particularly large language models (LLMs) that use prompts for instruction-following. In a prompt injection attack, an adversary injects malicious instructions into the prompt, disguising them as legitimate inputs. This allows the attacker to manipulate the model’s behavior, causing it to ignore its original instructions or perform unintended actions.

High-risk vulnerabilities to quantum attacks on cryptography

What would happen when a cryptographically relevant quantum computer can break Public Key Infrastructure (PKI) cryptography? In a word, disaster. All internet-based encryption is done with PKI cryptography.

When bad actors are able to break PKI, it will be much easier to forge signatures and impersonate entities utilizing those credentials. This can put the integrity of any contract that is digitally signed at risk.

It would be difficult if not impossible to trust any updates to systems that aren’t quantum resistant. And the risk doesn’t even have to be immediate. With “harvest now, decrypt later” tactics, an adversary can collect transmitted encrypted data for later decryption, so even the most sensitive long-lived data can be put at risk. That’s why agencies like NSA are pushing CNSA 2.0 requirements at such an accelerated pace, when compared to older transitions of different versions of cryptography.

Securing AI models and data with quantum resistant security

So, how do we protect this AI technology from the coming threat posed by quantum computing?

AI models are broken into three different parts: The inputs to the model, the model itself, and the outputs to the model.

There are unique threats to those models, and it is critical to treat them in kind and make sure you are protecting across to all three of those areas. It is about altering the model to give inaccurate results that will affect decision making, it isn’t like the quantum threat of “harvest now, decrypt later” which is about data theft.

It all comes back to data and the importance of protecting the data. You need to protect the data at rest, in transit and in use. You must ensure that the data you are using to train the model is classified appropriately and protected. The data must be encrypted to ensure that only those with proper access can retrieve the information.

While data is in transit, make sure to monitor all transactions on the network, to detect any malicious actors that might be trying to gain access to any of this data. Also, make sure that the data is encrypted when it is moving across the network.

Do not forget the impact of quantum computing on encryption. You must ensure that your encryption solutions—whether for data at rest or in transit—are crypto-agile meaning that they support both today’s classic algorithms and the PQC algorithms designed to be quantum resistant. This will protect your AI data from quantum attacks.

The cryptographic keys used to encrypt and decrypt data are the keys to the kingdom. If these keys are compromised, malicious actors can decrypt your data. Key management enables central management and protection of cryptographic keys across different use cases. It also provides the ability to backup and archive key material, for on-premise, cloud, and hybrid deployments.  Key management solutions must utilize post quantum cryptography to protect against the quantum threat.

When data is in use, you must have access to behavioral analytics tools. If you are the administrator of the AI model systems, you must understand how that data is being utilized in real-time. You must also leverage strong authentication to ensure that access is properly controlled.

Data discovery and classification allows you to search structured and unstructured repositories for sensitive data and provides tools for understanding your risk level.

Encryption and tokenization can support a large number of use cases (for example, the file system level, the database or column level, application embedded libraries, API gateways, etc.).

Strategy for securing AI systems against the quantum threat

The tools mentioned above are just that – tools. What’s necessary is to develop a strategy for how those tools fall into your overall plans for securing your AI systems.

Here are a range of best practices that form a solid strategy to protect AI against the coming quantum threat.

Protect data within AI systems. Encrypt sensitive data at rest and during transmission.

Use crypto agile solutions that offer both classic and PQC algorithms and offer secure key management practices.

Use access control and authentication. Implement access controls to restrict data access based on roles and permissions. Use authentication mechanisms (for example, OAuth, API keys) to verify user identities.

Minimize data collection. Collect only necessary data for model training and inference. Avoid storing excessive or irrelevant information.

Anonymize and pseudonymize. Anonymize personally identifiable information (PII) to prevent direct identification. Use pseudonyms (fake names) for individuals in datasets.

Use audit trails and logging. Maintain audit logs to track data access, modifications, and system activities. Logs help detect unauthorized access or suspicious behavior.

Secure APIs and endpoints. Protect APIs and endpoints used for data exchange. Use HTTPS and validate input data to prevent injection attacks.

Mask and tokenize data. Make data (credit card data, for example) anonymous, so that hackers don’t know who or what the information is about. Importantly, you must prevent that data from being used when you are training any AI models.

Secure model training and deployment. Protect training data from unauthorized access, and use federated learning to train models without sharing raw data. Ensure that AI models are deployed in a secure environment, and limit access to model APIs and endpoints.

Conduct regular security assessments. Perform penetration testing and vulnerability assessments on a routine basis. This will enable you to identify and address security gaps.

We’re seeing AI technology and systems extending into all aspects of computing, and the rapid approach of cryptographically relevant quantum computing can make these systems susceptible to hacking from bad actors who themselves are using AI to make their dirty work easier.

Create a proactive strategy that safeguards against the malicious use of AI (and eventually quantum computing). In that way, your organization can make the best use of the technology without falling victim to its dangerous downsides.

About Thales TCT

Thales Trusted Cyber Technologies, a business area of Thales Defense & Security, Inc., protects the most vital data from the core to the cloud to the field. We serve as a trusted, U.S. based source for cyber security solutions for the U.S. Federal Government. Our solutions enable agencies to deploy a holistic data protection ecosystem where data and cryptographic keys are secured and managed, and access and distribution are controlled.

For more information, visit www.thalestct.com

About IC Insiders

IC Insiders is a special sponsored feature that provides deep-dive analysis, interviews with IC leaders, perspective from industry experts, and more. Learn how your company can become an IC Insider.