The State of AI Security: Insights from the Top 5 Open-Source AI Frameworks
The Top Five Open-Source AI Libraries on GitHub, Examined for Security Risk: Reachability, Exploitability, Fixability, and Initial Access Potential
As AI frameworks become integrated into various applications, evaluating their security with precision is critical. This report focuses on the top five (most starred) open-source AI libraries on GitHub:
- TensorFlow
- Hugging Face Transformers
- OpenCV
- PyTorch
- Keras
Each library’s security posture is assessed by examining known vulnerabilities, reachability (whether vulnerable functions are commonly used), exploitability (the availability of proof-of-concept attacks), fixability (whether patches exist), and Initial Access Potential (IAP)—how likely these AI libraries are to be targeted in the initial phase of an attack chain.
This analysis is designed to provide security researchers and engineers with insights into securing AI infrastructure and understanding how these vulnerabilities might be leveraged in real-world environments.
Comparison of Key Risk Factors
A detailed security assessment is critical for understanding the real-world impact of vulnerabilities within widely adopted AI frameworks. The table below summarizes the key risk factors; the ratings are qualitative and reflect the findings discussed in the sections that follow:

| Library | Reachability | Exploitability | Fixability | Initial Access Potential |
| --- | --- | --- | --- | --- |
| TensorFlow | High: core ops such as Conv2D are invoked in most CNN workloads | High: public DoS POCs | High: patched in recent releases | High: frequently exposed to untrusted input via APIs |
| Hugging Face Transformers | Moderate | Lower: fewer public POCs | High: fixes available, but a large dependency chain needs monitoring | Moderate |
| OpenCV | High: image-processing functions routinely handle untrusted input | High: public DoS POCs | High: patched in recent releases | High: a common first point of entry |
| PyTorch | Moderate | Lower: fewer public POCs | High: patched in recent releases | Moderate |
| Keras | Lower: limited direct exposure | Lower | High: patched in recent releases | Lower: rarely targeted for initial access |
Assessing Security Metrics for Reachability, Exploitability, Fixability & IAP
Definition & Example: Reachability
Reachability refers to whether a function in a library is invoked during an application’s normal workflow. In the context of open-source security, a vulnerable function that is called frequently poses a greater risk than one that is rarely reached.
In the case of TensorFlow, its Conv2D function is critical in convolutional neural networks (CNNs) and is invoked in many real-world AI models. This makes it highly reachable and an attractive attack vector for adversaries. Similarly, OpenCV's image processing functions are often exposed to untrusted input, leading to high reachability.
Python
# Vulnerable Conv2D function handling untrusted input
import tensorflow as tf

try:
    # Attacker-controlled tensor with an oversized channel dimension
    input_data = tf.random.normal([1, 1, 1, 999999])
    output = tf.raw_ops.Conv2D(
        input=input_data,
        filter=tf.random.normal([3, 3, 1, 1]),
        strides=[1, 1, 1, 1],
        padding="SAME",
    )
except Exception as e:
    print(f"DoS triggered: {e}")
Definition & Example: Exploitability
Exploitability is a metric that evaluates whether a known proof-of-concept (POC) exploit is available and whether it can be executed to compromise the system.
For example, TensorFlow and OpenCV have publicly available POCs demonstrating denial-of-service (DoS) attacks, which raises their risk profile. PyTorch and Hugging Face Transformers, though not free of vulnerabilities, have fewer public POCs, reducing their immediate exploitability in common environments.
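A practical first step when gauging exploitability is checking whether your installed versions carry published advisories. The sketch below (assuming network access and Python 3.9+) queries the public OSV database at api.osv.dev for each library. Note that it reports advisory IDs, not whether a working POC exists, so treat it as a starting point rather than a verdict.

Python
# Minimal sketch: query the public OSV database for known advisories
# affecting the installed versions of the five libraries.
import json
import urllib.request
from importlib.metadata import version

def known_advisories(package: str) -> list[str]:
    payload = json.dumps({
        "package": {"name": package, "ecosystem": "PyPI"},
        "version": version(package),
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return [v["id"] for v in json.load(resp).get("vulns", [])]

for lib in ["tensorflow", "torch", "opencv-python", "transformers", "keras"]:
    try:
        print(lib, known_advisories(lib) or "no known advisories")
    except Exception as exc:  # not installed, or the lookup failed
        print(lib, f"check skipped: {exc}")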
Definition & Example: Fixability
Fixability assesses whether a patch or mitigation exists for the vulnerability. Most of the AI libraries reviewed have addressed their vulnerabilities in recent releases, with fixes available through updates.
Regularly upgrading AI libraries like TensorFlow and PyTorch mitigates the impact of known vulnerabilities. Developers should also monitor dependency chains, particularly for libraries like Hugging Face Transformers, which inherit risk from the large number of external packages they rely on.
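Fixability only pays off if upgrades actually happen, so a version gate in CI can help. The sketch below is illustrative only: the minimum versions are placeholders (take real thresholds from each project's security advisories), and it assumes the packaging library is installed.

Python
# Minimal sketch: fail the build when an installed library lags behind its
# patched release. The minimums below are PLACEHOLDERS, not advisory data.
from importlib.metadata import PackageNotFoundError, version
from packaging.version import parse

MINIMUM_PATCHED = {
    "tensorflow": "2.16.1",     # placeholder: use the version named in the advisory
    "opencv-python": "4.10.0",  # placeholder: use the version named in the advisory
}

for package, minimum in MINIMUM_PATCHED.items():
    try:
        installed = parse(version(package))
    except PackageNotFoundError:
        continue  # library not used in this deployment
    if installed < parse(minimum):
        raise SystemExit(
            f"{package} {installed} is older than patched release {minimum}"
        )
print("All monitored libraries meet their patched baselines.")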
Definition & Example: Initial Access Potential
IAP is a metric that assesses how likely a vulnerability is to be exploited as the first step in an attack chain.
For instance, frameworks such as TensorFlow and OpenCV have high Initial Access Potential (IAP) due to their frequent exposure to untrusted inputs through APIs and external services. These libraries are often the first point of entry for attackers. On the other hand, Keras, with lower direct exposure, has a reduced IAP, making it less likely to be targeted in the initial access phase of an attack.
Initial Access is the Key Entry Point in Attack Chains
In the context of an attack chain, initial access refers to the first foothold an attacker gains in a target system. Libraries such as TensorFlow and OpenCV are highly likely to be used for this purpose due to their frequent exposure via APIs and the handling of untrusted input. These attack surfaces are critical to monitor in any production environment. Meanwhile, AI libraries like Keras, though important, are less likely to serve as a vector for initial access due to their typical execution contexts.
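To make that attack surface concrete, consider a hypothetical, deliberately minimal inference endpoint. Everything after request.get_data() is attacker-influenced, and the untrusted bytes reach OpenCV's decoder before any validation, which is precisely the shape of service an attacker probes for initial access. The route and response format here are illustrative, not drawn from any particular product.

Python
# Hypothetical inference endpoint: untrusted upload bytes flow straight into
# OpenCV's image decoder, making the service a candidate initial access point.
import cv2
import numpy as np
from flask import Flask, request

app = Flask(__name__)

@app.post("/predict")
def predict():
    raw = request.get_data()  # attacker-controlled bytes
    buf = np.frombuffer(raw, dtype=np.uint8)
    image = cv2.imdecode(buf, cv2.IMREAD_COLOR)  # vulnerable parsing happens here
    if image is None:
        return {"error": "could not decode image"}, 400
    return {"shape": list(image.shape)}

if __name__ == "__main__":
    app.run()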
Conclusion
The security of AI frameworks and libraries is a growing concern for researchers, engineers, and developers alike. Identifying and mitigating vulnerabilities is essential, as these libraries form the foundation for modern AI models. This report highlighted key vulnerabilities, their exploitability, and risk mitigation strategies.
Stay tuned for the next AI Security Insights edition, where we’ll explore real-world exploit scenarios and expand to other parts of the MLOps stack.
At Kodem, we aim to provide real-time attack chain analysis and visibility into potential security gaps. As AI technology evolves, securing its foundation is critical to protecting applications and systems across industries. Sign up for a personalized demo to see how Kodem leverages AI with our revolutionary runtime intelligence.