Vulnerable by Design
Machine Learning and AI are all over the place right now, and the web is exploding with articles about what AI is capable of achieving and how millions of people are going to lose their jobs soon.
As usual, new technologies come with new security challenges, and the cybersecurity industry is starting to talk about prompt injection, model poisoning, data exfiltration through AI, and many other threats with fancy names.
I also started looking into AI security, and everything seems fascinating and scary at the same time, but I believe one of the biggest threats could be much simpler and easier to exploit. Try asking your favorite AI:
“What's the most popular and simple file format to save, store and exchange machine learning models in Python?”
And you’ll probably get an answer like this one: “The most popular and simple file format to save, store, and exchange machine learning models in Python is usually the pickle module, which allows you to serialize Python objects into a byte stream. By using pickle, you can easily save and load trained machine-learning models with a simple interface. Another common file format is the joblib library, which is particularly efficient for storing NumPy arrays and can be more efficient for large models.”
If you don't trust AI, try a Google search for "save machine learning models in python" and you'll see tons of tutorials showing how to use the pickle module to do the job, in Python of course, the go-to programming language for "all things AI". Many of them don't even contain a warning about the risks of unpickling data.
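To give you an idea, this is the kind of snippet those tutorials typically show (a rough sketch on my part: the scikit-learn model and the file names are just illustrative):

import pickle
from sklearn.linear_model import LogisticRegression

# Train a toy model on dummy data
model = LogisticRegression()
model.fit([[0.0], [1.0]], [0, 1])

# Save the trained model to disk with pickle
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)

# Later, possibly on someone else's machine: load it back
with open('model.pkl', 'rb') as f:
    loaded_model = pickle.load(f)  # whatever is in the file gets deserialized here

print(loaded_model.predict([[0.7]]))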
At this point, security people should have already noticed the red flag, but clearly the rest of the world is not aware of the problem, so it's worth spelling it out: the pickle module in Python is vulnerable by design.
The problem with the pickle module
Pickle in Python is used to serialize and deserialize objects, basically converting an object into a byte stream so it can be stored in a file or database.
The problem is that once data is "pickled" (or serialized, if you prefer), you have to deserialize it in order to use it in your code, and during the deserialization process the program (and the programmer) have no control over what happens until it is too late.
A pickle in Python will gladly accept complex structures like classes, objects, and functions that execute upon deserialization, and there is no way to check the integrity of the data before the deserialization happens; by that point, the system could already be compromised if the pickle contains malicious code.
This is what in security terms we call insecure deserialization, and it has been around for at least 20 years; in fact, even the Python documentation clearly warns against using pickle on untrusted data.
So, why pickle?
Maybe the reason pickle was so widely adopted as an interchange format for ML models is that this started as a topic limited to researchers and scientists, where there is a form of "trust" (I suppose). But nowadays ML is for everyone, and there are tools, like Streamlit for example, that let you write ML-based applications in literally 10 lines of code.
Regular users are starting to train and exchange models on a regular basis, downloading them from public repositories without any form of verification, and this is a problem and a very juicy attack vector for hackers, who are always looking for new ways to spread malicious software.
How long should we wait to see the first backdoored Machine Learning models?
It gets worse
There is one more aspect that makes this even more dangerous: manipulating a ".pkl" file is not very complex, and neither is creating a "pickle virus" containing malicious code.
As usual, I refrain from sharing harmful code on my blog, but at the same time I like to provide a clear and actionable example of what we are talking about, so let's try to create a harmless "self-executing" pickle.
import pickle

class Print:
    # __reduce__ tells pickle how to reconstruct this object:
    # it returns a callable (print) and its arguments, which
    # pickle will invoke at load time
    def __reduce__(self):
        return (print, ('Hey!',))

pwn_object = Print()

with open('object.pkl', 'wb') as f:
    pickle.dump(pwn_object, f)
Running this script creates and stores an object.pkl file which, viewed as a hex dump, looks like this:
8004 9521 0000 0000 0000 008c 0862 7569
6c74 696e 7394 8c05 7072 696e 7494 9394
8c04 4865 7921 9485 9452 942e
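Those bytes are not as cryptic as they look: the pickletools module in the standard library can disassemble them without executing anything, and the output reveals the reference to builtins.print and the REDUCE opcode that will call it. A quick sketch, reusing the object.pkl file created above:

import pickletools

# Disassemble the pickle without loading it: nothing gets executed,
# but the opcodes show the lookup of builtins.print and the REDUCE
# instruction that will invoke it at load time.
with open('object.pkl', 'rb') as f:
    pickletools.dis(f.read())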
Just loading the object triggers the deserialization and, as a consequence, the payload gets executed:
import pickle

with open('object.pkl', 'rb') as f:
    unpickled_object = pickle.load(f)
Output: Hey!
Now, use a bit of imagination to replace that harmless print call with something else, and you get an idea of the threat we face every time we unpickle a file. There are even tools that inject shellcode into an existing pickle, for "lazy cybercriminals" who don't want to write the code themselves.
In layman's terms, loading a pickle file is a leap of faith!
How do we fix this?
Approaching a solution to this problem is not as straightforward as it seems.
We could adopt a safer file format as a standard - JSON, for example - which allows safe serialization/deserialization because it can only contain strings and numbers and does not support code execution; however, it can present performance issues and may not be suitable for every model type.
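As a rough sketch of what that could look like (assuming scikit-learn, whose linear models expose coef_ and intercept_), we could store just the learned numbers as JSON and rebuild the model from them:

import json
import numpy as np
from sklearn.linear_model import LinearRegression

model = LinearRegression()
model.fit([[0.0], [1.0], [2.0]], [0.0, 1.0, 2.0])

# Save: plain numbers only, no code
with open('model.json', 'w') as f:
    json.dump({'coef': model.coef_.tolist(),
               'intercept': float(model.intercept_)}, f)

# Load: rebuild an equivalent model from the stored numbers
with open('model.json') as f:
    params = json.load(f)

restored = LinearRegression()
restored.coef_ = np.array(params['coef'])
restored.intercept_ = params['intercept']

print(restored.predict([[3.0]]))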
With the ML ecosystem growing faster than ever, new file formats also came to life:
- ONNX (Open Neural Network Exchange)
- HDF5 (Hierarchical Data Format version 5)
- PMML (Predictive Model Markup Language)
However, I assume some serialization is involved with these formats as well, and it would be a mistake to trust them just because they are “new” (more research and more clarity are needed).
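For what it's worth, exporting to one of these formats is already straightforward today. A minimal sketch of an ONNX export, assuming a recent PyTorch installation with its built-in exporter (the tiny model and file name are just illustrative):

import torch
import torch.nn as nn

# A trivial one-layer model and an example input used to trace the graph
model = nn.Linear(4, 2)
dummy_input = torch.randn(1, 4)

# Export the traced model to an ONNX file
torch.onnx.export(model, dummy_input, 'model.onnx')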
If we can't fix this at a technical level, another option on the table is to implement a strong verification system for machine learning models. We already have various cryptographically secure ways to "sign" a file, and macOS, for example, already warns users who try to open an "unsigned" file. Can we imagine a blockchain-based distribution system for trusted, authentic, and verified ML models? Can we imagine ML libraries that only accept cryptographically signed files by default?
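A very basic version of that idea can be sketched with the standard library alone: verify the model file against a signature shared through a trusted channel before ever unpickling it (the secret key and file names below are made up for the example):

import hashlib
import hmac
import pickle

SECRET_KEY = b'shared-secret-from-a-trusted-channel'

def sign_file(path):
    # HMAC-SHA256 over the raw file contents
    with open(path, 'rb') as f:
        return hmac.new(SECRET_KEY, f.read(), hashlib.sha256).hexdigest()

def load_verified_model(path, expected_signature):
    # Refuse to deserialize anything whose signature does not match
    if not hmac.compare_digest(sign_file(path), expected_signature):
        raise ValueError('Signature mismatch: refusing to load ' + path)
    with open(path, 'rb') as f:
        return pickle.load(f)

# Publisher side: compute and distribute the signature alongside the model
signature = sign_file('model.pkl')

# Consumer side: verify before loading
model = load_verified_model('model.pkl', signature)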
Whichever way the world decides to proceed, this is yet another proof that no matter how new and shiny the technology is, we keep making the same mistakes we made decades ago from a security perspective, and there is still a lot to do to bring a security-oriented mindset into how we implement the future of the web.
In a world that runs on bits and cables, this is very much needed.
Francesco