
Verifiable AI on Bitcoin


Wei Zhang


17 September 2024


Combining AI, Cryptography and Blockchain

Verifiable AI is about the ability to ensure and demonstrate that an AI system operates according to its specification, is free of critical bugs, and adheres to ethical standards such as fairness, transparency, and safety, ideally without revealing proprietary information about the system. This is a relatively new research area, and we have only just started exploring it.

Not long ago, we successfully implemented verifiable computation on Bitcoin. One of our AI experts made an interesting comment: “Why not apply this to machine learning?! Training is computation. Inference (processing queries) is computation. If we can do verifiable computation, then we can do verifiable training and verifiable inference.”

Interestingly, two weeks after that suggestion, Ebrahimi et al. presented an academic paper on the same topic, “Blockchain-based Federated Learning Utilizing Zero-Knowledge Proofs for Verifiable Training and Aggregation”, which won the best paper award at a conference we attended. They have an implementation in a simulated local Ethereum environment. Their work represents a significant step forward in integrating blockchain and cryptography with AI. Many congratulations to them!

With the verifiable computation approach, we can prove cryptographically that an AI model was trained on the claimed dataset. For example, it can be proven that a generative AI model was trained on a dataset that is known to have no bias. Perhaps more importantly, AI developers can prove that they did not tamper with the training dataset to train a model for malicious purposes. We can also prove that an output of an AI model was obtained by running the specified model on the given input. For example, a piece of artwork can carry a proof showing that it was generated by an AI model from a given prompt; or, as a paid user, you can be sure that you are getting answers from GPT-4, not the free versions.

With this context in mind, we are proud to announce that, for the first time, we have successfully demonstrated verifiable inference on Bitcoin. You can find the relevant transactions on the Bitcoin SV mainnet, and the code that generates them in our zkScript GitHub repo.

Recognising hand-written digits

We trained a simple neural network on the MNIST dataset of images of hand-written digits, achieving over 90% accuracy. The model was then run on an input image, such as one provided by a user, and produced an output predicting which digit the image represents. A cryptographic proof was generated to verify that the output was indeed produced by running the model on the specified image, while hiding all information about the model (details can be found at the end). The proof was subsequently used to claim a small amount of bitcoin on the blockchain from the user. Here, Bitcoin not only provides a decentralised, independent verification platform but also facilitates a micro-payment between a user and an AI model. What more can you ask from a blockchain?
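
For the curious, here is a minimal sketch of the kind of model involved, assuming PyTorch and torchvision are available; the exact architecture, hyperparameters and training loop of our demo may differ:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# A small feed-forward classifier for 28x28 MNIST images (illustrative sizes).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=64, shuffle=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in loader:
        opt.zero_grad()
        loss_fn(model(images), labels).backward()
        opt.step()

# Inference on a user-supplied image `x` of shape (1, 28, 28):
# predicted_digit = model(x.unsqueeze(0)).argmax(dim=1).item()
```

The trained weights then go through the quantisation and commitment steps described in the technical notes below.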

Very well done to the team that made this possible: Hamid, Luigi, Federico and Mathieu! We will continue our exploration of verifiable AI, leveraging blockchain technology and cryptography. Stay tuned!

Photo by Gerard Siderius on Unsplash


PS: For those who are interested, here is some technical detail…

Floating Points vs Integers

The main challenge is that neural networks work with floating-point numbers, while zero-knowledge proofs work with integers. Fortunately, back in 2017, Jacob et al. described in an academic paper an approach to quantising AI models without losing accuracy. They needed this to migrate pre-trained AI models from powerful servers to less capable mobile devices. As part of the process, they convert floating-point values to integers to reduce the size of the model for efficiency. We managed to adapt the methodology to fit our purposes and maintain an accuracy above 90%. There are other techniques for quantising an AI model. For example, research groups such as ZAMA are exploring quantisation-aware training, where a model is trained with the awareness that its weights will later be converted to integers.
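
To make the idea concrete, here is a minimal sketch of affine quantisation in the style of Jacob et al., mapping a real value x to an integer q via x ≈ scale * (q - zero_point); the bit width and helper names are illustrative, not our production code:

```python
import numpy as np

def quantize(x: np.ndarray, num_bits: int = 8):
    """Affine quantisation: x ≈ scale * (q - zero_point), q an unsigned int."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min = min(float(x.min()), 0.0)  # representable range must include 0
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / (qmax - qmin) or 1.0  # guard against zero range
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int64)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return scale * (q.astype(np.float64) - zero_point)

# Round-trip a weight matrix and inspect the quantisation error.
w = np.random.randn(16, 16).astype(np.float32)
q, s, z = quantize(w)
print("max round-trip error:", np.abs(w - dequantize(q, s, z)).max())
```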

Preserving Confidentiality of AI Models

We used a hash function to hide the weights of the model; only the hash value of the weights is made public. The computation circuit consists of the computations involved in running the AI model and in evaluating the hash function, which ensures that the same weights are used in both. For proof generation, we use libraries provided by arkworks. Both the verification key and the proof are formatted as JSON files and passed to our zkScript tool to create the locking script and the unlocking script respectively. You can find the example use case here.
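
Conceptually, the proof attests to the relation sketched below. This is plain Python with SHA-256 standing in for the circuit-friendly hash and a hypothetical model_forward for the quantised network; it illustrates the statement being proven, not the actual arkworks circuit:

```python
import hashlib
import numpy as np

def commit(weights: np.ndarray) -> str:
    # Public commitment to the private weights. In the real circuit, the hash
    # is evaluated as part of the proven computation.
    return hashlib.sha256(weights.tobytes()).hexdigest()

def model_forward(weights: np.ndarray, x: np.ndarray) -> int:
    # Stand-in for the quantised network's integer-only inference.
    return int(np.argmax(weights @ x))

def statement(weights, x, y, public_commitment) -> bool:
    # The zero-knowledge proof convinces a verifier that some private
    # `weights` satisfy both checks, without revealing the weights:
    #   1. they match the public commitment, and
    #   2. running the model on public input x yields public output y.
    return (commit(weights) == public_commitment
            and model_forward(weights, x) == y)
```

On chain, the verification key lives in the locking script and the proof in the unlocking script, so the bitcoin can only be claimed if the proof verifies.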

We are happy to hear from you if you have any questions or comments!

[email protected]