In the Lex Fridman interview, Sergey Nazarov spoke about AI and smart contracts. Lex asked about integrating non-humans, or AI systems managed by humans, into smart contracts, and asked Sergey to comment on the world of “hybrid smart contracts,” which are about codifying agreements between “hybrid intelligent beings.”
Sergey replied that it makes perfect sense, adding that he is not an expert in AI and therefore might sound simplistic or naïve.
He pointed to how everyone saw the Terminator movies in the ’90s and thought this was really scary, whereas he personally finds AI amazing and thinks it makes perfect sense that it will keep evolving. He noted that he works in a world of trust issues: how can technology solve trust and collaboration problems using encryption, cryptographically guaranteed systems, and decentralized infrastructure? That, he said, is the world he has been inhabiting for many, many years now, doing smart contracts.
He stated that his mind naturally turns to the question of what trust issues people might have with AI: “My natural kind of response is well, let’s say AI continues to be built and improve at some point. I have no clue where we are on this. Now, I have seen different ideas that were very far from this. I have seen stuff that were very close to this.”
He pointed to the risk that an AI would do things we do not want it to do, even as we give it a degree of control over our lives. In his view, the way blockchains solve this trust issue is actually very straightforward, and that simplicity is quite powerful. An AI could have control over key parts of your life and our lives, but we can limit it: private keys and blockchains can create guard rails, firm walls and limits that the AI could never pass, assuming encryption continues to work and that breaking encryption is not the AI’s specialization.
So, if you have an AI that controls something very important, whatever it is, shipping, something in defence, something in the financial system, and you are sitting there worried that this thing is unbelievable, then by baking in private keys and these kinds of blockchain-based limitations you can create conditions beyond which the AI could never act.
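The guard-rail idea described above can be sketched in miniature. This is an illustrative toy, not anything from the interview or from Chainlink's actual design: the names (`GuardRail`, `agent_transfer`, `OWNER_KEY`) and the use of an HMAC in place of real blockchain signatures are all assumptions made for the example. The point it demonstrates is that an agent can propose any action, but the limits it operates under are sealed by a key the agent never holds, so it cannot raise its own limits.

```python
# Toy sketch of cryptographically sealed guard rails for an AI agent.
# All names here are hypothetical; an HMAC stands in for real on-chain
# signature verification. The agent never sees OWNER_KEY, so it cannot
# forge a new limit for itself.
import hashlib
import hmac

OWNER_KEY = b"owner-private-key"  # held by the human, never by the AI


def sign_limit(max_transfer: float) -> bytes:
    """The human owner signs a spending limit with their private key."""
    return hmac.new(OWNER_KEY, str(max_transfer).encode(),
                    hashlib.sha256).digest()


class GuardRail:
    """A hard limit on the agent, valid only with the owner's signature."""
    def __init__(self, max_transfer: float, signature: bytes):
        self.max_transfer = max_transfer
        self.signature = signature

    def is_valid(self) -> bool:
        expected = hmac.new(OWNER_KEY, str(self.max_transfer).encode(),
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, self.signature)


def agent_transfer(amount: float, rail: GuardRail) -> str:
    """The agent proposes an action; the guard rail decides."""
    if not rail.is_valid():
        return "rejected: guard rail tampered with"
    if amount > rail.max_transfer:
        return "rejected: exceeds hard limit"
    return f"executed transfer of {amount}"


rail = GuardRail(max_transfer=100.0, signature=sign_limit(100.0))
print(agent_transfer(50.0, rail))    # within the limit: executed
print(agent_transfer(5000.0, rail))  # rejected: exceeds hard limit

# If the AI tries to raise its own limit, it cannot forge the signature:
tampered = GuardRail(max_transfer=1_000_000.0, signature=rail.signature)
print(agent_transfer(5000.0, tampered))  # rejected: guard rail tampered with
```

On a real blockchain the verification would be done by contract code and public-key signatures rather than a shared-secret HMAC, but the structure is the same: the conditions are checked by infrastructure the AI does not control.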