Bankless

Ethereum's Last Big Upgrade: The zkEVM | Ansgar Dietrichs

81 min
Feb 23, 2026
Summary

Ansgar Dietrichs from the Ethereum Foundation discusses zkEVM, a zero-knowledge proof system that will fundamentally transform Ethereum's scaling capabilities. The episode covers the technical architecture, multi-year rollout plan starting with optional proofs in ~1 year, and how zkEVM will enable Ethereum to achieve 3x throughput scaling annually for the next 6 years while maintaining decentralization and verifiability.

Insights
  • zkEVM shifts blockchain verification from redundant re-execution across all nodes to cryptographic proof verification, eliminating computational waste while maintaining security through multi-proof redundancy
  • Ethereum's scaling roadmap balances immediate needs (3x annual scaling via traditional means) with long-term architectural transformation (zkEVM), avoiding the false choice between innovation and real-world usability
  • The transition to zkEVM is a multi-year process requiring hardened client diversity, new state tree structures (binary trees), and careful security validation before mandatory adoption, not an acute hard fork event
  • zkEVM creates second-order benefits beyond L1 scaling: real-time settlement for L2s, cross-chain composability, and potential applications in AI agent interactions and privacy-preserving identity systems
  • Client diversity in the zkEVM era requires verifying multiple independent proofs (e.g., 3+ different execution client/proving system combinations) rather than running single clients, improving redundancy
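The multi-proof idea in the insights above can be sketched in a few lines of Python. This is a toy illustration only: the verifier names, proof format, and threshold are hypothetical stand-ins, not real client or prover APIs.

```python
# Toy sketch of multi-proof redundancy: a node accepts a block only if
# several independent prover/execution-client combinations all produced
# a valid proof, instead of one client re-executing the block itself.
# Names and the threshold are illustrative, not real software.

def accept_block(proofs: dict, verifiers: dict, threshold: int = 3) -> bool:
    """proofs: {system_name: proof}; verifiers: {system_name: verify_fn}."""
    valid = sum(
        1
        for name, proof in proofs.items()
        if name in verifiers and verifiers[name](proof)
    )
    return valid >= threshold

# Stand-in verifiers; a real node would run actual proof verification here.
verifiers = {
    "proverA+clientX": lambda p: p == "ok",
    "proverB+clientY": lambda p: p == "ok",
    "proverC+clientZ": lambda p: p == "ok",
}

proofs = {name: "ok" for name in verifiers}
assert accept_block(proofs, verifiers)            # 3 of 3 proofs valid
assert not accept_block({"proverA+clientX": "ok"}, verifiers)  # only 1 valid
```

The design point is that redundancy moves from "everyone re-executes" to "everyone checks several cheap, independent proofs", so a bug in any single proving system cannot finalize an invalid block on its own.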
Trends
  • Shift from special-purpose to general-purpose cryptography enabling arbitrary computation proofs at scale
  • Real-time zero-knowledge proving becoming production-ready (5-second block proofs), moving from theoretical to practical blockchain scaling
  • Ethereum ecosystem consolidation around shared infrastructure (L1 + EVM L2s) leveraging common zkEVM technology for composability
  • Formal verification and AI-assisted security becoming critical for blockchain client development in post-quantum era
  • Cryptographic proofs expanding beyond blockchain into government ID systems, AI agent interactions, and privacy-preserving applications
  • Stateless and partially-stateless blockchain nodes reducing hardware requirements and enabling broader node participation
  • Data availability sampling (blobs/EIP-4844) becoming foundational infrastructure for both L1 and L2 scaling strategies
  • Post-quantum cryptography integration (binary trees with quantum-resistant hashing) becoming mandatory for long-term blockchain security
  • Continuous incremental scaling (3x annually) replacing discrete hard fork moments as the primary upgrade narrative
  • Interoperability between L1 and L2 improving dramatically through synchronized real-time proof settlement
Companies
Ethereum Foundation
Ansgar Dietrichs is a researcher there; EF leading zkEVM research and coordinating ecosystem-wide implementation
Galaxy
Sponsor offering institutional digital asset platform with $12B+ AUM, trading, DeFi lending, and AI data center infra...
Zcash
Referenced as early special-purpose zero-knowledge proof blockchain that preceded general-purpose zkEVM development
Bitcoin
Discussed as foundational blockchain with asymmetric proof-of-work verification, contrasted with Ethereum's execution...
Solana
Referenced as high-performance chain with higher computational requirements for node participation vs. Ethereum's phi...
Infura
Mentioned as RPC node provider that will need to adapt infrastructure for zkEVM-era Ethereum
Microsoft
Referenced as adopting second-generation cryptography (general-purpose ZK proofs) outside blockchain context
Xerox PARC
Cryptography research lab cited for first/second generation cryptography framework and terminology
People
Ansgar Dietrichs
Ethereum Foundation researcher discussing zkEVM architecture, scaling roadmap, and technical implementation details
Justin Drake
Ethereum Foundation researcher leading real-time zkEVM performance optimization efforts and ecosystem coordination
Guillaume
Ethereum researcher leading binary tree upgrade work, previously championed Verkle tree research
Kev
Ethereum Foundation team member doing 'absolutely amazing work' on zkEVM short-term scaling efforts
Tomas
Ethereum Foundation leadership focused on balancing long-term research with real-world adoption priorities
Choway
Ethereum Foundation leadership focused on balancing long-term research with real-world adoption priorities
Quotes
"A blockchain by its nature is a very symmetrical thing. Every node basically does the same thing... And now you're jumping to this world where you still have the same effort to build a block, but then verification in a way is effortless."
Ansgar Dietrichs, opening segment
"The best part is no part at all. And what a cryptographic proof does is it removes the whole part of re-execution."
David Hoffman (paraphrasing Elon Musk), mid-episode
"In Ethereum, validators get basically handed the current rules of the chain by the community... The ultimate power always lies with the community."
Ansgar Dietrichs, consensus layer discussion
"We're now in this moment in time where real world adoption is here, right? Like it's no longer this future thing that we're building towards."
Ansgar Dietrichs, scaling philosophy section
"In six years, roughly 1,000x of where we started last year... first three years traditional, then the next three years ZKEVM."
Ansgar Dietrichs, scaling roadmap discussion
Full Transcript
But zkEVM is this fundamental insight that what you can do is you can basically allow nodes to verify that a block followed all the rules without having to re-execute the block. It's a very non-intuitive thing, right? A blockchain by its nature is a very symmetrical thing. Every node basically does the same thing. Of course, your block producer produces it, but then every node kind of has to download, re-execute. You're duplicating the effort across the network. And now you're jumping to this, like, through this very fancy cryptography, you're jumping to this world where you still have the same effort to build a block, but then verification in a way is effortless. It has this magical compression element to it. Bankless Nation, I'm here with Ansgar Dietrichs. He's a researcher at the Ethereum Foundation. We're going to talk about the zkEVM today on the show. Ansgar, welcome to Bankless. Hey, great to be here again. Pretty ambitious subject, Ansgar. Ethereum has had this history of very big forks, hard forks that have upgraded Ethereum from this early primitive proof of concept where it started in 2015 to what it is today, which is fundamental infrastructure, the backbone of Internet money and Internet finance. We had the Merge, which moved Ethereum from proof of work to proof of stake. We had EIP-1559, which upgraded Ether economics and transaction user experience. There's also 4844, which enabled Ethereum's roll-up environment to become its best self. With each of these forks, they all represented this rallying cry for the Ethereum community. They were this kind of grand unifying force of attention by the Ethereum community, and it allowed Ethereum itself to command attention from the rest of the world. The rest of the world paid attention to Ethereum when Ethereum had these forks, these incoming forks. Ethereum was just loud.
And I think these kind of represent some of Ethereum's best moments, when Ethereum has these kind of cultural Schelling points for technological upgrades to what we consider in the Ethereum community to be critical social infrastructure. Now, I think, Ansgar, and I want to suss this topic out with you, that there is another fork on the horizon. It's not soon. It's not this year. It's likely not next year either. But nonetheless, it is there on the horizon, and I think it deserves attention. I think it deserves the treatment that the Ethereum community has given previous forks. And I think in addition to all of the valuable things that we got from the three forks that I just mentioned, this one is actually the biggest upgrade that Ethereum will ever experience, because it relates to users more than any of the three forks in the past. And that is the fork that introduces the zkEVM to Ethereum. Now, Ansgar, these are the sentiments that I want to start this podcast off with. Before we get into what is the zkEVM and all the technical details about it, I just want to give those sentiments to you and have you reflect upon them before we kind of dive into the technicals. I personally share your excitement on this topic. I really think that it's one of those changes that are really Ethereum at its best. It's one of those really ambitious technical projects that I think Ethereum is in a unique position to deliver. It will have a huge impact, primarily through scaling, but in many ways. I'm sure we'll talk about all of this. And I really think it's something we can look forward to, we can be proud of. And yeah, I'm excited to talk about the details. I will say, by the way, you said hard fork. And the interesting thing here is, similar to if you think back at the Merge, right? We had first the launch of the beacon chain, which was one moment in time. And then we later on had the Merge itself, so two separate moments in time.
And I think similarly, maybe even to a larger degree with the zkEVM, as we'll discuss, it has this nature of an ongoing transition that is basically about to start. Then we will have the main hard fork and then it will continue after. So it's much more like an ongoing transition. But yeah, let's dive in. So it is the introduction of an era of Ethereum rather than an acute hard fork. And I think the zkEVM era will be, has the potential to be, Ethereum's best era because of what the zkEVM does for Ethereum. So let's stop hyping it up and start to get into the technical details. What do we need to know about what a zkEVM is? What is it? And then we can talk about why, what it is that's so significant to Ethereum. Yeah. So I think, you know, to understand this, you really have to start from the problem statement, right? So the zkEVM really arose in the context of scaling. And basically the fundamental point is that if you run a blockchain, you have these three primary constraints. You have the data, right? Any new block you create first has to get to the user. Then you have the IO: you have to go to disk, you have to get all the data you need to actually verify the block. And then you have the actual verification, the execution, the compute, right? So those are the three main constraints: the bandwidth, the IO, and the compute. That's any blockchain; no matter the design, those are the main constraints. And so if you want to scale this, you can just do the thing where you take that and you just scale it up. And we'll talk about this in a bit. That's actually, to some degree, what we're doing in the short term. And that's what many other chains have been doing. That's a very natural thing, but you do run into limits. You do run into tight limits. And so the zkEVM is this fundamental, like, it comes from the cryptography side, these SNARKs, zero-knowledge proofs.
And it is this fundamental insight that what you can do is you can basically allow nodes to verify that a block followed all the rules without having to re-execute the block. And that's, again, a very non-intuitive thing, right? Normally, a blockchain by its nature is a very symmetrical thing. Every node basically does the same thing. Of course, your block producer produces it, but then every node kind of has to download, re-execute. You're duplicating the effort across the network. And now you're jumping, through this very fancy cryptography, into this world where you still have the same effort to build a block, but then verification in a way is effortless. It has this magical compression element to it. And then specifically what's so important in the L1 context is the real-time element to it. So a zkEVM just allows for this compression. And for example, many listeners I think will already be familiar with the concept of ZK roll-ups, right? So those have been around for a while. And that actually was a huge first jump in this technology, which just allowed for this compressed ZK verification in the first place. But so far, this is done in an asynchronous way. So meaning you have your L2 blockchain that, you know, is its own chain basically, and it keeps progressing. And then afterwards, with some, you know, up to several hours of delay, you come and you basically compute these proofs over a long time, and then you bring them to the chain. And what now is the second huge jump here is to go from this very asynchronous, delayed process to a proving and verification loop, from block creation to proving to verification, that all happens at the same speed as the blockchain, synchronously. So like within a single Ethereum slot. Right now, that's 12 seconds. We will bring that even further down. You have this entire loop, closed loop, within that short amount of time.
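The difference between asynchronous L2-style proving and the real-time loop described above can be sketched as a toy timing budget. All the numbers here are illustrative, taken loosely from the figures mentioned in the episode (12-second slots, roughly 5-second proofs).

```python
# Toy timing model of the build -> prove -> broadcast -> verify loop.
# The loop only counts as "real-time" if it closes within one slot.
# All durations are illustrative, not measured values.

SLOT_SECONDS = 12.0

def fits_in_slot(build_s: float, prove_s: float,
                 verify_s: float, propagate_s: float) -> bool:
    """True if the whole closed loop completes inside a single slot."""
    return build_s + prove_s + verify_s + propagate_s <= SLOT_SECONDS

# Asynchronous L2-style proving: proofs arrive hours later, settled separately.
assert not fits_in_slot(build_s=2.0, prove_s=3600.0, verify_s=0.05, propagate_s=1.0)

# Real-time zkEVM target: ~5 s proofs leave comfortable room in a 12 s slot.
assert fits_in_slot(build_s=2.0, prove_s=5.0, verify_s=0.05, propagate_s=1.0)
```

The point of the sketch is that proof generation used to dominate the budget by orders of magnitude; once proving drops to a few seconds, the entire loop fits inside the slot and verification can happen synchronously with consensus.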
And so basically that's many orders of magnitude of performance improvement. And that really is what unlocks all of these huge gains for the L1. Galaxy operates where digital assets and next generation infrastructure come together, serving institutions end to end. On the market side, Galaxy is a leading institutional platform, providing access to spot, derivatives, structured products, DeFi lending, investment banking, and financing. With more than 1,600 trading counterparties, Galaxy helps institutions navigate every phase of the market cycle. The platform also supports long-term allocators through actively managed strategies and institutional-grade staking and blockchain infrastructure. That scale is real. Galaxy has over $12 billion in assets on the platform and averaged a $1.8 billion loan book in late 2025, reflecting deep trust across the ecosystem. Beyond digital assets, Galaxy is also building infrastructure for an AI-powered future. Its Helios Data Center campus is purpose-built for AI and high-performance computing, with more than 1.6 gigawatts of approved power capacity, making it one of the largest sites of its kind. From global markets to AI-ready data centers, Galaxy is serving the digital asset ecosystem end-to-end. Explore Galaxy at galaxy.com slash bankless, or click the link in the show notes. Euphoria brings one-tap trading to the palm of your hand. Built on MegaEth, Euphoria takes real-time price charts and projects them over a grid of squares. You tap the squares that you think the price will enter in just 5 to 30 seconds in the future. If the price goes into that quadrant, you can pocket anywhere between 2 and 100x your trade. No other application helps you trade faster and with more leverage on market-driving events like FOMC meetings, presidential speeches, or global macro events. Thanks to MegaEth's real-time blockchain, Euphoria is the way to get real-time price interactions with the market.
On Euphoria, you'll be able to compete with friends using Euphoria's real-time social trading experience, allowing you to go head-to-head with your friends. A great party trick if you project the app on a TV. It'll be like the Mario Party of derivatives. To trade on Euphoria, people can deposit stablecoins from any chain or do direct fiat transfers, and everything gets converted into MegaEth's native stablecoin, USDM, in the background. Check it out at euphoria.finance and download the app or find it in Telegram as a mini-app. In 2024, emerging markets generated over $115 billion in annual yield for investors, with yields ranging between 10% to 40%. These are some of the highest, most persistent yields on Earth. The problem? DeFi can't access them. BRICS changes this. Built on MegaEth, BRICS takes emerging market money markets and sovereign carry and turns them into composable primitives you can access straight from your wallet. While DeFi investors earn 3% to 6% on stablecoins and T-bills, institutions have been harvesting 10 to 50% yields backed by sovereign monetary policy. BRICS connects these worlds with institutional-grade tokenization, local banking rails, compliance across jurisdictions, and real-time stablecoin settlement. BRICS does the heavy lifting so DeFi can finally access real collateral and structured products on top of real world yield. Even the best carry trades can be within reach. BRICS brings DeFi's promise to the emerging world and brings emerging market yield to your wallet. Let the yield flow with BRICS. Maybe going back to just what makes a blockchain a blockchain: Bitcoin had this fundamental insight that the way we get rid of a leader in a blockchain is that everyone checks the legitimacy, the authenticity, the correctness of everyone else. And so when some Bitcoin miner mines a block, when it finds the correct hash and proposes that block, everyone else in the network doesn't trust that leader.
They re-execute all of the same work to verify it for themselves. And that's the way that Bitcoin discovered how to have a decentralized network: everyone's checking everyone else. And that re-execute word has just been the status quo for all blockchains. Everyone re-does all of the work. And the way that that impacts blockchains, all blockchains to this day, is that each chain is kind of hamstrung by the slowest node in the network. Or at least there is some requirement for computation that every blockchain has, that, you know, if you aren't at least this fast, you can't keep up with the network, because you can't keep up with executing everyone else's work. And now, you know, some blockchains have different opinions as to how much requirement you have. Bitcoin's is very low. Ethereum has also been a very low requirement because we want to be decentralized. You know, as you said, some chains like Solana or other very fast chains have had a higher opinion as to the computational requirements it takes to do the re-execution. But nonetheless, all blockchains to this day are re-executing all of the same work. And it's redundant. It seems unnecessary. It seems like, is there a way where we can not do all of that extra work and still have a blockchain? And in parallel to that, as you said, with the Ethereum layer twos, what we understand is that there is a way to not do this. And that is with ZK proofs. So in addition to the technological progress of blockchains as a whole, we can make them more efficient. We can, you know, juice some of the throughput. But on a parallel path, there are cryptographic algorithms where, instead of allowing or forcing everyone to do the re-execution, you can simply verify a cryptographic hash, a cryptographic proof. And that part is trivial. It's easy to verify. It's hard to produce, in the same way a block in a blockchain is hard to produce, but it's trivial to verify the correctness of a cryptographic proof.
And that's kind of the trick. That's where we remove the re-execution. A great Elon Musk quote here is, the best part is no part at all. And what a cryptographic proof does is it removes the whole part of re-execution. So blocks in a blockchain get executed once, and then no one has to actually re-execute them. They can just trivially verify them, which allows for a lot of redundant work to get removed from the system. And that allows for work being constrained down to one block producer. And then everyone else is just like, thumbs up, that is correct. And we really take the brakes off of a blockchain system. Now, the reason why Bitcoin wasn't built like this in the first place, the reason why Ethereum or any other blockchain wasn't built like this in the first place, was, you know, the technology of cryptographic proofs also needed to mature. Maybe you could take everything that I just said and run with it, but also talk about the technological parallel path of cryptographic proofs as they've been progressing alongside blockchains. Yeah, absolutely. So actually, just to start with where you started, with the Bitcoin example, because some listeners might have heard this and might have been like, hey, actually, isn't there this asymmetry as well, where a miner does all this very expensive work, but then not every other node has to redo the same mining, right? Indeed, in the mining process, there's the same efficiency asymmetry. And that's actually a very common trick in cryptography, where basically, with mining, you try all these different hashes, you find one hash that has enough leading zeros. That's how the difficulty in Bitcoin worked. And then you can just show people, and it's very cheap to verify. So Bitcoin on the consensus mechanism side already uses a similar trick, right? But on the actual content of the block, right?
So what is in a block? In a Bitcoin block, it's all the transactions. Each transaction comes with a signature. So you have to actually verify the signatures. You have to say, okay, balance was moved from this account to that account. All of the actual operations of the blockchain, that's the re-execution part, right? So Bitcoin, again, because this is a very typical trick in cryptography, has this asymmetry of generation versus verification. It uses that for mining, because that's easy to do with proof of work. It's very, very hard to do this for the actual operations within a block. And so this is what the main unlock here is: basically, now we're bringing the same efficiencies that people are used to from this one-miner-mines, everyone-verifies-easily setup, we're bringing that same efficiency to the entire block. And of course, on Bitcoin, the actual Bitcoin block is very small. It's a very simple operation. On Ethereum, because you can run smart contracts and we are massively scaling the throughput, it's much more complex. The vast majority of the overhead in processing and following the chain is not the consensus part, not the proof of stake part, but it is the actual contents of a block. So what has changed on cryptography? Actually, my friends from the Xerox PARC team, they are one of those cryptography research labs. They always talk about, I think they call it, maybe I'm getting this slightly wrong, but they call it the first generation of cryptography and the second generation of cryptography. What was the first generation of cryptography? It was basically handcrafted algorithms for very specific use cases. So a signature algorithm or a hash function or anything that fulfills a very specific purpose and you can use it in a very specific context. And those are amazing, right? And that's been the story of cryptography for the last 50 years, right?
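The mining asymmetry discussed above (expensive to find a valid nonce, one hash to check it) can be shown in a few lines of Python. The difficulty value is a toy number chosen so the example runs quickly; real Bitcoin difficulty works on the full 256-bit hash target, not hex-digit prefixes.

```python
import hashlib

DIFFICULTY = 3  # toy difficulty: required number of leading zero hex digits

def mine(block_data: bytes) -> int:
    """Expensive side: try nonces until the hash has enough leading zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int) -> bool:
    """Cheap side: a single hash confirms the work was done."""
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

nonce = mine(b"block 1")          # thousands of hash attempts on average
assert verify(b"block 1", nonce)  # exactly one hash attempt
```

This is the "first generation" trick Ansgar describes: the asymmetry covers only the proof-of-work puzzle. Verifying the transactions inside the block still requires re-execution, which is exactly the part general-purpose proofs later address.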
It's basically more sophisticated, special-purpose mechanisms. And those were already very mature when, say, Bitcoin started, right? This is why they were able to just take the concept of hash functions off the shelf, and you can do amazing things, signature mechanisms, all that kind of stuff. What is very new, it basically started, I don't know, a decade ago or something like this, probably academically a little bit earlier. I'm not actually a cryptography expert myself, so I don't know the exact early story there, but that's basically cryptography 2.0 in a sense. It's general-purpose cryptography. It is basically now the ability to make cryptographic statements about arbitrary computation. Instead of having to handcraft it for a specific use case, you're going to this general-purpose world. And this is a huge leap, because it means that instead of just, say, signing a message, you can prove whatever you want. Anything Turing complete, any execution whatsoever, you can now compress, you can make a cryptographic statement over. And that was a giant leap. It was pulled from academic theory to feasibility, I think, through a lot of funding that came from the blockchain space, of course. And it's really incredible progress, and that progress, I would think of it as several stages. So one was what we saw with ZK rollups, and then of course already prior to that, special-purpose chains like Zcash, right? It was just the ability at all: you have a protocol and you can make a proof of it. You can basically prove that a block of a blockchain is valid. What we've seen since is this progression of the tech stack. So for example, all of these earlier stages, again, Zcash, early ZK rollups, what they all did is they basically handcrafted the rules of the chain that they were trying to verify into something very low level, it's called circuits.
You basically express it in very low-level constraints that you then make these zero-knowledge proofs about. And where we've been going from there, it really parallels the early progression of computers as a whole, right? We went from, you have to manually specify every individual system you want to prove. Instruction, yeah. Yes, as like the set of constraints of circuits. It basically went from there to introducing, it's such an elegant idea, but it's crazy that it works, just introducing this intermediate instruction set. So it's called an ISA, instruction set architecture. And you can think of it like how a processor in a computer has instruction sets. So x86, for example, right? Like Intel or ARM or whatnot, right? Basically, it's what instructions does your processor understand? And the way these modern ZK systems are now built is you pick one of those instruction sets. The one that is actually becoming the standard in Ethereum right now is RISC-V. RISC-V, in principle, is similarly just a list of operations that your processor could do, right? It's often run in a virtualized way. So it's not actually run on real RISC-V hardware. It's mostly run in a virtualized kind of way. But basically, it's just a list of instructions. And then you write zero-knowledge provers that can just prove arbitrary RISC-V code. So you're just saying, look, give me any RISC-V code and I just have this machinery that can make cryptographic statements about it. And what that now unlocks is, instead of having to handcraft, like the early zkEVMs, they were literally handcrafted EVMs inside of ZK systems.
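The instruction-set idea above can be illustrated with a toy register machine. This is not RISC-V and not a prover: it is a minimal interpreter loop of the kind a zkVM proves statements about, so that any program lowered to the instruction set becomes provable without a custom circuit. The opcode names loosely echo RISC-V mnemonics for flavor.

```python
# Toy register machine standing in for an ISA interpreter.
# A real zkVM proves a commitment to the execution trace of a loop like
# this one, which is why one prover covers arbitrary compiled programs.

def run(program, registers=None):
    regs = dict(registers or {})
    trace = []  # the per-step states a zkVM would commit to and prove
    for op, *args in program:
        if op == "li":        # load immediate: li rd, imm
            rd, imm = args
            regs[rd] = imm
        elif op == "add":     # add rd, rs1, rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] + regs[rs2]
        elif op == "mul":     # mul rd, rs1, rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] * regs[rs2]
        else:
            raise ValueError(f"unknown instruction: {op}")
        trace.append((op, dict(regs)))
    return regs, trace

# Any higher-level program lowered to these ops is handled by the same loop.
program = [("li", "x1", 6), ("li", "x2", 7), ("mul", "x3", "x1", "x2")]
regs, trace = run(program)
assert regs["x3"] == 42
```

The design choice mirrors what the episode describes: the hard cryptographic work targets one small, fixed instruction set, and compilers do the rest, which is why an existing Ethereum client can be compiled to the ISA and proven without being hand-translated into circuits.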
Now you can just literally compile. Basically, you can take an Ethereum client, and instead of compiling it to whatever your local machine has as an instruction set, instead of compiling it to x86 or something, you're now just compiling it to RISC-V, and then you just get the ZK proving for free, because RISC-V is just a typical kind of endpoint for compilers, right? So basically you're modularizing the tool chain. Of course, that's only possible now with all the efficiency gains, because of course you're losing some benefits of handcrafting all the optimizations. But this really is a phase change in how feasible it is to do this for big, complex projects. And so really the way Ethereum does this zkEVM is, again, of course the real world is a bit more complex, but in principle, you can really think of it: we take the existing Ethereum clients and we just compile them to RISC-V, and then we just have provers that specialize in making proofs over RISC-V. And it's really amazing how far the industry there has gone to make that feasible. And then the last jump, the last big conceptual jump from there to this becoming feasible for us, is the real-time element. So you arrived at that world and you could do that within an hour. And sometimes, if the block is actually convenient to prove, maybe you can get it down to a few minutes, whatever. That's the world that we used to be in. And then we have had this massive industry collaboration effort that started a year, year and a half ago, with Justin Drake really pushing super hard on this. And these teams, this is really mostly driven by teams outside of the Ethereum Foundation, these teams have done an absolutely amazing job. And I would say the last year was really the year of performance, of real-time performance. Throughout the last year, teams just kept pushing this down orders of magnitude. And now we're at the point where we are starting to achieve the target zone.
So we are actually able to consistently, reliably prove a full Ethereum block within five seconds, something like that. And that's basically the promised land, because now we have all the technological building blocks. Now we can talk about the rollout and all these things, but from the cryptography side, we now finally, for the very first time ever, have all the elements we need to run a general-purpose blockchain at real-time proving speeds. And that's something that has never been possible before. I really like the idea that there have been these three parallel paths of computing, first starting with computers, where they were first narrow, and then we were able to make them generalized. And then we were able to make them generalized and fast, which is where, you know, modern computers are now to this day. And then we created blockchains, you know, virtualized ledger-based computers in the, you know, in the sky, decentralized systems. They started narrow with Bitcoin, and then we learned to generalize them with Ethereum. And then we learned to generalize them and make them fast with many other smart contract chains. And now we are doing the same thing with cryptography. Started narrow with cryptography, learned to make it generalized. And now we are making it generalized and fast. And that generalized-and-fast unlock on the computing tech tree of cryptography is now being able to be taken and bestowed into Ethereum, which is what we're going to talk about for the rest of this episode. So now that we have the zkEVM and it's in the Ethereum blockchain and it's up and running, what does that actually change with Ethereum? When we get to this point, how does Ethereum actually change? Right. So of course, we're not there yet, but that's where we're going. And so why is this useful? So coming back to scaling, right? I said that there are basically these three main elements of scaling.
There's the bandwidth, the IO, and then the actual compute. Now, the amazing thing about the real-time zkEVM is that it is actually the core of a broader change. The way I would say it is, it helps us scale all three of these, but not just on its own; it's basically the unlocking piece that enables a broader transition that addresses all of these elements of scaling. And so that's why, when we talk about the zkEVM, to me, it's more like the most exciting element of this broader change. And that's why, when you said at the top of the podcast, this might be the biggest change ever, I would agree: not just the zkEVM itself, we'll talk in a second about statelessness, about data availability sampling, all these things come together to unlock this. And so let's take it step by step. So of those three constraints, the one immediate impact you get is on the compute side, right? Because that's the nature of ZK proofs, right? You're able, with very little compute effort on the verification side, to verify arbitrary-length execution. So no matter how much you fill the block, now, of course, we can talk about constraints. There's still block building; some node somewhere needs to do that. So it doesn't give you literally infinite throughput, but basically, right, whatever length of computation you have, you can compress it down into a constant-size proof, and then you can verify that with just very little compute. So compute scaling, that's in a way the easiest one. That's the one that you get very easily. Now you look at the other two, and you're saying, okay, how does it impact IO, right? So historically, traditionally, when you execute an Ethereum block, what you do is you start executing, you do some compute. At some point, you want to load some state. Actually, already at the beginning of a transaction, you need to load your account.
You need to load the account that you're calling into, that you're sending ETH to. So you immediately need to go to disk. You have this intermixing: sometimes you go to disk and load a value, sometimes you do some compute, then you go to disk again. One actual change to Ethereum that we're already doing before the zkEVM is called block-level access lists. It adds annotations to a block saying: this is the data you'll need. So what happens now is that you go to disk at the very beginning, you bring in all the data, and then you can do the execution. But you still have this element of having to go to disk both before the block and then again after the block, because you have to update all the values and then also compute the new state root. So how does it look with the zkEVM? Well, there are a few things that are fundamentally improved. The important part is that the zkEVM already takes this in as part of the claim: hey, assuming the blockchain was in this state and I apply these transactions, then the next state is this. So you no longer need to go and load the values from disk; you're naturally saving this IO on the load side. The thing you normally still have to do is go and write the updates, right? You still have the state of Ethereum, so after you've verified the block, you still have to go and say: okay, these values changed, and apply that change. But one, that's no longer in the critical path. You can do that after you've already finished verification. So if you have a valid proof, you can already vote. You can say, ah, this block was valid, and then afterwards I go and actually apply the update.
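The block-level access list idea described above can be sketched as a simple three-phase loop. This is a toy illustration, not the actual EIP mechanics; `execute_with_access_list`, the dict-as-disk model, and the `apply_tx` callback are all hypothetical names for the shape of the change.

```python
# Hedged sketch: instead of interleaving disk reads with execution, read
# everything the block's access list declares up front, execute purely in
# memory, then write the results back once at the end.

def execute_with_access_list(disk, access_list, txs, apply_tx):
    # Phase 1: one batched "trip to disk" for every key the block will touch.
    prefetched = {key: disk.get(key, 0) for key in access_list}
    # Phase 2: pure in-memory execution, no further disk reads.
    for tx in txs:
        apply_tx(prefetched, tx)
    # Phase 3: write the updated values back after execution is done.
    disk.update(prefetched)
    return prefetched
```

The point of the restructuring is that the slow IO is batched at the boundaries rather than scattered through execution.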
So in terms of what the current price of this Uniswap pool is, or what the balance of this account is, I might only update this on disk after I already know that the block is valid. So that's a natural benefit you get. But if you want to push it further, and this is what I was saying, this is one of those changes that is enabled by the zkEVM but is its own change: it's stateless Ethereum, or partially stateless Ethereum. So what does that mean? Well, today, any node in the Ethereum network basically has to have the full state, and with re-execution that is unavoidable, right? Because if you want to verify a block, you have to go and load all the data again; you have to have it all locally. Once you have the zkEVM, that becomes optional, because you don't actually need the data locally to double-check the validity of the block. So in principle, what you could do is throw away the entire state. You could keep only the root commitment and just always update the root commitment, and that's it. In practice, because Ethereum nodes have multiple functions (they also operate the Ethereum mempool, they have to understand the validity of transactions in flight, all these kinds of things), you don't want to run fully stateless; you want to run in what we're calling partial statelessness. So for example, there's this proposal called VOPS, validity-only partial statelessness. It means you keep a specific subset of the state, and that subset can be defined by several different rules. It can be, say, the balances of all the accounts, or, if you are specifically interested in some state that belongs to you as the user, you can define what state you're interested in. But basically, now you can keep a subset of the Ethereum state, and that's totally safe because of the zkEVM, right?
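The partial-statelessness idea just described can be sketched as a node object that holds only a chosen slice of state. This is a hedged toy, not the real VOPS specification; `verify_proof`, the `keep` predicate, and the diff format are all illustrative assumptions.

```python
# Toy sketch: validity comes from the zk proof, not from holding the full
# state, so the node keeps (and writes to disk) only the subset it cares about.

def verify_proof(pre_root, post_root, proof):
    # Stand-in for a real zkEVM proof verifier (hypothetical encoding).
    return proof == f"proof:{pre_root}->{post_root}".encode()

class PartiallyStatelessNode:
    def __init__(self, keep, subset, state_root):
        self.keep = keep              # predicate: which state keys to retain
        self.subset = dict(subset)    # local slice of the full state
        self.state_root = state_root  # commitment to the *entire* state

    def process_block(self, new_root, proof, state_diff):
        # 1. Check the proof; no re-execution, no full state needed.
        if not verify_proof(self.state_root, new_root, proof):
            return False
        self.state_root = new_root
        # 2. Apply only the part of the diff inside our subset:
        #    that is the only IO this node pays for.
        for key, value in state_diff.items():
            if self.keep(key):
                self.subset[key] = value
        return True
```

Note how state the node does not track simply falls through: the root commitment still covers it, so the node stays in sync with the chain without storing it.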
And you only have to apply the diff, you only have to go to disk, you only have the IO overhead of updating that subset. So that's the second piece: you have the zkEVM for compute, and now partial statelessness for more optimized IO. And also, by the way, for keeping your disk size contained. We'll talk about state growth maybe towards the end, but basically, you don't have to have a huge disk. And then that leaves the third one, which is bandwidth. How do you actually keep scaling the chain with the ZK system while keeping bandwidth requirements the same, or even reducing them? Well, that's yet another separate trick that's also enabled by the zkEVM, but is separate. And that is: you no longer actually need to download the full block. And that makes sense, right? Because you get the ZK proof, you have to download the proof, and the proof tells you: hey, assuming there is a block with this hash, once I apply the block, this is the result. And that's proven. So the only thing you need to know about the block is that it exists. And that's a bit of a nuanced thing. Why do you even need that? I mean, someone clearly must have created it, otherwise they could not have created the ZK proof. So why do you have to verify that it exists? Well, for the nuanced reason that otherwise you can withhold the data. That's also why, for example, we even have blobs in the first place for L2s. It's the same story: you have to prove that the blob was published so anyone can access it, so anyone can get access to the transactions that were applied. But what you can do, and this is again where the synergy with the L2s is just a beautiful story: we've already built out specialized functionality for verifying the existence of data very efficiently without downloading it all.
It's called data availability sampling, it's called blobs, right? So what we will do is take the Ethereum blocks and basically become our own rollup, in a sense. We're putting the block data into the blobs; it's the block-in-blobs EIP. And with that, all an Ethereum node has to do is sample the data. And we are in the process of making that more and more efficient, because we want to provide more and more data for our L2 partners. That now naturally also benefits ourselves, because you can have bigger and bigger blocks while keeping the footprint in terms of bandwidth very constrained. So, coming back: we have the zkEVM, we have partial statelessness, and we have block-in-blobs, data availability sampling. Together, they scale bandwidth, they scale IO, and they scale compute. And that is how you use all of these elements to scale the blockchain. And then there are some nuances. You don't get everything for free. You have state growth, which we have to address separately and can talk about. And you have things like being able to efficiently sync an Ethereum client, or being able to efficiently run an RPC node, you know, like what Infura is doing, these kinds of things. So there's more to scaling than this. But the core story is that you have these three constraints, and the zkEVM directly and indirectly addresses all three. You zoomed in on each one of those three. And as you just said, you put those three together, and that's how a blockchain scales. We improve all three of those things. I want to zoom out and really focus at that level of advantage, when we improve all three comprehensively. You really kind of said it when you said Ethereum uses its own data availability to be a ZK rollup.
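The sampling idea mentioned above can be sketched in a few lines. This is a deliberately simplified toy: real data availability sampling also erasure-codes the blob so that withholding even a small fraction of the data is caught with high probability, and samples are served over a peer-to-peer network; `fetch_chunk` here is just a hypothetical callback.

```python
import random

# Toy sketch: instead of downloading a whole blob, query a few random chunk
# indices. If the publisher withheld the data, random samples come back empty
# and the node rejects the block as unavailable.

def sample_availability(fetch_chunk, num_chunks, samples=20):
    for index in random.sample(range(num_chunks), min(samples, num_chunks)):
        if fetch_chunk(index) is None:  # chunk missing or not served
            return False
    return True
```

The bandwidth win is that the node's cost is a fixed number of small samples, independent of how large the block data grows.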
As I understand it, when the zkEVM is up and running and operational and fully fleshed out and forked into Ethereum, the Ethereum layer one has the performance of a ZK rollup. In fact, it maybe even is a ZK rollup; it just also is the layer one itself. And so we get all the performance benefits of rollups, we get to ZK everything, which takes the brakes off the Ethereum layer one, and we already have the infrastructure needed, with the data availability sampling, for this to get done. And so from a performance perspective, the Ethereum layer one, which is known to be a slow, antiquated, you know, expensive blockchain to do computation on, upgrades itself to have the performance properties of a ZK rollup. Is that a true statement that I just said? Yeah, I think that's right. And I think it's important to understand why Ethereum is even so slow, if we ask that provocative question. The one really important element is that core to Ethereum's design philosophy is this guarantee that Ethereum never wants to compromise on, which is easy verifiability and auditability. The world that Ethereum always wants to be in is that any user of Ethereum can easily, if they want to, verify or audit that the protocol is following the rules. And why is this so important? People are always like, well, in practice many users don't do it. And other chains, yes. For example, if you're trying to join one of those high-performance chains that scale just by rapidly increasing hardware requirements, it's actually really, really hard to run a full node for one of those chains.
Because not only do you need a heavy machine, but often you're not even allowed to join the peer-to-peer network, because it's so performance-sensitive that they have to have whitelists for which nodes are even allowed into the network; otherwise they're just too brittle and immediately collapse. So why does it matter? Because I think people always think about proof of stake as: well, there are validators and they vote on what the current state of the chain is. In Ethereum, validators basically get handed the current rules of the chain by the community. Any hard fork is a social governance act: the Ethereum community decides that now there are new rules to the chain, and the validators only vote on: okay, given those rules, which blocks did I see? Which blocks follow the rules? There's no individual decision that an Ethereum validator makes. They just watch the chain and attest to what they see. In other proof-of-stake chains, while in principle that should be the same thing, what in practice happens is that because any non-validator user of the chain is just a light client (you can't just participate in the chain), any user in those chains just trusts the majority of validators. So in practice, those validators determine what the rules of the chain are. In a chain that does not center verifiability, validators de facto control the rules: if the majority of validators want to run a different set of rules, they can do that. In Ethereum, that's not the case. Validators can't accept or reject a fork; at most they can make a fork of their own. They just get handed the rules by the community, and the ultimate power always lies with the community. And so that's why verifiability and auditability are so core to Ethereum.
And that's why we have historically been slow to embrace scaling: because it would endanger that property. And now with the zkEVM, we have this magical way of getting the best of both worlds: the full verifiability and the full performance. Although I will say, all of this is a bit too black and white. Actually, here's what's been happening. For example, while I'm involved with our zkEVM work, it's not my personal focus. We have experts: we have Justin, who's been on the podcast many times before; we have Kev, who's doing absolutely amazing work there; we have many people working on this full-time. I'm actually focused much more on short-term scaling. And while it is true that with traditional scaling there's a limit you can reach, and otherwise you have this fundamental trade-off you can't escape, Ethereum historically has been very much in this mode of: well, we're working towards this eventual end state, and we know we want to eventually do ZK, so we'll focus on that. And as of, say, a year, year and a half, two years ago, I think the mindset in Ethereum has shifted a lot towards saying: look, we're now in this moment in time. Real-world adoption is here. It's no longer this future thing that we're building towards. So we have to act now, and it's a really very non-trivial thing. We have to find the right balance between still working on these Manhattan Project-style jumps, like the real-time zkEVM, which, like you said, I think is the biggest thing Ethereum probably will ever have done, and not just waiting another three years for it to arrive. We have to do things now. And so this is why scaling is this perfect example: we have this really good hybrid approach.
The next step: we started last summer saying the zkEVM is three years out, and we will in a second, I think, talk more about the sequencing of the exact rollout. But we don't want to wait three more years, right? That is what the old Ethereum would have done. What we're actually doing is: we came up with this scaling plan, and it's a very continuous, smooth function. We have this rule of thumb: our goal is 3x scaling every single year. So we are increasing the throughput of the Ethereum blockchain by roughly 3x every year. This is more of a goal, an ambitious statement. It's not clear that we'll be able to hit it every single year, but we think we see a path at least; it's a possible outcome. In practice, the first three years of that are scaling with traditional means, and from that point on we have a smooth handover into the zkEVM paradigm. So it's not all just black and white, with Ethereum only doing the zkEVM. I think we have the best of both worlds now: over the next two, three years we're doing the zkEVM in parallel, but we're still doing the traditional scaling, and then we jump into the zkEVM paradigm. And that means if you're a builder considering building on Ethereum L1, instead of having to think, okay, when is this hard fork and what exactly does it change, you can just say: 3x every year. You look at the throughput today and you can very simply calculate: what throughput needs do I have, is the L1 a good fit or not? It's a very simple story, but under the hood it has these two synergistic elements to it. Sorry, that was a long answer. Yeah, yeah. Well, the idea is that we're pressing the gas on scaling on multiple fronts, not waiting for the Manhattan Project of the zkEVM, which, you know, has been in the Ethereum roadmap since Genesis, I think.
Like, we've understood the theoretical possibility of turning the EVM into a ZK-proven system; we understood that theoretically back in 2015. Now we're in 2026, and oh, this is now just an engineering challenge. We're in the last mile of this, and it's basically almost here. And in the meantime, we are scaling on the more traditional front as well. I want to get into the qualitative nature of the scaling of the zkEVM. With block times and block sizes, those are the two ways that you get throughput: how big is your block, and how frequently do blocks come. Size times frequency. So can we talk about what the nature of scaling with a zkEVM does? Does it help lower block times? Does it just increase block size? Ansgar, I want both fast and big blocks. I like my blocks big and fast. It would be great if we could increase the size of blocks, but there's also a very important element of block times being critically important for trading and finance. So how does the zkEVM impact both of these variables? Right. So to answer that question directly: the zkEVM is not a panacea. It specifically addresses the throughput side. It gives us much, much, much bigger blocks within the same kind of time constraints. To be fully transparent, it's even a small extra strain on the timing, just because you have one extra step: in between block creation and block verification, you have to have this proving step. But that's a minor constraint. It in itself does not give us lower latency. And this is why, when you said at the top that it's the biggest change ever, I was actually tempted to say: well, to me, that's true on the execution side of the blockchain. Same as with Bitcoin, how we said there's the consensus mechanism, proof of work in that case, proof of stake in ours.
And then there's the actual processing of the blocks, Bitcoin transactions, Ethereum transactions, that kind of thing. For the actual execution, for the transaction bits, the zkEVM and the related changes really are the major story for the next five years. In parallel, we are also now putting together this really, really exciting roadmap on the consensus layer side. And latency is all a consensus layer story, right? Because that's where the heartbeat of the blockchain is determined. And so we have this separate process. And, you know, this is maybe setting us up for a separate podcast episode; you should bring someone on who's specifically focusing on that type of work at the EF and/or the broader ecosystem. Because I think we have this really exciting roadmap there that's getting us to much faster finality. So right now, finality in Ethereum takes two epochs, that's 64 slots; on average two and a half epochs, actually. So it's a long amount of time. We're bringing this down all the way to basically single-slot or two-slot finality. It's going to come down by orders of magnitude. So that's super exciting. And then even within a single slot, instead of 12 seconds, we have a story there that's going to gradually get us down from 12 seconds to, I don't know, eight, six, four, much, much faster. And then there are separate work streams around: can you get even faster inclusion guarantees? So that's the heartbeat at which the chain actually progresses and you get guarantees about the result of your transaction. But can you maybe get, in principle, speed-of-light, just round-trip-time confirmation that your transaction will be included? Ideally, I want to click a button, and within the 100 milliseconds it even takes me to realize something happened, boom, I have the confirmation my trade will be included. And then within, say, four seconds, I know at which price.
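The finality numbers mentioned above are easy to sanity-check as back-of-envelope arithmetic: Ethereum has 32 slots per epoch and 12-second slots, so "two to two and a half epochs" of finality delay is on the order of 13 to 16 minutes.

```python
# Arithmetic behind the finality discussion (protocol constants, not code
# from the episode).

SLOTS_PER_EPOCH = 32
SLOT_SECONDS = 12

two_epochs = 2 * SLOTS_PER_EPOCH * SLOT_SECONDS              # 768 s, about 13 min
two_and_a_half = int(2.5 * SLOTS_PER_EPOCH * SLOT_SECONDS)   # 960 s, 16 min
```

That is the gap the single-slot/two-slot finality work is trying to close: from minutes down to one or two slot durations.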
I think that's the world we ideally want to be in. And we have a really, really exciting roadmap there as well, but it is a separate roadmap from the zkEVM. Okay, understood, understood. So the zkEVM massively increases block sizes; I don't know if you can put numbers around that. And then it adds a marginal increase in block times. Can that block speed come down in the future? What does it take for block times to get faster? And is that something we are aspiring to in the roadmap? Yeah, that's what I was talking about: we are aspiring to that. And it's not just aspiring, which sounds so indeterminate; beyond indeterminate optimism, we actually have a plan by which it will come down. It will come down as early as towards the end of this year. That's not quite certain yet, but basically we're starting to make this a priority as well, and it will rapidly become a major priority. Maybe the part that I wasn't sure of is: maybe the block speeds don't necessarily come down, but transaction assurances come down very, very fast. And you're kind of saying, well, that's what people want anyway. Is that correct? Well, you basically have three things: you have the time to inclusion confirmation, you have the actual time to the next block, and you have the time to finality. All three of these will come down. The heartbeat of the chain, the time to next block, will actually be the one that's only going to come down by maybe a factor of three, something like that, from 12 seconds to maybe four seconds eventually. Maybe we can go lower, but I wouldn't necessarily want to promise this. I think the other two are actually the more exciting ones. Finality will come down massively, and time to inclusion, that's still a bit more of an exploratory process, but that will also come down massively.
So I think, basically, yeah, block times as well will come down, but none of this will be through the zkEVM, although of course it will all be part of an integrated system. Right, okay, understood. Okay, so you're saying there's a variety of ways in which Ethereum speeds up broadly, and then, zooming into what speeding up means, there are nuances, which you just went into. And at least from a user experience perspective, we have ways of providing essentially instant speeds from the perspective of a user. Right. Okay. Let's talk about the rollout plan for the zkEVM. We are in a phase of Ethereum where there is no zkEVM. In the future, we will be in a phase of Ethereum where it is all zkEVM, but it is not an acute moment, as I understand it. How do we go from A to B? What does that roadmap look like? Of course, because this is a multi-year process, as is typical, there are very concrete steps for, say, the next 12 months, and then, as you go further and further into the future, I can more point out: that's the current plan, these are maybe the open questions, these are the directions. That's how these things always work. The interesting thing, as I said at the top of the podcast, is that it's not just a one-time hard fork. There will be a one-time hard fork, and that is about the eventual switch from what will come first, which is optional proofs for those nodes in the Ethereum network that want to consume proofs instead of re-executing. Then at some point there will be this moment in time where we say: okay, now Ethereum just runs on proofs. Of course, you can still optionally run a node in re-execution mode if you want to, but by default the network now guarantees that there will always be proofs.
And from this point, the switch to mandatory proofs, is when you really get the scaling gains. Because before then, you're not yet mandating anything, right? You're still allowed to run a full re-execution node, you're allowed to be slow, and the network will wait for you. Exactly. And after that, it's like: okay, if you want to be a re-execution node, that's a special-purpose role now that requires special-purpose hardware. Of course, internally that is a big project: how do we make sure that if we run at much faster speeds, you can still run an RPC node in a performant way? That's a separate work stream we're working on. But the typical validator, and even the typical full node out there that's not a validator, those people will by default all switch over to ZK at that point. Now, again, as I was saying, before then is this phase of optional proofs. That has not started yet; right now we're in the proof-of-concept phase. I think Justin presented that in Buenos Aires: there's this proof of concept of, hey, see, my validator can in principle already run on ZK. But if you're a validator, you can't use this yet today. The idea is that very soon, meaning within say the next 12 months or so, we are starting to put this out there in an early production-ready state. And of course we will give very clear guidance, like: this is the specific, nuanced level of confidence we have so far in the security of the system, all these kinds of things. And for example, at that point we could not yet have the majority of the network run on this, right? Because if there's some bug in it or something, you very much still want to have the backbone of all the major validators running the existing re-execution path.
But if you are just a full node for hobby purposes, or maybe you're a validator on a very weak machine, you might be tempted to just transition over at that point. So that will be the first step. And then, one thing we haven't really touched on yet, or only a little bit, is that there are actually quite a few technical requirements we need to hit before we can move the bulk of validators over. I can briefly go over those. One we already touched on is block-in-blobs, which will come at some point, where we basically say: look, we now put the block into the blob data. There's also the sampling aspect to it: if you are a re-execution node, you still download all of it, but if you are a ZK node, you can start only sampling it. But this will come after the initial optional-proofs rollout. Before then, a validator basically has to download the proof but also still has to download the full block. So it means they don't yet gain any bandwidth benefits; they only get the IO and the compute benefits. So we have block-in-blobs that will have to come. We have general networking improvements that are in the works. We have repricings, meaning we have to make the parts of the Ethereum chain that are especially hard to ZK-verify a bit more expensive; we basically rebalance the costs. And then the most important technical dependency for mandatory proofs, the full transition, is actually related to the statelessness element. Specifically, we need to transition the Ethereum state tree over to a new format. Long-term listeners might be familiar with this elusive Verkle tree idea. Verkle trees were this early Ethereum idea: today, any account in Ethereum is part of one huge Merkle tree structure, and every block, the entire tree is updated.
And, you know, at the leaves, you have your balance and all these individual elements about your account. The original idea was to transition over to a more efficient format called Verkle trees. The unfortunate fate that Verkle trees had is that they were just never really necessary. They were always one of those nice-to-have features. Back when we were not quite sure how aggressively we wanted to scale the chain, or how quickly state growth would become a problem, there were some worlds in which it would have been a more urgent topic. But because we never went down those routes, it was always just beyond the edge of urgent enough to ever do. So we never ended up shipping Verkle trees. But the nice thing is, we already have a lot of prior work, and now we can go directly to the next generation of cryptographic structures here. So instead of a Verkle tree, we're going to something called a unified binary tree. It's somewhat similar. One difference is the shape: a Verkle tree is a very wide tree, while a binary tree is a very narrow tree. But the main difference, put simply, is that the binary tree uses a post-quantum secure hash function that is also very efficient to prove. So it already fits into this future world that Ethereum is going to, whereas the Verkle trees were a standalone piece that doesn't quite fit. And the nice thing is we have a lot of prior expertise. We have Guillaume, who has been the champion of Verkle trees, and he's been frustrated to no end that we never ended up shipping them. Now his time has come, so he's been very excited. He's already working towards this binary tree upgrade behind the scenes, and he's doing an amazing job there with his team.
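The binary tree shape described above can be sketched in a few lines: accounts at the leaves, every parent the hash of its two children, a single root committing to everything. This is a toy, not the actual proposal; the real design uses a zk-friendly, post-quantum hash function and a fixed key-addressed layout, whereas this sketch just uses SHA-256 to show the structure.

```python
import hashlib

# Toy binary hash tree: narrow (2 children per node), as contrasted in the
# text with a wide Verkle tree.

def hash_pair(left, right):
    return hashlib.sha256(left + right).digest()

def binary_root(leaves):
    level = list(leaves)
    # Pad to a power of two with empty leaves.
    while len(level) & (len(level) - 1):
        level.append(b"\x00" * 32)
    # Fold pairs upward until a single 32-byte root remains.
    while len(level) > 1:
        level = [hash_pair(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

The "narrow" shape matters for proving: each membership proof is a path of small two-input hashes, which is cheap inside a zk circuit.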
And so actually, over the next two years, I would say the biggest individual story we'll have in Ethereum will be this upgrade to binary trees. That will probably, over the coming months, start to become a bigger and bigger topic; people will start hearing about it. And that will then enable very efficient stateless or partially stateless operation for nodes. So to recap: starting a year or so from now, we will roll out optional proofs. Those optional proofs will initially only be immediately effective for compressing computation and helping somewhat with IO load, but you still have to run in stateful mode. Then we will, bit by bit, start bringing in the pieces that unlock the full potential of the zkEVM, and in parallel keep hardening the zkEVM's security properties, so that by the time we are running out of conventional scaling means (and that's why all of this is so beautiful: we have basically two and a half, three more years of traditional scaling ahead of us), we will be ready to seamlessly move over to the zkEVM. So: one year from now, optional proofs; two and a half years from now, plus or minus, the full transition to mandatory proofs. And then we'll have all the pieces ready to immediately keep scaling based on the zkEVM after that. So that's the full rollout.
Right, so the way that it happens, as I understand it, is that in a year we will introduce optional proofs. The Ethereum enthusiasts of the world, who just love Ethereum, tinker with Ethereum, run nodes for Ethereum out of pure passion, will start to run these optional ZK proofs. They will be the pioneers of the transition of Ethereum from a classical blockchain into a ZK blockchain. And that will give Ethereum researchers like you, the EF, a lot of data about what it looks like to be in production, because of these enthusiasts running this optionally just because they love Ethereum so much. That will give you guys the information you need to do the prerequisite upgrades that are needed to actually get to a full mandatory zkEVM fork. And as you alluded to, it will also give us insight into in-production use of the zkEVM. Maybe there are bugs. If there are bugs, we need to find them before we make proofs mandatory. And, you know, all the different clients will have their own version of the zkEVM, and we'll be stress-testing all of those by using them in production. Basically, there's a whole era of a kind of demo Ethereum zkEVM, and that will take, I think you said, somewhere around two to three years. As we run out of classical scaling, we will have the hardened data and the information, and we will have done the prerequisite work to unlock the mandatory zkEVM. Around two and a half to three years from now, the mandatory zkEVM hard fork will happen, and then Ethereum will make the transition to being a zkEVM blockchain. The story doesn't end there, though. What happens after the mandatory zkEVM fork? How does the story continue beyond that point? And just, by the way, to clarify a little bit for people who maybe think, oh, we are now gung-ho, starting to release optional proofs for anyone who wants to be an experimental guinea pig here.
I think when we are ready to start releasing this, there will be very explicit guidance around what it is for and what kind of production-grade readiness it has for which use cases. You can think of it as a question of how many nines of reliability. Ethereum mainnet must never go down; we have 100% uptime and we're not willing to risk that, so we're basically willing to take extra precautions there. But importantly, if you're, for example, at some point running a ZK validator and you actually hit a bug, the worst that happens is mild: no one gets slashed. You're briefly kicked off the chain and then you automatically flip back over to normal re-execution mode. Worst case, if we're already in this partially stateless world, you might first have to re-sync some state, so you're offline for a couple of hours and then you're back online, back on the chain. So none of this is reckless; we do it very responsibly, just to clarify that. And basically, the way these absolutely amazing ecosystem ZK teams talk about this: last year was the year of performance, getting to real-time zkEVM. This year is the year of security, getting to an absolutely hardened state, a level where we are very confident in the security.
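The failure mode described here, where a prover bug demotes a node to re-execution rather than getting anyone slashed, can be sketched as simple control flow. Everything below (`verify_proof`, `re_execute`, the return labels) is a hypothetical stand-in for illustration, not real client code.

```python
def process_block(block, verify_proof, re_execute):
    """
    Optional-proof node sketch: prefer the ZK proof path, but if the
    prover/verifier has a bug, fall back to classical stateful
    re-execution. Worst case the node is briefly behind while it
    re-syncs; nothing is slashed. All names here are hypothetical.
    """
    try:
        if verify_proof(block):
            return "accepted-via-proof"
    except Exception:
        pass  # a prover/verifier bug is treated like a failed proof
    # Fall back to normal re-execution mode.
    return "accepted-via-reexecution" if re_execute(block) else "rejected"
```

The point of the sketch is the asymmetry: the proof path is an optimization, and re-execution remains the safety net throughout the optional-proof era.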
Then next year, I think, will be the year of productionizing the zkVMs, and the year after will be the year of the transition to mandatory proofs. So: performance, security, production, and then full transition. That's how we think about it, one year at a time. In terms of what comes after the transition, well, as I was saying earlier, the further out you go, the more unknown unknowns there are. It's just about saying: at that point, we will have all of the ingredients. We have partial statelessness, we have blobs, and we have the zkVM to take advantage of for scaling. But we don't expect that once we get there, it's a one-time switch and suddenly we can run a thousand times faster. Instead, we are, conservatively, quote unquote, projecting this 3x per year, because we expect there will be individual remaining challenges to address. Maybe we have to restructure the way nodes sync, or the way RPC nodes operate, so that you're confident the chain is still usable at higher rates. This is just expressing that while we have the main architectural ingredients, there will still be a lot of detailed work. So instead of making use of it all at once, it's going to be a continuous process. And the nice thing about this rough 3x number: every two years you get roughly a 9x or 10x. We think we have a path for maybe five or six years of this, and six years at roughly 10x every two years means about 1,000x. The first three years of that we get traditionally; the next three years come from the zkEVM. So in six years, roughly 1,000x from where we started last year. Is this guaranteed yet? No. We just think we see a path. We think we see a path. That's our goal.
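To make the compounding arithmetic concrete: 3x per year is 9x every two years (the "rough 10x" above) and 729x over six years (the "roughly 1,000x" in round numbers). A one-line sketch, purely to illustrate the math:

```python
def cumulative_scale(annual_factor: float, years: int) -> float:
    """Total throughput multiple after compounding `annual_factor` per year."""
    return annual_factor ** years

# 3x/year compounds to 9x per two years and 729x over six years,
# which is what gets rounded to "10x every two years" and "1,000x".
```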
And then of course, beyond that, if you want to be more in sci-fi world, you can think about native rollups. Maybe the way we keep scaling beyond that is not through just a single chain; maybe then we're back to a sharding-type setup of multiple chains synchronously composed. We'll have to see, but that's the plan.
Ansgar, as I understand it, client diversity is a big topic here. Why is client diversity relevant to the zkEVM, and how does the zkEVM impact it? So, I mean, of course, people will be familiar with why client diversity is so core to Ethereum and to Ethereum's 100% uptime: there's the redundancy you get from client diversity. The reason this is relevant is that the nature of clients, the nature of client diversity, changes in this world. And that is because, thinking back to how I explained it, there's most likely this RISC-V intermediate target for ZK. You run a heavily modified but basically traditional execution layer client that gets compiled to RISC-V, and then you take one of those new ZK proving systems that takes the RISC-V code and proves execution over it. What that means is that the Ethereum execution layer nodes now live inside of the ZK proofs, which is of course conceptually very different from before. So the actual node architecture becomes quite interesting. This is still a little bit TBD; it might be that you're still running the explicit split of two clients, the consensus layer client and the execution client, but the execution client's role is very different now.
The one you run locally basically just verifies the proofs, and maybe does things like mempool networking and state management. But inside of the proof lives the ZK program that was derived from an execution layer client. So if you think about the roles of clients now, the main question becomes: what about the diversity within those proofs? The outer system we are familiar with, but what about diversity within the proofs? The nice thing is that, in principle, you get a very comparable, very parallel mapping. You don't just take a single execution client and compile it to RISC-V; you take multiple. You take the existing ones, plus a few that will be specially written for this use case, and you compile all of them. And then, to make sure the redundancy is full-stack and not just the first half of the stack, you also have multiple of these proving systems that take the RISC-V and prove over it, because of course there could also be a bug in that part of the stack. So say you have, as an example, five of each: five execution clients that can be compiled to RISC-V, and five different proving systems. And you can build pairs of those. Justin has this really nice idea where, in principle, you can even performance-match them: maybe the fastest execution client is paired with the slowest proving system, so the pairs balance each other out. But that's just an idea. The point is that you then have these combinations of one execution client with one proving system.
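The performance-matching idea mentioned here can be sketched in a few lines. The client and prover names below are made up, and "relative speed" is an invented score; only the pairing rule (fastest client with slowest prover) comes from the discussion.

```python
def performance_matched_pairs(clients, provers):
    """
    Pair execution clients with proving systems so total pipeline
    latency balances out: fastest client gets the slowest prover,
    and so on. Inputs are (name, relative_speed) tuples, where a
    higher score means faster. All names are hypothetical.
    """
    fast_first = sorted(clients, key=lambda c: c[1], reverse=True)
    slow_first = sorted(provers, key=lambda p: p[1])
    return [(c, p) for (c, _), (p, _) in zip(fast_first, slow_first)]

# Hypothetical example inputs:
clients = [("client-a", 3), ("client-b", 1), ("client-c", 2)]
provers = [("prover-x", 2), ("prover-y", 3), ("prover-z", 1)]
pairs = performance_matched_pairs(clients, provers)
# fastest client ("client-a") is paired with slowest prover ("prover-z")
```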
And then in the end, in this example of five, you'd be in a world where you have five different types of proofs that are all redundant; they're full-stack different from each other. The genuinely novel thing here is that today you run one execution client. There are multiple, of course, and multiple consensus layer clients, but you choose one of each. In this new world, you can just verify multiple proofs. For example, and these are just example numbers, but they seem roughly in the right ballpark: you could have a system where you only accept a block if you have seen at least three different valid proofs for it. You know there are these five different pipelines, and you have to have seen at least three of them; otherwise, you don't accept the block. That actually gives you better redundancy, because it's almost as if every Ethereum node today ran three different client setups and only accepted blocks when they all agreed. That gives you much better properties than today, where we only have redundancy across nodes, not within a node. So it's actually a better story, but it's also one where we have to be intentional so that we don't accidentally collapse any layer of the stack. And as a side note, there is this experimental idea, and of course, in the age of AI all the timelines collapse, so who knows, maybe it's even short-term viable: a fully formally verified client. You could imagine an EVM implementation in RISC-V that is fully formally verified to be correct. In that world, you would no longer need redundancy at that layer of the stack. But again, as I said, the further-out items have some uncertainty.
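The k-of-n acceptance rule described above is small enough to sketch directly. The pipeline names and `verify` callables below are hypothetical stand-ins, not any real client API; only the threshold logic mirrors the example numbers in the discussion (five pipelines, accept at three).

```python
def accept_block(block_proofs, verifiers, threshold=3):
    """
    Accept a block only when at least `threshold` independent
    (execution client, proving system) pipelines produced a valid
    proof for it. `block_proofs` maps pipeline name -> proof;
    `verifiers` maps pipeline name -> verify(proof) -> bool.
    Names and callables are hypothetical.
    """
    valid = sum(
        1
        for name, verify in verifiers.items()
        if name in block_proofs and verify(block_proofs[name])
    )
    return valid >= threshold
```

With five pipelines and a threshold of three, a bug in any one client/prover pair cannot get an invalid block accepted, and any two pipelines can be offline without halting the node.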
This is one of those theoretical, out-there approaches, but it would of course be really nice to have. And I think formal verification in the age of AI will become a much bigger deal anyway, so this might be a really nice synergy. Yeah. As I understand it, the clients are where all of the risk is with the zkEVM, and where we have to exercise an extreme level of caution in the transition from a classical blockchain to a ZK blockchain. If something is going to go wrong, it's going to go wrong at the client level; I suppose that's always where it would go wrong. But Ethereum has over a decade of uptime because of client diversity, because of how hardened these clients are, and we are kind of resetting that, going back to zero Lindy with the zkEVM. Some properties will carry over, but nonetheless it's risky in the sense that we have all this great hardened infrastructure and we're rebuilding it to be ZK. So we have to have these extra levels of redundancy, as you said: three correct proofs, not just two, because two proofs might share the same bug and we'd prove the same bug twice. So, what's your level of fear about this part of the transition, from the classical blockchain that is so hardened, with 100% uptime, to where we're going? How scary is this? Oh, it's a really good question, right? Because the promise here is so huge that we're all very, very excited about this, but it is also a very, very big challenge. And this is why it's not at all natural that we are even doing this two-step rollout with the optional proofs and then the mandatory proofs. In principle, we could switch over at the end of this year, right?
And we already plan with this extra 18-month period specifically because of the level of certainty that we want, which we project will just take some more time. It also gives us the extra time to roll out the other dependencies needed to really make use of ZK proofs, so it's actually quite synergistic. But still, this extra 18-month delay is specifically for that reason. And to be clear, we would always be responsible with this: if it turns out 18 months are not enough, of course we would delay the full transition to mandatory proofs. Maybe we'd even find some more gains on the classical scaling side until then, so maybe it wouldn't even matter. But basically, we would always wait until we're really, really confident. It's not harder in principle, but as you said, it's a bit of a reset. A lot of our internal expertise, both inside the EF and across the client teams, around security work and testing work, is currently being actively restructured for this very new domain, this very new type of operations with ZK: understanding what the weak points even are, including on the cryptography side. We have absolutely world-class cryptographers inside the Ethereum Foundation and in the ecosystem, and they are very thoroughly turning over every single stone in this overall stack, making us understand what the critical points are and how far we are from being willing to actually trust this. To take a related example, I'm not sure if you've already had an episode on post-quantum, but that's also a big topic for Ethereum. We will soon, yeah. Yes, it's mostly unrelated, but of course there are synergies here. And it has a similar nature: I talked about the binary trees, and part of the binary trees is the choice of hash function that you need in the tree.
And there, for example, the longest piece of the timeline, not a blocker, but the longest piece, is us talking with our cryptographers. We have a candidate set, a family of candidate hash functions. But getting to the point where we can say they are actually robust enough, that they have been around long enough that we actually trust they are secure, takes time. Especially something like a hash function is so fiddly; you can't really prove security. There's basically a Lindy-ness to it: how long has it been around, how many people have tried to find vulnerabilities, has anything been found? And some of these things you just can't accelerate. How many years of academics looking into this have there been? That's just a hard constraint. So both for post-quantum and for the binary trees, which are not the zkVMs themselves but something we use to make full use of them, some elements of the timeline are dictated by the security needs we have, and we just can't cut corners. So it's a big concern, but I think we are being very responsible about it. Yeah, which is why it's taking no short amount of time. So just to maybe conclude this podcast on the timeline: it is now the start of 2026, and by the time we hit 2030 is a good guess for when we think we will have the full power, the full properties, of the zkEVM. You're nodding your head. Does that sound right? That sounds right. And I think we will probably still be in the process of making full use of it for scaling. So hopefully 2030 will be another 3x year, maybe more than 3x because we have AI and the hard fork timelines are compressing, and 2031 will look like another 3x year. We will be on this continuous scaling path, but already squarely on the zkEVM-backed side of that path.
Right, right. One point you made earlier, worth re-emphasizing here, is that the aspiration of Ethereum is to do a 3x scaling increase every single year: the next three years via classical scaling, and the three years after that via zkEVM scaling. So while I am excited about the zkEVM, and I think it's incredible, and I want to rally the Ethereum community around it, there won't be an acute zkEVM moment as felt by the transactors and users of Ethereum, because we are doing 3x scaling per year for the next six years, first with classical means, then with ZK. The merge acutely transitioned us from proof of work to proof of stake; EIP-1559 acutely gave us the burn and better transaction UX; same thing with 4844, an acute transition. This won't be that, because we are scaling anyway. But nonetheless, I think it is important to know that only Ethereum will be able to access years three through six of scaling in that capacity, because this is Ethereum's Manhattan Project. Like we said, only Ethereum has been working on this, and it has been working on it since genesis. As Ethereum makes this transition from a classical blockchain to a ZK blockchain, it will leave every other blockchain behind in the previous classical era. Maybe that's why I'm so excited about it: Ethereum is making the generational leap to the next-gen blockchain, and no other blockchain will have the properties we've been discussing on this podcast. Well, and this is what I said earlier: it's not an accident that you won't notice this transition. It's actually by design.
In this moment in time, we're really trying to balance things: to continue Ethereum's strength of being able to make these leaps, these paradigm jumps, that I think other projects really struggle to follow. Again, that's why we'll also just naturally have the post-quantum properties; I think many chains will actually struggle quite a bit with getting there. And at the same time, we realize that we're no longer in sandbox mode. We can't just say, wait three more years, don't be so impatient. No, people are coming on-chain, AI agents are coming on-chain, today. So I think it's important that we are a continuously scaling blockchain, and it is our responsibility to make that happen under the hood, using whatever means necessary, both traditional and magical future ZK means. I will say, because you said no one else will be able to do this: I actually think it's one of those areas where there's natural synergy between Ethereum and the EVM L2 ecosystem. One thing we didn't talk about at all, but that I'm very excited about, is that, similar to how the initial jump to non-real-time zkEVMs was mostly driven by the L2s, now that we are driving the move to real-time zkEVMs from the L1 side, the L2s will also be huge beneficiaries, because they gain the ability of real-time settlement. That means all the bridging pain across the L2 ecosystem, where in principle you either use a mint-and-burn bridge or it takes seven days for your asset to move across chains, all of this will disappear.
It's going to be a few seconds for any asset to move from any real-time zkEVM-proven L2 to any other real-time zkEVM-proven L2 through the Ethereum L1, or of course into or out of the Ethereum L1. So it's yet another of these cases where, if you're part of the Ethereum family, the ecosystem that really has this principled approach to things, you get all of these benefits for free; you are on the principled architectural path. And I think that has always been our competitive advantage. While doubling down on that competitive advantage, I think we are already trying very hard, and have to keep trying even harder, to close where we've had a competitive disadvantage: Ethereum in the past has sometimes been a bit too much in pure research mode, discounting the type of activity that already existed and saying, ah, that's just sandbox, the real-world adoption will come later and then we'll start focusing on it. Real-world adoption is clearly here. So finding the right balance, I think, is the ongoing challenge. It's what, for example, Tomasz and Hsiao-Wei, in their time at the Ethereum Foundation, have really put a lot of focus on. And that's how I would narrate the future of Ethereum: both the Manhattan Project and the short-term focus on and ownership of the protocol as a useful thing today. One theme I've picked up on in a handful of your answers throughout this conversation, Ansgar, is that there seems to be a significant number of second-order positive effects of the zkEVM, effects not directly related to its main questline of straight layer-one scaling, but that solve a bunch of second-order problems, layer-two scalability and composability being the one you just mentioned. How big is that second-order effect?
Am I correctly identifying that the positive second-order effect is actually somewhat large? Yeah, I mean, there's the immediate second order: as you said, the benefits to the broader EVM ecosystem, especially the EVM L2 ecosystem. Maybe I didn't mention this enough: it's much easier for EVM L2s to adopt and benefit from this technology. For other EVM L1s, while I think it's actually very exciting for them too, you'd basically have to re-architect your entire chain, similar to what I was saying about the Ethereum L1, where the zkVM is the core piece but there are many other elements to it. The L2s, by contrast, already have the architecture where they naturally settle on the L1; they just have to compress the settling time. For them, it's almost a trivial upgrade to follow us into this world. So I really think there's a unique synergy between the Ethereum L1 and the Ethereum EVM L2s. Longer term, talking beyond blockchains for a second, we've already started to see this second generation of cryptography have real impact in the world outside of crypto. It took a while, a couple of years, for people to start taking it seriously. But you can start to see it in all kinds of things: Microsoft is doing things, a lot of governments are doing ZK ID-type systems. You're starting to see use cases that go beyond just blockchains. Blockchains are where the most value is, so that's why we always see the technology there first. But you can imagine a world, especially once you have this real-time element unlocked, just to be futuristic here, where AI agents might use real-time ZK proofs to make provable statements for trustless interactions with each other.
Some of that might be on-chain for direct asset-backed interactions, but other things might literally just be: I'm proving that I have access to this data, and that this data has this structure. All these kinds of statements you can now trivially prove in real time, which you just couldn't before. I think that's a five-to-ten-years-down-the-road kind of thing, but it will come, and I think it will be really exciting. And then, for example, I don't know if you've seen this: more and more countries are starting to introduce social media bans for minors and that kind of stuff. Usually that's implemented in a super dumb way: you, the user, have to upload your ID to the service. If we can replace that with a ZK ID system where you really don't leak anything other than that you own a valid ID and your age is above some threshold, that's obviously a much preferable world. So I think blockchains, and especially the Ethereum ecosystem, are currently funding this massive leap in the cryptography toolkit that we have. And with some delay, five to ten years, it will also hit the non-blockchain space, and I think it will be super impactful. Yeah. One idea I've had is that Ethereum, and all the research we have invested in over the years, is hopefully one big contributing factor to restoring the brand of crypto, by helping the world overcome some generational challenges, as you correctly identify. Crypto doesn't really have the best brand at this present moment. Hopefully, with some of these sci-fi tech advancements, this Manhattan Project that Ethereum has been working on, we don't just improve the nature of our own blockchains, we improve the nature of the world around us.
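The ZK ID example reduces to a one-bit disclosure: the service learns "old enough" or "not old enough" and nothing else. A toy model of that interface, with a hypothetical `AgeCredential` type standing in for a real succinct proof and `proof_valid` standing in for cryptographic verification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgeCredential:
    """
    Toy model of what a ZK ID proof discloses. A real system would
    carry a succinct cryptographic proof over a government-issued ID;
    here `proof_valid` stands in for verifying it. Crucially, neither
    the holder's identity nor their exact birth date appears anywhere.
    """
    attested_min_age: int  # e.g. "holder is at least 18"
    proof_valid: bool      # stand-in for cryptographic verification

def service_admits(credential: AgeCredential, required_age: int) -> bool:
    # The service learns exactly one bit: old enough, or not.
    return credential.proof_valid and credential.attested_min_age >= required_age
```

Contrast this with the "upload your ID" approach, where the service ends up holding the full document, the name, and the birth date.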
And the second-order effects on Ethereum as a brand, as an ecosystem, and the ETH price are benefited downstream of all of that. Ansgar, this has been a super educational episode. I really appreciate you coming on here and giving me and the Bankless Nation your time on the zkEVM. Broadly, the crypto industry is looking for reasons to get bullish about something, and I think this is a very valid thing to be excited about and bullish on. So I'm trying to rally the troops around the zkEVM fork, in mindshare and in education, and I think you've done the job I'd hoped we could do here on the episode today. So I thank you for that, sir. Sounds good. And one last caveat, just to repeat this: I'm not personally a ZK expert. Obviously I'm in the loop on a lot of these things, but I'm more our broader scaling expert; this is part of my job. But really, we have absolutely amazing people, so I'm sure I got some of the minute details a little bit wrong and people will scream at their monitors, but I hope I got the broader picture roughly right. And I agree, it's very exciting, both the execution layer side, the zkVM scaling story, and on the consensus layer, these next-generation upgrades we're planning there. Very, very exciting. I do think we should understand, though, that in this moment in time we should also try to become more and more the boring infrastructure layer and really ready the stage for the applications. I'm personally incredibly excited for the actual real-world application side of crypto. We're really starting to see this come online.
Agentic payments, real-world assets, stablecoin payments: all of this is incredibly exciting. I think it's a great moment to be in crypto. And of course, one last shout-out: if anyone listening was interested in or excited by the technical details of everything we talked about and actually wants to help on the infrastructure side, do reach out to me, either via a Twitter DM or via my ethereum.org email. We're also always, in principle, hiring, if any smart kid out there really wants to join us on the infrastructure side. It's not the only exciting thing in crypto, but it is still very, very exciting. Please come join us. We'll make sure your Twitter is in the show notes on YouTube or Twitter or wherever people are listening to this podcast. Ansgar, thank you so much. Thank you very much. Bankless Nation, you guys know the deal: crypto is risky, you can lose what you put in. But nonetheless, we are headed into the future, and we're going to ZK the future too, with the help of the zkEVM. That's not for everyone, but we are glad you're with us on the Bankless journey. Thanks a lot. Thank you.