
Modular Blockchains: Engineering Scalable Web3 Infrastructure

Celestia and EigenDA are separating the blockchain stack into specialized layers. Here is what that means for developers building production-grade Web3 infrastructure in 2026.


The Architecture Shift Reshaping Web3 Infrastructure

TL;DR:

  • Monolithic blockchains bundle execution, consensus, settlement, and data availability into a single layer, creating fundamental throughput ceilings that cannot be engineered away without sacrificing decentralization
  • Celestia, founded in 2019 as LazyLedger, focuses exclusively on consensus and data availability, enabling any execution environment to plug into a shared base layer without inheriting its constraints
  • EigenDA leverages EigenLayer's restaking model to provide data availability backed by over 200 operators and more than $335M in restaked ETH, delivering 15MB/s throughput compared to Ethereum's 0.0625MB/s baseline
  • Mantle Network's full EigenDA integration achieved a 234x bandwidth expansion and a 20x improvement in censorship resistance by expanding from 10 to over 200 securing operators
  • The modular stack separates concerns across four layers: execution, settlement, consensus, and data availability, allowing each to scale independently and be replaced without rebuilding the entire system
  • Rollup-as-a-service platforms like Caldera, which has raised over $25M from Founders Fund and Sequoia Capital, are abstracting modular infrastructure into deployable primitives that teams can configure without deep protocol expertise
  • AI-powered development environments are becoming essential for navigating the cross-layer complexity that modular architectures introduce, where a single application may span three or four distinct protocol layers

The result: Modular blockchain architecture is not an incremental improvement over monolithic design; it is a structural rethinking of how distributed systems should be built, and the developer tooling ecosystem is only beginning to catch up.

The blockchain industry spent the better part of a decade trying to solve scalability by making monolithic chains faster. More validators, bigger blocks, shorter slot times, optimized virtual machines. Each approach produced marginal gains while preserving the fundamental constraint: when a single system is responsible for executing transactions, reaching consensus, settling finality, and storing data, those functions compete for the same resources. You cannot optimize one without creating pressure on the others. The result was a predictable cycle of congestion, fee spikes, and user frustration that no amount of parameter tuning could permanently resolve.

What Celestia and EigenDA represent is a different kind of answer. Rather than asking how to make a monolithic chain faster, they ask which functions actually need to live together and which ones are better served by specialization. The answer, it turns out, is that very few of them need to be bundled. Data availability, the function of ensuring that transaction data is published and retrievable, is a distinct engineering problem from execution. Consensus over transaction ordering is a distinct problem from settlement finality. Separating these concerns is not just an architectural preference; it is what makes genuine horizontal scaling possible. For developers building on Web3 infrastructure today, understanding this separation is no longer optional background knowledge. It is the foundation of how production systems are being designed.

The Monolithic Bottleneck: Why Traditional Chains Hit a Wall

To understand why modular architecture matters, it helps to be precise about what monolithic architecture actually costs. Ethereum in its pre-rollup form is the canonical example. Every full node on the network was required to download, execute, and verify every transaction. That design choice, which made sense for a small network prioritizing trustlessness, became a structural ceiling as the network grew. When demand exceeded capacity, the fee market responded by pricing out smaller transactions entirely. During peak periods in 2021 and 2022, simple token transfers regularly cost $50 or more in gas fees, and complex DeFi interactions could exceed $500. The network was not broken; it was working exactly as designed. The design itself was the problem.

The deeper issue is what Mustafa Al-Bassam, co-founder of Celestia, has described as the endless cycle of new monolithic smart contract platforms. Each new chain attempted to solve Ethereum's throughput problem by making different trade-offs, typically sacrificing decentralization or security in exchange for cheaper fees. Solana chose a small, high-performance validator set. BNB Chain chose a permissioned validator model. Avalanche introduced subnet architecture but kept execution and data availability coupled within each subnet. These were engineering compromises, not solutions. They moved the bottleneck rather than removing it. The throughput gains were real but came with trust assumptions that enterprise and institutional users were increasingly unwilling to accept.

What the industry needed was a way to scale without making those trade-offs, and that required rethinking the stack from the ground up. The insight that drove modular design is straightforward in retrospect: if you separate the functions that a blockchain performs into specialized layers, each layer can be optimized independently, scaled independently, and upgraded independently. A team building a high-throughput DeFi application does not need to run their own consensus mechanism. They need a reliable execution environment and a trustworthy place to publish their transaction data. Modular architecture lets them source those functions from purpose-built systems rather than inheriting the constraints of a general-purpose chain.

What Modular Actually Means: Separating the Stack

The term modular gets used loosely in Web3 conversations, so it is worth being precise about what the architecture actually involves. A fully modular blockchain stack separates four core functions: execution, settlement, consensus, and data availability. The execution layer is where transactions are processed and state transitions happen. The settlement layer is where finality is established and disputes are resolved. The consensus layer is where nodes agree on the ordering of transactions. The data availability layer is where transaction data is published and made retrievable for anyone who needs to verify the chain's history.

In a monolithic chain like pre-rollup Ethereum, all four of these functions happened on the same layer, enforced by the same set of validators, stored in the same database. In a modular stack, each function can be handled by a different protocol. A rollup might use its own execution environment, settle disputes on Ethereum, order transactions through a decentralized sequencer, and publish data to Celestia or EigenDA. The components are interchangeable in principle, which means teams can swap out any layer as better options emerge without rebuilding their entire system. That flexibility is not just convenient; it is what makes the ecosystem composable at scale.
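The interchangeability described above can be sketched as a configuration choice rather than an architectural rewrite. The sketch below is purely illustrative: the `RollupStack` type and the layer names are invented for this example, not any real framework's API.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class RollupStack:
    """Hypothetical description of which protocol fills each modular layer."""
    execution: str
    settlement: str
    consensus: str
    data_availability: str

# A rollup using the OP Stack for execution, settling on Ethereum,
# ordering through a decentralized sequencer, publishing data to Celestia.
stack = RollupStack(
    execution="op-stack",
    settlement="ethereum",
    consensus="decentralized-sequencer",
    data_availability="celestia",
)

# Swapping the DA layer is a one-field change in the description.
# The real migration, as the Mantle case later shows, is a coordinated
# engineering effort -- but no other layer's choice is invalidated by it.
migrated = replace(stack, data_availability="eigenda")
```

The design point the sketch captures is that each layer is an independent axis of choice: changing one field leaves the other three untouched.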

The practical implication for developers is significant. When you build on a modular stack, you are not writing code for a single chain. You are writing code that interacts with multiple specialized protocols, each with its own API surface, its own security model, and its own operational characteristics. A smart contract deployed on an Optimism-based rollup that uses Celestia for data availability and settles on Ethereum is touching three distinct systems before a user's transaction is considered final. Understanding the interfaces between those systems, and the failure modes at each boundary, is a non-trivial engineering challenge. It is also where most production incidents in modular systems originate.

Celestia's Design Philosophy: Data Availability as Infrastructure

Celestia was founded in 2019 under the name LazyLedger, and the name is instructive. The original insight was that a blockchain does not need to execute transactions to provide value. It can be a lazy ledger, one that simply orders and stores data without caring what that data means. Execution can happen elsewhere. This separation of concerns is what Celestia was built around from day one, which distinguishes it from projects that added modularity as an afterthought.

Celestia's architecture focuses on two functions: consensus over transaction ordering and data availability. It does not execute smart contracts. It does not settle disputes. It provides a base layer that any execution environment can use to publish its transaction data, with a guarantee that the data is available for anyone who needs to verify it. The mechanism that makes this guarantee credible is data availability sampling, a cryptographic technique that allows light nodes to verify that data has been published without downloading the entire block. A light node samples small random chunks of a block and uses erasure coding to confirm with high probability that the full data is available. This means Celestia can scale its data throughput by increasing block size without requiring every node to download every block, which is the key to its scalability model.
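To see why sampling gives light nodes strong guarantees, consider a simplified model. With 2D erasure coding, a block that cannot be reconstructed must have a substantial fraction of its shares withheld (on the order of 25% in the standard construction); each uniform random sample then hits a missing share with at least that probability. The numbers below are a toy illustration of the exponential confidence curve, not Celestia's production parameters.

```python
def detection_probability(samples: int, withheld_fraction: float = 0.25) -> float:
    """Probability that at least one of `samples` uniform random share
    requests hits withheld data, exposing an unavailable block."""
    return 1.0 - (1.0 - withheld_fraction) ** samples

# Confidence grows exponentially with sample count: a handful of tiny
# downloads makes hidden data overwhelmingly likely to be caught.
for s in (8, 16, 32):
    print(s, round(detection_probability(s), 6))
```

This is why Celestia can grow block size without growing per-node bandwidth: the number of samples a light node needs for a fixed confidence level does not depend on how large the block is.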

For developers, what Celestia provides is a reliable, decentralized place to publish rollup data at a cost that is orders of magnitude lower than publishing to Ethereum's calldata. The practical effect is that rollups using Celestia for data availability can offer dramatically lower transaction fees to their users while maintaining the security guarantees that come from a decentralized data availability layer. The trade-off is that Celestia's security model is distinct from Ethereum's, and teams need to understand what they are inheriting when they choose it as their DA layer. Celestia's validator set, its economic security, and its liveness assumptions are all different from Ethereum's, and those differences matter for applications where finality guarantees are critical.

EigenDA and the Restaking Security Model

EigenDA takes a fundamentally different approach to data availability, one that is rooted in Ethereum's existing security rather than building a new trust network from scratch. EigenDA is built on EigenLayer, the restaking protocol that allows Ethereum validators to extend their staked ETH to secure additional services. Rather than requiring a new set of validators with their own staked assets, EigenDA draws on the economic security of Ethereum's validator set through restaking. This design choice has significant implications for the trust model that applications inherit when they use EigenDA.

The operator model is central to how EigenDA achieves both security and throughput. Data is distributed across a network of operators who have opted into EigenDA through EigenLayer. Each operator stores a portion of the data, and the system uses erasure coding to ensure that the full dataset can be reconstructed from any sufficient subset of operators. As of early 2026, EigenDA is secured by over 200 operators and more than 163,020 mETH, representing approximately $335M in restaked assets. That economic security is not hypothetical; it is actively at stake for every piece of data the network stores. Operators who fail to make data available risk slashing, which creates a direct financial incentive for reliable behavior.
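The reconstruction property behind the operator model can be sketched in a few lines: with an erasure code of rate r over n coded chunks, any subset holding at least r·n chunks suffices to rebuild the blob. The coding rate and operator counts below are illustrative assumptions, not EigenDA's actual parameters.

```python
import math

def can_reconstruct(available_operators: int,
                    total_operators: int,
                    coding_rate: float) -> bool:
    """With a rate-`coding_rate` erasure code spread across
    `total_operators` chunks, any ceil(rate * total) chunks
    are enough to rebuild the full dataset."""
    needed = math.ceil(coding_rate * total_operators)
    return available_operators >= needed

# Toy numbers: 200 operators, rate-1/4 code -> any 50 chunks reconstruct,
# so the data survives even if 150 operators go offline simultaneously.
```

The practical consequence is that availability degrades gracefully: the network tolerates large operator outages, and the slashing incentive only needs to keep a threshold fraction of operators honest.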

The throughput numbers that EigenDA achieves through this model are substantial. The system delivers 15MB/s of data throughput, compared to Ethereum's native 0.0625MB/s limit for calldata. That is a 240x difference in raw bandwidth, and it translates directly into the transaction capacity available to rollups using EigenDA as their data availability layer. Mantle Network's integration of EigenDA is the most concrete production example of what this means in practice. By moving from Mantle DA, which used 10 operators, to EigenDA's full operator set of over 200, Mantle achieved a 234x bandwidth expansion and a 20x improvement in censorship resistance. Those are not benchmark numbers; they are production metrics from a live network serving real users.

Throughput Numbers That Define the Gap

The gap between what modular data availability layers can deliver and what Ethereum's base layer provides is large enough that it fundamentally changes what is possible for application developers. Ethereum's EIP-4844, which introduced blob transactions in early 2024, was a significant improvement over calldata for rollup data publication. Blobs provide roughly 0.375MB of data per block at a lower cost than calldata, which reduced rollup fees substantially. But even with blobs, Ethereum's data throughput is measured in kilobytes per second. Celestia and EigenDA operate in the megabytes-per-second range, which is a different order of magnitude entirely.

To put this in application terms, a DeFi protocol processing 10,000 transactions per second generates a significant volume of data that needs to be published and made available for verification. At Ethereum's native throughput, that data publication becomes a bottleneck and a cost center. At EigenDA's 15MB/s throughput, the same data publication is a solved problem. The bottleneck moves to execution, which is where it belongs, because execution is the layer where application-specific optimization is possible. This is the practical argument for modular architecture: it moves constraints to the layers where they can be addressed, rather than leaving them embedded in infrastructure that every application shares.
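The arithmetic behind the bandwidth gap is worth making explicit. The figures below use the throughput numbers quoted in this article; the 100-bytes-per-compressed-transaction figure is an assumption for illustration, since real sizes vary by application.

```python
ETH_CALLDATA_MBPS = 0.0625   # Ethereum's native calldata throughput (per article)
EIGENDA_MBPS = 15.0          # EigenDA's stated throughput (per article)

# Raw bandwidth ratio between the two DA options: 240x.
ratio = EIGENDA_MBPS / ETH_CALLDATA_MBPS

def max_publication_tps(da_mbps: float, bytes_per_tx: int = 100) -> float:
    """Upper bound on transactions/second a DA layer can absorb, assuming
    each compressed transaction publishes `bytes_per_tx` bytes of data."""
    return da_mbps * 1_000_000 / bytes_per_tx

# At ~100 bytes/tx, 0.0625 MB/s caps data publication at 625 tx/s --
# nowhere near a 10,000 tx/s workload -- while 15 MB/s absorbs it
# with capacity to spare. The bottleneck moves to execution.
```

Under these assumptions, the 10,000 tx/s DeFi protocol from the example above is data-publication-bound on native calldata and execution-bound on EigenDA, which is exactly the shift in constraints the modular argument predicts.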

The cost implications are equally significant. Publishing data to Ethereum's calldata has historically been the dominant cost for optimistic rollups. EIP-4844 reduced that cost by roughly 10x for most rollups, but the underlying dynamic remains: rollup economics are heavily influenced by data publication costs. Celestia and EigenDA both offer data publication at costs that are substantially lower than Ethereum blobs, which means rollups using these layers can offer lower fees to users while maintaining healthy economics for operators. For teams building consumer-facing applications where transaction cost is a direct factor in user adoption, the choice of data availability layer is not an infrastructure detail. It is a product decision.

The Developer Experience Problem in Modular Systems

The engineering benefits of modular architecture are real, but they come with a developer experience cost that the ecosystem has been slow to address. Building on a monolithic chain is conceptually simple: you write a smart contract, deploy it to a single network, and interact with it through a single RPC endpoint. The mental model is straightforward, and the tooling ecosystem, including Hardhat, Foundry, Ethers.js, and Wagmi, is mature and well-documented. Building on a modular stack is considerably more complex.

When your application spans an execution layer, a settlement layer, and a data availability layer, you are managing three distinct systems with three distinct APIs, three distinct failure modes, and three distinct monitoring requirements. A transaction that fails at the data availability layer looks different from a transaction that fails at the execution layer, and diagnosing the difference requires understanding the internals of each system. Cross-layer debugging is not something that existing Web3 developer tools handle well. Most tools were designed for monolithic chains and treat the network as a single system. When that assumption breaks down, developers are left piecing together logs from multiple sources and reasoning about state across system boundaries.
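The triage logic described above can be sketched as a simple layered check. Everything here is hypothetical: real systems expose inclusion, execution, and settlement status through protocol-specific APIs, and the boolean inputs stand in for those queries.

```python
def locate_failure(da_included: bool, executed: bool, settled: bool) -> str:
    """Attribute a stuck transaction to the first layer whose check fails.
    Order matters: data must be available before execution can be verified,
    and execution must succeed before settlement finality applies."""
    if not da_included:
        return "data-availability"   # blob never published or unretrievable
    if not executed:
        return "execution"           # sequencer dropped or reverted the tx
    if not settled:
        return "settlement"          # proof not yet accepted on the base layer
    return "finalized"
```

Even this toy version makes the point: the diagnosis depends on querying three different systems in the right order, which is precisely what monolithic-era tooling assumes it never has to do.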

The tooling gap is particularly acute for teams that are new to modular architecture. Understanding how Celestia's data availability sampling works, how EigenDA's operator model distributes data, and how a rollup's fraud proof or validity proof interacts with its settlement layer requires a significant investment in protocol-level knowledge. That knowledge is not well-documented in a single place, and the documentation that exists is often written for protocol researchers rather than application developers. The result is that teams building on modular infrastructure spend a disproportionate amount of time on infrastructure concerns rather than application logic, which is the opposite of what modular architecture is supposed to enable.

Rollup-as-a-Service and the Composability Layer

One response to the developer experience problem has been the emergence of rollup-as-a-service platforms, which abstract modular infrastructure into configurable deployment primitives. Caldera is the most prominent example. Founded in 2022 by Matthew Katz and Parker Jou, Caldera has raised over $25M from Founders Fund, Sequoia Capital, and Dragonfly Capital, and describes itself as the AWS of blockchains. The platform allows teams to deploy customizable Layer 2 chains with a choice of execution environment, settlement layer, and data availability layer, without requiring deep expertise in any of the underlying protocols.

The value proposition is straightforward: if you want to launch a rollup that uses the OP Stack for execution, settles on Ethereum, and publishes data to EigenDA, Caldera handles the infrastructure configuration and ongoing operations. The team building the application focuses on their smart contracts and user experience, not on running sequencers, configuring DA clients, or managing the operational complexity of a multi-layer system. This is the same abstraction that cloud infrastructure providers brought to traditional software development, and it is having a similar effect on the pace of Web3 application deployment.

The composability implications extend beyond individual rollups. As more applications deploy on modular stacks, the ecosystem develops shared infrastructure that any application can use. A data availability layer that serves hundreds of rollups becomes more economically secure and more operationally reliable than one serving a handful. A settlement layer that processes proofs from multiple execution environments develops deeper tooling and more robust monitoring. The modular stack creates network effects at the infrastructure layer, where each new participant strengthens the system for everyone else. This is a fundamentally different dynamic from monolithic chains, where each new chain competes for the same validator set and the same developer attention.

Security Trade-offs and Trust Assumptions

Modular architecture introduces security trade-offs that developers need to reason about carefully, because the trust assumptions in a modular stack are more complex than in a monolithic system. In a monolithic chain, the security model is relatively simple: you trust the chain's validator set, and that trust covers execution, settlement, consensus, and data availability simultaneously. In a modular stack, each layer has its own security model, and the overall security of your application is bounded by the weakest layer in the stack.

Data availability is the layer where this complexity is most acute. If a rollup's data availability layer fails, meaning the transaction data is not published or becomes unavailable, the rollup's state cannot be verified and withdrawals to the settlement layer may be blocked. This is not a theoretical concern. Data availability failures have occurred in production systems, and the consequences for users can be severe. Celestia's data availability sampling provides strong probabilistic guarantees, but those guarantees depend on a sufficient number of light nodes participating in the sampling process. EigenDA's operator model provides economic guarantees through restaking, but those guarantees depend on the slashing mechanism functioning correctly and the operator set remaining sufficiently decentralized.

The security analysis for a modular application requires understanding not just the security of each individual layer, but the security of the interfaces between layers. A rollup that settles on Ethereum but uses a less secure data availability layer inherits Ethereum's settlement security only for the settlement function. The data availability function is secured by a different system with different assumptions. Teams that conflate these two things, assuming that settling on Ethereum means their entire stack is Ethereum-secure, are making a category error that can have serious consequences. This is the kind of nuanced reasoning that experienced Web3 engineers develop over time, but it is not well-supported by current tooling, which rarely surfaces cross-layer security analysis in a developer-friendly format.
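The weakest-link principle can be stated as a toy model. The numeric scores below are invented for illustration; no protocol publishes a single scalar "security score," and a real analysis would reason about distinct threat models per layer.

```python
def stack_security(layer_security: dict) -> tuple:
    """Toy model of the weakest-link bound: an application's effective
    security is capped by the least-secure layer it depends on.
    Scores are illustrative, not measurements of any real protocol."""
    weakest = min(layer_security, key=layer_security.get)
    return weakest, layer_security[weakest]

# Settling on Ethereum does not lift the whole stack to Ethereum's level:
example = {
    "settlement": 0.99,          # hypothetical score for Ethereum settlement
    "execution": 0.95,           # hypothetical rollup execution score
    "data_availability": 0.80,   # a weaker DA choice bounds the whole stack
}
```

The category error the text describes corresponds to reading the `settlement` entry and ignoring the `min`: the bound comes from the weakest dependency, not the strongest.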

How AI Tooling Fits Into the Modular Stack

The complexity of modular blockchain development is precisely the kind of problem that AI-powered developer tooling is well-positioned to address. When a developer is working across three or four protocol layers simultaneously, the cognitive load of tracking interfaces, security assumptions, and operational requirements across all of them is substantial. AI tooling that understands the modular stack can surface relevant context at the right moment, flag potential issues at layer boundaries, and help developers reason about cross-layer behavior without requiring them to hold the entire protocol stack in their head at once.

The most immediate application is in code generation and review for cross-layer integrations. Writing a smart contract that interacts with a rollup's bridge, verifies a data availability proof, or handles cross-chain message passing requires precise knowledge of each protocol's interface. Mistakes at these boundaries are common and often subtle. An AI assistant that has been trained on the specific interfaces of Celestia's light client protocol, EigenDA's dispersal API, and the OP Stack's bridge contracts can catch integration errors that a developer unfamiliar with one of those systems would miss entirely. This is not about replacing developer judgment; it is about extending the range of protocol knowledge that a developer can effectively work with.

Beyond code generation, AI tooling can help with the operational complexity of running modular infrastructure. Monitoring a system that spans multiple layers requires aggregating signals from multiple sources and reasoning about their relationships. An AI-powered observability layer that understands the causal relationships between a data availability layer's health metrics and a rollup's transaction throughput can surface actionable insights that a traditional monitoring dashboard would bury in noise. As modular stacks become more common, the teams that build reliable production systems will be the ones that invest in tooling that matches the complexity of their infrastructure.

What Teams Get Wrong When Building Modular

The most common mistake teams make when adopting modular architecture is treating it as a drop-in replacement for a monolithic chain. They deploy a rollup, configure a data availability layer, and assume that the hard work is done. What they discover in production is that the operational complexity of a modular stack is qualitatively different from a monolithic chain, and that the tooling they relied on for monolithic development does not transfer cleanly.

The second most common mistake is underinvesting in the data availability layer. Teams often choose their execution environment carefully, spending significant time evaluating the OP Stack versus Arbitrum Nitro versus zkSync's ZK Stack, and then make their data availability choice based primarily on cost. Cost matters, but it is not the only variable. The latency characteristics of a data availability layer affect the user experience of the rollup. The censorship resistance properties affect the security guarantees for users who want to exit the rollup. The economic security of the DA layer affects the overall trust model of the application. These are product decisions, not just infrastructure decisions, and they deserve the same level of analysis as the execution environment choice.

A third failure mode is neglecting the upgrade path. Modular architecture is supposed to make components swappable, but swapping a live data availability layer is a complex migration that requires careful coordination across the entire stack. Teams that do not plan for this from the beginning often find themselves locked into their initial DA choice even as better options emerge. The Mantle Network integration with EigenDA is instructive here: the migration from Mantle DA to EigenDA was a significant engineering effort that required careful planning and execution. Teams that treat their initial architecture as permanent will find that the flexibility that modular design promises is harder to realize in practice than it appears on paper.

Building the Modular Future with Cheetah AI

The modular blockchain stack is not a future state that the industry is moving toward. It is the present reality for teams building serious Web3 infrastructure in 2026. Celestia and EigenDA are live systems with production deployments, real economic security, and measurable throughput characteristics. Rollup-as-a-service platforms are making modular deployment accessible to teams without deep protocol expertise. The architectural shift that Mustafa Al-Bassam and others argued for years ago has arrived, and the question for developers is no longer whether to engage with modular architecture but how to do it effectively.

The tooling ecosystem is the part of this picture that has not kept pace. The protocols themselves are mature enough for production use, but the developer experience of building across multiple layers remains fragmented. Debugging cross-layer issues requires context that is spread across multiple documentation sources, multiple monitoring systems, and multiple protocol-specific tools. Writing secure integrations between layers requires protocol knowledge that is not well-encoded in existing code generation tools. Operating a modular stack in production requires observability that existing Web3 monitoring platforms were not designed to provide.

This is the gap that Cheetah AI is built to close. As the first crypto-native AI IDE, Cheetah AI is designed for the reality of Web3 development in 2026, where a single application may span Celestia for data availability, EigenDA for additional security, an OP Stack rollup for execution, and Ethereum for settlement. The tooling needs to understand all of those layers, their interfaces, their failure modes, and their security assumptions, and surface that understanding in the context of the code a developer is actually writing. If you are building on modular infrastructure and finding that your development environment was not designed for the complexity you are working with, Cheetah AI is worth a look.


Cheetah AI is built specifically for developers who are working at this level of the stack. The IDE understands the protocols you are integrating with, not just the syntax of the language you are writing in. It can reason about the security implications of a cross-layer call, surface relevant documentation from Celestia's light client spec or EigenDA's dispersal API at the moment you need it, and flag integration patterns that have caused production incidents in similar systems. That kind of context-aware assistance is what separates a tool built for Web3 from a general-purpose coding assistant with a few blockchain plugins bolted on.

If your team is building on modular infrastructure, or evaluating whether to make the move, the development environment you choose will shape how quickly you can ship and how confidently you can reason about what you have built. Cheetah AI is designed for exactly that environment. The modular era is here, and the tooling should be too.
