Web3 faces a significant memory challenge. This isn’t about forgetting information; rather, it’s a fundamental architectural shortcoming. Currently, there is no robust memory layer in place.
Today’s blockchains are not entirely foreign when compared with traditional computing systems, yet they still lack a critical component that every conventional computer has: a memory layer, one redesigned for decentralization and essential for the next evolution of the internet.
Muriel Médard will be speaking at Consensus 2025 from May 14-16.
Post-World War II, John von Neumann established the foundation for modern computer architecture. Every computer requires input/output processes, a CPU for control and arithmetic functions, and a memory system to store current data, along with a “bus” to retrieve and update that data in memory. This memory, commonly known as random-access memory (RAM), has been a cornerstone of computing for decades.
At its essence, Web3 represents a decentralized computing platform—a “world computer.” Its upper layers bear a recognizable resemblance, with operating systems (EVM, SVM) functioning across numerous decentralized nodes, supporting decentralized applications and protocols.
However, as we probe further, we discover that a vital element is lacking. The memory layer crucial for managing both short-term and long-term data doesn’t align with the memory bus or unit envisioned by von Neumann.
Instead, we witness a patchwork of varying methods striving to serve this need, resulting in an overall chaotic, inefficient, and perplexing system.
The issue becomes clear: if we aim to create a world computer that diverges from the von Neumann paradigm, we must have a compelling justification for doing so. Currently, Web3’s memory framework is not only different but also complicated and inefficient. Transactions experience delays, storage is slow and expensive, and the existing approach makes scaling to broad adoption nearly impossible. This contradicts the very principles of decentralization.
Yet, an alternative exists.
Numerous individuals in this field are striving to overcome this limitation, but we have now reached a juncture where existing workaround solutions can no longer meet demands. This is where algebraic coding becomes pivotal; it uses equations to enhance data representation for efficiency, durability, and adaptability.
The fundamental question arises: how can we implement this kind of coding in a decentralized setting like Web3?
Establishing a New Memory Infrastructure
This realization motivated me to shift from academia, where I served as the MIT NEC Chair and Professor of Software Science and Engineering, to lead a team focused on advancing high-performance memory solutions for Web3.
I envisioned something greater: the chance to transform our understanding of computing in a decentralized future.
My team at Optimum is developing decentralized memory that operates as an independent computer in its own right. Our methodology relies on Random Linear Network Coding (RLNC), a technology cultivated in my MIT lab over nearly twenty years. RLNC is a validated data coding technique that optimizes throughput and resilience across high-reliability networks, from industrial applications to the internet.
Data coding involves transforming information from one format to another, facilitating efficient storage, transfer, or processing. This practice has existed for decades and many variations are currently implemented in networks. RLNC represents a contemporary approach specifically tailored for decentralized computing, reorganizing data into packets for swift transit across a network of nodes, ensuring high efficiency and speed.
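To make the idea concrete, here is a minimal sketch of random linear coding in pure Python. It is an illustration of the technique, not Optimum’s implementation: the field GF(257) and the helper names (`coded_packet`, `try_decode`) are choices made here for readability, and production RLNC systems typically operate over GF(2^8).

```python
import random

P = 257  # small prime field for illustration; real RLNC usually uses GF(2^8)

def coded_packet(packets):
    """One RLNC packet: random coefficients plus the matching linear combination."""
    coeffs = [random.randrange(P) for _ in packets]
    payload = [sum(c * p[i] for c, p in zip(coeffs, packets)) % P
               for i in range(len(packets[0]))]
    return coeffs + payload  # coefficients travel with the payload

def try_decode(rows, k):
    """Gaussian elimination mod P; returns the k originals, or None if rank < k."""
    m = [r[:] for r in rows]
    for col in range(k):
        piv = next((r for r in range(col, len(m)) if m[r][col]), None)
        if piv is None:
            return None  # not enough independent combinations yet
        m[col], m[piv] = m[piv], m[col]
        inv = pow(m[col][col], P - 2, P)  # modular inverse via Fermat
        m[col] = [x * inv % P for x in m[col]]
        for r in range(len(m)):
            if r != col and m[r][col]:
                f = m[r][col]
                m[r] = [(x - f * y) % P for x, y in zip(m[r], m[col])]
    return [row[k:] for row in m[:k]]

original = [[1, 2], [3, 4], [5, 6]]      # three small "packets"
received, recovered = [], None
while recovered is None:                 # collect coded packets until full rank
    received.append(coded_packet(original))
    recovered = try_decode(received, len(original))
assert recovered == original
```

Note that the receiver never waits for specific packets, only for enough independent combinations; that property is what the rest of this piece builds on.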
With numerous engineering accolades from esteemed global entities, over 80 patents, and successful real-world applications, RLNC has developed beyond mere theory. Notably, it received the 2009 IEEE Communications Society and Information Theory Society Joint Paper Award for “A Random Linear Network Coding Approach to Multicast.” The significance of RLNC was further acknowledged with the IEEE Koji Kobayashi Computers and Communications Award in 2022.
RLNC is now equipped for decentralized ecosystems, facilitating rapid data distribution, efficient storage, and real-time accessibility, making it a vital solution for the scalability and efficiency issues facing Web3.
The Importance of This Development
Let’s pause for a moment. Why is all of this significant? It’s crucial because we require a memory system for the world computer that is not only decentralized but also efficient, scalable, and dependable.
At present, blockchains depend on makeshift, ad hoc solutions that only partially mimic the role of memory in high-performance computing. They lack a cohesive memory layer that integrates both the data propagation bus and the storage and access capabilities of RAM.
The bus of the computer must not become a bottleneck, yet that is exactly what is happening now. Allow me to elaborate.
The common technique for data distribution in blockchain networks, known as “gossip,” involves a peer-to-peer communication protocol where nodes share information with random peers to circulate data throughout the network. In its current format, it struggles with scalability.
Picture a scenario where you need 10 pieces of information from your neighbors who are reiterating what they’ve heard. At first, your discussions yield new insights. However, as you reach nine out of ten neighbors, the likelihood of acquiring fresh information declines, making it increasingly challenging to obtain that last essential piece. The chances are 90% that you hear something you already know.
This illustrates how blockchain gossip operates today—efficient initially, but repetitive and sluggish when attempting to complete information sharing. You would need extraordinary luck to receive something novel each time.
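The neighbor analogy is the classical coupon-collector problem, and its cost can be computed exactly. A quick back-of-the-envelope check (my arithmetic, not from the original text):

```python
from fractions import Fraction

n = 10
# Once you hold k of the n pieces, a random message is novel with
# probability (n - k) / n, so you wait n / (n - k) messages on average
# for the next new piece. Summing over k gives the coupon-collector total.
expected = sum(Fraction(n, n - k) for k in range(n))
print(float(expected))  # ~29.29 messages to collect all 10 pieces
```

Nearly three times the 10 messages that would suffice if every message were new, and the overhead only grows with the size of the network.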
With RLNC, we can effectively address the fundamental scalability problem inherent in current gossip methods. RLNC enables data transmission to feel as though you were fortunate each time, ensuring that every incoming piece of information is new to you. This results in significantly enhanced throughput and lower latency. RLNC-enhanced gossip is our initial product, allowing validators to seamlessly optimize data distribution for their nodes through a straightforward API call.
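A small simulation makes the contrast concrete. The gossip side below is plain random relaying; the coded side rests on the standard RLNC property that, over a large field, each random combination is new to the receiver with overwhelming probability, so roughly n messages suffice. This is an illustrative sketch, not a model of any specific network.

```python
import random

def gossip_messages(n):
    """Messages heard until all n pieces arrive, when each message is a random piece."""
    seen, count = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        count += 1
    return count

n, trials = 10, 10_000
avg_gossip = sum(gossip_messages(n) for _ in range(trials)) / trials
# With RLNC, each incoming random linear combination is independent of the
# ones already held with overwhelming probability, so ~n messages are enough.
print(f"plain gossip: ~{avg_gossip:.1f} messages; coded: ~{n}")
```

The gap (roughly 29 messages versus 10 for this toy size) is the "luck" that coding manufactures deterministically.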
Now, let’s explore the memory aspect. Consider memory as dynamic storage, akin to RAM in a computer, or even our own closet. Decentralized RAM should function like a neatly organized closet; it ought to be structured, reliable, and consistent. Completing this analogy, a specific piece of data is either present or absent—no in-betweens, no missing parts. That represents atomicity. Order is preserved according to the arrangement—while you might encounter an older version, it will never be incorrect. This is consistency. Moreover, unless something is moved, everything remains fixed; data is not permitted to vanish. That signifies durability.
In lieu of a proper storage system, what do we rely on? Mempools are not components typically found in computers; so why are they present in Web3? The primary reason lies in the absence of a suitable memory layer. If we think about data management within blockchains as akin to organizing garments in our closet, a mempool resembles an unorganized heap of laundry on the floor, where one is left unsure of its contents and must sift through it to find necessary items.
Transaction processing delays can be particularly lengthy on any individual chain. Taking Ethereum as an example, it requires two epochs, or 12.8 minutes, to finalize a single transaction. Without decentralized RAM, Web3 depends on mempools, where transactions accumulate until processing occurs—this leads to delays, system congestion, and unpredictability.
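The 12.8-minute figure follows directly from Ethereum mainnet parameters (12-second slots, 32 slots per epoch, finality after two epochs); the constant names below are mine:

```python
SECONDS_PER_SLOT = 12     # Ethereum mainnet slot time
SLOTS_PER_EPOCH = 32
EPOCHS_TO_FINALITY = 2    # a block is typically finalized after two epochs

finality_minutes = EPOCHS_TO_FINALITY * SLOTS_PER_EPOCH * SECONDS_PER_SLOT / 60
print(finality_minutes)   # 12.8
```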
Full nodes hoard substantial amounts of data, bloating the overall system and complicating retrieval. In traditional computing, RAM conserves presently needed data, whereas less frequently accessed information is transferred to cold storage, potentially in the cloud or on disk. Full nodes act like a closet stuffed with all the outfits you’ve ever worn (from childhood to now).
Such a scenario wouldn’t occur on our personal computers, yet it is commonplace in Web3 due to unoptimized storage and read/write access. By employing RLNC, we can establish a decentralized RAM (deRAM), providing timely, updateable state management in a manner that is cost-effective, resilient, and scalable.
With deRAM and RLNC-powered data distribution, we can eliminate Web3’s predominant bottlenecks by making memory operations faster, more efficient, and scalable. This would streamline data propagation, mitigate storage overload, and facilitate real-time access—all while retaining the principles of decentralization. This vital piece has long been absent in the world computer, but that will soon change.