Kaspa: From Ghost to Knight, off to heal the blockchain's plight
Back cover blurb:
The years between 2010 and 2013 were not an era of academic development for Bitcoin, with the notable exception of the On Bitcoin and Red Balloons paper (2011, revised in 2016) by Professor Aviv Zohar and his colleagues. Bitcoin slowly stopped being seen as a distinct technology that could ever be finalized to meet the Nakamoto vision and instead became something closer to a technological religion. However, after being encouraged by talks with industry leaders at the Satoshi Roundtable, Aviv's student and apprentice, Yonatan Sompolinsky, decided to take a new path. Instead of improving technology from the previous decade, he applied the insights from his research to a new Proof of Work (PoW) protocol. Unlike most PoW protocols at the time, this one was not based on the linear structure of a blockchain. It thus avoided the longest-chain rule and the constraints that rule imposes on fast networks that must also remain secure and decentralized. As a result, it allowed blocks to be mined in parallel, an approach known as asynchronous PoW.
What started as the "GHOST" appendix to the Secure High-Rate Transaction Processing in Bitcoin (2013) paper became the origin of the complex Directed Acyclic Graph (DAG) research line. The block-DAG paradigm heralds a new era in the history of distributed ledger technologies as it inches ever closer to fulfilling the original Nakamoto vision.
Variations of GHOST and the Inclusive Blockchain Protocols have been used in Ethereum, while protocols from the PHANTOM family, such as GHOSTDAG and DAGKnight, solved the trilemma of blockchain scalability, security, and decentralization. For the first time ever, they made it possible to reach 1 BPS in a pure PoW network.
This is a story about things that would never be possible without PoW, where users of distributed networks are limited only by their internet speed. This is the story of PoW, of PoW in a block-DAG paradigm, of Kaspa, and of Dr. Yonatan Sompolinsky. This book is about a man who might not be a universally known celebrity in the field but who redefined what is possible in the PoW paradigm once and for all and still has a lot of steam to keep on rolling.
Acknowledgments
I extend my heartfelt gratitude to the following members of the Kaspa community, who played a pivotal role in translating my original article on Kaspa into six world languages, thus helping it find a much wider audience: Buram, AMAO, ZOo Mèo, Rilragos, Frfn, and Mon. Also, I need to mention the great admin, moderator, herald, and ambassador in one person, Tim! Thank you for your substantial contributions, for recognizing me as a Kaspa writer, and for always being kind and easy-going.
Further, I want to express my profound appreciation to Jirka Herrmann for meticulously proofreading this book several times and assisting me in refining it to its finest form. I would never have made this without your help. Then, I want to thank Bubblegum Lighting, CoderOrStuff, and Elldee for their community reviews and the invaluable feedback they provided to this book. Your contributions have greatly enhanced this project. I would also like to extend my gratitude to Professor Aviv Zohar, who played a crucial part in developing the protocol that the Kaspa community is now optimizing to its fullest potential and beyond.
Thank you, Meirav, for helping us organize the interview process. Since you joined the party, things have really started to roll out!
Finally, I want to express my heartfelt thanks to Šárka Hyklová for crafting a resonant book art based on my description and to my beloved wife for the graphical edits on both the book and the web, where you can read this book for free. My dream has been realized, beautifully illustrated, and professionally applied thanks to your talented efforts.
Preface
A manual for this book: you have to read this work from front to back. After all, that is how literature works. (Unless it is an instruction manual, in which case no one reads it at all.)
This preface serves as a primer for readers new to Proof of Work concepts underpinning this book's discussions. It also provides context for Kaspa's approach to addressing blockchain's scalability, security, and decentralization challenges.
Books are read for a story; books are read to obtain information. There might be a little bit of a story in this book, but there is primarily plenty of information. The crux of the book consists of technological facts and my research. The only chapter where I add my own conjecture and ideation is "Kaspa: The Block-DAG paradigm in action." Based on my understanding and research, it talks about Kaspa's near future and the potential it can reach in the coming years.
NOTE: This book is independent of the project known as BlockDAG. Instead, it focuses on the general Proof of Work (PoW) framework, the scholarly contributions of Dr. Yonatan Sompolinsky and his colleagues, and the application of their research in PoW projects. It explores their impact on blockchain technology. The terms DAG, block-DAG, and Block-DAG are used here to refer specifically to a block structure for PoW introduced by Yoad Lewenberg, Yonatan Sompolinsky, and Aviv Zohar in 2014.
From the start, those unfamiliar with basic blockchain vocabulary might find this book harder to read. Don't worry; explanations and examples are provided along the way. At the end of each chapter, I also added clarifications for those who know crypto more from the marketing or speculative side of things. The book aims to share knowledge and educate you in a way that you can eventually say, "Yeah, I learned something new; now I am ready to dive into these academic papers and understand them better." I see this as the main goal. To help fulfill this goal, the content below will deepen your understanding of the topics discussed and equip you with the insights necessary to engage with this book fully. Additionally, it addresses some common misconceptions about Kaspa.
1) Strictly speaking, PHANTOM is a family of protocols rather than a single protocol: a blueprint, described in a paper that lays out the whole concept behind the protocol called GHOSTDAG (GD), the current Kaspa consensus. The PHANTOM paper provides two variants of possible implementations. One is mathematically purer but computationally intractable (the underlying optimization problem is NP-hard); the other, "greedy" one is more realistic and easier to implement. GD is the greedy variant of the PHANTOM idea and Kaspa's current consensus protocol. DAGKnight (DK), GD's successor, is also a protocol from this family.
2) Kaspa is not just a proposal to solve the "blockchain trilemma"; it actually solves the problem. Completely. The trilemma of blockchain Decentralization, Scalability, and Security has been solved, and in the following paragraphs, I will go into what that means. The later chapters of the book then provide a deeper explanation.
Let's start with Decentralization, which fundamentally depends on node hardware requirements. This initial Layer-0 challenge has been addressed by selecting affordable, long-term-maintainable hardware solutions featuring near-constant (stable) disk space and quick bootstrapping for new nodes. Kaspa enables each new node to trustlessly sync from scratch by downloading just the last two days' full history, along with a nearly constant-size proof of the past. Importantly, there is no need to download the block headers of the entire history. Kaspa is unique among block-DAG-based technologies in that it includes a pruning feature, which is notably challenging to develop and implement effectively in fast, decentralized networks. Because only essential information is retained to preserve linear history and determinism, node hardware requirements are kept more affordable, consuming less data. This allows for more modest hardware to be used, which lowers purchase costs and supports decentralization by enabling more individuals to run a node. Additionally, the complete network history is maintained by Archival nodes, which store the full history and serve as a permanent decentralized ledger, providing an extra layer of data integrity without pruning.
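For readers who like to see ideas as code, here is a minimal, purely illustrative sketch of the pruning idea described above: a node keeps full data only for a recent window of blocks and folds everything older into a compact summary. The class name, the window size, and the `commit(...)` placeholder are my own inventions for illustration; this is not Kaspa's actual pruning algorithm.

```python
from collections import deque

class PruningNode:
    """Toy node: full data for a recent window only, plus a compact summary."""

    def __init__(self, window):
        self.window = window            # e.g., roughly two days' worth of blocks
        self.recent = deque()           # full block data kept on disk
        self.proof_of_past = "genesis"  # stand-in for a near-constant-size proof

    def add_block(self, block_id):
        self.recent.append(block_id)
        # Blocks falling out of the window are folded into the summary and
        # their full data dropped, so disk usage stays near-constant.
        while len(self.recent) > self.window:
            oldest = self.recent.popleft()
            self.proof_of_past = f"commit({self.proof_of_past},{oldest})"

node = PruningNode(window=3)
for height in range(5):
    node.add_block(height)
# node.recent now holds only the newest three blocks; the two oldest
# blocks survive only inside the compact summary.
```

The point of the sketch is the shape of the trade-off: whatever the network's speed, the disk footprint of a non-archival node is bounded by the window, not by the chain's age.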
The next focus is Scalability. Kaspa was the pioneering PoW-based project to achieve a consistent network speed of one block per second (BPS), and it sustained an impressive 11 BPS on the testnet over several months, with ambitions to escalate this rate to 30 and eventually 100 BPS. Similarly, Kaspa's throughput in transactions per second (TPS) surpasses 300 TPS on the mainnet and has reached up to 3,000 TPS in testnet scenarios. Crucially, Kaspa enhances BPS without impacting confirmation times. This is where many other PoW projects fall short: they may increase network speed, either by boosting BPS or by employing multiple-chain or cross-chain strategies, but fail to maintain stable confirmation times. Unlike these, Kaspa employs a distinctive strategy in which confirmation times are not influenced by block times. This enables protocols like GHOSTDAG (GD) to increase BPS while keeping confirmation times consistent, as they are dictated by network latency rather than by the protocol itself. In GD and DAGKnight (DK), confirmation times scale with network latency. The key difference is that GD scales with a hardwired bound on network latency, whereas DK dynamically adjusts by monitoring actual network latency, which allows DK to operate optimally with greater consistency.
Now, how does this contribute to Scalability? Let's first explain it within the PoW blockchain paradigm, and then we will add block-DAGs into the mix. In Bitcoin, the mechanism follows the longest-chain rule with a linear structure where blocks are aligned sequentially, as if strung on a thread. One new block is mined and added to the network roughly every 10 minutes. Now, imagine this as text written on a sheet of paper; each sheet is placed into an envelope, which is filled, closed, sealed, and put into a card file to join the other envelopes. Your transaction (TX) is the text written on the paper, the envelope is a mined block, and the card file is the decentralized network.
Much like lines on a page are designed to be filled with words, the space in each block of a blockchain is designed to be filled with transaction information. A page has space limits (the number of rows or the paper size), just as a block in a blockchain does (the maximum number of bytes per block). When one piece of paper is full but the story is not finished, the writer must continue writing on a new piece of paper. Blockchains work similarly: when one block is full, it has to be signed and then cryptographically joined to the previous block, so the blocks sit one after another, just like a train's cars in sequence behind the locomotive. This chaining is done by miners adding further information to the block header of a new block, which you can think of as naming the block. Miners essentially compete for the right to name new blocks. When a miner names a block, they have "mined" it and are rewarded with cryptocurrency. In a blockchain, a new block always points back to its only predecessor. This is because the longest (heaviest) chain rule does not allow parallel blocks mined on other forks. This results in a simple, slow, linear structure: the "chain" in blockchain.
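The mechanics just described, a miner searching for a valid "name" for a block and each block committing to its single predecessor, can be sketched in a few lines of Python. This is a toy model with a deliberately low difficulty, not any real network's mining code; the field names and difficulty value are mine, chosen for illustration.

```python
import hashlib

def mine_block(prev_hash, txs, difficulty_bits=16):
    """Find a nonce so the block's hash has `difficulty_bits` leading zero bits.
    A toy illustration of 'naming' a block; real networks tune difficulty."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        header = f"{prev_hash}|{txs}|{nonce}".encode()
        h = hashlib.sha256(header).hexdigest()
        if int(h, 16) < target:  # proof of work found
            return {"prev": prev_hash, "txs": txs, "nonce": nonce, "hash": h}
        nonce += 1

# Each block commits to its single predecessor, forming a linear chain.
genesis = mine_block("0" * 64, "coinbase")
block_2 = mine_block(genesis["hash"], "alice->bob:5")
```

Because `block_2` contains `genesis["hash"]` inside its own hashed header, altering the genesis block would invalidate `block_2` and every block after it, which is exactly the chaining property the envelope analogy describes.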
On the other hand, in a block-DAG, a new block can point back and refer to all visible blocks within its reach.
You can see how a new block-DAG block references both known blocks.
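To make the contrast concrete in code: under the longest-chain rule a new block gets exactly one parent, while a new DAG block references every current tip, so parallel work is kept rather than discarded. The helper names and the three-block example below are mine, invented purely for illustration.

```python
def chain_parent(blocks):
    """Longest-chain rule: a new block points to a single predecessor."""
    return [blocks[-1]]

def dag_tips(blocks, parents):
    """Tips are blocks that no other block points to yet; a new DAG block
    references all of them, so no parallel work is orphaned."""
    referenced = {p for ps in parents.values() for p in ps}
    return [b for b in blocks if b not in referenced]

blocks = ["G", "A", "B"]                 # A and B were mined in parallel on G
parents = {"A": ["G"], "B": ["G"]}
new_block_parents = dag_tips(blocks, parents)  # both A and B become parents
```

In the chain model one of A or B would have been discarded; in the DAG model the next block simply adopts both as parents.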
Kaspa blocks, operating at a speed of, for instance, 30 BPS, form an expansive DAG structure in which blocks are interconnected, creating a web of well-connected blocks. These blocks process and record your transactions. Yet again, visualize your TXs as lines of text on pages of paper, filed and archived in sealed envelopes (blocks). Traditional blockchains ensure each page is completely filled before it is stamped and added to the network's historical record. In contrast, in a block-DAG network, the objective is not necessarily to fill each page to its capacity. Since blocks are generated in parallel, it is not crucial for them to be completely full. They simply continue accumulating. During TX traffic peaks, earlier blocks that were only half-filled begin linking with blocks that are nearer to full capacity. However, given the sheer volume of blocks and their rapid creation, the network avoids congestion.
You might wonder, "What stops all blocks from mining the same transactions, thereby wasting space?" The answer lies in the incentives for miners to introduce randomness into their transaction selection. This randomness ensures sufficient uniqueness among blocks. Once more, imagine transactions as a pile of papers. A miner has an envelope and fills it with whatever transactions they can randomly select at that moment, then sends off the envelope, i.e., adds the block with TXs to the network. Sometimes, the envelope might even be empty. The crucial point here is that blocks do not need to be full to proceed. Miners simply pick a set of transactions and dispatch whatever they have at hand without waiting for more. This approach significantly contributes to scalability.
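A hedged sketch of such a selection policy might look like the following; the function and its parameters are hypothetical, meant only to show the idea of sampling the mempool at random rather than every miner greedily taking the same transactions.

```python
import random

def pick_txs(mempool, capacity, rng=random):
    """Toy miner policy: sample a random subset of pending transactions
    instead of always taking the same top-of-mempool set, so blocks
    mined in parallel overlap less."""
    k = min(capacity, len(mempool))
    return rng.sample(mempool, k)  # may be empty if the mempool is empty

mempool = [f"tx{i}" for i in range(10)]
block_a = pick_txs(mempool, 4)
block_b = pick_txs(mempool, 4)  # likely differs from block_a's selection
```

Two miners running this policy at the same moment will usually produce blocks with different contents, which is the uniqueness property the paragraph above describes.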
The final aspect to consider is Security. Kaspa is at the forefront of protecting against Maximal Extractable Value (MEV) abuses, effectively countering frontrunning and sandwich-attack bots as well as mitigating dust attacks. A critical layer of Kaspa's defense against reorganization attacks, which are often linked with double-spending, relies on an enhanced version of the Nakamoto consensus. This protocol adaptation offers robust resistance against attacks by entities controlling less than 50% of the network's hash rate and is further bolstered by advanced cryptography and mathematics. Yet, the most daunting security threat remains the "51% attack," typically executed to obscure traces of double-spending. The prevention of such attacks hinges significantly on the efforts of honest miners. The good news for Kaspa is that, following Ethereum's transition away from PoW, many Ethereum miners have migrated to Kaspa; even without them, however, Kaspa's hashing power has been growing steadily since 2021. At the time of writing this book, Kaspa's network hash rate is approximately 213.12 peta hashes per second (PH/s), indicative of substantial mining capacity. In this context, a hash rate of 213.12 PH/s means the network can make 213.12 quadrillion attempts per second to find the correct hash for the next block in the block-DAG. Moreover, the ultimate security factor is the decentralized network's speed. To provide a lightweight introduction, remember that a faster network generates more forks, leading to a higher blocks-per-second (BPS) rate. This results in stronger interconnections among blocks and a more robust, time-proven history record, making these blocks harder for attackers to manipulate.
It is also important to consider how miners' continuous efforts significantly fortify PoW security. In a blockchain that uses PoW, each block added after the one containing your transaction acts as a layer of security. Block #3 builds directly on top of block #2, block #4 builds on block #3, block #5 builds on block #4, and so forth. This sequence forms a growing stack of blocks. The higher it goes, or the longer it is, the heavier it becomes. Each new block added to the stack reinforces the security of all the blocks beneath it, making it increasingly difficult for anyone to alter a transaction. To change any information in a previous block, an attacker would need to redo the work of that block and all the blocks that come after it. This cumulative layering of work makes tampering highly impractical as more blocks are added. If this is still unclear, imagine you have five huge stone blocks. The first block is put in its place, and it is so heavy that it is half-buried in the ground. That is the base (the Genesis block); the second stone block is then set on top of it, the third block on top of the second, and so on. With every new block, the pile grows in height and weight. Now imagine that before setting down the second stone block, you asked a stonemason to carve your secret on its bottom side. When the job was done, you put the stone with your secret inscription on the first block and then added another three heavy blocks on top of it.
This makes the block pile heavier and heavier (and also higher and higher) with each new block that gets added, making it more and more difficult to change a block the further down the pile it is buried. This is because you would first have to remove and then re-add all the blocks on top of the block you are trying to manipulate. Now imagine that instead of a simple column of stone blocks, you had blocks chained to each other raining from the sky onto a massive, ever-growing heap of stone. Extracting and changing a specific block would be very close to impossible. And this is what a block-DAG does. You might wonder: OK, but if an attacker places a dishonestly mined block into the network, it would be buried under a mass of honest blocks, right? No. Imagine an attacker joining this "block rain" and attempting to insert their block on the edge of the "rain" accessible from their side. Even if they connect their block with some others, it will never reach the blocks in the middle or the blocks from the other side. Consequently, it will be poorly connected, and the network will recognize it as disconnected and suspicious.
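The stone-pile intuition can be expressed as a one-line cost model: altering a block requires redoing its own work plus all the work stacked on top of it. The numbers below are arbitrary and purely illustrative.

```python
def reorg_cost(chain_work, depth):
    """Work an attacker must redo to alter a block `depth` blocks below the
    tip: that block's own work plus everything stacked above it (toy model)."""
    return sum(chain_work[-(depth + 1):])

chain_work = [100] * 6  # six blocks, equal work per block (illustrative)

cost_at_tip = reorg_cost(chain_work, 0)   # redo just one block's work
cost_deep = reorg_cost(chain_work, 4)     # redo five blocks' worth of work
```

The deeper the target block, the larger the sum, and the attacker must also outpace honest miners who keep adding new terms to it while the attack is underway.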
In addition, Kaspa core developers also thought of cases in which an attacker would like to use the work of honest miners on their own behalf - and yes, this is also taken care of. High speed, asynchronous mining, and strong sorting and ordering algorithms, underpinned by a strong PoW hash rate, all add to Security and Scalability.
But what about Decentralization? If there are more blocks from more forks, the block-DAG is wider, thus producing far more data, which needs to be consumed and recorded by nodes. These nodes act as the mind and memory of the decentralized network. The more they need to consume, the higher the hardware requirements and the more expensive they are. This leads to a dramatic decrease in the number of nodes due to higher purchase costs. Luckily for us, the pruning algorithm mentioned earlier saves the day! Kaspa's implementation of GHOSTDAG introduces an asynchronous and inclusive PoW system designed for high speed to enhance security. This rapid pace leads to frequent forks and numerous blocks, which form a tightly interlinked network. These links between blocks mean the blocks are aware of each other, a stark contrast to Bitcoin's longest-chain rule. In Bitcoin, blocks mined simultaneously compete for acceptance; the block not chosen for the longest chain is discarded as an orphan, which wastes the energy expended in mining it. Miners whose blocks become orphans receive no rewards. Ethereum addressed this inefficiency to some extent by naming these non-chosen blocks Uncles and granting partial rewards to their miners. Kaspa, however, adopts a different approach by purposefully creating forks and maintaining a record of them in a DAG structure. This approach not only includes all blocks but also integrates them into the broader structure of the block-DAG, which is then untangled to establish a final linear order, preserving the crucial characteristic of determinism found in blockchains. Determinism is critical for ensuring that the blockchain operates predictably and reliably. As highlighted earlier, Kaspa nodes can synchronize by downloading only the essential data needed to maintain this determinism, effectively tracking network activities without demanding extensive disk space.
Additionally, archival nodes serve as a permanent record and backup, further bolstering network integrity and data preservation. This method significantly reduces the hardware demands on individual nodes, promoting more widespread participation and enhancing the decentralization and resilience of the network. Kaspa achieves high speeds not just for performance but also to enhance network security. By rapidly confirming transactions, it isolates attacker blocks as they appear suspicious, disconnected, and poorly linked within the network's vast expanse. This speed is not constrained by the protocol itself but by the user's network connection, which allows for numerous blocks that may be filled or empty depending on network demands. Kaspa's security is underpinned by a robust modification of the Nakamoto consensus, reinforced by cryptography, sophisticated mathematics, and active miner participation. Its decentralization is ensured as it operates across thousands of global nodes, effectively resolving the blockchain trilemma.
Some might say that Kaspa solves the trilemma because it is not a blockchain. Let me put this right: Kaspa indeed addresses the blockchain trilemma, but it does so uniquely, as it isn't a conventional blockchain. It operates as a block-DAG, which can be transformed into a linear sequence similar to a traditional blockchain through an additional operation that untangles its structure. This allows Kaspa to maintain the integrity and sequence of a blockchain while benefiting from the advanced capabilities of a block-DAG system.
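For the programmatically inclined, the "untangling" step can be illustrated with a plain topological sort. This is only a toy: GHOSTDAG's actual ordering additionally weighs how well-connected blocks are, while the sketch below merely shows that a block-DAG admits a deterministic linear history. The example DAG is my own.

```python
from collections import defaultdict, deque

def linearize(parents):
    """Deterministic linear ordering of a block-DAG via topological sort
    (ties broken alphabetically). A stand-in for real DAG-ordering rules."""
    children = defaultdict(list)
    indegree = {b: len(ps) for b, ps in parents.items()}
    for block, ps in parents.items():
        for p in ps:
            children[p].append(block)
    ready = deque(sorted(b for b, d in indegree.items() if d == 0))
    order = []
    while ready:
        block = ready.popleft()
        order.append(block)
        for child in sorted(children[block]):
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    return order

# G is genesis; A and B were mined in parallel on G; C references both.
dag = {"G": [], "A": ["G"], "B": ["G"], "C": ["A", "B"]}
linear_history = linearize(dag)  # parents always precede their children
```

Every node running the same rule on the same DAG computes the same sequence, which is the determinism the preceding paragraphs emphasize.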
3) Lastly, a message from the author.
Hello readers, Mickey here. I am a technical writer focusing on documentation, articles, and blogs; this work reflects that. All technical content, be it books, articles or documentation, tends to age quickly, and I am well aware of that. I put my best effort into delivering it in a compact and strictly informative form, decorated with a tiny space for imagination.
The story of writing this book started in 2022, just a few days before the Christmas holidays when I asked Dr. Yonatan Sompolinsky to give a speech on a call for the Red Hat blockchain enthusiasts. Due to the time of year, although these meetings usually had around twenty attendees, Dr. Sompolinsky and I were joined only by three people. However, the lecture about PoW, the block-DAG paradigm, and PoW in the block-DAG paradigm was so inspiring that I considered not letting this content go without an audience, so I rewrote Dr. Sompolinsky's speech and reworked it into the text that you can read in this publication. I added some examples and background information to provide a better understanding of the block-DAG paradigm and mainly about its outcome, Kaspa.
Between 2021 and 2023, Dr. Sompolinsky participated in several interviews. Some of these discussions proved insightful, but the majority merely echoed the questions of their predecessors. To address this redundancy and maximize the use of Yonatan's expertise, this publication offers a succinct summary, presenting essential background information and introducing Yonatan's history, current endeavors, and the summarization of his DAG research line. These pages are supplemented by an interview delving into Yonatan's present and forthcoming projects and efforts.
This work aims to provide information otherwise hidden away in complex academic papers, but to make it more approachable and comprehensible. This book can serve as a reference for all future interviewers, who can use it as starting material and ask questions that are not answered here, saving time for all interview parties: the interviewer, the host, and the audience.
This book is not just a theoretical exploration of the block-DAG paradigm and PoW. It positions Kaspa as a practical solution, often referred to as 'Digital Silver,' in the digital currency landscape. By delving into the technical aspects of Kaspa, readers will discover how it achieves unprecedented transaction speeds and robust security mechanisms, making it a standout choice among well-decentralized projects.
These pages are not just a collection of facts. They are a journey that takes readers through the challenges, visions, and innovations of advancing blockchain technology. By exploring the development of the PoW block-DAG paradigm, readers will gain a deeper understanding of the impact of block-DAG technology on the future of decentralized applications.
Embark on this short journey to discover how Kaspa sets new standards in distributed ledger technology, fulfills a major part of Satoshi Nakamoto's original vision, and potentially transforms the digital landscape with its pioneering block-DAG structure.
Abstract
In the turbulent realm of cryptocurrency, the innovation of Kaspa emerges as a beacon of progress, embodying the concept of digital silver. This book offers a comprehensive introduction to Kaspa, the Proof of Work (PoW) mechanism, the block-DAG paradigm, and the visionary mind behind these advancements, Dr. Yonatan Sompolinsky. It aims to equip you with a foundational understanding of these groundbreaking technologies and their potential to redefine the digital currency landscape. The heart of this work is an interview with Dr. Sompolinsky, whose pioneering research underpins Kaspa's innovative framework.
As you embark on this exploration, you're not just reading another technical manuscript; you're stepping into the future of digital transactions as envisioned by one of its most brilliant minds. However, please remember that Kaspa's development is not a lone individual's achievement, but rather a testament to the power of collaboration. It is the result of the core team's collective effort, significantly enhanced by support from the community. A prime example of this teamwork is the security status of GHOSTDAG (GD), which was secured through a meticulous mathematical proof by Shai Wyborski. Facing this intricate challenge, Yonatan recognized the need for Shai's expertise and invited him to contribute to the team. Michael Sutton further developed this foundation by adapting the proof for DAGKnight (DK), employing a sophisticated strategy to mitigate vulnerabilities in low-latency DAGs and counteract potential exploitation by attackers. The team aimed to refine the GD framework to prefer well-connected DAGs, which suggest lower latency, without compromising security. Their innovative breakthrough was the concept of "parameterlessness," realized through the min-max optimization principle. This approach underwent extensive refinement over several years, with Yonatan providing essential insights from his deep analysis of DAGs, especially his work on SPECTRE. The comprehensive solution they devised for enhancing the security of DAG-based blockchain technologies is meticulously detailed in the DK paper, and a summary of it appears in the Appendix of this work.
On the technology side of things, it would be hard to overestimate the skills of Ori Newman, who contributes to a system where nodes synchronize quickly, have low hardware requirements, and use pruning. At the same time, the system still maintains everything it needs to achieve determinism. This paves the way for one of the greatest contributions to blockchain technology, alongside Bitcoin and Ethereum.
Finally, a note from the author: While I strive for accuracy, some information may become outdated or may not capture the full complexity of the subject. Should any core contributors of Kaspa come across inaccuracies or areas needing improvement, I welcome your insights.
Enjoy the reading.
- Mickey Maler
Written between August 2, 2022, and April 29, 2024.
Table of Contents (TOC)
Chapter 1 - Block-DAGs, the new blockchain meta
- A road to digital silver
- DAGs - Old solutions, new applications
- Intermezzo 1
- Searching for the cure to blockchain's maladies
- The block-DAG hook, or why you should care
- Technological introduction - What is PoW
- Technological introduction - How is block-DAG getting blockchain PoW beyond its limits?
- Technological introduction - Challenges within a block-DAG
- Kaspa: The block-DAG paradigm in action
- Intermezzo 2
- How I see Dr. Yonatan Sompolinsky (YS)
Chapter 2 - "An almost brief interview with a somewhat accomplished researcher"
- Dr. YS's background introduction - The pioneer of PoW block-DAG
- Dr. YS's background introduction - The academic
- The legacy of Dr. Yonatan Sompolinsky, in verse
- Intermezzo 3
- The interview - Phase 1: Intro and academic career
- The interview - Phase 2: Blockchain, block-DAG, and the world of crypto
Appendix
Chapter 1 - Block-DAGs, the new blockchain meta
A road to digital silver
"Gold. Gold never changes." However, to reach a proper focus of the following discussion, let's consider the words of James Blakely: "Gold is forever."
This yellow-colored metal has been valued since ancient times and, to this day, remains a symbol of timeless quality across the world. However, silver, less expensive but more practical, also warrants attention. Both gold and silver originally served as mediums of exchange and stores of value. Gold, closely linked to world currencies, especially the US dollar, saw a significant shift when US President Nixon ended the dollar's convertibility into gold, effectively abandoning the Gold Standard. Since then, gold has primarily been a store of value, a largely non-functional metal. It is an asset for banks and wealthy individuals, with limited industrial use beyond jewelry and high-end electronics. Consequently, the liquidity of mined gold in global circulation is diminishing slowly. Due to its high value, gold investment is not typically accessible to the average person. Conversely, silver, historically more utilized in industry and less hoarded than gold, is gaining more attention. With over 140 industrial and manufacturing applications, including green energy, electronics, and pharmaceuticals, silver is becoming increasingly significant. Although silver's readily available above-ground supply is arguably scarcer than gold's, over the past 20 years the manufacturing industry has used more than 90% of the globally mined silver. This shows that silver is primarily used in production rather than being stored. Therefore, a significant increase in silver's value is expected in the coming years. The growing demand for silver and slow mining production logically point to a potential short-term price surge.
Mining silver, often a byproduct of gold mining, is challenging and less lucrative, making miners hesitant to focus solely on silver. This contributes to the slow depletion of the silver supply. A significant shift could occur if silver reaches 50% of gold's value, potentially surpassing it in the long term. After all, silver has outpaced gold's price twice in modern history, notably during periods of high industrial demand and when market dynamics favored silver's diverse applications over gold's primarily value-store role. Speculating that silver will reach half of gold's value is intriguing. Such a shift would be remarkable, considering their historical price relationship. Silver's value, like gold's, depends on market demand, mining costs, and economic conditions. Traditionally valued lower than gold, silver is seen as more accessible. However, market fluctuations and changes in demand or economic conditions could significantly impact the value of silver.
Now, can silver reach 50% of the value of gold? For many, the question revolves around whether silver will regain its monetary premium. Currently, gold enjoys much of this premium, owing to its historical significance, endorsement by central banks, and the value placed on it by many people in the form of jewelry and endowments. In contrast, silver, demonetized by central banks, lacks this status. It would only become more pertinent in a scenario where the global economy shifts back to a hard/resource-based monetary system and where gold's high value makes it impractical for smaller transactions, making silver a more practical choice. Therefore, I am not fully convinced by the argument for silver's re-monetization. Digital currencies like Bitcoin (BTC) or Ethereum (ETH) offer even greater practicality in such a scenario. At least, that was my belief in 2020.
By mid-2021, my attention shifted to the Kaspa project, which I discovered to be akin to digital silver, complementing Bitcoin's digital gold. Kaspa's design addresses the well-known trilemma in blockchain technology - achieving scalability, decentralization, and security simultaneously. Its innovative features align closely with the concept of digital silver, more so than any other cryptocurrency I had researched up to that point. This project stood out due to its unique approach and potential to redefine the role of digital assets in mirroring traditional precious metals like silver and gold.
It is important to recognize how the considerable presence of silver in monetized forms, such as coins and bars, has historically played a role in suppressing its prices. In contrast to gold, silver boasts a wide array of real-world applications. Its uses span across various industries, including batteries, electronics, medicine, and photovoltaics. This practical demand for silver is a key factor that will likely propel its value forward. Analogously, considering the current standards in the digital asset space, Kaspa emerges as a strong candidate for the role of 'digital silver.' Its technological framework, rapid emission schedule, and strategic positioning in the cryptocurrency market align with the qualities traditionally associated with silver in the physical world.
Kaspa's innovative approach to addressing key challenges in blockchain technology further reinforces its potential while addressing both of the important factors: being a store of value and a medium for peer-to-peer and microtransactions. The focus on microtransactions typically emphasizes their small size and the ability of digital and cryptocurrency platforms to handle them efficiently, which traditional banking systems often find challenging due to higher transaction fees. This positions Kaspa uniquely in the digital currency landscape, mirroring silver's role and value proposition in the traditional precious metals market. It can serve both as money and as a foundation for finance. Kaspa is community-supported to claim global reach, geared with a strong base layer, and technologically highly evolved to provide a supreme application layer for smart contracts that can compete with the ones we know from Ethereum. After all, the circulating supply of Kaspa tokens was designed for them to be used in various products, not only to be held.
It is not by accident that "Kaspa" originates from ancient Aramaic and translates to "silver." The word, however, carries a broader spectrum of meanings across cultures: in certain African languages, "Kaspa" conveys the concept of being "unbreakable," while in some contexts of American origin, the name has been interpreted as "wealth."
Last but not least, instead of the classical marketing tactics you are familiar with in crypto, Kaspa prioritizes academic and technological excellence over conventional promotional strategies. Like silver, which you rarely see advertised on banners and billboards but which clearly asserts its importance through its use in many industries, Kaspa lets its technological success speak for itself. On many occasions, its achievements in research and innovation have already captured the attention of industry giants, and not just them. As Kaspa blazes new trails in decentralized ledger technologies, one would be forgiven for assuming that patent offices are bracing for double shifts when faced with such prolific output. Or rather, triple!
If we imagine the global interconnection of the world as the wheel that propels modern civilization forward, TCP/IP was like the pneumatic tire added to the wheel, bringing us from the age of the electrical telegraph to the age of the Internet. Now, the blockchain promises to add yet another layer on top of the tire, which would enhance the wheel's functionality even further. Perhaps, if things go well, Kaspa's block-DAGs could be like an anti-gravity harness, giving the oft-lumbering vehicle of humanity a chance to soar into the heavens. And yes, the tickets will be paid in silver.
DAGs - Old solutions, new applications
A DAG is not a novel solution that solves it all. Not only is it not novel, but by itself it creates a problem (how to order parallel blocks) rather than solving one. Block-DAGs are then nothing more than asynchronous, or parallelizable, Proof of Work (PoW). To make a block-DAG network fast and secure, we need a consensus protocol that differs from the longest chain rule, which suits only the linear block ordering typical of blockchains. However, to design a good PoW block-DAG consensus protocol, it is important to realize that the less you assume about the network, the more secure the protocol is, but also the harder it is for researchers to design. We need to remember the very basics: if we know the important details about any system and its starting configuration parameters, we can approximate its behavior in future scenarios under different conditions.
But what if we don't have all the simulation constants at hand, or what if some variables change dynamically, such as network latency or a user's internet speed?
A parameterless protocol is what we need now. This means removing any assumptions about the network. From then on, it will be about reading, optimizing, and adapting. This paves a path to a completely new realm, a realm where DAGKnight acts as the gatekeeper.
The DAGKnight protocol, designed by Michael Sutton and Yonatan Sompolinsky, combines the best of SPECTRE with the fully optimized version of GHOSTDAG (more information about all mentioned protocols can be found in the Appendix section).
DAGKnight introduces parameterlessness to blockchain technology. This feature enables the protocol to autonomously adjust to network conditions without human intervention, significantly simplifying operations and enhancing scalability and security. By eliminating the need for manual parameter tuning, DAGKnight offers a resilient, efficient, and user-friendly system that can adapt to changes in network activity seamlessly, paving the way for broader adoption and development of innovative applications within the blockDAG paradigm.
"Is your consensus protocol, given with a certain Target and fixed block creation rate, secure enough for any latency?"
Yonatan Sompolinsky - DAGKnight presentation - Crypto Economics Security Conference, November 2022
Intermezzo 1
To all Cyberpunk fans and Neuromancer enthusiasts - Mickey
Author: [July 2022] - Damn, appreciation for the debug (review), compadre. Thought my code (writing) was about to bluescreen. Might just jack out of the writing net for a bit, feeling like the muse's got firewall issues.
Editor: Yeah, might wanna let the system cool down a bit. Feels like you have been hitting the keyboard for like four years straight now. Seems you hit the writing flatline (can't write a damn thing).
Three months later…
Author: Matrix be praised! It's happening again!
Editor: What's on the screen this time?
Author: I keyed in something big. Feels like it's going to overclock the whole network!
Editor: Oh, great. Let me guess: you're about to dump a terabyte of data for review, right?
Author: Precisely, but no rush as always. Though, if you could start like decryption soon, and by soon, I mean now, that'd be stellar.
Editor: How deep does this rabbit hole go?
Author: Thought I was compiling The Silmarillion at first, but got it down to sixty pages.
Editor: Sixty pages?!?
Author: Yeah, plus fifty-one queries. It's a deep-dive interview with an industry leader in the area of blockchain, block-DAGs, PoW, and all that Matrix combined. Did I not mention that?
Editor: Negative. And fifty-one questions? It sounds less like an interview and more like you're hacking his brain in interrogation.
Author: Can't resist, dude. The guy's circuitry is hilarious, and he's got the bandwidth for a serious data exchange.
Editor: May the source be with us.
Author: So, you gonna help me patch it together? Can we beat the clock?
Editor: Well...Factoring in your cryptic style, the tech that's way out of my mainframe, the necessary data compression, and then translating it for the normies... Let's target Christmas... 2030.
Searching for the cure to blockchain's maladies
Without sugarcoating it, many blockchains still resemble the Wild West. In the realm of crypto, this translates into what resembles a casino where twenty-five thousand "degens" globally vie to win each other's money. Nonetheless, our approach is to allow crypto to pursue its course, while we concentrate on the technological aspects that underpin it—the technology that empowers the crypto ecosystem.
What have we seen in the blockchain industry so far? We have seen a hyper-inclusive 2015 idea turned into a hyper-exclusive blockchain for the wealthy by 2020. Ethereum, a great achievement of blockchain innovation, became too expensive for normal people to use daily due to network congestion. A chain in which you need to bribe miners to prioritize your transactions, which can still be front-run by bots. Then, the DeFi and NFT boom of 2021 caused skyrocketing costs for interacting with smart contracts. In 2022, we observed the rise and fall of a project initially recognized as a "scalable" solution: Solana. Its year was marred by frequent outages and overloads, and by a deep dependence on a single centralized exchange.
Congestions, crashes, failed transactions, expensive fees.
People with resources could still make money, but smaller participants left most of their trading profits in fees. Nevertheless, the truth is that nobody was forcing them to interact with the network at these times. However, there is no one specific to blame; it is only the price for success.
Some projects aimed for speed but faced functionality issues. Others sought to sustain high traffic peaks while prioritizing speed, only to discover they became cost-prohibitive. Some attempted to address these challenges but essentially transformed into centralized solutions, contradicting the fundamental principles of what blockchain should embody. Certain Proof of Work (PoW) blockchains, like Bitcoin, achieve decentralization and robust security through miners' contributions (and greed). Conversely, in blockchains prioritizing simplicity (consider the differences between Bitcoin and Ethereum, such as the longest chain rule and the absence of smart contracts), low speed is accepted as a deliberate design choice. In the end, the throughput of a PoW network remains, in general, decoupled from its hash rate.
If any chain boasts about "many transactions per second (TPS) and the ability to scale," the question to be asked on their account is:
"What are you guys sacrificing to make this fast and not bottlenecked?"
The architects behind blockchain projects must decide on an inevitable trade-off:
Do they want to accelerate the first transaction confirmation, or do they want to raise the hardware requirements (a road to centralization, since smaller miners would keep creating conflicting blocks)? There is a pressing urgency for a decentralized PoW network with low hardware requirements, fast node synchronization, and long-run affordability. For a network where high TPS implies that the same security can be bought with a lower fee per transaction and where fees fund security. A network capable of providing a new way of implementing smart contracts. A network where users are not victims of gas bidding wars, where front-running sandwich attacks do not target decentralized exchanges, and where MEV predators do not profit by reordering, excluding, or inserting transactions in blocks. This solution would also need to address the two main cryptocurrency use requirements: money and finance.
Cryptocurrencies, when treated as money, need to be fast and backed by a resilient base layer. Finance needs the expressiveness of smart contracts, with a focus on how the agreed state relates to other conditions, among many other aspects of a well-designed application layer. A decentralized network that wants to contain all of this and address all the needs of today's "world of crypto" needs to be surveyed and reinvented from a fresh perspective. Somebody needs to reuse the most significant contributions to blockchain technology that we already have, pay close attention to what was phenomenal and revolutionary, learn from the mistakes and misfortunes of the industry's first-runners, test heavily before prophesying results and mass adoption, and lastly, reinvent the aspect that stops these cryptographic achievements from scaling while remaining decentralized and secure.
One might argue that the inherent problem with blockchain lies in its DNA: its insistence on storing blocks in a linear chain and eliminating forks, an approach that results in a high orphan rate. Before we delve into the topic, let's familiarize ourselves with the two terms for unaccepted, essentially orphaned, blocks: "orphan" in Bitcoin and "uncle" in Ethereum. An uncle block is a block that didn't make it onto the accepted chain. Only one block at a time can be mined and acknowledged as accepted on the blockchain; the remaining blocks become uncle blocks. Uncle blocks arise when two or more miners produce blocks nearly simultaneously. While uncle blocks share similarities with orphan blocks in Bitcoin, they exhibit subtle distinctions tied to the Ethereum protocol. Uncle blocks are valid blocks that the network has rejected after the new-block propagation period ends. Miners receive compensation for producing an uncle block, unlike an orphan block in Bitcoin, for which miners aren't rewarded.
In Bitcoin, orphan blocks are blocks mined simultaneously but not accepted into the blockchain, which adheres to the longest chain rule as its consensus. As the network's block rate increases, more orphans are generated. A high orphan rate is acknowledged to compromise security: when honest blocks find themselves outside the longest chain due to spontaneous forks, the overall security of the chain is diminished. While this issue might not manifest in slow networks, achieving true adoption requires the decentralized network to be fast, secure, and decentralized simultaneously, doesn't it? A radical change in decentralized network consensus is imperative to establish a fast and secure network, essentially eliminating the issue of orphans from the outset. The trilemma of scalability, decentralization, and security in synchronous blockchain protocols says that you can have at most two of these three qualities at the same time, but never all three. Coming to the rescue are the block-DAG paradigm protocols, whose capability to order blocks in graphs, where a new block references all parallel blocks (forks) instead of a single tip, can resolve the trade-off between speed and security for blockchains that need to scale effectively. However, is it possible to reforge the blockchain so that you are not stuck with the chain from the beginning? The solution to this challenge may be found by approaching the technology with a fresh perspective and leveraging protocols like Kaspa's GHOSTDAG, developed by Dr. Yonatan Sompolinsky, Prof. Aviv Zohar, and Shai Wyborski.
The cure to that speed-security trade-off lies in the following challenge:
"Is it feasible for your network to achieve rapid consensus and confirmation times while concurrently preventing a 49% attacker from undermining and disrupting the ordering or the consensus itself?"
Using the block-DAG PoW with strong ordering protocols opens unprecedented opportunities for the issues of today's blockchain. A block-DAG network, similar to some blockchains, holds great promise as a solution to numerous challenges that individuals face today, including corruption and persecution by authorities. Decentralized networks can eliminate single points of failure, thanks to the security from miners' hashing power and robust consensus mechanisms. This trustlessness means there's no need for a central authority, making transactions more secure, cheaper, and transparent. Robust cryptography and ordering protocols would offer resistance against double-spending and history reorganization frauds. Simultaneously, its speed would be derived from a rapid block creation rate without being hampered by network latency resulting from the creation of numerous forks by miners - and this very last part is when blockchains do fail.
Before exploring a secure and decentralized solution that is also fast, we must address the issue mentioned: the imperative to mitigate the high orphan rate problem within decentralized PoW networks that want to benefit from the security of the Nakamoto consensus but also want to overcome its limitations. Orphaned blocks compromise network security and squander the energy invested in mining the numerous forks that arise in chains governed by the longest chain rule. To tackle this, Kaspa leverages inclusive protocols like GHOSTDAG, which encompass all blocks created by parallel branches and reference them comprehensively, so that DAG edges connect the new block to all reachable tips created by those parallel blocks. This way, every fork and its blocks become integral to the network's history. Then, we need to maintain a property where the 50% security threshold is preserved for any network speed and block creation rate, while transaction confirmation times go arbitrarily low, down to around 100 milliseconds. Once more, the remedy lies in a generalization of the Nakamoto consensus and its longest chain rule as initially introduced in Bitcoin, a generalization that suits setups with fast block creation rates or large blocks. In contrast to off-chain solutions like the Lightning Network, where transactions occur on a separate layer, block-DAG PoW protocols like GHOSTDAG advocate an "on-chain/on-DAG" approach to scalability. GHOSTDAG employs the greedy algorithm introduced in the PHANTOM paper on block-DAGs (and in the Preface of this work) to identify blocks mined by "honest" nodes: it selects the largest subset of blocks that maintain consistent referencing within a defined number of steps, ensuring they represent the majority-supported, legitimate extension of the network, while excluding blocks from "non-cooperating" nodes that deviate from the mining protocol.
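To make the "largest consistently-referencing subset" idea concrete, here is a heavily simplified sketch of the greedy k-cluster coloring at the heart of GHOSTDAG. The real algorithm operates along a selected chain and its mergesets; this toy version only walks the DAG in a fixed topological order and colors a block "blue" (part of the honest cluster) if at most k already-blue blocks are parallel to it. The DAG, the traversal order, and the block ids are all hypothetical:

```python
def past(dag, block):
    """All blocks reachable from `block` via parent links (its history)."""
    seen, stack = set(), list(dag[block])
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(dag[b])
    return seen

def greedy_blue_set(dag, topo_order, k):
    blue = set()
    for block in topo_order:
        # Blue blocks outside this block's past lie in its anticone
        # (they are parallel to it, since we walk in topological order).
        if len(blue - past(dag, block)) <= k:
            blue.add(block)
    return blue

# "X" references only genesis, like a poorly connected (or withheld) block.
dag = {"G": [], "A": ["G"], "B": ["G"], "C": ["A", "B"], "X": ["G"]}
blue = greedy_blue_set(dag, topo_order=["G", "A", "B", "X", "C"], k=1)
print(sorted(blue))  # -> ['A', 'B', 'C', 'G']; X stays outside the blue set
```

With k=1, the parallel honest blocks A and B both stay blue, while X, which ignores them, exceeds the anticone tolerance and is left out, exactly the fate intended for blocks that deviate from the mining protocol.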
The final element in constructing the ideal decentralized PoW network, enhancing confirmation times and addressing current blockchain challenges, is minimizing network assumptions. This entails removing the requirement to assume a bound on network latency. This objective is achieved by the block-DAG protocol's parameterless approach, employing the DAGKnight protocol.
The following pages will explain why this generalization was needed and why the direct use of the longest chain rule, with forks simply discarded, does not suit decentralized and secure networks that need to scale: networks with a high block creation rate, where block propagation delay can no longer be ignored.
So let's change the linear ordering of a blockchain - which needs a sequential operation mode that does not support parallelism and where you cannot introduce new transactions until you agree on the previous state of what the chain is - for a directed acyclic graph (DAG), a directed graph with no directed cycles. Thus, we create a block-DAG and switch the longest chain rule for a consensus from the research line of Dr. Sompolinsky.
"For the first time EVER, a pure proof-of-work protocol has been carrying THOUSANDS of transactions per second across dozens (maybe hundreds) of network nodes in a permissionless network, running on affordable hardware! This is history in the making, and we are just getting started!" - Shai Deshe Wyborski
The block-DAG hook, or why you should care
One from the left and one from the right.
Right direct, jab, left uppercut.
The following chapters will introduce you to block-DAG Proof of Work (PoW). This technology can go hand-in-hand with typical blockchain but can also challenge it. The strength of block-DAG lies in combining the stability of ordering blocks as graphs and the protection provided by the power of PoW. Hence, block-DAG seeks to overcome the shortcomings that stem from the traditional blockchain's linear nature - mainly that it does not reference parallel blocks. The ideas of Dr. Sompolinsky and his academic peers led to the creation of various consensus protocols and ordering protocols, which eventually became the digital soul of Kaspa. Kaspa is the real-world block-DAG implementation of Yonatan's DAG research.
Kaspa with GHOSTDAG, along with proper hardware configuration for nodes, solved the infamous trilemma of decentralization, scalability, and security - a trade-off that, until recently, PoW networks inescapably had to face. With Kaspa and GHOSTDAG, delivering a high block creation rate with instant transaction confirmations determined by network latency - not by the protocol - while maintaining security and decentralization is not out of reach anymore. Even though a robust block creation rate is not anything noteworthy by itself, what sets Kaspa's approach apart is a confluence of aspects: balanced trade-offs and the fact that a block-DAG PoW protocol, such as GHOSTDAG, increases blocks per second (BPS) without degrading confirmation times. A high BPS also significantly reduces the fraction of hashrate a solo miner needs to keep revenue delays relatively short. This reduces the hardware requirements for solo miners and the number of miners required to operate a pool with consistent revenue. Consequently, it diminishes a strong incentive for using centralized pools, commonly found in low-BPS chains. This book primarily focuses on PoW consensus, but enthusiasts of Proof of Stake (PoS) and other consensus types may also find it valuable.
Now, without any further ado, let us dive into the world of block-DAGs and Dr. Yonatan Sompolinsky.
Technological introduction
What is PoW?
Many people hold various opinions regarding PoW, but if you ask a general audience, they will typically provide one of these three answers, depending on their level of understanding:
1) The activity of miners (whatever that may be).
2) That thing that is environmentally taxing due to its high carbon footprint and energy costs.
3) A process where mining difficulty adjusts based on the number of miners participating in solving a cryptographic puzzle, with block discovery modeled as a Poisson process.
All three points offer somewhat correct answers and open up many opportunities for discussion, particularly regarding environmental impacts. PoW, a consensus mechanism based on computational hardware power, is notable for its high energy consumption and potential carbon footprint. However, considering large intermediaries such as banks with sprawling skyscrapers, how much energy is required to operate and heat these buildings during cold winter days? In addition, there are projects aimed at utilizing the thermal power of volcanoes or solar energy to enable greener mining practices. Notably, the PoW function of Kaspa was specifically designed to be compatible with Optical ASIC chips, a unique feature that potentially allows for PoW mining with significantly reduced electricity consumption. The trade-off involves the high initial capital investment needed for these machines and the ongoing maintenance costs of Optical ASIC chip mining versus the long-term benefits of reduced electricity usage due to light-electron interaction.
A solar installation like Tesla's can supply this electricity consumption, thus creating an environmentally compelling PoW ecosystem.
Returning to the three responses we received about "What is PoW?" The last option, which mentions the Poisson process, most closely aligns with our main topic. Now, let's explore this in greater detail.
PoW is a process in which miners race to brute-force guess a nonce that yields a valid block hash and thereby earn a reward. Most articles or books will describe PoW as a decentralized consensus mechanism where participants within the network dedicate computational effort to finding a hash value below a given target. This process is colloquially known as "mining," and those engaged in it earn rewards for their computational contributions. But more fundamentally, PoW is a technological primitive that enables participants to reach a consensus without known identities. Bear in mind, though, that consensus protocols (CPs) have been around for decades, if not longer; the term did not arrive with blockchains. The use of PoW for permissionless consensus was pioneered with Bitcoin, and the primary objective behind it was to eliminate the need for a predefined set of participants. Previously, in the context of CPs, it was assumed that there were a certain number ('n') of nodes with known indexes (names). These nodes were responsible for agreeing on the history of events, especially in conflict scenarios. Since the nodes were named according to their cardinality, a consensus protocol could be established among them, typically through "leader selection," where a leader was chosen to resolve conflicts. This decision, however, is not trivial and can be manipulated. Moreover, it's important not to assume that the leader node is always correct; it could be a compromised or faulty node. Therefore, the assurance lies not in assuming the correctness of any single node but in guaranteeing that the system reaches a consensus on the accurate history of events and their chronological order within the system.
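The brute-force nonce search described above can be sketched in a few lines. This is only a minimal illustration, not Bitcoin's actual header serialization; the header bytes and the deliberately easy target are made up for the demo:

```python
import hashlib
import struct

def mine(block_header: bytes, target: int, max_nonce: int = 2**32):
    """Try nonces until the double-SHA256 of (header || nonce) is below target."""
    for nonce in range(max_nonce):
        payload = block_header + struct.pack("<I", nonce)
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # valid proof of work found
    return None  # nonce space exhausted; a real miner would tweak the header

# An easy target (roughly the first 16 bits must be zero) so the demo finishes
# in a blink; real networks set the target so blocks take minutes to find.
easy_target = 2 ** (256 - 16)
nonce = mine(b"illustrative-header", easy_target)
```

Difficulty adjustment then amounts to lowering or raising the target so that, given the observed hash rate, blocks keep arriving at the intended pace.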
The primary role of PoW extends beyond just shaping economic dynamics, although that aspect is indeed significant. PoW, as the core concept behind Nakamoto's consensus and Satoshi's Bitcoin, was intended to enable reaching consensus without prior knowledge of the number of servers in the network, their identities, global locations, or operators. Despite having limited information about the system, certain assumptions are made, and under these conditions, consensus is achievable through the protocol's selection of the longest chain. With Bitcoin, the most famous PoW-based chain using the longest chain rule, there is, somewhat shockingly, a chain of blocks. Every 10 minutes on average, a new block full of transactions is mined, stamped using the PoW mechanism, and appended after the latest block. However, what happens when two miners extend the same block at nearly the same time? Such conflicts in Bitcoin look like a fork in the chain, so the supposedly linear structure of the blockchain becomes a "tree of blocks." The longest "branch" (chain) of the tree is the only one preserved, and off-the-main-chain blocks are discarded.
This describes Bitcoin in 2009 and how it was proposed in Satoshi Nakamoto's Bitcoin white paper as the Nakamoto Consensus.
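The fork resolution just described can be sketched as follows; the block ids and the toy block tree are hypothetical:

```python
# Longest-chain (Nakamoto) fork resolution over a toy "tree of blocks".
# Each block records only its parent; None marks the genesis block.
def longest_chain(blocks, tips):
    def chain_to(tip):
        chain = []
        cur = tip
        while cur is not None:
            chain.append(cur)
            cur = blocks[cur]
        return list(reversed(chain))  # genesis -> tip
    # Every losing branch is discarded; its blocks become orphans.
    return max((chain_to(t) for t in tips), key=len)

blocks = {"G": None, "A": "G", "B": "A", "B2": "A", "C": "B"}  # fork at A
winner = longest_chain(blocks, tips=["C", "B2"])
print(winner)  # -> ['G', 'A', 'B', 'C']; B2 is orphaned
```

In practice Bitcoin nodes compare cumulative proof of work rather than raw block count, but the principle of discarding the losing branch is the same.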
How is block-DAG getting blockchain PoW beyond its limits?
This section will explain why a generalization of the Nakamoto consensus was needed to overcome its limitations while maintaining its mining, transaction, and block model used in block-DAGs.
To address the security-scalability trade-off inherent in Bitcoin—which ensures it is secure and decentralized but slow—we need to enhance and continuously extend the dynamics of Bitcoin's native protocol, including its basic application of the longest chain rule. In this system, each miner interacts with the network from a local perspective, typically seeing only one tip of the chain, or multiple tips in the event of a fork. In the Bitcoin paradigm, the miner selects the winning tip and continues mining on top of it while ignoring the rest. This leaves some space for improvement. For instance, you can change this dynamic by simply telling the miners: "Please, do refer to all the tips that you see, so that nobody's work is wasted, and let the protocol decide which one is the correct one that wins." Then you would no longer hide any information from the protocol, and you could even still use the longest chain rule, if you must stick to that concept.
Instead, the block-DAG paradigm is all about: "Hey, tell us about all the blocks that are mined in the PoW system, and let's start with all the tips that you see." Once we do that, we have already arrived at the directed acyclic graphs (DAG) paradigm.
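This mining rule, referencing every visible tip instead of a single winner, can be sketched as follows (the DAG and block ids are illustrative):

```python
# A new block in a block-DAG points to *all* current tips, merging forks
# instead of discarding them.
def tips(dag):
    """Tips are blocks that no other block references as a parent."""
    referenced = {p for parents in dag.values() for p in parents}
    return set(dag) - referenced

def mine_block(dag, new_id):
    dag[new_id] = sorted(tips(dag))  # parent list = every visible tip

dag = {"G": [], "A": ["G"], "B": ["G"]}  # A and B were mined in parallel
mine_block(dag, "C")
print(dag["C"])  # -> ['A', 'B']: both forks' work is preserved
```

With this rule, the data structure is already a directed acyclic graph; the remaining question, which the rest of the chapter addresses, is how to order it.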
Block-DAG technology enhances the PoW mechanism by expanding beyond traditional limitations, introducing a more dynamic, efficient, and inclusive approach to network participation and block validation. This advancement addresses inherent inefficiencies of the longest chain rule prevalent in Bitcoin's blockchain, promoting more comprehensive utilization of mining efforts and reducing wastage. By acknowledging and incorporating multiple block references, block-DAGs enable a richer, more interconnected network structure, diverging from a singular focus on chain tips to a broader, graph-based perspective.
Key concepts and enhancements:
1. Introduction of Directed Acyclic Graphs (DAGs): Unlike the linear progression observed in blockchains, block-DAGs employ DAGs to create a web of interconnected blocks. This structure accommodates parallel block creation and integrates these blocks into the network more efficiently, thereby increasing throughput and reducing the isolation of mining efforts.
2. Visualization and network morphology: The analogy of the Snake game and railway systems helps to conceptualize the evolution from a singular, linear expansion to a multi-dimensional growth pattern. This comparison illustrates the transition towards a system where blocks spread out in width and length, resembling the complex interconnectivity of railway tracks.
3. Genesis to latest blocks: The structure of block-DAGs is depicted as a flow from the Genesis block to the latest, with all intermediate blocks contributing to the network's history. This arrangement ensures that each block references its predecessors, creating a dense web of connections that enhance security and integrity.
4. Block referencing in block-DAGs: Unlike the longest chain rule, block-DAGs do not limit block references to the longest or winning chain. Instead, new blocks reference all visible tips and predecessors, fostering a more inclusive approach that acknowledges the contributions of all miners. This method enhances the network's connectivity and robustness.
5. Parameter 'k' and system latency: The 'k' parameter introduces a measure of the maximum anticone size of a block in an honest network, allowing the network to adjust for optimal throughput without compromising security. This aspect underscores the adaptability of block-DAGs to network conditions, a key feature that ensures efficient operation even as the rate of block creation increases. When put in relation to system latency and parallel block creation, the 'k' parameter further emphasizes its role as a tolerance parameter for parallel block creation, consistent with its purpose in managing the trade-offs between throughput and security in a block-DAG system.
A block's anticone can consist of blocks unknown to the block's miner and blocks created before the new block finishes propagating. In other words, a block's past (what lies to its "left") and its future (what lies to its "right") contain the blocks ordered before and after it, while the anticone contains everything else: the blocks parallel to it, for which the protocol has no inherent ordering.
The parameter 'k' controls the tolerance for simultaneously created blocks, allowing adjustments for higher throughput. When k=0, there are no forks, similar to Bitcoin's single chain and longest chain structure.
6. Forks and network width: The presence of forks within a block-DAG is indicative of the system's latency and the parallel mining activity. These forks contribute to the network's width, highlighting the capacity of block-DAGs to support a high degree of parallelism and scalability.
7. Ordering protocol: Despite the asynchronous and parallel nature of block creation in block-DAGs, an additional ordering protocol is necessary to establish a coherent linear history for the ledger.
This requirement emphasizes the balance between the expanded capabilities of block-DAGs and the need for a structured approach to transaction verification and block inclusion.
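To make the anticone notion from point 5 concrete, here is a minimal Python sketch over a hypothetical five-block DAG (all block names invented for illustration): the anticone of a block is everything that lies neither in its past nor in its future.

```python
# Toy block-DAG: each block lists its parents (the blocks it references).
dag = {
    "genesis": [],
    "A": ["genesis"],
    "B": ["genesis"],     # mined in parallel with A
    "C": ["A"],           # saw A but not B
    "D": ["A", "B"],      # references both visible tips
}

def past(block):
    """All ancestors of `block`, i.e., its history."""
    seen, stack = set(), list(dag[block])
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(dag[b])
    return seen

def anticone(block):
    """Blocks mined in parallel with `block`: neither in its past nor its future."""
    future = {b for b in dag if block in past(b)}
    return set(dag) - past(block) - future - {block}

print(anticone("B"))  # {'A', 'C'} were created in parallel with B
```

In this toy DAG, B's anticone is {A, C}: B neither references them nor is referenced by them, which is exactly the parallelism that the tolerance parameter 'k' bounds.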
Conclusion
Block-DAGs represent a significant evolution in blockchain technology, addressing critical challenges related to scalability, efficiency, and inclusivity in PoW systems. By leveraging the principles of directed acyclic graphs, block-DAGs offer a compelling alternative to traditional blockchain architectures, promising enhanced throughput, reduced redundancy, and a more democratic mining process. This technology can potentially reshape the future of distributed ledger systems, making them more accessible, efficient, and scalable.
Challenges within a block-DAG
Because a DAG is not a chain per se but a graph, parallel blocks can contain conflicts. Therefore, choosing new blocks based on the longest chain rule in a DAG is unlikely to be secure in the fast decentralized networks today's world needs. If we create one block per 10 minutes, as Bitcoin does right now, there will be almost no forks. So yes, under this condition, it will be secure. The absence of forks stems from the long block interval (10 minutes), which gives the network ample time to announce each new block to all participants. But when an honest network suffers from any meaningful latency, the longest chain will not necessarily represent the honest majority of the network reaching consensus. Instead, the longest chain can represent a centralized attacker that does not suffer network latency, whereas the honest, distributed network is encumbered with the latency manifested in its many forks. That is why a DAG has to adopt a different approach to control the ordering and counteract attackers attempting to dominate the majority of the DAG.
Another important thing to realize is the speed aspect of a block-DAG network: the block creation rate, which widens the block-DAG through all the forks it creates. This width, which grows quadratically rather than exponentially, is governed by the network parameter k. The wider the block-DAG and the more parallel forks it has, or the faster the block creation rate and the larger the blocks, the more latency the network suffers, and the smaller an attacker needs to be to create a fraudulent reorganization.
So, we need something that solves the security issue of reorganization attacks while keeping the network as fast as possible.
In a slow network, say one block per 10 minutes, you obtain a trivial, narrow DAG whose width equals one, so using the longest chain rule would be secure. However, if you scale to 100 blocks per second, latency increases linearly, and the size of an attacker able to disrupt the network decreases. Thus, to make a DAG network secure against revert attacks, you need to make it fast, because a higher block creation rate provides better security against 49% hashrate attacks. But remember, it does not protect the network against 51% attacks - the only deterrent against those, regardless of how the blocks are arranged, is a high mining hashrate.
A 51% attack refers to an assault on a blockchain executed by a coalition of miners who possess over 51% of the network's mining hash rate.
To achieve a high mining hash rate, whether within a blockchain or a DAG structure, you need to attract a lot of miners. To attract miners to the network and ensure its security, the network and its associated fees must be economically appealing. Additionally, global adoption and a strong use case are essential for attracting users and ensuring long-term viability, especially once all tokens are mined. User fees generated from mass adoption will keep the network alive.
This simply emphasizes the symbiosis between mathematical security, cryptography, hardware requirements, and attractiveness to miners and users.
The mathematical security analysis is consistently based on the premise of an honest majority, meaning it applies under the assumption that more than 50% of the participants in the network are honest and cooperative. In this context, security is a function of the number of confirmations a transaction accumulates. Consequently, a 49% attacker faces the same probability of reversing a transaction with ten confirmations regardless of whether each confirmation was obtained in 10 minutes or in 0.1 seconds. To make the network secure, besides obtaining that important speed factor, it is imperative to ensure that 51% attacks are economically unfeasible. This is achieved by utilizing block rewards and fees, which effectively purchase security against these attacks. Cryptographic defenses alone are insufficient; thus, the economic incentives provided by rewards and fees play a vital role in network security.
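The "same probability per confirmation count" claim can be illustrated with the simple random-walk estimate from the Bitcoin whitepaper. The sketch below is a generic illustration, not Kaspa-specific: an attacker controlling a hashrate fraction q who is z confirmations behind eventually catches up with probability (q/(1-q))^z, a quantity that depends only on the count z, not on how long each confirmation took.

```python
def catch_up_probability(q: float, z: int) -> float:
    """Chance an attacker with hashrate fraction q overtakes a lead of z blocks."""
    p = 1.0 - q                    # honest fraction of the hashrate
    if q >= p:                     # a 50%+ attacker succeeds eventually
        return 1.0
    return (q / p) ** z

# A 49% attacker retains a substantial chance against 10 confirmations,
# whether those confirmations took 100 minutes or one second in total.
for z in (1, 6, 10):
    print(z, round(catch_up_probability(0.49, z), 4))
```

Note that z counts confirmations, so halving the block time halves the wall-clock wait for the same security level; this is exactly why speed and security are not at odds under an honest majority.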
Now, let's consider the speed of the network and the volume of data that nodes need to handle.
In the context of network speed, rapid block creation often results in the generation of multiple forks and the accumulation of substantial on-chain data. Robust pruning mechanisms are essential to manage this data effectively, especially in a fast Proof of Work (PoW) environment. These mechanisms are crucial in ensuring that long-term storage requirements are reasonable and synchronization times remain minimal. Pruning involves the selective removal of unnecessary block data while preserving the network's integrity. As a result, new nodes can deterministically attain the current network status and swiftly integrate into the system following synchronization, which is exceptionally efficient in a network equipped with a proficient pruning algorithm. The act of pruning leads to reduced hardware requirements for nodes. With less data to process, nodes can operate on more affordable and sustainable hardware, fostering decentralization and ensuring long-term viability.
A lower barrier to entry and greater inclusivity mean more decentralization! Great!
It is important to emphasize that higher hardware demands for network nodes can lead to fewer participating nodes, decreasing decentralization and security. Therefore, maintaining low hardware requirements and good pruning mechanisms is critical to securing affordability and sustainability and achieving substantial decentralization across the entire network while ensuring fast synchronization.
Let's summarize the hardware requirements for Kaspa nodes:
1. Standard Kaspa node: A node that retains the latest data until it undergoes pruning.
Minimum Specifications:
Storage: 100 GB
Processor: Quad-core, 64-bit (compatible with Intel, AMD, ARM, including Raspberry Pi platforms)
Memory: 8 GB RAM
Internet Connection: 10 Mbps
Recommended Specifications:
Storage: 100 GB SSD (Solid State Drive)
Processor: 8-core, 64-bit (compatible with Intel, AMD, ARM, including Raspberry Pi platforms)
Memory: 16 GB RAM
Internet Connection: 40 Mbps
2. Archival Kaspa node: A node that stores the complete dataset without pruning.
As of February 21, 2024, the total storage requirement is approximately 954 GB, with a daily increase of about 1.5 GB.
Key Difference for Archival Nodes:
Storage Requirement: As the node archives all data without pruning, the primary variance is in Hard Disk Drive (HDD) storage capacity, which should accommodate the current and anticipated growth of the dataset.
Now that we have covered the hardware aspects, let us explore another challenge we need to face in the block-DAG paradigm: restoring consistency within a DAG.
Block-DAGs and other asynchronous consensus or agreement protocols face difficulties when dealing with a high block creation rate, resulting in conflicts similar to those in systems that generate blocks simultaneously. It becomes uncertain whether information has been inadvertently duplicated, such as the same unspent transaction output being spent multiple times across parallel blocks. In situations like this, multiple servers in the network might update the database with conflicting transactions, such as a double-spend. That is why the block-DAG paradigm needs to give us a linear ordering over the DAG and all its events. With a linear ordering approach, we traverse the sequence by iterating from the earliest to the most recent transaction. We validate transactions that are consistent with the previous ones and the current state, while we skip or discard inconsistent transactions. An example of this linear ordering method is featured in the GHOSTDAG protocol. Within the block-DAG paradigm, blocks are permitted to contain conflicting transactions. However, the network does not update the state using these conflicting transactions. Instead, the conflicting transactions are subsequently ignored.
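The "iterate and skip conflicts" rule described above can be sketched as follows, using a deliberately simplified UTXO model (all identifiers hypothetical): transactions are visited in the linear order output by the ordering protocol, and a transaction is accepted only if every output it spends is still unspent.

```python
def apply_ordered(transactions, utxos):
    """transactions: list of (spent_inputs, created_outputs) in DAG order."""
    accepted = []
    for inputs, outputs in transactions:
        if all(i in utxos for i in inputs):      # consistent with current state
            utxos -= set(inputs)                 # mark spent outputs as unusable
            utxos |= set(outputs)
            accepted.append((inputs, outputs))
        # else: conflicting (e.g., a double-spend) is skipped, never applied
    return accepted, utxos

utxos = {"coin1", "coin2"}
txs = [
    (["coin1"], ["coin3"]),   # spends coin1
    (["coin1"], ["coin4"]),   # double-spend of coin1: will be skipped
    (["coin2"], ["coin5"]),
]
accepted, final = apply_ordered(txs, utxos)
print(len(accepted), sorted(final))  # 2 ['coin3', 'coin5']
```

The double-spend is not rejected by refusing the block that contains it; it simply loses by arriving later in the agreed ordering, which is the essence of conflict resolution in a block-DAG.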
Below is a summary of the Kaspa approach to double-spend protection. Kaspa showcases an exemplary implementation of the block-DAG PoW mechanism, coupled with a robust consensus and ordering protocol.
All the bits that together prevent double-spends and other reorganization attacks:
1. Main overview of Kaspa's approach:
Combines the GHOSTDAG protocol and UTXO model.
Ensures each digital coin is used only once, even during parallel transaction processing.
2. GHOSTDAG protocol level:
Establishes a universally agreed-upon transaction order.
Resembles a standardized rulebook to prevent confusion in the customer order, akin to bank tellers following a common procedure.
3. Transaction sorting:
Segregates transactions into "blue" (main chain) and "red" (conflicting) sets.
Streamlines the resolution of teller-like disputes for efficient processing.
4. Block classification:
Divides blocks into "kernels" (approved) and "anticone" (pending approval).
Systematically manages conflicting transactions within the GHOSTDAG.
5. Unspent transaction output (UTXO) model:
Enhances security with "unspent outputs" instead of traditional balances.
Prevents double-spending by marking spent outputs as unusable.
6. Integration for coin security:
Combines GHOSTDAG protocol with UTXOs to ensure each digital coin can't be reused after a transaction.
This is analogous to a vigilant cashier ensuring that a spent dollar bill cannot be used again.
Now, let's address the challenge of maintaining fast confirmation times while ensuring security.
GHOSTDAG's instant transaction confirmation stems from a rapid and global agreement on the DAG ordering, which proves essential whenever a conflicting transaction occurs.
The process flows like this:
GHOSTDAG protocol -> Consensus in block ordering -> All nodes follow the same sequence -> Agreed-upon ordering -> Uniform conflict handling
Confirmation time denotes the duration until you can confidently verify, with a high degree of certainty (assuming an honest majority), that the block containing your transaction will not be reordered. It confirms that the block's place in the ordering has reached consensus. This maintains the same level of security as Bitcoin's Nakamoto consensus; the sole distinction lies in replacing the term "orphaned" with "reordered." The ordering converges in consensus, and it does so rapidly; the duration is dictated solely by network latency, regardless of block production rates.
Now, the block ordering. That's where the GHOSTDAG protocol really shines, with its ability to establish a durable event order within a block-DAG structure. This means that the event sequence remains immune to retroactive changes.
The proper ordering provided by GHOSTDAG encompasses the following key attributes:
1. Topological Order:
A block cannot appear in the ordering before any of its parents.
2. In Consensus:
At any given moment, all nodes in the network must unanimously agree on the ordering of all but a constant number of recent blocks.
3. Security:
A computationally inferior adversary cannot retroactively alter the ordering of blocks.
4. Liveness:
There should be a clear criterion for when a block is "finalized," meaning it will never change its place in the ordering. Every block should meet this criterion within a constant amount of time.
5. Efficiency:
Determining, calculating, and maintaining the order should be feasible for today's computers, even in the face of a continually expanding DAG.
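Property 1 above is easy to check mechanically. Here is a tiny Python sketch (toy DAG, hypothetical block names) that verifies whether a proposed ordering is topological, i.e., that no block appears before any of its parents.

```python
# Toy DAG: `parents` maps each block to the blocks it references.
parents = {"G": [], "A": ["G"], "B": ["G"], "C": ["A", "B"]}

def is_topological(order):
    """True iff every block appears after all of its parents."""
    position = {block: i for i, block in enumerate(order)}
    return all(position[p] < position[b]
               for b in order for p in parents[b])

print(is_topological(["G", "A", "B", "C"]))  # True
print(is_topological(["G", "C", "A", "B"]))  # False: C precedes its parents
```

Note that a DAG admits many valid topological orders (here, A and B may appear in either order); the role of a consensus protocol such as GHOSTDAG is to make all honest nodes converge on the same one.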
SPECTRE, another important protocol from Yonatan's research line but outside the PHANTOM family of protocols, has the properties mentioned above in a weakened form: its ordering may change retroactively, but never in a way that affects the unspent transaction output (UTXO) set.
A little detour into protocols
Since I mentioned SPECTRE, the protocol developed in the pre-PHANTOM era, let's discuss its potential briefly.
SPECTRE, the first "49% attack"-resilient and parameterless (means that we don't assume anything about the network and protocol adapts to the best of its capabilities) protocol, recognized for its speed, was initially considered as a candidate for the first Kaspa consensus before the core team decided to adopt GHOSTDAG instead. Yonatan once told Shai Wyborski that SPECTRE is his most beautiful creation. This protocol provides many interesting features, such as throughput limited by hardware (and not by security like in Bitcoin) and confirmation times limited only by the delay of the actual network.
SPECTRE and GHOSTDAG are somewhat complementary, each offering a property that other protocols from Yonatan's research line do not: GHOSTDAG has linear ordering, while SPECTRE is parameterless. The aim was to craft a gem that extracts the finest attributes from these diverse protocols, leading to an ultimate PoW consensus. This endeavor harmonizes all these distinctive advantages into a unified protocol named DAGKNIGHT (DK), credited to Michael Sutton and Yonatan Sompolinsky. The DK project originated in 2020/2021, during the long Covid quarantine, as an unexpected byproduct of working on other challenges. It achieves both linear ordering and parameterlessness, acting as a technological diamond. Read below about DK's unique perks and the properties it shares with other protocols of the DAG research line:
Dynamic confirmation times: capable of adjusting confirmation times to approach network limits safely; increases confirmation times automatically to maintain stability under any network condition degradation.
Self-scaling: scales itself as network latency improves.
Future implementation: planned to be the next consensus mechanism for Kaspa, with anticipated application between 2024 and 2025.
Eliminates assumptions: removes the need for certain assumptions about the network's conditions.
Nakamoto consensus security: achieves security independent of block rates, similar to GHOSTDAG and SPECTRE protocols.
Linear ordering: features a rapidly converging linear ordering, akin to GHOSTDAG.
Smart contract suitability: compatible with smart contracts, mirroring the capabilities of GHOSTDAG.
Network responsiveness: responsive to actual network latency, similar to the SPECTRE protocol.
The GHOSTDAG protocol and its advanced successor, DK, introduce significant enhancements to the PoW ecosystem. Through Kaspa's innovative work within the block-DAG framework, these protocols are instrumental in realizing the vision initially proposed by Satoshi Nakamoto. By addressing key challenges such as scalability, security, and decentralization, GHOSTDAG and DAGKNIGHT contribute to the evolution of blockchain technology, offering a robust foundation for the next generation of decentralized applications.
The SPECTRE protocol is often lauded for its innovation yet noted for its limitations in real-world, block-DAG-based PoW applications. SPECTRE, once considered a promising candidate for enhancing the blockchain landscape, requires careful consideration due to its unique approach to transaction ordering and conflict resolution, which directly impacts its suitability for certain blockchain functionalities. SPECTRE was designed to enhance the scalability and speed of cryptocurrency transactions. It addresses limitations inherent in traditional blockchain technologies by introducing a new way of achieving consensus even under high throughput conditions and fast confirmation times. SPECTRE is engineered to remain secure against attackers with up to 50% computational power and can operate efficiently at high block creation rates, ensuring transactions are confirmed in seconds. However, even though SPECTRE is a highly efficient protocol suitable for VISA transaction speed, it generates a pairwise ordering, which is potentially cyclic and nonlinear. This characteristic means it might not always be possible to linearize the ordering. In instances where a transaction conflict occurs before confirmation, SPECTRE theoretically allows for the possibility of delaying confirmation indefinitely, highlighting a vulnerability known as "weak resistance to Liveness attacks". Due to this potential for non-linear ordering, SPECTRE is generally considered unsuitable for smart contract applications, where linear transaction history is crucial.
Smart contracts depend on the absolute certainty of transaction sequences to ensure their conditions are met and executed correctly. And since the SPECTRE ordering method might not provide the rigid determinism that smart contracts require, Kaspa core contributors opted for GHOSTDAG.
A complete summary of these protocols can be found in the Appendix section of this work.
Wrapping it up
The block-DAG paradigm can then be described in three steps.
Step 1: Mining via the PoW protocol, which we are all familiar with from Bitcoin.
Step 2: DAG ordering.
Step 3: Iteration over the linearly ordered DAG, where you accept every transaction according to a certain order outputted by the ordering protocol and then accept every transaction that is consistent with the past.
Step 2 hinted at the last challenge of this chapter and also the part where it could get interesting for anybody who would like to create a successful PoW in a block-DAG paradigm:
"Can you get a good ordering algorithm?"
To understand this, let's contrast a good and a bad ordering algorithm. An example of a bad ordering algorithm for a fast block-DAG is using the longest chain rule. Here, an attacker can arrive later than your transaction, inject its own transaction, and still precede you in the ordering. Another bad approach is to simply order blocks by their hashes, using the hash value to choose the next block to pop.
The issue with this straightforward approach is as follows:
You initiate a transaction, which is broadcast to the block-DAG network and functions as described earlier. However, one year later, an attacker could generate a conflicting transaction with your original transaction, executing a double-spend. The attacker would then mine this conflicting transaction concurrently (in parallel) with the block containing the original transaction, continuing until the nonce aligns to precede the original transaction. In such a flawed ordering protocol, even a year later, an attacker can potentially reverse users' transactions.
In a good ordering algorithm, these attacks from the "past" are recognized as very disconnected, thus suspicious, and are not taken into account. Also, in a good ordering algorithm, an attacker cannot win by using the work of honest participants to do the job for them and gain credibility that would allow them to succeed. Hence, we require a system capable of identifying and eliminating manipulations, whether mining a block, withholding it off-chain and publishing it after a year, or mining a block and falsely claiming it was mined a year ago. Our goal is to retain only transactions that were correctly mined and successfully entered the order, wherein the miner references all recent blocks and promptly publishes the block. A crucial property of a robust ordering system involves reasoning about lateness over time, aligned with the topology of the block-DAG. Effective reasoning should render a block mined a year ago an outlier in the block-DAG, indicating its complete disconnection from most blocks and raising suspicion, as a properly mined block would be well-connected to its surroundings. Additionally, in a good and effective ordering algorithm, once a transaction is published and a certain amount of time has elapsed, the probability of your transaction being preceded in the ordering by a new, unpublished transaction should be close to zero. This is another desirable property that users need in the network: a stable order that remains unchanged over time. Period. When you, as a user, publish a transaction, it may take a few seconds (or, in the case of a less efficient protocol, a few minutes) to converge on the ordering of this transaction relative to others. However, after this brief period, in a properly decentralized network, you are assured with very high probability that no new transactions will precede yours, you will not be front-run by trading bots, and your position in the ordering is secure.
This feature would be highly appreciated by Uniswap traders on congested Ethereum in 2021-2022.
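The "topological outlier" reasoning above can be made concrete with a toy simulation (hypothetical numbers and block names): after a long honest chain has grown, a block that was withheld and references only genesis has an enormous anticone, while a freshly, honestly mined block's anticone stays tiny.

```python
# Build a toy DAG: 100 honest blocks in a chain, plus one withheld block
# published long after it was mined, plus one honestly mined fresh tip.
dag = {"genesis": []}
for i in range(1, 101):
    dag[f"h{i}"] = [f"h{i-1}"] if i > 1 else ["genesis"]
dag["withheld"] = ["genesis"]     # references only genesis: published "a year" late
dag["honest_tip"] = ["h100"]      # references the latest honest block

def past(block):
    """All ancestors of `block` via parent links."""
    seen, stack = set(), list(dag[block])
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(dag[b])
    return seen

def anticone_size(block):
    """How many blocks are connected to `block` in neither direction."""
    future = {b for b in dag if block in past(b)}
    return len(set(dag) - past(block) - future - {block})

print(anticone_size("withheld"))    # 101: disconnected from almost everything
print(anticone_size("honest_tip"))  # 1: only the withheld block is parallel
```

A good ordering algorithm uses exactly this kind of topological signal to penalize the withheld block, so honest work cannot be hijacked to lend it credibility.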
"Observing the 3 BPS block-DAG in a network visualizer is a very soothing experience.
I like to use this as a screensaver :)." - Yonatan Sompolinsky, IBM weekly call-meeting for blockchain enthusiasts, 2022
Kaspa: The block-DAG paradigm in action
This book also aims to share something directly from the author: something that emerges when you do your research and add a bit of imagination. In an attempt to do just that, the following text offers my interpretation and theory about what is possible with protocols that solve the trilemma of scalability, security, and speed. This is not an announcement or a roadmap.
Kaspa's multichain solution
Many people have yet to grasp Kaspa's full potential and the groundbreaking use case envisioned from its inception. Beyond its applications as an investment tool or a peer-to-peer medium, Kaspa introduces a more advanced concept developed during the DAGLabs era, which Yonatan has discussed in his blog posts and early interviews. Kaspa is poised to act as a multichain roll-up solution, addressing scalability challenges that other blockchains face.
When enhancing transaction speed through roll-up technology, it is crucial to consider the aspects of security and fairness. This is where Kaspa's role as a transaction sequencer comes into play. By leveraging Kaspa's scalability, other blockchains can route their transactions through Kaspa's sequencing service, ensuring faster, more secure, and orderly processing without the risk of manipulation by front-running bots, and robust MEV protection overall. This leads us to the primary function of KAS, the Kaspa native cryptocurrency token, which fuels the entire ordering and sequencing layer. Like how GAS functions within Ethereum, KAS powers transactions and auction transaction orders within blocks. The order of the blocks determines the sequence of transactions. In instances where parallel blocks contain conflicting transactions, Kaspa's protocol resolves the conflict by disregarding the transaction from the block with a smaller 'past size' (the number of blocks preceding the current one, including direct and indirect references).
Is Kaspa set for a momentous collaboration?
One might wonder how Kaspa intends to integrate with other blockchains. To draw an analogy from the world of Grunge music, consider Kaspa as the Pearl Jam of the blockchain academic realm—an esteemed and influential Grunge band, but not without its detractors. Much like some music fans tend to disrespect the perceived competitors of their favorite bands, some adherents of crypto projects tend to be negative on social media and FUD the progress of others. And just as Pearl Jam holds a place of recognition and uniqueness in the Grunge world, Kaspa distinguishes itself with its academic prowess in the blockchain community.
Now imagine Pearl Jam seeking a collaboration with a band of a similar mindset, grounds, and vision, also known for their groundbreaking contributions and a revered frontman with acknowledged technical capabilities. This is very much what happened when Yonatan presented the innovative DAGKnight protocol, developed by Michael Sutton and himself, at the AFT 21 conference. The presentation concluded with a single question from Ari Juels of the Chainlink organization. In this analogy, Chainlink is akin to Alice In Chains, not just for the shared motif of 'chains' in the project names, but also for their role in seamlessly connecting diverse blockchain technologies, the same way Alice In Chains connected astonishing vocals with raw, simple, and vampy metal-like riffs.
Chainlink, distinguished by its robust foundation, notable market presence, and academic orientation, presented itself as an exemplary collaborator for Kaspa. This became clear upon hearing Ari Juels articulate a question following Yonatan's presentation at AFT 21. Ari inquired about the security implications of the consensus protocol concerning latency, the establishment of a specified target, and the impact of a fixed block creation rate on the protocol's security. Though directly addressed to Yonatan and his discourse, Ari's query prompted immediate exploration into Kaspa's connections with Chainlink, highlighting the potential for collaboration when the circumstances align and the need arises.
Perhaps the future will unfold similarly for Kaspa and Chainlink as it did for Pearl Jam and Alice in Chains. In 1994, Pearl Jam guitarist Mike McCready and Alice in Chains' Layne Staley created a one-off shared project: the supergroup Mad Season, which left an indelible mark on its genre.
Rest in peace, Layne. Keep rocking, Mike.
Kaspa's strong moral and synergistic future
In the initial discussion with Yonatan regarding Kaspa, the question arose about whether the aim was to surpass Litecoin or other Proof of Work (PoW) projects, regardless of whether they are based on blockchain or block-DAG. Yonatan clarified that comparing projects or setting goals not aimed at the leading position inherently disadvantages a project's potential. Kaspa's objective is to rank among the leading entities like Bitcoin, with an aspiration to secure a position within the top five.
While creating the first article in 2021, "The Power of Kaspa Block-DAGs: Go Beyond the Blockchain," which aimed to introduce Kaspa globally, I considered mentioning and describing Kaspa's competitors, too. Since this was a challenging task that required knowledge I didn't have back then, I asked Shai Wyborski for his opinion about negative reactions from supporters of other PoW projects, and how Kaspa technologically differs from them. Shai provided me with the technical details I needed and corrected any misconceptions about Kaspa. On social media, instead of getting into arguments with people attacking Kaspa, he stays calm and focused on explaining things clearly. He acts as a shield for Kaspa, addressing fears and handling tough debates professionally. Shai's respectful and knowledge-focused approach showcased the maturity and expertise of the Kaspa team, demonstrating why they have the respect they have in blockchain academia.
Towards the end of 2022, discussions emerged about Kaspa's potential rivalry with Ethereum or possibilities for collaboration. It has become evident that Kaspa not only has the capability to compete with Ethereum but also contributes to its enhancement, particularly by enhancing its performance and scalability by improving its Layer 1. Moreover, Kaspa envisions establishing its own ecosystem on Kaspa Layer 1, offering a range of applications and services comparable to, and potentially surpassing, those available on Ethereum. This collaborative and forward-thinking approach, looking ahead towards a future of enhanced blockchain technology, is what sets Kaspa apart.
Finally, let's also discuss the bold move in which the Kaspa development team began a comprehensive rewrite of the nodes' codebase, the "Rust rewrite." Previously hampered by technical debt from years of research and development, the original node codebase faced challenges in maintainability and extensibility. This overhaul was not merely a matter of tidying up; it was about laying a solid foundation for future innovations, such as smart contract support and consensus ordering algorithm improvements. It also aimed to make the platform more welcoming to new developers. The selection of Rust for this endeavor was a calculated decision driven by the goal of achieving higher efficiency and heightened block and transaction rates. Rust's advantages are manifold. It offers the necessary high-level constructs for managing Kaspa's complexities while ensuring the system's performance remains top-notch. This rewrite has already begun to bear fruit, evident in the expansion of the core development team and the capabilities afforded by Rust, which enabled the creation of a web-technology software stack over the rusty-kaspa repository. The live demonstration of the TN11 network, achieving a 10 BPS rate and processing thousands of transactions per second with a mere 100 ms block time, stands as a testament to these advancements.
Such a dedication indicates two things:
As Kaspa continues to evolve, the Rust development is enhancing its infrastructure, signaling a vibrant, forward-moving trajectory for the project. This ongoing commitment to improvement and innovation underscores Kaspa's potential to remain a key player in blockchain and cryptocurrency development.
Kaspa is set to maintain the drive for continuous development and upgrades, ensuring it doesn't suffer the fate of Bitcoin or Monero, where development stagnated after reaching the limits of their base layer (L1) systems.
Ensuring fair participation in token sales events
If you've ever used Ethereum, for example, during an Initial Coin Offering (ICO) or Token Generation Event (TGE) through Metamask, you might have faced this scenario:
You send funds to a contract and receive tokens in return. When a "first come, first served" rule applies, speed matters. A slow internet reaction could leave you with nothing, especially when token prices rise with demand or the sale is split into several rounds based on interest. Early birds get better deals; as fewer tokens remain, prices increase in later rounds. In such events, bots can exploit the system, front-running or delaying your transaction by paying slightly higher GAS fees to outbid you. To speed up your Ethereum transaction and increase your chances, you might pay miners higher fees and set a brutally high slippage tolerance, leading to exorbitant costs during high-demand events. Besides, failed transactions, which happen often, still incur a hefty GAS fee.
Now, imagine if Kaspa was in the picture:
You'd send an instantly verified transaction and immediately know whether you can still buy tokens at the agreed price. With Kaspa's cheap and instant transaction confirmations, there is no need for high miner fees or battling bots, and users are fine with making two transactions when each costs, say, 0.00005 USD. You're promptly informed about your participation eligibility and the token price. Agreeing to buy, you confirm, and with swift processing, you secure your tokens without hassle, knowing you bought under the conditions you set and agreed to.
Bot-resistant Kaspa: A new era for MEV protection
Kaspa with GHOSTDAG (GD) or DAGKnight running at 10+ blocks per second (BPS) will resist the malicious actions of front-running and sandwich-attack bots. Basically, 10+ BPS will neutralize the colluding miners and malicious bots developed and operating between 2019 and 2023. Kaspa became the fastest decentralized transaction-processing supercomputer with a clock rate of 1 Hz (1 BPS) in January 2023. With Kaspa planning to adopt more than 10 BPS in 2024, let us look at what this will bring:
Laying the groundwork for strategies resistant to maximum extractable value (MEV) by leveraging partial knowledge and fostering competition among concurrent miners.
Increasing transaction throughput to accommodate a higher volume of transactions, thereby enhancing network scalability.
Improving the user experience by minimizing the time until the initial transaction inclusion (also known as 0-confirmation time). Reducing block time below 100 milliseconds would not yield significant benefits in this context.
Reducing the variance in mining rewards, thereby lowering the Capital Expenditure (CapEx) barrier to achieving a Return on Investment (ROI). This reduction is theorized to contribute to increased network decentralization.
Minimizing the latency experienced by oracles by ensuring that updates from external sources are incorporated into the consensus state at a high frequency, thus improving the timeliness and reliability of the network's data.
Enabling large service providers to maintain a modest fraction of the total network hashrate while still delivering high-quality services to customers. For instance, with a 1% share of the hashrate, a service provider can achieve block times ranging from 1 to 10 seconds, depending on the network's block production speed (ranging from 100 to 10 BPS, respectively).
Enhancing censorship resistance to ensure robust and unfettered access to network services for all users. Censorship in crypto usually refers to miners' ability to exclude or deprioritize certain transactions, for example by blacklisting addresses.
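The service-provider figures above can be sanity-checked with a back-of-the-envelope calculation: since a miner's expected share of blocks is proportional to its share of the hashrate, a provider holding a fraction p of the total hashrate at a network rate of B BPS finds a block every 1/(p·B) seconds on average. A minimal sketch of this reasoning (the function name and structure are illustrative, not from any Kaspa codebase):

```python
def expected_block_interval(hashrate_share: float, network_bps: float) -> float:
    """Average seconds between blocks found by a miner holding
    `hashrate_share` of the total hashrate, under the standard PoW
    approximation that block production is proportional to hashrate."""
    if not 0 < hashrate_share <= 1:
        raise ValueError("hashrate_share must be in (0, 1]")
    return 1.0 / (hashrate_share * network_bps)

# A provider with 1% of the hashrate, as in the example above:
print(expected_block_interval(0.01, 100))  # at 100 BPS -> 1.0 second
print(expected_block_interval(0.01, 10))   # at 10 BPS  -> 10.0 seconds
```

This reproduces the 1-to-10-second range quoted for a 1% hashrate share across the 100-to-10 BPS spectrum.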
In conclusion, Kaspa's integration of DAGKnight, which leverages the full spectrum of GD's capabilities without requiring parameters or assumptions about the network, aims to significantly enhance security levels, including resistance to dust attacks. By elevating the block production rate from 10 to over 30 BPS, the network anticipates eradicating bot interference and achieving resistance to MEV attacks. This increase in block speed, coupled with sophisticated mathematical-cryptographic challenges, positions Kaspa to handle transaction volumes on par with VISA, offering instant confirmation times while maintaining or surpassing Bitcoin's level of security. This strategic enhancement underpins Kaspa's commitment to fostering a secure, efficient, and decentralized network infrastructure.
The 100 BPS mark of excellence
In simple terms, 100 BPS would prove the protocol academically mature and complete. Once DAGKnight runs at 100 BPS, there is little left to contribute in this field. In this high-end realm, you can decide whether further increments in BPS still provide any benefit. If a higher block rate adds no benefits, or imposes demands that the gains do not counterweigh, you can sacrifice some of that speed to alleviate another aspect of the network. Besides, the higher the BPS, the more challenging and economically appealing mining becomes, while, as you should know by now, the safer the whole network will be.
Going towards 100 BPS matters most for the MEV- and oracle-oriented goals. In both of these, you want many parallel blocks so that you can run sub-protocols over the various "opinions"/"block suggestions" and then make decisions based on the combined knowledge.
Now, let us focus on Yonatan himself.
Intermezzo 2
To all the benevolent individuals who do not hesitate to help those in need. - Mickey
Yonatan: What is your email?
Mickey: Sure, I will share it in a chat message.
Yonatan: CZ, what is that .CZ?
Mickey: It's the Czech Republic; I am Czech, from Europe.
Yonatan: Oh, you are a Czech???
Mickey: Yeah, I am Czech. If you're here one day, let me know, and I will gladly buy you lunch.
Yonatan: You should have said that earlier.
Mickey: Heh, why? Do you like Czech beer?
Yonatan: No, not because I like beer; it's because I am a big fan of the Czechs themselves…
How I see Dr. Yonatan Sompolinsky (YS)
Dedicated. In a word, he is dedicated—dedicated yet also impatient. Even a few seconds of waiting makes him nervous, which perhaps explains why he develops decentralized networks that are so fast.
He possesses a down-to-earth personality, maintains humility, and typically keeps his communication concise. However, when discussing things he is passionate about, he is remarkably keen to delve into them with unwavering enthusiasm and depth, and to proceed as long as it takes to explore the subject fully. On the other hand, he values his time and refrains from investing it in arrangements or discussions that offer little or no productive utility. He doesn't ask whether you know about PoW; instead, he asks for your interpretation of the concept. Depending on your response, he adjusts the complexity of his explanation to ensure it's accessible, gradually increasing the sophistication of his insights to enhance your understanding and knowledge step by step.

I extended five invitations to Yonatan for video calls after initially proposing short consultations or clarification meetings via Telegram. Without exception, he promptly joined these calls. Yonatan does not typically immerse himself in the agenda documentation or the details tucked into email invitations for video calls. Nonetheless, he consistently committed himself to the present moment, suggesting that our best course of action was to review the documents together and optimize our use of time. He offered to promptly address any material I wanted him to assess, affirming his dedication to efficient collaboration.

During our most recent video call, Yonatan was in what looked like a bustling mall. Despite the subpar audio quality, I appreciated his commitment to accommodating our meeting, especially considering that he appeared to be on a family vacation at the time. This further reaffirms his reputation as a man who consistently keeps his promises.
Towards the conclusion of the IBM blockchain conference call, in which I had invited Yonatan to discuss PoW blockDAGs, I asked him for a brief overview of Kaspa's stance on the "blockchain trilemma." With a subtle smile, he responded that when engaging in discussions with a more scholarly vocabulary, it's prudent to avoid the term "blockchain trilemma," which he regarded as pure technological jargon. Recognizing my somewhat embarrassed countenance, he swiftly dispelled any nervousness by commending the quality of the question and assuring me that he had no intention of undermining my subtle marketing effort. Instead, he conveyed his preference for using familiar terminology, and with a warm smile, he proceeded to elucidate the proposal for the benefit of all the participants.
On a different note, I'd also like to recount the moment when I informed Yonatan about the birth of my daughter. I felt compelled to apologize for the previous delay in my communication and for the numerous typos and less-than-articulate statements in our prior conversations, caused by sleep deprivation. In response, Yonatan sent his warm wishes to my daughter and, as he said, her courageous mother. Yonatan then asked me about her name, Googled it, and replied promptly with the word "Yiskah." When I was having conversations with the Kaspa core contributors around the end of 2022, they all clearly showed kindness and strong bonds with their families.
Thank you to all Kaspa core contributors, with whom I had the opportunity to cooperate for a short time yet learned so much.
Chapter 2 - "An almost brief interview with a somewhat accomplished researcher"
Background introduction
Dr. Yonatan Sompolinsky, the pioneer of PoW block-DAG
Yonatan's works have been appearing in academic papers since 2014 and have been mentioned or recognized across the technological and academic spheres.
Google Scholar counts 3,790 citations of Yonatan's work, of which 461 were added between February 2022 and April 2024 alone.
His main focus, however, remained on his thesis, which improved with every protocol he helped bring to existence.
Yonatan's focus timeline:
2014: As a graduate computer science student at Hebrew University, Yonatan started a lab project with Professor Aviv Zohar.
2014 - 2021: Involvement with academia and with Bitcoin.
2018: Yonatan joined the crypto space and started DAGlabs.
2018 - 2021: Effort to implement PoW DAG in real-world applications.
After completing his undergraduate studies in mathematics, Yonatan joined the computer science grad program and Professor Aviv Zohar's lab. The main thesis question put forth by his advisor Aviv concerned latency in Bitcoin and Bitcoin-like systems, and the implications of latency barriers for security, throughput, fairness, and more. This was very early in the Bitcoin era, when almost no academic papers regarding Bitcoin had been released yet, the notable exception being the "On Bitcoin and Red Balloons" paper by Professor Aviv Zohar and his colleagues. As a challenge, his advisor asked Yonatan to work on lowering Bitcoin's block creation interval below 10 minutes. Satoshi Nakamoto originally proposed this limit so that the network would have time to propagate the latest block, and reducing it could cause significant security vulnerabilities.
Many large decentralized systems rely on information propagation to ensure their proper function.
Bitcoin relies on a peer-to-peer network to track blocks (= batches of transactions) that are performed with the currency. For this purpose, every new block that a node learns about should be transmitted to its neighbors in the network.
The block creation rate in Bitcoin must be suppressed in order to ensure that the worst-case latency in the network is much smaller than the block interval.
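The need to suppress the block rate can be illustrated with a rough Poisson-model calculation (my own sketch, not taken from any of the papers discussed here): if blocks arrive at a rate of λ per second and a fresh block takes d seconds to reach the rest of the network, the probability that a conflicting block is mined somewhere else during propagation is 1 − e^(−λd).

```python
import math

def fork_probability(blocks_per_second: float, propagation_delay_s: float) -> float:
    """Probability, under a Poisson block-arrival model, that at least one
    conflicting block is mined while a fresh block is still propagating."""
    return 1.0 - math.exp(-blocks_per_second * propagation_delay_s)

# Bitcoin-like parameters: one block per 600 s, ~5 s propagation delay
print(round(fork_probability(1 / 600, 5), 4))  # forks are rare, under 1%

# Naively raising the rate to 1 BPS with the same 5 s delay
print(round(fork_probability(1.0, 5), 4))      # forks become near-certain
```

This is why a longest-chain protocol must keep the block interval far above the worst-case latency, and why GHOST and the later block-DAG protocols, which tolerate parallel blocks instead of orphaning them, were needed to raise the rate safely.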
"The question of latency assumptions sits at the core of any consensus system. Regardless of whether you design a permissioned or a permissionless consensus system, you want to know how many messages can be sent and how fast they propagate, and what happens if you try to make the system work at internet speed." - Yonatan Sompolinsky
Initially, these questions may seem straightforward, but numerous subtle nuances had to be carefully considered upon closer examination. Yonatan embarked on a journey to address these nuanced inquiries, ultimately publishing the GHOST protocol, the first significant milestone in his research. By the time Vitalik Buterin mentioned GHOST in the Ethereum whitepaper, Aviv and Yonatan were already working on the block-DAG paradigm, the next step in enhancing Nakamoto Consensus by embedding blocks in a full graph (DAG) rather than a tree (as in Bitcoin and GHOST). In doing so, they created a new paradigm in which many minute details had to be solved and proven, and many questions required thorough research first. The outcomes of this work were published as the Inclusive Blockchain Protocols paper and then as the SPECTRE protocol, another, perhaps lesser-known, DAG protocol.
Yonatan was then invited to the Satoshi Roundtable to meet several industry leaders, where he figured out that the academic proof for SPECTRE should not be the focus of his post-academic career. At the Roundtable, Yonatan discussed the need for a solution that would implement his work in a stand-alone PoW platform. To make this idea a reality, he co-founded an R&D entity called DAGlabs around the beginning of 2018 (active from 2018 to 2021). The mission of DAGlabs was to commercialize the DAG protocols based on the outcomes of his research. DAGlabs was seed-funded by Polychain and other VCs and slowly turned from a university project into a startup. This was followed by the release of another iteration of a block-DAG consensus paper, the PHANTOM paradigm, from whose greedy variant the GHOSTDAG consensus was created. PHANTOM-GHOSTDAG, or just GHOSTDAG, is an inclusive consensus protocol meant to fulfill the DAGlabs goal of applying Yonatan's work to a PoW project. A little later, DAGlabs members realized that for a PoW project to succeed, it was crucial to avoid the centralization a company like DAGlabs would create and to ensure the community's organic growth. Instead of a project backed by a centralized entity, they decided on an open-source crypto community. They took the already implemented GHOSTDAG code and launched the Kaspa mainnet.
Kaspa's fair launch created equal conditions for all participants, including:
No pre-mine: Kaspa had no pre-mined coins, meaning the initial supply was not allocated to specific individuals or entities before the project's launch. This helped prevent any concentration of wealth or control in the hands of a select few.
No rewards for founders: The founders of Kaspa did not receive any special rewards or incentives beyond what was available to other miners. This further reinforced the idea of equal conditions and avoided potential distortions in the early stages of the project.
Equal conditions for all initial miners: Every participant in the initial mining phase had the same opportunities and faced the same conditions. This approach encouraged broad participation and ensured a more decentralized mining power distribution.
The Polychain VC investment funded the DAGlabs development, with only a small portion allocated to mining. DAGlabs mined approximately 800M KAS (3% of the fully diluted supply). Half of the mined coins were distributed to investors, while the other half was divided among DAGlabs' former employees.
The community voted on the mining algorithm one day before the network launched. The algorithm was also modified to prevent existing GPU/FPGA hardware from mining it, allowing CPU mining during the first few weeks.
The fair launch allowed many people to CPU mine Kaspa from the start.
The rapid emission schedule ensured that most of the coin was already in circulation by the time the network became ASIC-dominant. Until then, the coin was mostly mined by GPU/FPGA miners, who had far higher operational expenses and thus sold more of their yield to the market, increasing the circulating supply.
By adhering to these fair-launch points, Kaspa sought to establish a PoW economy that responded to natural market dynamics, where mining and demand would be driven by the actions and choices of the community as a whole. This approach aimed to foster a robust decentralized network while promoting a fair and inclusive ecosystem. DAGlabs was dissolved so that nothing stood in Kaspa's way, and Yonatan moved to Harvard as a postdoc. During that time, he worked with Michael Sutton, a core researcher, current tech lead, and developer in the Kaspa community, on the pinnacle of his block-DAG research line, the DAGKnight protocol, authored by Michael Sutton under Yonatan's guidance. Besides finalizing the Rust rewrite of the Kaspa node and increasing the block rate to 30 BPS, the Kaspa core contributors now aim to adopt DAGKnight as Kaspa's new consensus protocol, combining the best of all the previous protocols into an ultimate solution for PoW's overall performance.
Dr. Yonatan Sompolinsky, the academic
Yonatan's journey to the limelight of technological innovation started at the Hebrew University of Jerusalem, where he studied mathematics as an undergraduate and continued with computer science in grad school. There, he met Aviv Zohar, his future advisor. Together, they started a lab project, the results of which were formalized in an academic paper called Secure High-Rate Transaction Processing in Bitcoin, which contained the GHOST protocol - an alternative to Bitcoin's longest chain rule. This paper, published in 2015, became his most acclaimed work at the time.
GHOST utilizes the proof of work embedded in orphan blocks by traversing the tree structure (resulting from forks under high speed) and selecting the main chain differently.
As a mathematician, Yonatan was most intrigued by probability theory in his work with Prof. Zohar. However, in the final form of the GHOST paper, this aspect was relegated only to its appendix. What started as a theoretician's interest later became a full-blown pursuit of the mysteries of crypto—a pursuit that he still maintains. Besides GHOST, he published another paper focused on improving blockchain performance and security, SPECTRE.
Around the same time he founded DAGlabs in early 2018, Dr. Sompolinsky continued his academic career as a computer science postdoc at Harvard University, researching transaction-ordering incentives and dynamics. Continuing his postdoctoral research, Sompolinsky explored blockchain transaction-ordering protocols and maximum extractable value (MEV).
Beyond technical developments, Yonatan Sompolinsky engaged in discussions about blockchain's future, emphasizing trust reevaluation for wider adoption and practical applications, showcasing the potential for significant industry advancements.
Summarizations and links to all the DAG-related papers Yonatan authored or co-authored can be found in the Appendix section of this work.
The legacy of Dr. Yonatan Sompolinsky, in verse
A small reward for all readers who made it down here :)
In the halls of Jerusalem, his journey began,
A student of math, an aspiring man,
Challenged by his advisor, a quest he embraced,
To conquer latency, prod Bitcoin to haste.
Through patterns and numbers, he delved deep,
To breach the limits, Satoshi's secrets to keep,
The world of block-DAGs, he dared to explore,
A paradigm shift, untrodden before.
Their lab project bloomed, a seedling to sow,
GHOST emerged, a protocol to bestow,
A boon to boost Bitcoin's floundering chain,
Network latency promised to reframe.
Within GHOST's pages, an appendix lay,
Probability's allure, begging to stay,
The appendix was forgotten, its tale untold,
Yet Yonatan's heart yearned its secrets to unfold.
A shift in his path, towards crypto's new realm,
There tries to tackle the trilemma underwhelm
Lo! DAG, a graph of blocks, an ordering design,
Efficient, resilient - a goal so sublime.
With Professor Zohar, his early partner in rhyme,
They etched their names into the pages of time,
Inclusive Blockchain Protocols, their paper unveiled,
SPECTRE, their creation, two-fold in it availed.
His works emerged, in papers, profound,
Drawing attention, renown did resound.
DAGLabs was born, a venture sparked by,
Phantom-Ghostdag as its rallying cry,
But the pitfalls of commerce they could not withstand,
DAGLabs dissolved, yet his dreams still did expand.
Thus Kaspa emerged, an open-source gem,
From which more Yonatan's DAG-craft would stem,
Protocols co-authored, a scholarly mind,
Each one addressing the hurdles of its kind.
SPECTRE, a resilient consensus unveiled,
Yet linear ordering, alas, it failed.
PHANTOM, a Nakamoto Consensus refined,
The likes of which till then one hardly would find
And GHOSTDAG, Kaspa's consensus brand new,
Practical, efficient, works with no ado.
DAGKNIGHT emerged, a protocol bold,
Free of parameters, a tale to be told.
For GHOSTDAG, hmm, a great vision was there,
But proof of its security - still up in the air.
Enter Shai Wyborski, a mind astute,
To add what was missing, a resolute pursuit.
Shai's labor, a testament to his skill,
Unveiling secrets, security's thrill.
Together they embarked, their minds combined,
Cryptographic expert and mathematician aligned,
Months turned to a year, the quest endured,
A new technique invented, brilliance assured.
And let us not forget, another soul,
It is Dr. Sutton, who played a vital role,
Extending the technique, with insight keen,
Proved that the DAGKNIGHT is secure and clean.
The search for proof, a collective quest,
A blend of expertise, each one possessed,
Cryptographic prowess and analysis vast,
Building foundations that will forever last.
Through protocols crafted, challenges met,
Kaspa's path forward has firmly been set.
Yonatan Sompolinsky, a pioneer's name,
Forever engraved in blockchain's grand frame.
Intermezzo 3
Mickey: And! Dr. Sompolinsky! I was wondering about a clickable name for the Interview part, but I am not the best name-giver :) Here are some proposals. Does something resonate with you?
Talks with Dr. Yonatan Sompolinsky
Reduced to the spectrum of gray with Dr. Yonatan Sompolinsky
Unveiling the Gray Spectrum: Yonatan Sompolinsky's Contributions to DAGs
Breaking and bolstering the blockchain: Talks with Dr. Yonatan Sompolinsky
Reforging the PoW blockchain without the chain
Let's talk about block-DAG with Dr. Yonatan Sompolinsky
Block-DAG paradigm: The new blockchain meta
Go beyond the Nakamoto consensus...
Yonatan: "an overextended interview with an overrated researcher"
Mickey: "the foreword of which is basically a book"
"An almost brief interview with a somewhat accomplished researcher"
- An interview with Dr. Yonatan Sompolinsky
Phase 1: Intro and Academic Career
Good afternoon, Dr. Sompolinsky. Thank you so much for accepting this interview.
Let's get straight into it.
When did you realize that academia would be your career path? Is having an academic lifestyle common in your family, or did you have older friends aspiring to get into a university, and you thought being there with them would be fulfilling?
I never realized I wanted to be an academic, and I still don't. Yes, it's very common in the family, but rebelling is also a strong tradition in the family. There's a saying that you truly mature when you start choosing paths despite your parents suggesting those paths to you.
Interesting. What career path would you choose today?
Probably an author or a poet, if you can call this a career. Academia is the third or fourth option.
What were the first mathematical areas you fell for? Were you interested in topics such as magic numbers, probabilities, graph theory, and algorithms?
As an undergrad, I found Probability Theory more appealing than other areas, though I'm attracted by rather basic stuff. I can't say I'm over-sophisticated in my taste or scope. I also fell for Number Theory despite a very shallow knowledge of the field, like when you have a crush on someone that you know nothing about.
What did this crush look like?
I wasted a full semester trying to solve the Goldbach conjecture from Number Theory. Needless to say, no major or minor progress was made, and I was slowly disillusioned that I would earn the one million dollar prize for the solution. Having failed thus, I fell into clinical depression and opted for computer science, where I found perhaps more feasible ways to grab such prizes.
Have you ever experienced a scientific discovery or realization that was hard to believe when it occurred to you?
Overall, I made three, perhaps four, findings, and I find them natural in retrospect, which is not uncommon for a researcher.
What was the hardest part of your academic life, the hardest challenge you needed to face in your career, and what success of yours made you most proud?
The frustration of spending 2 or 3 years on the same maybe-unsolvable problem, the long loneliness that comes with it, the inability or uselessness of consulting with others, as things are too nuanced by then. It takes a mental toll.
From the introduction, we know that you solve blockchain challenges by using DAGs, but how did your DAG journey start, and what was your first experience with it?
I didn't read the introduction, but to your question, the idea to employ DAGs was my advisor Aviv's. The first DAG protocol, Inclusive, was mainly about the game-theoretic aspects of using a DAG, but still with Bitcoin-like chain rules, just in a DAG structure. Only later did we start with the spicy DAG consensus algorithms. Though I can't overemphasize that the DAG is really not a solution; it's a better framework for the consensus problem. Focusing on the DAG aspect is a bias towards the visual form; it's equivalent to focusing on Satoshi's chain data structure.
As a seasoned academic, you can reflect on the many stages you took during your career. In retrospect, how do you see the process in which you started as a student, then turned to somebody else's apprentice, later shifted to a mentor, and finally became a founder and highly respected community contributor?
Mimicry is the best way to learn. I had an advisor who was available at the time to sit with me for hours of brainstorming. You learn and mimic the thought process and methodology. When Aviv became rather busy, I had my father around to think things through. He knows little to nothing about crypto, but that doesn't matter much because, as a theoretical scientist, he knows how to identify fundamental aspects of the problem at hand, and this thought process is what I strive to learn. There are, of course, gaps in the framework, but still, that was a primary source of training for me.
What gaps, for instance?
Well, in physics and the life sciences, normal behaviour and the Normal distribution convey most of the information about the system under inspection. Typically, you assume n is greater than 50, say that particles follow a Normal distribution, and from there on, you continue your analysis. But you can't apply the Law of Large Numbers to consensus problems or game-theoretic setups. Computer science deals with man-shaped environments where intelligent agents act, and this invites the atypical and manipulative and requires a lens for the peculiar, a worst-case scenario analysis.
You seem calm and easy during your interviews, but how do bigger conferences make you feel? Are you nervous when standing in front of a crowd with a microphone, or do you enjoy presenting your ideas to the public?
It depends on how much beer I drink beforehand.
Numerous academic references and acknowledgments related to the work you have done or coauthored recognized you, a notable one being the Ethereum white paper. Please share with us which mentions have made you proud the most.
Probably my favourite citation is by Elaine and Raphael. I visited them at Cornell in 2016, and we discussed responsive consensus. A year or so passed, and they published the "Hybrid Consensus" paper with several important contributions, one of which followed up on that conversation, which they cited as "personal communication" with me. So that was one nice citation, I recall. By the way, this paper later motivated us to formulate the 50%-resilient responsive consensus problem, which I wasn't able to solve until I met Sutton.
OK, the next question: In the comment sections of your YouTube interviews and Twitter spaces, your fans often call you Satoshi Nakamoto. How does it make you feel?
Like an idiot.
Why?
I mean, it's heartwarming that some people think I can write a production-ready codebase like Satoshi. I scored 62 points in my C/C++ undergrad course, and even that was by the mercy of the lecturer. Perhaps if Bitcoin had been released in Java and with Copilot, maybe then I might have been Satoshi, who knows.
Do you know who Satoshi is?
I can tell you, but then I need to kill you. Seriously though, no one knows, and I don't know that any of this matters. Satoshi's anonymity is core to the inception of Bitcoin and the decentralization movement. Also, the value of discovering Satoshi is overrated; I, for one, know much more about crypto and permissionless consensus than Satoshi knows; she hasn't been to conferences for years.
Did you consider creating Kaspa anonymously? Do you wish you had?
"I'm a count, not a saint."
The end of the Phase One...
A tragedy that marked this interview
The initial phase of the interview was conducted between May 18 and August 9, 2023.
After this period, Dr. Sompolinsky temporarily halted the interview to devote attention to research endeavors. Between this interview's first and second phases, the Kaspa network faced several minor attacks. These incidents were minor enough not to warrant detailed discussion here, but they contributed to identifying enhancements and implementing solutions that improved the network overall.
Regrettably, these were not the sole incidents of concern occurring between the end of the first interview phase and the start of the next phase on February 9, 2024. Toward the end of 2023, Israel was the scene of a devastating terrorist attack—a deliberate act of violence at a music festival, resulting in the loss of innocent lives. This attack incited further armed conflict in the Gaza area. During this time, the communication thread I held with several Kaspa members was interrupted. While some took care of their families or moved to safer zones, some stayed in their houses, running to their basements to take cover every few hours. During these days, the anniversary of Kaspa's founding came to pass. However, instead of cheers and celebrations, many minds in the project, especially those living in Israel, were focused on very different things. They prayed for the restoration of normalcy, the safe return of hostages, and an end to the bloody decades-long conflict.
These are two countries that are divided in faith yet are doomed to be neighbors to each other in a shared holy land.
Phase 2: Blockchain, block-DAG, and the world of crypto
When considering the creation of a significant project like Kaspa, which aims to stand alongside established giants such as Bitcoin and Ethereum, it becomes clear that a mere vision might not suffice. Beyond vision, what additional elements are essential to develop a blockchain project that not only challenges existing principles but also introduces innovative solutions within the Proof of Work (PoW) cryptocurrency domain?
You can't compare the vision required behind Bitcoin to the vision of any other follow-up project. It's like comparing the work done by the first mathematician who proved a theorem to the work done by successors. There are many cases where successors provide "better" – as in simpler or more beautiful – proofs, but still, the first prover paved a path in a desert, and the degree of vision required by successors is an order of magnitude smaller.
Kaspa addresses the blockchain's security-scalability-decentralization trilemma with block-DAG technology. Do you think Satoshi Nakamoto would have admired such a solution? Could it have brought a smile to his face?
Admired - no, a smile - definitely.
In developing Kaspa and aiming to adhere closely to the foundational principles established by Bitcoin, including the Proof of Work mechanism and the UTXO model, how did you approach the emission schedule differently to address market dynamics and ensure a more egalitarian distribution compared to Bitcoin's early mining period?
Kaspa attempted to stay faithful to Bitcoin's launch and design, proof of work, UTXO model, a verification-oriented (not computation-oriented) scripting language, etc. The emission schedule of Kaspa is indeed more rapid, but if you normalize it to market dynamics, it's, in fact, less rapid and more egalitarian than Bitcoin, in the sense that the core developers weren't able to mine "peacefully" like Satoshi and Hal for months and months. Satoshi is estimated to have been able to mine about 5% of the total supply of bitcoins, whereas Kaspa core devs were able to mine about 2.5% of the Kaspa total supply.
Yeah. Over time, your strategic decisions for Kaspa, such as its rapid emission schedule, have been seen as very insightful. What were the key factors you considered when launching Kaspa?
Regarding my strategic decisions, some were insightful, some an oversight. Shouldn't be too romantic about that.
What are examples of decisions that were an oversight?
Gamenet, with its perverse incentives; the lack of any preparation of mining infra for DAGlabs; the failed attempt to change the denomination (rejected by the community). I'm sure there are several more.
What were the initial plans for Kaspa's launch regarding hardware development and ASIC presales, and what factors contributed to the eventual decision for a fair launch?
We had an idea to develop hardware and go for an ASIC presale; I wrote about it in my blog, and Nic Carter has a nice piece on this, too. But this path didn't materialize. Optical ASIC hardware wasn't mature at the time, and in the absence of sound alternatives, we ended up with a vanilla fair launch.
Do you view your company, DAGlabs, as a success or failure, and why?
It was a resounding failure. DAGlabs was a for-profit entity that was supposed to identify and implement a practical path to reconcile VC-backed and fair-launch models. There was no good strategy behind this – or behind any other aspect of the organization, for that matter. I ended up conveying the organization's failure to investors, and, to their credit, they encouraged me to launch nonetheless, even with the lack of a business model or ROI plan.
So is it fair to say that you gave up, or almost did, and that Polychain pushed you to launch?
Yes, that's one way to put it. I was also compelled to launch by my friend, Gadi, who recognized the potential.
I suppose it was a happy ending, and investors were still satisfied.
Mostly yes, I suppose. Some people came after the fact and wanted to eat the cake and have it, too, but most people involved comprehended the fair launch model and understood and accepted its implications.
You have seen the rise and fall of some blockchain projects and remember the early days of Bitcoin and Ethereum. Did these communities make any mistakes or unlucky decisions in their early days? Did you acknowledge them and write them on your "I will not repeat this with my own project" list?
Nah, unfortunately, I performed every startup textbook mistake that exists. But now I do have such a list, in case that's helpful.
What are some lessons you can share with young crypto entrepreneurs?
Bootstrap your project in a bear market. This will filter out people with weak convictions or a whiff of "dodginess," as well as investors whom you wouldn't otherwise want to collaborate with. In general, work with people on the basis of symmetric risk only. A crypto startup's journey is too volatile, and people who have already achieved a high net worth are likely to jump ship sooner when things turn south.
Are there any other interesting or historical details you can share about the early days of Bitcoin or Ethereum?
One insightful crypto drama that I recall witnessing firsthand was the Ethereum DAO hack, or rather the rollback, which took place in the Cornell bootcamp I mentioned. It was amusing to see the minority chain (aka Ethereum Classic) forming in real time and the community split around it. The entire episode was perplexing; the exploit was discovered by folks from Cornell a few days before it was exploited, and the hacker published a rather provocative code-is-law manifesto. Overall, there was some sense that the hacker wasn't a complete outsider.
Decent. Hm, what is your vision for Kaspa?
I do not have one.
Well, then, what is a fulfillment of Nakamoto's vision for you? Some believe it's Kaspa.
Of course, Kaspa fulfills at least parts of Satoshi's vision more faithfully than Bitcoin does. The peer-to-peer electronic cash aspirations, at least for L1, have been abandoned by the Bitcoin community for some 7 or 8 years by now. And besides inherent shortcomings, LN's adoption is very low relative to Bitcoin's. Kaspa is not perfect on all metrics, and it's not the ultimate cryptocurrency; there's no such thing—there are only tradeoffs. Hopefully, we picked the sweetest spots on the curve.
Do you have a vision for crypto, or would you rather focus on research and walk your own path in your own time?
I actually do, and it's compatible with focusing on my research. It's around the concept of an expressive design of Turing Complete logic. When the time comes, I'll share more. Needless to say, I would love for my ideas and proposals to be accepted by our community and baked into Kaspa.
What led to Kaspa's creation and its role in your research?
I wanted a boutique crypto platform that would implement my research and keep me incentivized to come up with new research lines.
Are you exploring new research areas for Kaspa?
Yes, several, some more exciting than others.
Can you give us a sneak peek?
In due time. For now, suffice it to say that one research line revolves around MEV and Oracle data and another around a new smart contract concept, which I hope will prove useful.
What is Kaspa's current development roadmap?
A roadmap assumes a team and structure. Kaspa operates differently, based on community grants, votes, per-project devs, dev funds, etc. I can share my wishlist with you if you want, as well as my loose prediction of how the process is likely to evolve in practice: DAGKNIGHT, 100 BPS, and an MEV resistance mechanism. These concern perfecting the sequencing functionality. Other wishlist features: a zkVM (an L1 op_code that allows for native zk rollups) and account abstraction (a feature that supports the native social recovery necessary for sound money).
What needs to be done so that DK becomes the new consensus for Kaspa?
We need the Rust+10 BPS project to be fully complete before we focus on this full-time, which I suppose is wrapping up these very days. Originally, the 10 BPS was meant to happen after the Rust completion and either after or in parallel with DK efforts. Priorities have shifted since it made sense to extend the Rust project to include being able to cope with very high BPS. So, we got Rust+10BPS together at the expense of a delay in DK implementation and upgrade.
Yonatan, imagine that Rust rewrite is complete, and Kaspa operates on DAGKnight with 30 BPS. What will this allow Kaspa to perform?
I don't know, but I'm sure new usage patterns will emerge at 30 BPS, and definitely at 100 BPS. They will probably revolve around instantaneous first confirmation for commutative state changes, i.e., when the ordering of this round's transactions does not affect your transaction, or when there's no material risk of a double-spend. Also, I presume that with 100 BPS, several service providers will run their own miners, giving rise to new mining dynamics.
And what can we expect from DAGKnight with 100 BPS? I guess that from 30 BPS upward, it is not about speed anymore.
DAGKNIGHT is also useful for 10 BPS. Anything above 10 BPS is about the decentralization of mining, MEV protection, and perhaps more properties that I haven't thought of yet.
Great. Let me ask about future Kaspa development. Can you provide minimalistic information about the topics you mentioned during the CECS conference and what logical milestones Kaspa should follow?
Selfish-mining bounds: In general, DAG makes selfish mining less risky but also much less profitable. DAG is more tolerant of late blocks, and you'd expect it, in particular, to give a selfish miner more room for error. It indeed does, but honest nodes enjoy higher tolerance to the artificial latency caused by selfish mining, so the attack is overall much less profitable. I doubt this observation merits another research paper, but maybe if I come across a bored student, I'll assign this to them.
Tightness of confirmation times: It is NP-hard (and arguably of little practical value) to provide tight bounds for theoretical liveness attacks on DAG protocols. These analyses assume an unrealistically powerful attacker who enjoys zero latency to and from honest nodes. A more useful approach would be, having proved theoretical liveness bounds, to use machine learning to tighten the bounds for practical attacks.
Additional benefits to parameterless (elastic throughput, MEV protection): Parameterlessness allows you to support a high variance of block sizes and block rates. I am still unsure where to take it from here. I was considering elastic throughput, but this also requires further thought.
What about the new thing in the Kaspa ecosystem, the Kaspa Ecosystem Fund (KEF)?
I chatted with KEF folks twice. They are very keen to support Kaspa R&D, particularly smart contracts. I'd be happy to work with them and similar organizations in the future, though I am not affiliated with them, or with kaspa.org for that matter.
How do you view Kaspa's role or desired position in the crypto ecosystem?
Kaspa is all about sequencing: speed, security, decentralization, and, in the future, resistance to MEV, which stems from the informational gap between sequencers and transaction issuers. Sequencing is the linchpin of consensus; any compromise there infiltrates the entire stack. Perfect sequencing can then be complemented with the rest of the stack (e.g., the VM) via friendly clones of other successful projects.
Does this mean Kaspa will serve as a rollup sequencer for other projects in the ecosystem, such as Ethereum?
Eth L2 terminology keeps shifting. A rollup and a sequencing layer mean different things to different people today. In any case, it might be a good first step to serve as a sequencing layer for the Eth ecosystem, but the end game should be to settle on Kaspa and develop our own ecosystem. While we are at it, I am not sure what the actual borders of the Eth ecosystem will be going forward, since its L2/L3 ecosystem is increasingly assuming the role of the main chain – sequencing and data availability are gradually moving to centralized players, and while the main chain might remain the root settlement layer, there's also a non-negligible chance that Eth will undergo a "supernova" event and the ecosystem will fragment. This is not to say Eth will die, not at all, but rather that the actual technological meaning of "belonging to the Eth ecosystem" will become increasingly ambiguous. I wrote about it in my blog, if I may self-promote here: In which we'll be reduced to a spectrum of gray.
Bitcoin rollups and DeFi are supposed to be the "next new thing." Take BitVM by Citrea, for example. How would this compare to Kaspa?
It's interesting to see the Bitcoin community embrace the use case of DeFi, which is traditionally considered a scam, at least in hardcore circles. I suppose it's inevitable that Bitcoin DeFi becomes a thing, and at the same time, I doubt it will dominate. Bitcoin's 10-minute block interval rules it out as a relevant sequencing layer for finance, and settling on Bitcoin can be done only via some bridge unless a new ZKP op_code is implemented.
What changes would you like to see in the Kaspa community?
First and foremost, a more distributed knowledge base. There are still several areas that too few researchers or devs fully comprehend, which implies we still have single points of failure, even if the mining and control of the network itself are decentralized. More generally, a more technical or technological and less persona-oriented approach towards the project. Kaspa should not be based on trust in people, let alone one founder or a handful of community figures. I originated Kaspa, and I contribute to it; I do not have, or wish to have, a sense of ownership over it, and I'm happy for others to take it wherever they want (which is why I do not mention Kaspa in my bio). If the cost of reasserting this crypto-premise is some people leaving the project because they thought it was all about me, so be it. I believe that'd be a very healthy development; it will pay off in the long run (and probably sooner) and will accelerate Kaspa's path to antifragility. The more personality-cult-minded people leave the project, the more crypto-principled folks will be attracted to it: more critical thinking and technical eyes, quicker development and testing cycles, and broader pressure to document the codebase.
During Ethereum's early stages, Vitalik Buterin and his team adopted a variant of the GHOST protocol for Ethereum consensus. However, they had to switch to an alternative approach due to improper implementation. Interestingly, it's believed that Ethereum still operates on a variation of another protocol you co-authored, the Inclusive Blockchain Protocols, which pioneered using a directed acyclic graph (block-DAG) for block structure. Were there any discussions or exchanges between you and Vitalik about this strategic shift? Did he give you a call back then? :) Did he ever seek your input, consult with you, or inquire about your ongoing research? If such interaction did not occur, do you think it would have been advantageous to them at the time?
Ethereum implemented a variant of Inclusive in their uncle inclusion mechanism. As far as I know, not implementing GHOST was not due to improper implementation but to a combination of wanting to simplify and, perhaps, some original developers not fully comprehending it. Vitalik and I spoke about it twice, perhaps, but not in real time. When the Ethereum WP was released, Aviv and I were already aware of problems in GHOST and were halfway DAG-wards.
What influenced your decision to choose the PoW path for your project?
My research is on PoW; I wouldn't have much to contribute to other designs. PoW is much better, notwithstanding.
The Kaspa Proof-of-Work (PoW) function has been highlighted as specifically designed to be compatible with Optical ASIC chips. This unique characteristic potentially enables PoW mining with significantly reduced electricity consumption. Could you elaborate on the potential impact of this development and discuss the trade-off between the high initial capital and maintenance costs associated with Optical ASIC chip mining versus the long-term benefit of reduced electricity consumption resulting from light-electron interaction?
If you believe ASICs are better than CPUs for network health, then the same arguments would apply to optical ASICs being healthier than digital ones. TLDR, a smaller fraction of the security investment is burnt at each time interval.
How does the Kaspa community plan to maintain the drive for continuous development and upgrades, ensuring it doesn't suffer the fate of Bitcoin or Monero, where development stagnated after reaching the limits of their base layer (L1) systems?
I'm not going anywhere, at least as long as LBJ stays in the game. I mean this more seriously than it sounds. LeBron James is a role model, sticking to the demanding work of your game, whatever that may be, striving, always striving, regardless of how many successes you already have in your bag. He's twenty years older than some of his peers, and he doesn't care.
We are almost done. It wasn't even that hard, right?
Yonatan, let me ask you this as my almost last question. When you realized I am Czech, you said you would treat this interview with a higher priority right from the start if you had known this earlier. Why?
The Czech Republic holds a warm spot in my heart. The country has historically been, and still is, proudly committed to and consistent in supporting the promotion of Western values in the Middle East.
Is there anything you want to share with the community before we finish this interview?
Please read the appendix to my first paper, Secure High-Rate Transaction Processing in Bitcoin. It has a nice lemma about the race between a Poisson random process and an increasing hazard rate one. Also, don't take everything I say too seriously. Cheers.
Nice recommendation. By the way, readers, I suggest you read the first three episodes of my Blockchain Academy to understand PoW and mining basics. There is plenty of information regarding the Poisson process and much more.
Cheers!
Support the publication of free Kaspa-related content
With Dr. Sompolinsky's approval and his embrace of the open-source ethos, donate any amount of KAS using the address below if you enjoyed this content and are looking for more. Your support is greatly appreciated!
Appendix
A summarization of Dr. Yonatan Sompolinsky's DAG research line
This appendix provides a scholarly overview of block-DAG protocols by Yonatan Sompolinsky and collaborators, tracing the evolution from the foundational GHOST protocol to the advanced DAGKnight Proof-of-Work (PoW) protocol. It aims to offer a succinct introduction to block-DAG technology, highlighting the significant contributions of Yonatan Sompolinsky, Michael Sutton, Shai Wyborski, and Aviv Zohar to overcoming blockchain limitations. The content reflects the author's interpretation and is enriched by insights from Kaspa's core developers.
Secure High-Rate Transaction Processing in Bitcoin
Yonatan Sompolinsky, Aviv Zohar — 2013
The Secure High-Rate Transaction Processing in Bitcoin paper analyzed Bitcoin's longest chain rule in scenarios with high latency or throughput. The final two sections of the paper introduced the GHOST protocol, which proposes an alternative approach to Bitcoin's longest chain rule. This protocol utilizes the proof of work present in off-chain blocks by navigating the tree structure resulting from forks at high speed. By selecting the main chain differently, the GHOST protocol aims to overcome limitations caused by network latency.
GHOST
The appendix of the above-mentioned SHTPB paper introduced new analysis techniques and ideas for Bitcoin and the Nakamoto consensus. GHOST was used in Ethereum for a few days right after its launch, but the non-ideal implementation of GHOST forced Ethereum founders to switch to the longest chain rule after a while. GHOST has some vulnerabilities that were solved later in SPECTRE. Still, even though GHOST was later found to be insecure against liveness attacks, the long-term value of the GHOST paper remains unquestioned.
(This doesn't mean that Ethereum has a liveness problem.)
Liveness attack: If you're trying to be inclusive about transactions and not just about work, and a conflict appears before the transaction is confirmed, then it is theoretically possible (though rather impractical) to delay the confirmation of this transaction for an arbitrarily long period.
The GHOST rule can securely let you count the work put into cousin blocks without liveness issues, but you'd still only use transactions on the GHOST chain and discard transactions from cousin blocks.
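As a minimal sketch of the heaviest-subtree selection described above (a Python toy, with illustrative block names and tree shape; not the production implementation):

```python
# Hedged sketch: GHOST chain selection on a block tree.
# At each fork, follow the child whose subtree contains the most blocks
# (i.e., the most work), rather than simply the longest chain.

def subtree_size(children, block):
    """Total number of blocks in the subtree rooted at `block` (inclusive)."""
    return 1 + sum(subtree_size(children, c) for c in children.get(block, []))

def ghost_chain(children, genesis):
    """Walk from genesis, always descending into the heaviest subtree."""
    chain = [genesis]
    while children.get(chain[-1]):
        heaviest = max(children[chain[-1]], key=lambda c: subtree_size(children, c))
        chain.append(heaviest)
    return chain

# A fork after block A: branch B is a line of 2 blocks, branch C is a bushy
# subtree of 3 blocks. Both tips are at the same chain length, so the longest
# chain rule ties; GHOST weighs whole subtrees and descends into C.
children = {
    "G": ["A"],
    "A": ["B", "C"],
    "B": ["B1"],        # line of 2 blocks under A via B
    "C": ["C1", "C2"],  # bushy subtree of 3 blocks under C
}
print(ghost_chain(children, "G"))  # → ['G', 'A', 'C', 'C1']
```

Note how the fork's extra blocks (C2) still count toward chain selection even though they end up off the selected chain.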
Motivation:
Bitcoin is a decentralized cryptocurrency that has gained significant traction. A critical factor for its success is scalability, particularly its ability to handle a large volume of transactions. In the Secure High-Rate Transaction Processing in Bitcoin study, authors examine the impact of increased transaction throughput on Bitcoin's security against double-spend attacks. Their findings reveal that higher throughput can enable weaker attackers to reverse payments long after acceptance. To address this concern, the authors propose the GHOST rule, a modification to Bitcoin's blockchain construction and re-organization process.
Inclusive Blockchain Protocols
Yoad Lewenberg, Yonatan Sompolinsky, Aviv Zohar — 2015
The Inclusive Blockchain Protocols paper first proposed the directed acyclic graph structure for blocks — the block-DAG, but its focus was on increasing throughput while not decreasing security and on linearizing the block rewards across miners. A modified version of the Inclusive Blockchain Protocols was used in Ethereum and may still be used today.
Motivation:
Distributed cryptographic protocols like Bitcoin and Ethereum rely on the blockchain data structure to synchronize events across their networks. However, the current blockchain mechanics and block propagation have limitations that impact performance and transaction throughput.
To address these limitations, the authors proposed an alternative structure called the block-DAG, which allows for higher transaction rates and more forgiving transaction acceptance rules.
Paper quotation:
"We propose to restructure the blockchain into a directed acyclic graph (DAG) structure, that allows transactions from all blocks to be included in the log. We achieve this using an "inclusive" rule, which selects a main chain from within the DAG and then selectively incorporates contents of off-chain blocks into the log, provided they do not conflict with previously included content. An important aspect of the Inclusive protocol is that it awards fees of accepted transactions to the creator of the block that contains them — even if the block itself is not part of the main chain."
The block-DAG incorporates a directed acyclic graph of blocks and can tolerate larger blocks with longer propagation times. Additionally, the proposed system reduces the advantage of highly connected miners and addresses security concerns related to potential malicious attacks. The paper also describes how these attacks can be easily countered.
SPECTRE
Yonatan Sompolinsky, Yoad Lewenberg, Aviv Zohar — 2016
(parameterless; nonlinear ordering: potentially cyclic, pairwise; responsive to actual network latency)
SPECTRE was designed for a partially synchronous network model and was the first parameterless, 49%-resilient proof-of-work consensus protocol, making it robust against network congestion and bandwidth constraints. However, it does not provide a linear ordering, making it unsuitable for applications requiring full linearization, such as smart contracts (SC). The challenge for SPECTRE and PHANTOM was to recover block-DAG consistency by ordering the blocks so that attackers' blocks and conflicts are excluded. This paper also introduces the term anticone, which recurs in the subsequent papers. A block's anticone is the set of blocks that are neither in its past nor in its future; this is a property of the DAG itself, regardless of the protocol. To distinguish an honest block from a dishonest one, the protocol categorizes sub-DAGs into connected and well-connected graphs. Anticones exist in both categories, but the honest sub-DAG is the well-connected one mined by honest miners, whose blocks' anticones are no bigger than the parameter k. The parameter k is the maximum anticone size of a block in an honest network, thus acting as a tolerance parameter.
SPECTRE generates a pairwise ordering, which is potentially cyclic and nonlinear. This characteristic means it might not always be possible to linearize the ordering. In instances where a transaction conflict occurs before confirmation, SPECTRE theoretically allows for the possibility of delaying confirmation indefinitely, highlighting a vulnerability known as 'weak resistance to Liveness attacks.' Due to this potential for non-linear ordering, SPECTRE is generally considered unsuitable for smart contract applications, where linear transaction history is crucial.
Pros:
High block creation rates
Transactions are confirmed within seconds
Limited primarily by network round-trip time
No reliance on message delivery time as a protocol parameter
Adaptation to the current network delay
Enables efficient convergence
Cons:
Not suitable for use with SC. In an SC setting, you don't want the entire contract stuck in a pending state.
Nonlinear ordering; pairwise.
No guarantee of fast confirmation times in the event of an active balancing ("liveness") attack
SPECTRE can't produce the full linear ordering of several transactions, where transaction `a` comes before `b`, `b` before `c`, and `c` before `d`.
SPECTRE, however, can tell which one came first within each pair. What can be problematic is a cycle, such as a <- b <- c <- a. Such cycles can occur because the ordering of each pair is decided by a majority of votes, and majority votes over pairs can produce cycles. SPECTRE resolves conflicts pairwise: for each pair of conflicting transactions [x, y], where x comes before y, it discards y.
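To see why pairwise majority ordering can be cyclic, here is a minimal, hypothetical Python illustration (the voter rankings and names are invented; this is the classic Condorcet-style paradox, not SPECTRE's actual voting procedure, which operates over blocks in the DAG):

```python
# Minimal illustration (not SPECTRE itself): pairwise majority votes over
# perfectly linear rankings can still produce a cycle, so no global linear
# order exists even though each individual "vote" is linear.

def majority_prefers(rankings, x, y):
    """True if a majority of rankings place x before y."""
    wins = sum(1 for r in rankings if r.index(x) < r.index(y))
    return wins > len(rankings) / 2

# Three hypothetical "voters", each with a linear ranking of a, b, c.
rankings = [
    ["a", "b", "c"],
    ["b", "c", "a"],
    ["c", "a", "b"],
]
print(majority_prefers(rankings, "a", "b"))  # → True
print(majority_prefers(rankings, "b", "c"))  # → True
print(majority_prefers(rankings, "c", "a"))  # → True
# a before b, b before c, yet c before a: a cycle that cannot be linearized.
```

This is exactly why SPECTRE's pairwise output cannot in general be collapsed into a single linear history.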
Motivation:
Bitcoin and other permissionless consensus protocols rely on the Nakamoto Consensus to achieve agreement in decentralized and anonymous settings. However, ensuring security becomes more challenging as transaction throughput and confirmation times increase. In this context, the authors introduced SPECTRE, a new consensus protocol designed for cryptocurrencies that maintains security even under high throughput and fast confirmation times. A key feature of SPECTRE is that it satisfies weaker properties than classic consensus protocols. While traditional consensus requires agreement on the order of all transactions, SPECTRE focuses on ensuring order among transactions performed by honest users. It recognizes that dishonest users can only create conflicting payments published concurrently, allowing for delayed acceptance without compromising system usability. This framework formalizes these relaxed requirements for a distributed ledger in a cryptocurrency, and the authors provide formal proof demonstrating that SPECTRE satisfies these requirements.
Yonatan once described this protocol to Shai Wyborski as his "most beautiful creation."
PHANTOM
Yonatan Sompolinsky, Shai Wyborski, Aviv Zohar — 2018
(paradigm, linear ordering):
PHANTOM is a parameterized generalization of the Nakamoto Consensus. Please note that the PHANTOM paradigm (an illustration, not a protocol) and PHANTOM-GHOSTDAG, or just GHOSTDAG (the PHANTOM implementation), were covered in the same paper. PHANTOM can be categorized as a family of consensus protocols, with GHOSTDAG and DAGKNIGHT being two specific instantiations of this family. PHANTOM serves as a conceptual framework, representing an idealized version that cannot exist in practice but serves as a theoretical precursor to GHOSTDAG. GHOSTDAG, on the other hand, represents a practical implementation, or a practical approximation, of the theoretical foundation laid by PHANTOM.
The PHANTOM paper presents a mechanism to prevent attackers from exploiting the work generated by honest nodes to their advantage. To achieve this, it proposes a method to distinguish between blocks created by attackers and those created by honest miners, the latter being organized in a k-cluster composed of well-connected blocks. In other words, it recognizes a cluster of honest blocks and discards or penalizes (by placing them later in the DAG order) the rest. PHANTOM searches for the largest k-cluster, orders its blocks using a topological ordering, and iterates over the blocks in the prescribed order while accepting transactions consistent with history. When weighing the chain, PHANTOM counts the weight of well-connected orphans and increases the weight of the honest chain by it. Choosing a suitable parameter k is important for anyone wishing to implement a protocol from the PHANTOM family: specify your node requirements first, then decide on the desired throughput, and lastly choose the proper parameter k.
The PHANTOM rule differentiates between the honest and the attacker's DAG based on the anticone (the blocks that are neither in a block's past nor in its future). We can describe the PHANTOM rule briefly as "find the biggest sub-DAG in which no block has an anticone greater than k." Notice that this is a generalization of the Bitcoin longest chain rule: Bitcoin can be described as PHANTOM with k = 0, whereas in general k should be set above 2Dλ.
D = network delay
λ = the block creation rate
k = the maximum anticone size of a block in an honest network
A block's anticone can consist of blocks unknown to the block's miner and blocks created before the block's miner finishes its propagation.
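As a concrete illustration of these definitions, the past/future/anticone relations can be computed directly from the DAG's parent links. The following Python sketch uses a hypothetical four-block diamond DAG (block names are invented for illustration):

```python
# Hedged sketch: computing the past, future, and anticone of a block in a
# block-DAG given as {block: [parent, ...]}.

def past(parents, block):
    """All blocks reachable from `block` via parent links (its history)."""
    seen, stack = set(), list(parents.get(block, []))
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(parents.get(b, []))
    return seen

def anticone(parents, block):
    """Blocks that are in neither the past nor the future of `block`."""
    all_blocks = set(parents) | {p for ps in parents.values() for p in ps}
    future = {b for b in all_blocks if block in past(parents, b)}
    return all_blocks - past(parents, block) - future - {block}

# Diamond DAG: B and C are mined in parallel on top of A, and D merges them.
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(sorted(anticone(parents, "B")))  # → ['C']  (parallel to B)
print(sorted(anticone(parents, "D")))  # → []     (D sees everything)
```

Note that the anticone is purely structural: it depends only on the DAG's parent references, exactly as stated above, not on any protocol rule.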
The PHANTOM optimization rule is to return the maximum k-cluster.
An honest set of blocks is a set of blocks mined by honest nodes.
The largest k-cluster is a set that, with high probability, includes all properly and honestly mined blocks.
In theory, PHANTOM is easy to implement efficiently, provides protocol-unlimited throughput, and remains limited only by the network. However, when conflicts are visible, increased waiting times can be expected.
PHANTOM Algorithm versions:
[1] NP-hard — The mathematically pure one:
- Step 1) Search for the largest k-cluster; the cluster is honest.
[2] Greedy — the more implementable one: select the chain with the largest weight and count all its k-uncles. This version is known as GHOSTDAG.
- Step 1) Search for the chain with the largest weight (the most mining hash power put into it) together with its uncles of degree = k; then, the chain and its uncles are honest.
Common part for both NP-hard and greedy versions:
- Step 2) Order its blocks by using some topological ordering.
- Step 3) Iterate over blocks in the prescribed order and accept transactions consistent with history.
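For intuition, the NP-hard variant above can be sketched as a brute-force search on a toy DAG (a hypothetical Python sketch with invented block names; on real DAGs this search is infeasible, which is exactly why the greedy GHOSTDAG variant exists):

```python
# Hedged sketch of PHANTOM's NP-hard optimization: brute-force search for the
# largest sub-DAG in which no block's anticone (within that sub-DAG) exceeds k.
# Exponential time; usable only on toy examples.
from itertools import combinations

def past(parents, block):
    """All blocks reachable from `block` via parent links."""
    seen, stack = set(), list(parents.get(block, []))
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(parents.get(b, []))
    return seen

def is_k_cluster(parents, subset, k):
    """Every block in `subset` has at most k subset-blocks in its anticone."""
    for b in subset:
        p = past(parents, b)
        anticone = {c for c in subset
                    if c != b and c not in p and b not in past(parents, c)}
        if len(anticone) > k:
            return False
    return True

def max_k_cluster(parents, k):
    blocks = sorted(set(parents) | {p for ps in parents.values() for p in ps})
    for size in range(len(blocks), 0, -1):        # try the largest subsets first
        for subset in combinations(blocks, size):
            if is_k_cluster(parents, set(subset), k):
                return set(subset)
    return set()

# Diamond DAG: B and C are parallel, so each has the other in its anticone.
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(max_k_cluster(parents, k=1))       # → the full set: all anticones are ≤ 1
print(len(max_k_cluster(parents, k=0)))  # → 3: with k=0 one of B/C must be dropped
```

With k = 0 the rule degenerates to Bitcoin-style chain selection: no parallelism is tolerated, so one of the two parallel blocks falls outside the cluster.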
PHANTOM issues:
1. Being inefficient — finding a maximum k-cluster is NP-hard. [1]
NP-hardness is a class in computational complexity theory covering problems for which no efficient algorithm is known. In this context, finding a maximum k-cluster may require exponential time or resources, making the task computationally infeasible for large and complex DAGs.
2. Being not incremental — every time the DAG updates, the entire computation must be restarted. In particular, this requires storing the entire DAG structure. [2]
GHOSTDAG (GD)
Yonatan Sompolinsky, Shai Wyborski, Aviv Zohar — 2020
GD achieves Nakamoto consensus security independent of block rates (same as SPECTRE). GD has a rapidly converging linear ordering (an ability that SPECTRE lacks). GD, published as part of the PHANTOM-GHOSTDAG (PHANTOM) paper, is Kaspa's current consensus protocol; it is a practical and efficient (greedy) variant of PHANTOM and its realization and application in a PoW project. GD's greediness solves both issues [1] and [2] that PHANTOM has by incrementally maintaining an approximate k-cluster. Each block is labeled with a number representing its blue score, indicating the number of past blocks in the k-cluster. When a new block is created, it inherits most of the k-cluster from its selected parent, avoiding recalculating the entire k-cluster. The remaining portion is chosen from the anticone of the selected parent.
Like PHANTOM, the GD protocol selects a k-cluster, which induces block coloring as:
- Blues: blocks in the selected cluster/on the chain.
- Reds: blocks outside the cluster/off the chain.
The greedy algorithm finds the Blue set with the best tip and then adds the data from outside the set. The combination of Blues and Reds forms a chain, with the block from the selected tip coming last. The second step is to find a proper ordering of DAG within the secure Blue set. GD utilizes user-defined and mined data to create a topological order in the chain.
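A drastically simplified Python sketch of this greedy idea follows (block names are hypothetical; the real GHOSTDAG algorithm performs additional k-cluster consistency checks on past blues that are omitted here, so treat this only as intuition for blue-set inheritance):

```python
# Hedged, simplified sketch of GHOSTDAG's greedy coloring: each block inherits
# the blue set of its selected parent (the parent with the highest blue score)
# and then admits mergeset blocks as blue while their anticone among the
# current blues stays within k. Many details of the real algorithm are omitted.

def past(parents, block):
    """All blocks reachable from `block` via parent links."""
    seen, stack = set(), list(parents[block])
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(parents[b])
    return seen

def topo_order(parents):
    """Topological order of a DAG given as {block: [parent, ...]}."""
    order, placed, pending = [], set(), sorted(parents)
    while pending:
        b = next(x for x in pending if set(parents[x]) <= placed)
        order.append(b)
        placed.add(b)
        pending.remove(b)
    return order

def ghostdag_blues(parents, k):
    """Greedy blue sets; a block's blue score = len(blues[block])."""
    blues = {}
    for b in topo_order(parents):
        if not parents[b]:
            blues[b] = {b}                                    # genesis
            continue
        sp = max(parents[b], key=lambda p: len(blues[p]))     # selected parent
        blue = set(blues[sp])                                 # inherited blues
        mergeset = past(parents, b) - past(parents, sp) - {sp}
        for m in sorted(mergeset):
            # admit m as blue if at most k current blues are parallel to it
            parallel = {c for c in blue
                        if c not in past(parents, m) and m not in past(parents, c)}
            if len(parallel) <= k:
                blue.add(m)
        blue.add(b)
        blues[b] = blue
    return blues

# Diamond DAG: B and C in parallel; with k=1 both stay blue, with k=0 one is red.
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(len(ghostdag_blues(parents, k=1)["D"]))  # → 4 (all blue)
print(len(ghostdag_blues(parents, k=0)["D"]))  # → 3 (one of B/C is red)
```

The key design point this sketch conveys is incrementality: D does not recompute a k-cluster from scratch; it inherits its selected parent's blue set and only examines the small mergeset, which is what makes the greedy variant practical.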
This inclusive protocol also ensures no blocks are orphaned, thanks to pointing to all forks instead of following the longest chain. The inclusiveness of GD ensures that transactions aren't lost after a reorg (either organic or adversarial). That's the essence of why Kaspa provides instant confirmations. GHOSTDAG, the first PoW protocol to enable raising block rates on a non-sharded network, also allows for unprecedented confirmation times. However, a limitation is that it doesn't respond to network latency. An upper bound on network latency must be set (which we can assume holds 95% of the time), and the rest of the network properties, particularly confirmation times, are derived from this bound. This implies that performance doesn't improve as latency improves, and, worse, security is compromised if network latency deteriorates. This is true, however, for all existing PoW algorithms, with the only exception being SPECTRE.
Drawbacks of having a parametrized consensus, such as GD:
- If you underestimate the delay, then the system is not secure.
- If you overestimate the delay, the system is slower than it can be.
GD's con versus all of Yonatan's other protocols (and DK, which is Michael's :)) is that GD has an a priori delay bound, which means that confirmation times are a function of the bound, not of the actual latency.
The huge advantage of GHOSTDAG over all other PoW algorithms is that it removes the security constraints on throughput.
SPECTRE and GHOSTDAG possess unique properties not found in other protocols. The key to creating an ultimate protocol thus lies in combining these two, and that combination forms the foundation of DAGKNIGHT. Shai Wyborski proved GD's security.
The initial protocol setup when launching Kaspa with GD was:
- delay = 5 seconds
- k = 18
Kaspa and its block-DAG inclusive ordering protocol, GHOSTDAG, solve the blockchain trilemma, delivering high block creation and transaction verification speed without sacrificing security or decentralization — all while taking TPS and confirmation times deadly seriously.
GD perks not mentioned in the paper:
- A novel approach to difficulty adjustment.
- A novel approach to coloring (not ordering) and security that arises from it.
- A fancy pruning mechanism.
- An ability to provide infrastructure for Layer-2 applications.
- Kaspa's Proof-of-Work (PoW) function is specifically designed to be compatible with optical ASIC chips. This unique characteristic enables PoW mining with significantly reduced electricity consumption.
DAGKNIGHT (DK)
Michael Sutton, Yonatan Sompolinsky — 2022
(parameterless, linear ordering)
DK is the first protocol published since the Kaspa fair launch. It evolves the PHANTOM paradigm, drawing inspiration from both PHANTOM and SPECTRE.
The development matured over the years into the first parameterless, 49%-resilient proof-of-work consensus protocol with no speed limitations beyond hardware. One of the ideas DK draws from SPECTRE, to name one, is the cascade voting procedure. Where GHOSTDAG still assumes an explicit upper bound on network latency, DAGKNIGHT doesn't. Both protocols allow similar BPS, but DK utilizes those BPS with better security and confirmation times. GHOSTDAG exhibits linear ordering, while SPECTRE is parameterless; only DAGKNIGHT combines both properties, making it a highly advanced solution. DK's parameterless-ness originates from the "min-max optimization" definition in the DAGKnight paper.
Knight optimization:
For each k = 0, 1, 2, ..., iterate over the DAG and find the max k-connected cluster. The search is for the minimal k (where k is a nonnegative integer) such that the max k-connected cluster covers more than 50% of the DAG. In other words, you search in real time for the most connected cluster that holds a majority (min k means most connected; for instance, k = 0 is a chain, the most connected structure). That's part of its beauty: DK's core idea is a one-second spark that can be easily communicated.
Return the minimal k such that the max k-connected cluster is > 50%.
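Under stated assumptions (a hand-built toy DAG and brute-force search, rather than the DAGKnight paper's actual algorithm), the min-max search above can be sketched as: for k = 0, 1, 2, ... find the largest k-connected cluster and stop at the first k whose cluster holds a majority of the DAG:

```python
from itertools import combinations
from typing import Dict, Set

# Hypothetical toy DAG: a genesis, four parallel blocks, one block merging them.
DAG: Dict[str, Set[str]] = {
    "genesis": set(),
    "A": {"genesis"},
    "B": {"genesis"},
    "C": {"genesis"},
    "D": {"genesis"},
    "E": {"A", "B", "C", "D"},
}

def past(block: str) -> Set[str]:
    """All blocks reachable from `block` via parent links."""
    seen: Set[str] = set()
    stack = list(DAG[block])
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(DAG[b])
    return seen

def is_k_cluster(blocks: Set[str], k: int) -> bool:
    """Every block in the set has at most k set-members in its anticone."""
    for b in blocks:
        anticone = {x for x in blocks
                    if x != b and x not in past(b) and b not in past(x)}
        if len(anticone) > k:
            return False
    return True

def max_k_cluster_size(k: int) -> int:
    """Brute-force the largest k-connected cluster (fine for a toy DAG)."""
    names = list(DAG)
    for size in range(len(names), 0, -1):
        if any(is_k_cluster(set(c), k) for c in combinations(names, size)):
            return size
    return 0

def knight_k() -> int:
    """Minimal k whose max k-connected cluster covers >50% of the DAG."""
    k = 0
    while max_k_cluster_size(k) <= len(DAG) / 2:
        k += 1
    return k
```

On this DAG the longest chain (k = 0) has only 3 of 6 blocks — not a majority — while at k = 1 a 4-block cluster exists, so the search settles on k = 1; a better-connected DAG would settle on a smaller k, which is exactly the responsiveness the text describes.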
DK Perks:
It achieves Nakamoto consensus security independent of block rates (same as GHOSTDAG and SPECTRE).
It has a rapidly converging linear ordering (same as GHOSTDAG).
Suitable for SC (same as GHOSTDAG).
DAGKnight is responsive to actual network latency (same as SPECTRE):
Confirmation times can approach the network's limits without risk: any degradation in network conditions makes DK automatically increase its confirmation times to maintain stability, and, conversely, DK speeds up as network latency improves.
The evolution of DAGKNIGHT (DK) from GHOSTDAG (GD) involved a collaboration between Yonatan and Shai to establish GD's security through conclusive mathematical proof. Subsequently, Michael Sutton extended and refined this proof to apply to DK. The challenge arose when Michael and Yonatan realized that hypothetical attackers could exploit a worst-case a priori assumption of higher latency to gain an advantage when observing a low-latency DAG. To address this, they aimed to create a coloring method that favored well-connected DAGs representing lower actual latency, leading to the concept of "parameterless-ness."
For three years, Yonatan guided Michael as they encountered numerous challenges, each requiring significant enhancements to the initial idea. This iterative process culminated in the unique DK solution, detailed in the DK paper, drawing on Yonatan's extensive DAG analysis experience gained from analyzing SPECTRE.
While we still have to wait for DK to become Kaspa's new consensus and thus see this PoW diamond in real action, we can already appreciate GD as the fastest trilemma-solving PoW to date, capable of achieving a robust amount of BPS without sacrificing decentralization or security, accompanied by instant transaction confirmations determined by network latency — not by the protocol. Even though the term "robust amount" is not astonishing by itself, what makes it astonishing is the set of confluences patched by a batch of novel features and the fact that Kaspa with GD increases BPS while maintaining non-increasing confirmation times.
Every Night and every Morn,
Some to Misery are Born.
Every Morn and every Night,
Some are Born to Sweet Delight.
Some are Born to Sweet Delight,
Some are Born to Endless Night.
- William Blake
The world will spin, and the color will fade,
And we'll be reduced to a spectrum of gray.
Be my color, SuperNova
- SuperNova, Averno
Mickey Maler's Kaspa: From Ghost to Knight, off to heal the blockchain's plight