A lot of people who are just getting started with cryptocurrencies know little about consensus algorithms. Yet they play a crucial part in making sure the blockchain remains a decentralised and fair system.
What are consensus algorithms?
In simple terms, consensus algorithms are processes used to reach agreement on a single data value among distributed systems, such as a distributed ledger.
In the crypto world, consensus algorithms do two very important things:
- They ensure that the next block in a blockchain is the one and only true version of the block (prevent double spending);
- They keep the system from derailing if powerful parties decide to take control of the chain (prevent attacks like DDoS by requiring a valuable action from the attacker).
This is the beauty of consensus algorithms. They require a lot of work to produce the desired result, but at the same time, that result is very easy to verify.
So, what consensus algorithms are out there?
The idea of consensus algorithms has actually been around for a while. For example, the most popular consensus algorithm in the cryptocurrency world, Proof-of-Work, was originally proposed by Markus Jakobsson and Ari Juels in a 1999 paper, way before cryptocurrencies were around.
As cryptocurrencies started gaining popularity, more and more consensus algorithms were proposed, developed and implemented. Let's make it clear from the start: no consensus algorithm is perfect. They all have their strengths and weaknesses, so the aim of this article is to give a balanced overview of what is out there. So, let's jump right in.
Proof-of-Work (PoW)
The original consensus algorithm of the cryptocurrency world, currently used by Bitcoin, Ethereum, Litecoin, Dogecoin and most others. It was originally formulated by Satoshi Nakamoto for use on the Bitcoin blockchain. In Proof-of-Work, miners race to solve (guess the answer to) complex but otherwise useless puzzles in order to earn the reward for creating a new block on the blockchain.
Miners now run large mining operations around the world to solve those puzzles, and whoever solves a puzzle first gets the reward. Unlike other consensus algorithms, Proof-of-Work has already proven itself as a working system that can be trusted.
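The "hard to produce, easy to verify" property behind PoW can be sketched in a few lines of Python. This is a toy hashcash-style puzzle, not Bitcoin's actual mining code; the block contents and difficulty here are made up for illustration.

```python
import hashlib

def mine(block_data, difficulty):
    """Brute-force a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(block_data, nonce, difficulty):
    """Verification is a single hash: trivially cheap compared to mining."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# Hypothetical block contents; difficulty 4 takes ~65,000 hashes on average.
nonce = mine("Alice pays Bob 5 BTC", difficulty=4)
print(nonce, verify("Alice pays Bob 5 BTC", nonce, difficulty=4))
```

Raising the difficulty by one hex digit multiplies the expected mining work by 16, while verification stays a single hash.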
However, Proof-of-Work is increasingly seen as a legacy technology. One of the biggest arguments against PoW is its energy consumption and the environmental concerns that come with it. Mining operations require a lot of electricity, which, the argument goes, hurts the environment.
Another criticism of Proof-of-Work is that this consensus algorithm runs on the principle that "the longest blockchain wins". Therefore, in theory, the system can be compromised if more than 50% of the mining power is dishonest.
The final argument against PoW is slow throughput. The current block time on the Bitcoin blockchain is around 10 minutes. Yet this doesn't mean that a new block is created exactly every 10 minutes. Ten minutes is the average time for miners to create a new block, so the actual block time may vary from a couple of minutes to over 20 minutes.
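Because mining is a memoryless random search, block discovery behaves like a Poisson process, so the gaps between blocks are exponentially distributed around the 10-minute target. A quick simulation (the seed and sample size are arbitrary) shows how much the gaps vary:

```python
import random

random.seed(42)
TARGET = 10.0  # minutes; Bitcoin's average block time

# Inter-block times in a memoryless search are exponentially
# distributed around the target.
gaps = [random.expovariate(1 / TARGET) for _ in range(10_000)]

mean = sum(gaps) / len(gaps)
over_20 = sum(g > 20 for g in gaps) / len(gaps)
print(f"average gap: {mean:.1f} min, share over 20 min: {over_20:.0%}")
```

Roughly one block in seven takes longer than 20 minutes, even though the average stays near 10.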
Ethereum, the second largest cryptocurrency, is planning to migrate from Proof-of-Work to Proof-of-Stake in the near future, and many other cryptocurrencies are considering the same move. With a large number of new consensus algorithms emerging, it is hard to see why new blockchains would choose PoW.
Proof-of-Stake (PoS)
This is currently the second most popular consensus algorithm. Only a handful of blockchains use it so far, but right now it is the most likely replacement for PoW. Cryptos such as Decred and Peercoin are a few examples that already use it.
In PoS there aren't any miners; instead, there are minters. Minters stake their tokens in order to "bet" on the validity of blocks. This means that there isn't any actual work (computation) being done, which makes this algorithm more energy efficient, and more decentralised because there is no need for mining farms.
This also means that attacks on the blockchain become a lot more expensive as attackers will also have to "bet" in order to invalidate the data on the blockchain and those "bets" will have to be huge (remember, you need to convince more than 50% of the blockchain of your truth).
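The core mechanic, picking the next block producer with probability proportional to stake, can be sketched as a weighted lottery. The minter names and stake amounts here are hypothetical:

```python
import random

# Hypothetical minters and the token amounts they have staked.
stakes = {"alice": 60, "bob": 30, "carol": 10}

def pick_minter(stakes, rng):
    """Choose the next block's minter with probability proportional to stake."""
    minters = list(stakes)
    return rng.choices(minters, weights=[stakes[m] for m in minters], k=1)[0]

rng = random.Random(0)
picks = [pick_minter(stakes, rng) for _ in range(10_000)]
shares = {m: picks.count(m) / len(picks) for m in stakes}
print(shares)  # roughly proportional to stake: 60%, 30%, 10%
```

Note there is no computational race here: selection is cheap, which is exactly what gives rise to the Nothing at Stake problem described below.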
However, the biggest argument against PoS is the Nothing at Stake problem. Because it costs minters no computational power to participate, it is in their best interest to validate everything that comes their way. There is no punishment for behaving badly on the blockchain, because you don't stand to lose anything.
Let's take a fork as an example. In the case of a fork, minters stake their tokens to vote on the fork they want to support. Assuming most people vote for the correct fork, validators who voted for the wrong fork "lose their stake" on the correct fork. So far, so good. The problem is that nothing stops minters from voting for both forks at the same time, because it doesn't matter to them which fork wins.
Forks in a PoS system can therefore become very common (much more common than in PoW), which could, in turn, harm the credibility of the currency. For example, when someone decides to fork the blockchain in order to reverse a transaction, minters will simply support it, because they have nothing to lose.
Delegated Proof-of-Stake (DPoS)
DPoS might sound very similar to PoS, but in reality it is quite different. Delegated Proof-of-Stake was proposed by Daniel Larimer, the creator of BitShares, Steem and EOS (all of which currently use DPoS). The principle behind DPoS is that token holders don't vote on the validity of blocks themselves, but instead vote to elect delegates who validate blocks on their behalf. As a rule of thumb, there are normally around 21-100 elected delegates in a DPoS system.
Having so few delegates allows them to organise themselves efficiently and choose designated time slots in which each of them delivers and publishes their blocks. Delegates are shuffled occasionally and given the order in which to deliver their blocks. If a delegate constantly delivers blocks with invalid transactions or misses their slots entirely, the stakers can vote them out and replace them with another delegate.
Because of this arrangement, delegates in DPoS don't compete with each other like miners do in PoW or PoS, but instead collaborate to create blocks. This allows blocks to be created much faster. For example, EOS has a block time of under 1 second versus Bitcoin's 10 minutes. On top of that, transactions are much cheaper and the system is more energy efficient.
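The two DPoS steps described above, a stake-weighted election followed by scheduled block production, can be sketched like this. The ballots, delegate names and delegate count are hypothetical (real chains elect roughly 21-100 delegates):

```python
from collections import Counter

# Hypothetical ballots: (voter's stake, delegate they back).
votes = [(50, "dan"), (30, "erin"), (30, "dan"), (20, "frank"),
         (40, "erin"), (15, "frank"), (10, "grace")]
N_DELEGATES = 3

# Tally votes weighted by stake and elect the top N delegates.
tally = Counter()
for stake, delegate in votes:
    tally[delegate] += stake
elected = [d for d, _ in tally.most_common(N_DELEGATES)]
print("elected:", elected)  # elected: ['dan', 'erin', 'frank']

# Elected delegates then produce blocks in fixed, rotating time slots
# instead of competing with each other.
for slot in range(6):
    print(f"slot {slot}: {elected[slot % N_DELEGATES]}")
```

Because the block producer for each slot is known in advance, there is no race and no wasted work; misbehaving delegates are simply voted out in the next election.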
However, as you can see from this structure, a disadvantage of the algorithm is that it is partially centralised: at most around 100 delegates are allowed to participate in block production.
Proof-of-Authority (PoA)
This is a very centralised algorithm in which transactions are validated by approved accounts (think admins). These "admins" provide trusted transaction data to all other nodes on the network. Because the system is centralised, it has high throughput and can easily be scaled by adding more "admins" to the network to verify transactions.
PoA is clearly centralised and is not designed for public blockchains; it is far better suited to private ones. For example, it currently runs on the POA.Network and the Ethereum Kovan testnet.
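Block validation in PoA reduces to a whitelist check, which is why it is so fast and so centralised. A minimal sketch, with hypothetical validator names:

```python
# Hypothetical whitelist of pre-approved validator identities.
AUTHORITIES = {"validator-1", "validator-2"}

def accept_block(block):
    """A block is valid only if signed by an approved authority;
    there are no puzzles to solve and no stake to post."""
    return block["signer"] in AUTHORITIES

print(accept_block({"signer": "validator-1", "txs": ["tx1"]}))  # True
print(accept_block({"signer": "random-node", "txs": ["tx2"]}))  # False
```

Scaling the network is just adding entries to the whitelist, which is also exactly what makes PoA unsuitable for trustless public chains.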
Proof-of-Weight (PoWeight)
This consensus algorithm is very customisable, which is its biggest strength. Originally based on the Algorand consensus model, PoWeight uses some relatively weighted value of a variable on the network to represent the "probability" of you discovering the next block. If this sounds familiar, it's because it is: Proof-of-Stake uses the relative percentage of the tokens you own to represent your probability of discovering a new block.
For example, Filecoin uses a system called Proof-of-Spacetime which weights how much IPFS data you’re storing to represent the probability of you "discovering" a new block. Other systems could include weights for things like Proof-of-Reputation.
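In other words, PoWeight generalises the PoS lottery: the weight can be any metric the network cares about. A sketch of that idea, with hypothetical node names and weights:

```python
import random

def pick_block_producer(weights, rng):
    """Weighted lottery: the chance of producing the next block is
    proportional to whatever metric the network chooses to weight."""
    nodes = list(weights)
    return rng.choices(nodes, weights=[weights[n] for n in nodes], k=1)[0]

rng = random.Random(1)
# With PoS the weight is tokens held...
print(pick_block_producer({"node-a": 70, "node-b": 30}, rng))
# ...while a Filecoin-style Proof-of-Spacetime would weight stored IPFS data (GiB).
print(pick_block_producer({"node-a": 10, "node-b": 990}, rng))
```

The selection machinery is identical in both calls; only the meaning of the weights changes, which is the customisability the section describes.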
One of the biggest challenges with this algorithm is incentivization as it is hard to reward miners without the use of valuable tokens. Despite that, this algorithm is currently used by Algorand, Filecoin and Chia.
Byzantine Fault Tolerance (BFT)
There is a classic problem in distributed computing which is best explained by the analogy of the Byzantine generals. A Byzantine army has surrounded a city, and its generals are positioned around it. They have no instant form of communication, but they must decide whether or not to attack the city. If one general attacks without the others, the attack fails; if all attack at once, it succeeds. Because they are separated by distance, they have to pass messages to communicate. Let's look at different solutions to this problem which are implemented in various cryptocurrencies.
One of the first solutions developed for this problem is called Practical Byzantine Fault Tolerance (PBFT). The solution has a low number of pre-selected "generals" (validators), normally less than 20. This way the generals can communicate efficiently and hence have a high transaction throughput. The disadvantage of this system is that it is centralised.
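The key PBFT result is that a network of 3f + 1 validators can tolerate up to f faulty ones: any value that gathers 2f + 1 matching votes must be backed by at least f + 1 honest nodes. A minimal sketch of that quorum rule (the vote values are made up):

```python
def pbft_commit(votes, f):
    """Commit a value once it gathers 2f + 1 matching votes out of
    3f + 1 validators, tolerating up to f faulty 'generals'."""
    quorum = 2 * f + 1
    for value in set(votes):
        if votes.count(value) >= quorum:
            return value
    return None  # no quorum reached

# 4 validators, f = 1: a single traitor cannot override the honest majority.
print(pbft_commit(["attack", "attack", "attack", "retreat"], f=1))  # attack
print(pbft_commit(["attack", "attack", "retreat", "retreat"], f=1))  # None
```

Real PBFT runs this agreement in several message rounds (pre-prepare, prepare, commit); the sketch only shows the quorum arithmetic that makes it safe.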
Federated Byzantine Agreement (FBA) is another proposed solution to this problem. Ripple and Stellar use it as a way of validating transactions. The principle behind FBA is that each "general" (validator) is responsible for their own chain. The "general" sorts through the messages they receive and establishes the truth. In Ripple the "generals" are pre-selected by the foundation, but in Stellar anyone can be a validator, so you choose which validators to trust.
There are a number of cryptocurrencies which use BFT to reach consensus. As mentioned above, Ripple and Stellar are among the popular ones, but it is also used by Hyperledger and Dispatch. It is an attractive form of consensus because of its high throughput, low cost and scalability. Yet it is a semi-trusted algorithm, meaning you have to trust the validators to some extent.
Directed Acyclic Graphs (DAGs)
DAGs are super hot in the crypto space right now. They are a form of consensus that doesn't actually use the blockchain data structure and that handles transactions asynchronously. A huge benefit of this type of consensus is that it can (theoretically) support an infinite number of transactions per second. But DAGs, like all the other consensus algorithms here, have their strengths and weaknesses.
Tangle is the consensus algorithm used by IOTA. In order to send a transaction on the IOTA network, you first have to validate two previous transactions you have received. This pay-it-forward, two-for-one approach strengthens the validity of transactions as more of them are added to the Tangle. In theory, it is possible to add an invalid transaction to the network if you manage to fake 1/3 of the transactions, which could convince the network that your transaction is valid. However, IOTA currently runs a centralised node called "The Coordinator" which double-checks all transactions, just in case. Once the transaction volume gets large enough that invalidating 1/3 of the transactions becomes unfeasible, IOTA plans to shut that node down.
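The "validate two previous transactions to send one" rule builds a DAG rather than a chain. A toy sketch of that structure (real Tangle tip selection is a weighted random walk, not the uniform pick used here; all transaction names are made up):

```python
import random

# The Tangle is a DAG: every transaction approves earlier transactions.
dag = {"genesis": []}       # tx -> list of txs it approves
approvals = {"genesis": 0}  # how many times each tx has been approved

def attach(tx, rng):
    """Attach a new transaction by validating two existing ones."""
    parents = rng.sample(list(dag), k=min(2, len(dag)))
    dag[tx] = parents
    approvals[tx] = 0
    for p in parents:
        approvals[p] += 1  # each approval adds confidence in p

rng = random.Random(7)
for i in range(5):
    attach(f"tx{i}", rng)
print(dag)
```

As more transactions arrive, older ones accumulate approvals, which is why confidence in a transaction grows with network activity.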
Hashgraph is a gossip-protocol consensus originally developed by Leemon Baird. In this algorithm, nodes share all the transactions they know about with other nodes, just like gossip spreads around a small town until eventually everyone knows everything. Hashgraph is very fast: it can handle 250,000+ transactions per second. It is normally used in private networks due to its vulnerability to Sybil attacks. However, Baird has recently published a paper describing how Hashgraph can be made resistant to those, so we might see this algorithm appearing in more public networks soon.
Nano (formerly known as RaiBlocks) runs a very interesting consensus algorithm called Block-lattice. The principle of this algorithm is that each new address on the network gets its own blockchain, and only that address can write to it. Everyone's blockchain is stored on the network, and every transaction is broken down into a send block on the sender's chain and a receive block on the receiver's chain. Despite its simplicity, Block-lattice is already up and running. However, because of its architecture, Block-lattice is vulnerable to some attacks specific to it. One of them is the Penny-spend attack, where attackers inflate the number of chains that nodes must keep track of by sending negligible amounts to a wide array of empty wallets.
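The send/receive split is easy to see in code. In this sketch each account's chain is just a list of blocks, and a transfer appends one block to each side (account names, balances and block fields are hypothetical; real Nano blocks are signed and hashed):

```python
# Every account owns its own chain; a transfer is a send block on the
# sender's chain plus a matching receive block on the receiver's chain.
chains = {"alice": [{"type": "open", "balance": 100}],
          "bob":   [{"type": "open", "balance": 0}]}

def transfer(sender, receiver, amount):
    chains[sender].append({"type": "send", "to": receiver, "amount": amount,
                           "balance": chains[sender][-1]["balance"] - amount})
    chains[receiver].append({"type": "receive", "from": sender, "amount": amount,
                             "balance": chains[receiver][-1]["balance"] + amount})

transfer("alice", "bob", 25)
print(chains["alice"][-1]["balance"], chains["bob"][-1]["balance"])  # 75 25
```

Because only the owner writes to their own chain, transfers on unrelated accounts never conflict, which is what lets the lattice process them asynchronously. It also shows the Penny-spend risk: every tiny send to a fresh address forces nodes to track yet another chain.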
Serialization of Proof-of-Work Events: Confirming Transactions via Recursive Elections, or just SPECTRE, is a potential scaling solution which could, in theory, allow Bitcoin and other cryptocurrencies to handle more transactions per second without the Lightning Network. The approach uses a mix of PoW and a DAG to reach consensus: mined blocks point to multiple parents rather than a single one, so the network can potentially handle multiple blocks per second instead of one block every 10 minutes. Mining a block that points to some parent blocks supports those blocks' validity. In other words, in Proof-of-Work "the longest chain wins", but in SPECTRE "blocks with the most children win". SPECTRE is a fairly new solution and hasn't been battle-tested yet, but it is a promising and clever approach which could allow cryptocurrencies to scale.
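The "blocks with the most children win" idea can be sketched with a tiny block-DAG. This is an illustration of the voting intuition only, not SPECTRE's actual recursive-election algorithm; the block names are made up:

```python
# Blocks may point to several parents; a block gains support from every
# later block mined on top of it ("blocks with the most children win").
parents = {
    "genesis": [],
    "a": ["genesis"], "b": ["genesis"],  # two blocks mined in parallel
    "c": ["a", "b"],                     # c confirms both a and b at once
    "d": ["a"],
}

def children(block):
    """Count how many blocks point at `block` as a parent."""
    return sum(block in ps for ps in parents.values())

print({b: children(b) for b in parents})  # a (2 children) outranks b (1)
```

Unlike a longest-chain rule, parallel blocks "a" and "b" are not competitors here: block "c" supports both, so blocks mined simultaneously can all contribute to throughput.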