Answers to frequently asked Bitcoin questions

Comparison between Avalanche, Cosmos and Polkadot

Reposting after it was mistakenly removed by mods (since resolved - thanks)
A frequent question I see asked is how Cosmos, Polkadot and Avalanche compare. Whilst there are similarities, there are also a lot of differences. This article is not intended to be an extensive in-depth list, but rather an overview based on some of the criteria that I feel are most important.
For better formatting see https://medium.com/ava-hub/comparison-between-avalanche-cosmos-and-polkadot-a2a98f46c03b
https://preview.redd.it/e8s7dj3ivpq51.png?width=428&format=png&auto=webp&s=5d0463462702637118c7527ebf96e91f4a80b290

Overview

Cosmos

Cosmos is a heterogeneous network of many independent parallel blockchains, each powered by classical BFT consensus algorithms like Tendermint. Developers can easily build custom application-specific blockchains, called Zones, through the Cosmos SDK framework. These Zones connect to Hubs, which are specifically designed to connect Zones together.
The vision of Cosmos is to have thousands of Zones and Hubs that are interoperable through the Inter-Blockchain Communication protocol (IBC). Cosmos can also connect to other systems through peg zones, which are specifically designed zones, each custom made to interact with another ecosystem such as Ethereum or Bitcoin. Cosmos does not use sharding; each Zone and Hub is sovereign, with its own validator set.
For a more in-depth look at Cosmos, and for more detail on the points made in this article, please see my three-part series: Part One, Part Two, Part Three
(There's a YouTube video with a quick overview of Cosmos in the Medium article - https://medium.com/ava-hub/comparison-between-avalanche-cosmos-and-polkadot-a2a98f46c03b)

Polkadot

Polkadot is a heterogeneous blockchain protocol that connects multiple specialised blockchains into one unified network. It achieves scalability through a sharding infrastructure with multiple blockchains running in parallel, called parachains, that connect to a central chain called the Relay Chain. Developers can easily build custom application specific parachains through the Substrate development framework.
The Relay Chain validates the state transition of connected parachains, providing shared state across the entire ecosystem. If the Relay Chain must revert for any reason, then all of the parachains would also revert. This is to ensure that the validity of the entire system can persist, and no individual part is corruptible. The shared state makes it so that the trust assumptions when using parachains are only those of the Relay Chain validator set, and no other. Interoperability between parachains is enabled through the Cross-Chain Message Passing (XCMP) protocol. It is also possible to connect to other systems through bridges, which are specifically designed parachains or parathreads, each custom made to interact with another ecosystem such as Ethereum or Bitcoin. The hope is to have 100 parachains connect to the Relay Chain.
For a more in-depth look at Polkadot, and for more detail on the points made in this article, please see my three-part series: Part One, Part Two, Part Three
(There's a YouTube video with a quick overview of Polkadot in the Medium article - https://medium.com/ava-hub/comparison-between-avalanche-cosmos-and-polkadot-a2a98f46c03b)

Avalanche

Avalanche is a platform of platforms, ultimately consisting of thousands of subnets that form a heterogeneous, interoperable network of many blockchains. It takes advantage of the revolutionary Avalanche consensus protocols to provide a secure, globally distributed, interoperable and trustless framework, offering unprecedented decentralisation whilst being able to comply with regulatory requirements.
Avalanche allows anyone to create their own tailor-made, application-specific blockchains, supporting multiple custom virtual machines such as the EVM and WASM, written in popular languages like Go (with others coming in the future) rather than lightly used, poorly understood languages like Solidity. These virtual machines can then be deployed on a custom blockchain network, called a subnet, which consists of a dynamic set of validators working together to achieve consensus on the state of a set of many blockchains, where complex rulesets can be configured to meet regulatory compliance.
Avalanche was built with serving financial markets in mind. It has native support for easily creating and trading digital smart assets with complex custom rulesets that define how the asset is handled and traded, to ensure regulatory compliance can be met. Interoperability is enabled between blockchains within a subnet as well as between subnets. Like Cosmos and Polkadot, Avalanche is also able to connect to other systems through bridges, using custom virtual machines made to interact with another ecosystem such as Ethereum or Bitcoin.
For a more in-depth look at Avalanche, and for more detail on the points made in this article, please see here and here
(There's a YouTube video with a quick overview of Avalanche in the Medium article - https://medium.com/ava-hub/comparison-between-avalanche-cosmos-and-polkadot-a2a98f46c03b)

Comparison between Cosmos, Polkadot and Avalanche

A frequent question I see asked is how Cosmos, Polkadot and Avalanche compare. Whilst there are similarities, there are also a lot of differences. This article is not intended to be an extensive in-depth list, but rather an overview based on some of the criteria that I feel are most important. For a more in-depth view I recommend reading the articles for each of the projects linked above and coming to your own conclusions. I want to stress that it's not a case of one platform being the killer of all other platforms, far from it. There won't be one platform to rule them all, and too often tribalism has plagued this space. Blockchains are going to completely revolutionise most industries and have a profound effect on the world we know today. It's still very early in this space, with most adoption limited to speculation and trading, mainly due to the limitations of current blockchain technology and the current iteration of Ethereum, which all three of these platforms hope to address. For those who just want a quick summary, see the image at the bottom of the article. With that said, let's have a look.

Scalability

Cosmos

Each Zone and Hub in Cosmos is capable of up to around 1,000 transactions per second, with bandwidth being the bottleneck in consensus. Cosmos aims to have thousands of Zones and Hubs all connected through IBC. There is no limit on the number of Zones / Hubs that can be created.

Polkadot

Parachains in Polkadot are capable of up to around 1,500 transactions per second. A portion of the parachain slots on the Relay Chain will be designated as part of the parathread pool; there, the performance of a single parachain slot is split between many parathreads, which offer lower performance and compete amongst themselves in a per-block auction to have their transactions included in the next relay chain block. The number of parachains is limited by the number of validators on the relay chain; they hope to be able to achieve 100 parachains.

Avalanche

Avalanche is capable of around 4,500 transactions per second per subnet. This is based on modest hardware requirements, to ensure maximum decentralisation, of just 2 CPU cores and 4 GB of memory, and with a validator set of over 2,000 nodes. Performance is CPU-bound, and if higher performance is required then more specialised subnets can be created with higher minimum requirements, able to achieve 10,000+ tps in a subnet. Avalanche aims to have thousands of subnets (each with multiple virtual machines / blockchains), all interoperable with each other. There is no limit on the number of subnets that can be created.

Results

All three platforms offer vastly superior performance to the likes of Bitcoin and Ethereum 1.0. Avalanche, with its higher transactions per second, no limit on the number of subnets / blockchains that can be created, and consensus that can scale to potentially millions of validators all participating, scores ✅✅✅. Polkadot claims to offer more tps than Cosmos, but is limited in the number of parachains (around 100), whereas with Cosmos there is no limit on the number of hubs / zones that can be created. Cosmos is limited to a fairly small validator set of around 200 before performance degrades, whereas Polkadot hopes to be able to reach 1000 validators in the relay chain (albeit only a small number of validators are assigned to each parachain). Thus Cosmos and Polkadot each score ✅✅.
https://preview.redd.it/2o0brllyvpq51.png?width=1000&format=png&auto=webp&s=8f62bb696ecaafcf6184da005d5fe0129d504518

Decentralisation

Cosmos

Tendermint consensus is limited to around 200 validators before performance starts to degrade. Whilst there is the Cosmos Hub, it is just one of many hubs in the network; there is no central hub, and no limit on the number of zones / hubs that can be created.

Polkadot

Polkadot aims for 1000 validators in the relay chain, and these are split up into small groups that validate each parachain (a minimum of 14 per parachain). The relay chain is a central point of failure, as all parachains connect to it, and the number of parachains is limited by the number of validators (they hope to achieve 100 parachains). Due to the limited number of parachain slots available, significant sums of DOT will need to be purchased to win an auction to lease a slot for up to 24 months at a time. This is likely to mean only those with enough funds can secure a parachain slot. Parathreads are, however, an alternative for those that require less, and more variable, performance, or that can't secure a parachain slot.

Avalanche

Avalanche consensus can scale to tens of thousands of validators, even potentially millions of validators, all participating in consensus through repeated sub-sampling. The more validators, the faster the network becomes, as the load is split between them. There are modest hardware requirements, so anyone can run a node, and there is no limit on the number of subnets / virtual machines that can be created.
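To make the repeated sub-sampling idea concrete, here is a minimal Python sketch loosely modelled on Avalanche's Snowball protocol. The parameters k, alpha and beta and the `preferred()` validator interface are illustrative assumptions, not the production implementation:

```python
import random

def snowball(preference, validators, k=20, alpha=15, beta=20):
    """Repeatedly poll k random validators until beta consecutive
    alpha-majorities confirm the same value (hypothetical interface)."""
    confidence = 0
    while confidence < beta:
        sample = random.sample(validators, k)    # cost independent of len(validators)
        votes = [v.preferred() for v in sample]  # v.preferred() is an assumed method
        top = max(set(votes), key=votes.count)
        if votes.count(top) >= alpha:
            if top == preference:
                confidence += 1                  # consecutive successes build confidence
            else:
                preference, confidence = top, 1  # flip to the sampled majority
        else:
            confidence = 0                       # no alpha-majority: start over
    return preference                            # decided, irreversibly
```

Because each poll samples a fixed k validators regardless of network size, adding validators does not slow the protocol down, which is why the validator set can grow so large.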

Results

Avalanche offers unparalleled decentralisation using its revolutionary consensus protocols that can scale to millions of validators all participating in consensus at the same time. There is no limit to the number of subnets and virtual machines that can be created, and they can be created by anyone for a small fee, so it scores ✅✅✅. Cosmos is limited to 200 validators, but there is no limit on the number of zones / hubs that can be created, which anyone can create, so it scores ✅✅. Polkadot hopes to accommodate 1000 validators in the relay chain (albeit these are split amongst the parachains). The number of parachains is limited and may be cost-prohibitive for many, and the relay chain is ultimately a single point of failure. Whilst I'm definitely not saying it's centralised, and it is more decentralised than many others, in comparison between the three it scores ✅.
https://preview.redd.it/ckfamee0wpq51.png?width=1000&format=png&auto=webp&s=c4355f145d821fabf7785e238dbc96a5f5ce2846

Latency

Cosmos

Tendermint consensus, used in Cosmos, reaches finality within 6 seconds. Cosmos consists of many Zones and Hubs that connect to each other. Communication between two zones could pass through many hubs along the way, which can add to latency depending on the path taken, as explained in part two of the articles on Cosmos. There is no need to wait for an extended period of time with a risk of rollbacks.

Polkadot

Polkadot uses a hybrid consensus protocol consisting of a block production protocol, BABE, and a finality gadget called GRANDPA that works to agree on a chain, out of many possible forks, by following a simple fork-choice rule. Rather than voting on every block, it reaches agreement on chains. As soon as more than 2/3 of validators attest to a chain containing a certain block, all blocks leading up to that one are finalised at once.
If an invalid block is detected after it has been finalised, then the relay chain would need to be reverted, along with every parachain. This is particularly important when connecting to external blockchains, as those don't share the state of the relay chain and thus can't be rolled back. The longer the wait before finalising, the more secure the network is, as there is more time for additional checks to be performed and reported, but at the expense of finality time. Finality is reached within 60 seconds between parachains, but for external ecosystems like Ethereum, whose state obviously can't be rolled back like a parachain's, finality will need to be much longer (60 minutes was suggested in the whitepaper), as discussed in more detail in part three.
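As a rough illustration of "agreeing on chains rather than blocks", here is a toy Python sketch. It is simplified to a single chain, so a height stands in for a block, and the data structures are hypothetical, not Polkadot's implementation:

```python
def finalised_height(attestations, num_validators):
    """attestations: validator id -> highest block height it attests to.
    A block is final once more than 2/3 of validators attest to it or a
    descendant; all its ancestors become final at the same moment."""
    threshold = (2 * num_validators) // 3 + 1   # strictly more than 2/3
    heights = sorted(attestations.values(), reverse=True)
    if len(heights) < threshold:
        return None                             # nothing final yet
    return heights[threshold - 1]               # supported by >= threshold validators

atts = {i: 42 for i in range(7)}                # 7 of 10 validators attest to height 42
atts.update({i: 40 for i in range(7, 10)})      # the rest lag slightly behind
print(finalised_height(atts, 10))               # 42: blocks 0..42 finalised at once
```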

Avalanche

Avalanche consensus achieves finality within 3 seconds, with most transactions finalising in under 1 second, immutable and completely irreversible. Any subnet can connect directly to another without having to go through multiple hops, and any VM can talk to another VM within the same subnet as well as in external subnets. There is no need to wait for an extended period of time with a risk of rollbacks.

Results

With regards to performance, far too much emphasis is put on tps as a metric. The other equally important metric, if not more important with regards to finance, is latency. Throughput measures how many transactions can be handled in a given time, whereas latency is the amount of time it takes for an individual action to complete. It's pointless saying you can process more transactions per second than VISA when it takes 60 seconds for a transaction to complete. Low latency also greatly increases general usability and customer satisfaction; nowadays everyone expects card payments and online payments to happen instantly. Avalanche achieves the best results, scoring ✅✅✅; Cosmos comes in second with 6-second finality ✅✅; and Polkadot, with 60-second finality (which may be 60 minutes for external blockchains), scores ✅.
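A toy calculation makes the distinction concrete (the figures are illustrative, not measurements of any of the three networks):

```python
def settled(tps, finality_s, elapsed_s):
    """Transactions both submitted and final after elapsed_s seconds."""
    return max(0, elapsed_s - finality_s) * tps

# High throughput does not imply low latency: 30 seconds in, a
# 1500 tps / 60 s-finality chain has settled nothing, while a
# 4500 tps / 1 s-finality chain has settled ~130,000 transactions.
print(settled(1500, 60, 30))   # 0
print(settled(4500, 1, 30))    # 130500
```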
https://preview.redd.it/kzup5x42wpq51.png?width=1000&format=png&auto=webp&s=320eb4c25dc4fc0f443a7a2f7ff09567871648cd

Shared Security

Cosmos

Every Zone and Hub in Cosmos has its own validator set and different trust assumptions. The Cosmos team is researching a shared security model where a Hub can validate the state of connected zones for a fee, but this has not been released yet. Once available, this will make shared security optional rather than mandatory.

Polkadot

Shared Security is mandatory with Polkadot which uses a Shared State infrastructure between the Relay Chain and all of the connected parachains. If the Relay Chain must revert for any reason, then all of the parachains would also revert. Every parachain makes the same trust assumptions, and as such the relay chain validates state transition and enables seamless interoperability between them. In return for this benefit, they have to purchase DOT and win an auction for one of the available parachain slots.
However, parachains can't rely solely on the relay chain for their security. As discussed in part three, they also need to implement censorship-resistance measures and ensure that Sybil-resistance mechanisms such as proof of work or proof of stake are implemented on the parachain as well.

Avalanche

A subnet in Avalanche consists of a dynamic set of validators working together to achieve consensus on the state of a set of many blockchains, where complex rulesets can be configured to meet regulatory compliance. So unlike Cosmos, where each zone / hub has its own validators, a subnet can validate a single or many virtual machines / blockchains with a single validator set. Shared security is optional.

Results

Shared security is mandatory in Polkadot and a key design decision in its infrastructure. The relay chain validates the state transition of all connected parachains, and thus it scores ✅✅✅. Subnets in Avalanche can validate the state of either a single or many virtual machines. Each subnet can have its own token and shares a validator set, where complex rulesets can be configured to meet regulatory compliance. It scores ✅✅. Every Zone and Hub in Cosmos has its own validator set / token, but research is underway to have the Hub validate the state transition of connected zones; as this is still early in the research phase, it scores ✅ for now.
https://preview.redd.it/pbgyk3o3wpq51.png?width=1000&format=png&auto=webp&s=61c18e12932a250f5633c40633810d0f64520575

Current Adoption

Cosmos

The Cosmos project started in 2016 with an ICO held in April 2017. There are currently around 50 projects building on the Cosmos SDK; a full list can be seen here by filtering for Cosmos SDK. Not all of the projects will necessarily connect using the native Cosmos SDK and IBC, and some have forked parts of the Cosmos SDK and utilise the Tendermint consensus, such as Binance Chain, but have said they will connect in the future.

Polkadot

The Polkadot project started in 2016 with an ICO held in October 2017. There are currently around 70 projects building on Substrate; a full list can be seen here by filtering for Substrate Based. As with Cosmos, not all projects built using Substrate will necessarily connect to Polkadot, and parachains or parathreads aren't currently implemented in either the live network or the test network (Kusama) as of the time of this writing.

Avalanche

Avalanche in comparison started much later, with Ava Labs being founded in 2018. Avalanche held its ICO in July 2020. Due to the much shorter time it has been in development, the number of confirmed projects is smaller, with around 14 projects currently building on Avalanche. Due to the customisability of the platform, though, many virtual machines can be used within a subnet, making it incredibly easy to port projects over. As an example, it will launch with the Ethereum Virtual Machine, which enables byte-for-byte compatibility, and all the tooling like Metamask, Truffle etc. will work, so projects can easily move over to benefit from the performance, decentralisation and low gas fees offered. In the future, Cosmos and Substrate virtual machines could be implemented on Avalanche.

Results

Whilst it's still early for all 3 projects (and the entire blockchain space as a whole), there are currently more projects confirmed to be building on Cosmos and Polkadot, mostly due to their longer time in development. Whilst Cosmos has fewer projects than Polkadot, its zones are implemented, whereas Polkadot doesn't currently have parachains. IBC, to connect zones and hubs together, is due to launch Q2 2021. Thus both score ✅✅✅. Avalanche has been in development for a much shorter period, but is launching with an impressive feature set right from the start: the ability to create subnets, VMs, assets, NFTs, permissioned and permissionless blockchains, cross-chain atomic swaps within a subnet, smart contracts, a bridge to Ethereum etc. Applications can easily port over from other platforms and use all the existing tooling such as Metamask / Truffle etc., but benefit from the performance, decentralisation and low gas fees offered. Currently though, just based on the number of projects in comparison, it scores ✅.
https://preview.redd.it/4zpi6s85wpq51.png?width=1000&format=png&auto=webp&s=e91ade1a86a5d50f4976f3b23a46e9287b08e373

Enterprise Adoption

Cosmos

Cosmos enables permissioned and permissionless zones, which can connect to each other, with the ability to have full control over who validates the blockchain. Even for permissionless zones, each zone / hub can have its own token and is in control of who validates.

Polkadot

With Polkadot, the state transition is performed by a small, randomly assigned group of validators from the relay chain, with the possibility that state is rolled back if an invalid transaction is found on any of the other parachains. This may pose a problem for enterprises that need complete control over who performs validation for regulatory reasons. In addition, due to the limited number of parachain slots available, enterprises would have to acquire and lock up large amounts of a highly volatile asset (DOT) and risk being outbid in future auctions, finding they can no longer have their parachain validated; parathreads don't provide the guaranteed performance requirements for such applications to function.

Avalanche

Avalanche enables permissioned and permissionless subnets, and complex rulesets can be configured to meet regulatory compliance. For example, a subnet can be created where it's mandatory that all validators are from a certain legal jurisdiction, or hold a specific licence and are regulated by the SEC, etc. Subnets are also able to scale to tens of thousands of validators, and even potentially millions of nodes, all participating in consensus, so every enterprise can run their own node rather than only a small number. Enterprises don't have to hold large amounts of a highly volatile asset; instead they pay a fee in AVAX for the creation of the subnets and blockchains, which is burnt.
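To illustrate what such a ruleset might look like, here is a hypothetical Python sketch; the fields and the admission predicate are invented for illustration and are not Avalanche's actual API:

```python
from dataclasses import dataclass

@dataclass
class Validator:
    jurisdiction: str     # e.g. ISO country code
    sec_regulated: bool   # holds the required licence / SEC oversight
    stake: int            # stake in AVAX

def may_validate(v: Validator) -> bool:
    """Example subnet admission rule: US-based, SEC-regulated, minimum stake."""
    return v.jurisdiction == "US" and v.sec_regulated and v.stake >= 2000

print(may_validate(Validator("US", True, 5000)))   # True
print(may_validate(Validator("DE", True, 5000)))   # False: wrong jurisdiction
```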

Results

Avalanche provides the customisability to run private permissioned blockchains as well as permissionless ones, where the enterprise is in control of who validates the blockchain, with the ability to use complex rulesets to meet regulatory compliance; thus it scores ✅✅✅. Cosmos is also able to run permissioned and permissionless zones / hubs, so enterprises have full control over who validates a blockchain, and it scores ✅✅. Polkadot requires locking up large amounts of a highly volatile asset, with the possibility of being outbid by competitors and being unable to run the application if guaranteed performance is required, forcing a migration away. The relay chain validates the state transition and can roll back the parachain should an invalid block be detected on another parachain; thus it scores ✅.
https://preview.redd.it/li5jy6u6wpq51.png?width=1000&format=png&auto=webp&s=e2a95f1f88e5efbcf9e23c789ae0f002c8eb73fc

Interoperability

Cosmos

Cosmos will connect Hubs and Zones together through its IBC protocol (due for release in Q2 2021). Connecting to blockchains outside of the Cosmos ecosystem would either require the connected blockchain to fork its code to implement IBC, or, more likely, a custom "Peg Zone" will be created, specific to the particular blockchain it's bridging to, such as Ethereum. Each Zone and Hub has different trust levels, and connectivity between two zones can have different trust depending on which path it takes (this is discussed more in this article). Finality time is low at 6 seconds, but depending on the number of hops, this can increase significantly.

Polkadot

Polkadot's shared state means each connected parachain shares the same trust assumptions: those of the relay chain validators. If one blockchain needs to be reverted, all of them will need to be reverted. Interoperability between parachains is enabled through the Cross-Chain Message Passing (XCMP) protocol, and it is also possible to connect to other systems through bridges, which are specifically designed parachains or parathreads, each custom made to interact with another ecosystem such as Ethereum and Bitcoin. Finality time between parachains is around 60 seconds, but longer will be needed (initial figures of 60 minutes in the whitepaper) for connecting to external blockchains, thus limiting the appeal of connecting two external ecosystems together through Polkadot. Polkadot is also limited in the number of parachain slots available, limiting the number of blockchains that can be bridged. Parathreads could be used for lower-performance bridges, but the speed of future blockchains is only going to increase.

Avalanche

A subnet can validate multiple virtual machines / blockchains, and all blockchains within a subnet share the same trust assumptions / validator set, enabling cross-chain interoperability. Interoperability is also possible between any other subnet, with the hope that Avalanche will consist of thousands of subnets. Each subnet may have a different trust level, but as the primary network consists of all validators, it can be used as a source of trust if required. As Avalanche supports many virtual machines, bridges to other ecosystems are created by running the connected virtual machine. There will be an Ethereum bridge using the EVM shortly after mainnet. Finality time is much faster at sub-3 seconds (with most happening under 1 second) with no chance of rolling back, so it is more appealing when connecting to external blockchains.

Results

All 3 systems are able to perform interoperability within their ecosystem and transfer assets as well as data, and all can use bridges to connect to external blockchains. Cosmos has different trust levels between its zones and hubs, which can create issues depending on the path a message takes, and adds latency. Polkadot provides the same trust assumptions for all connected parachains but has long finality and a limited number of parachain slots available. Avalanche provides the same trust assumptions for all blockchains within a subnet, and different trust levels between subnets; however, as the primary network consists of all validators, it can be used for trust. Avalanche also has a much faster finality time, with no limitation on the number of blockchains / subnets / bridges that can be created. Overall, all three excel at interoperability within their ecosystem and each scores ✅✅.
https://preview.redd.it/ai0bkbq8wpq51.png?width=1000&format=png&auto=webp&s=3e85ee6a3c4670f388ccea00b0c906c3fb51e415

Tokenomics

Cosmos

The ATOM token is the native token for the Cosmos Hub. It is commonly mistaken for the token used throughout the Cosmos ecosystem, whereas it's just the token of one of many hubs in Cosmos, each of which has its own token. Currently ATOM has little utility, as IBC isn't released and the Hub has no connections to other zones / hubs. Once IBC is released, zones may prefer to connect to a different hub instead, in which case ATOM would not be used. ATOM isn't a fixed-cap supply token; supply will continuously increase, with yearly inflation of around 10% depending on the % staked. The current market cap for ATOM as of the time of this writing is $1 Billion with 203 million circulating supply. Rewards can be earnt through staking to offset the dilution caused by inflation. Delegators can also get slashed and lose a portion of their ATOM should the validator misbehave.
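The offset works because inflation rewards flow only to stakers. A quick back-of-the-envelope sketch (the 10% inflation figure is from above; the 65% staked ratio is an assumption for illustration):

```python
inflation = 0.10        # new ATOM minted per year, as a share of supply
staked_fraction = 0.65  # assumed share of supply that is staking

staker_yield = inflation / staked_fraction           # all inflation goes to stakers
print(f"nominal staking yield: {staker_yield:.1%}")  # ~15.4%

# A staker's share of total supply grows; a non-staker's share shrinks ~9.1%.
print(f"staker share change: {(1 + staker_yield) / (1 + inflation) - 1:+.1%}")
print(f"non-staker share change: {1 / (1 + inflation) - 1:+.1%}")
```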

Polkadot

Polkadot's native token is DOT, and it's used to secure the Relay Chain. Each parachain needs to acquire sufficient DOT to win an auction for an available parachain slot, leased for up to 24 months at a time. Parathreads have a fixed registration fee that would realistically be much lower than the cost of acquiring a parachain slot, and they compete with other parathreads in a per-block auction to have their transactions included in the next relay chain block. DOT isn't a fixed-cap supply token; supply will continuously increase, with yearly inflation of around 10% depending on the % staked. The current market cap for DOT as of the time of this writing is $4.4 Billion with 852 million circulating supply. Delegators can also get slashed and lose their DOT (potentially 100% of their DOT for serious attacks) should the validator misbehave.

Avalanche

AVAX is the native token for the primary network in Avalanche. Every validator of any subnet also has to validate the primary network and stake a minimum of 2,000 AVAX. Unlike other consensus methods, there is no limit to the number of validators, so the network can cater for tens of thousands, even potentially millions, of validators. As every validator validates the primary network, it can be a source of trust for interoperability between subnets as well as for connecting to other ecosystems, increasing the amount of AVAX paid in transaction fees. There is no slashing in Avalanche, so there is no risk of losing your AVAX when selecting a validator; instead, the rewards earnt for staking can be slashed should the validator misbehave. Because Avalanche doesn't have direct slashing, it is technically possible for someone to both stake AND deliver tokens for something like a flash loan, under the invariant that all tokens that are staked are returned, thus being able to make a profit with staked tokens outside of staking itself.
There will also be a separate subnet for Athereum, which is a 'spoon', or friendly fork, of Ethereum, benefiting from the Avalanche consensus protocol and the applications in the Ethereum ecosystem. Its native token, ATH, will be airdropped to ETH holders as well as potentially to AVAX holders. The same can be done for other blockchains.
Transaction fees on the primary network for all 3 of its blockchains, as well as subscription fees for creating a subnet and blockchain, are paid in AVAX and are burnt, creating deflationary pressure. AVAX has a fixed capped supply of 720 million tokens, creating scarcity, rather than an unlimited supply which continuously increases at a compounded rate each year like the others. Initially there will be 360 million tokens minted at mainnet, with vesting periods between 1 and 10 years and tokens gradually unlocking each quarter. The circulating supply is 24.5 million AVAX. The current market cap of AVAX is around $100 million.

Results

Avalanche's AVAX, with its fixed capped supply, deflationary pressure, very strong utility, potential to receive airdrops and low market cap, scores ✅✅✅. Polkadot's DOT also has very strong utility with the need for auctions to acquire parachain slots, but has no deflationary mechanisms, no fixed capped supply and is already valued at $4.4 billion, therefore it scores ✅✅. Cosmos's ATOM token is only for the Cosmos Hub, which will be one of many hubs in the ecosystem, and has very little utility currently (this may improve once IBC is released, and if the Cosmos Hub actually becomes the hub that people want to connect to, rather than something like Binance instead). There is no fixed capped supply and it is currently valued at $1 Billion, so it scores ✅.
https://preview.redd.it/mels7myawpq51.png?width=1000&format=png&auto=webp&s=df9782e2c0a4c26b61e462746256bdf83b1fb906
All three are excellent projects and have similarities as well as many differences. Just to reiterate, this article is not intended to be an extensive in-depth list, but rather an overview based on some of the criteria that I feel are most important. For a more in-depth view I recommend reading the articles for each of the projects linked above and coming to your own conclusions; you may have different criteria which are important to you, and score them differently. There won't be one platform to rule them all, however; some use cases are better suited to one platform over another, and it's not a zero-sum game. Blockchain is going to completely revolutionise industries and the Internet itself. The more projects researching and delivering breakthrough technology the better, each learning from the others and pushing each other to reach that goal earlier. The current market is a tiny speck of what's in store in terms of value and adoption, and it's going to be exciting to watch it unfold.
https://preview.redd.it/dbb99egcwpq51.png?width=1388&format=png&auto=webp&s=aeb03127dc0dc74d0507328e899db1c7d7fc2879
For more information see the articles below (each with additional sources at the bottom of their articles)
Avalanche, a Revolutionary Consensus Engine and Platform. A Game Changer for Blockchain
Avalanche Consensus, The Biggest Breakthrough since Nakamoto
Cosmos — An Early In-Depth Analysis — Part One
Cosmos — An Early In-Depth Analysis — Part Two
Cosmos Hub ATOM Token and the commonly misunderstood staking tokens — Part Three
Polkadot — An Early In-Depth Analysis — Part One — Overview and Benefits
Polkadot — An Early In-Depth Analysis — Part Two — How Consensus Works
Polkadot — An Early In-Depth Analysis — Part Three — Limitations and Issues
submitted by xSeq22x to CryptoCurrency

Is it rational to give up during a pandemic?

[CW: emotional outpouring]
I (22 y.o.) just found out that my cousin, who visited us last night to drink with my brother (29 y.o.), tested positive for COVID-19. I am currently living with my mother (57 y.o.) and her boyfriend (55 y.o.) supposedly to hide from the pandemic that's currently on our doorstep.
I want to blame people. Isn't that natural? I mean, I have grown hoarse admonishing my older brother for hosting family members in our house but he and my mum keep colluding to break all my quarantine precautions. Why? I don't know. Maybe it's in our blood to be disagreeable. Maybe it's because I'm the Weird Science Kid who does everything out of weirdness. No one listens to me in this house. I have lost all semblance of persuasion points ever since the pandemic started and now I can't get any of them back.
(I would like to mention that we do not live in the US so this is not a Culture War-related thing. It's just that, I live in a third world country where "honor thy father and mother" is the national motto.)
So, cool. My brother is the kind of person who has never taken any responsibility whatsoever for the consequences of his actions. He has weaseled his way out of child support and has somehow lived on allowances from his neverending stream of girlfriends since 2012. I get it. Maybe it's a coping thing, 'cause we both have ADHD but I'm the only one who bothered to get himself diagnosed.
My mum and her boyfriend support us on a household income of $8200 a year. But y'know, economics is magic and somehow we have a roof over our heads with running water six days a week.
I know why she's going along with everything my brother is doing: because she's given up. In my country, when you fire someone you are bound by law to compensate them an amount that scales linearly with the time they've been with the company, unless the employee willingly resigns. So what a lot of employers do is target the employees they want gone until they give up. My mum, being almost of retirement age (which would entitle her to yet another round of compensation, which her employer understandably doesn't want to pay), is currently undergoing this treatment, and has had five or six breakdowns in the last two weeks alone.
So yeah, she doesn't have enough emotional bandwidth to deal with all this. And well, her boyfriend is dutifully keeping out of our squabbles because, hell, why the fuck do you need to discipline a 29 year old?
Now, I didn't use to live with my family. Back in March, I had a place to myself, but the pandemic encroached too quickly and I had to flee. Was that irresponsible? Maybe. When I was five, my kidneys failed and my entire body swelled up like a balloon. Since then I've been in and out of clinics until I decided to take up swimming in college. That, plus my asthma, has made me care a little bit about my own prospects when push comes to shove.
I have moved at least three more times since then, often within two hours of hearing about a gross violation of quarantine or a new case nearby. I have grown tired, and my savings are dwindling. But y'know, you gotta continue living so here we are.
Now you might think, "OP, you're doing all right. You're a STEM major who knows how to code. Why don't you apply to remote jobs so you can keep yourself afloat?"
The trouble is, I...never actually finished my degree. Instead, in my junior year I founded a startup.
"Cool, so you're rich?"
"Nope, but I did get a PC out of it."
I'm sorry, folks. I didn't say I was good at money. Y'know, I even had the chance to invest 2000 USD on Bitcoin a couple of months before it went kaput but like most of you here, I failed.
Let's think about my options here. On one hand, medicine marches constantly and there's going to be a huge financial incentive to figure out how to treat CFS after all this. Furthermore, there is a tiny but still nontrivial chance that we're going to solve longevity within our lifetimes, so what's 25-30 years of suffering for potentially thousands of years, right?
On the other hand, there's eleventy-seven reasons why you can't be part of that Eternal 10000. Stem cell therapy today costs up to $50,000/year and it's not like we have powerful financial incentives pointed at living longer.
And the thing is, I'm seeing inpatient COVID-19 costs here hovering around $40,000 per individual. There are three of us in the house who are highly susceptible, and to be honest, with meager salaries even ONE case is enough to bankrupt all of us [1].
So anyway, two years ago I left my previous startup to look for greener pastures. I found it in February, when someone believed in me enough to (angel) invest in me! So I founded a new startup because I have to make him whole, right?
Let's face it, I still live in a third-world country. No matter how smart I am or how good I am at programming, the fact that I haven't finished my degree means I'll have a decades-long uphill battle to immigrate to a decent country AND that's if I'm going to be healthy after all this is over. Who's going to hire a doubly disabled person who can't work long hours? [2]
It feels so... wrong writing all this out. I'm supposed to be stoic, y'know. Founders are supposed to be an unstoppable force, or else we're whiny asses who probably can't do anything right if our life depended on it. In Silicon Valley, there's this weird, unspoken sense that everything you do can be used against you. "The best founders are X, the most successful startups are Y"—implying that if you're neither X nor Y you should probably just go home. There are always these slots you have to fill and character traits you have to have so VCs can trust that you can build a billion-dollar business with their money. I mean, you can't really blame them because they are moving a lot of money. VCs are pattern matchers: no one wants to waste >6 zeros on their term sheet for coke and not much else, and the less risk they need to deal with the better.
So imagine a choice between someone like me and an earnest suburban kid who's lived all his life in California and went to an Ivy League and had access to a proper hospital during the worst pandemic in recent memory. Of course you'll pick that kid. I'd pick him if I were in their shoes.
Paul Graham once distilled the worst founders into one trait: hapless. The inability to control one's environment, the complacency of letting the world have its way with you. And so by implication being the opposite, i.e., being relentlessly resourceful, is the mark of a great founder. But what if your challenge is big enough for the history books? What if it's the mental and physical deterioration of everyone you hold dear? What if shit doesn't just happen, it keeps happening day after day and there's a hungry wolf stalking you and you still have to deal with the faceless bureaucracy of the USCIS and then run a startup on top of all this during a recession?
What if...some people just aren't destined to participate in certain parts of the economy? Wouldn't that make things simpler?
In the spirit of retaining some semblance of agency, I have outlined the possible paths I can take before I hit rock bottom:
And if none of these work, well, maybe we just can't tsuyoku naritai the shit out of everything.
[1]: Does my mum have health insurance? Yes, from the government. Is the government currently dissolving said health department due to corruption? You bet.
[2]: I'm really sorry if this is incoherent, but to be clear: I'm oscillating between founding a company and getting a proper job. The reason why the latter is such a difficult option to take is because ADHD and entrepreneurship go together like salt and fries, and I'm >85% confident that it's going to be really difficult for me to survive in an office environment based on how I know I work.
submitted by MediumArtCoke to slatestarcodex

US Store Opening Info

Opening Date and Time – Friday August 14, 2020 – (8am Mountain Time)
Stores – US
Terms and Conditions
Please be aware that USPS is experiencing extended shipping delays. Please allow up to 21 days for shipping, even when using priority mail.
To deal with continued shipping delays with USPS we have decided to limit the number of orders we take through the store. Our hope is that this will allow us to process orders in a timely manner, but also have the bandwidth to follow up with customer service issues that could result from extended shipping delays. The store will close when we hit our order limit; this does not mean all items are sold out. We expect to hit this limit in the first few hours of store opening, so be sure to place your order as quickly as possible.
We will open the store again when all orders have been processed and shipping issues have been dealt with. In the meantime, you can order from several Seed Banks that have our stuff in stock! > https://www.mephistogenetics.com/stockists/usa-distributors
  1. The payment methods available for this event are limited to Debit/CC and Bitcoin. Info about payment methods - https://www.mephistogenetics.com/info/payments
  2. Please allow 10 BUSINESS days for your order to process and ship. An order status of processing means your payment has been approved and is awaiting shipment. You will receive a tracking email once your order has shipped.
  3. Orders are processed on a first come, first served, first paid, first shipped basis.
  4. Please take care when placing your order, we are not able to add on/remove items or make other changes to your order. All sales are FINAL.


Fugue State Restock
Genetic Heritage - Amnesia Haze Bx1 (Archive Seeds) x Walter White F5
Seed Type: F5 Feminised Automatic
Project introduction and Overview:
Fugue State: A Mephisto Twist on a modern classic.
Amnesia is a strain that's been held in very high regard for a long time, and with good reason too! Great growth, high yields and a strong effect equal a high-grade headstash with commercial applications. She has a lot of positives, to say the least. Although nearly everyone offers an amnesia or amnesia haze in their line-up, take a look at ours :)
We acquired several packs of Amnesia Haze BX1 from Archive Seeds back in 2015-2016 and without much delay got to popping them to see what we uncovered. We did this simultaneously against several amnesia cuts we'd run, from elite versions to the local cut, and we made seeds with all of them using our Walter White as the pollen donor. When growing out the F1s, the most consistent and quality plants were emerging from the Archive cross; we made our selection and continued on from there.
Another reason for this project was that we felt our catalogue was in need of more sativa options, and we wanted to make an ultra-high-quality sativa strain to work with more in the future. Walter White has been a firm favourite in the Mephisto house for a long time; it also has a distinct aroma and flavour profile that shares similarities with amnesia, so we felt they would be a match made in heaven, and we weren't wrong!
Fast forward 2 years and we are finally ready to drop our Amnesia/Walter project: Fugue State. An easter egg for the Breaking Bad fans out there for when Walter 'had amnesia'.
We're really stoked with the growth pattern and standout quality of Fugue State, our Walter White has added another level of resin to an already frost-ridden specimen!
For the sativa heads that want to get up to some daytime mischief, this strain is the ticket. But also, buyer beware: don't over-indulge, as she will put you on your ass.
Enjoy!
Strain behavior and structure:
Fugue State is a sativa-dominant strain that grows with very nice vigour. She forms a branchy, open structure quickly, and as she works through her cycle her leaves become more slender and fine, with sharp serrations. Despite the sativa traits she's fast flowering and matures quickly with bulk to boot, akin to something like C99. We're really happy with how she's transferred into an autoflowering variety: she will veg / pre-flower and stretch to around day 35-40 from sprout, but then transitions hard into flower. The flowers she produces are sizeable, with a compact density, and adorned with white frost. Basically she's a lovely automatic spectacle to grow and witness developing on a daily basis. She grows with a sativa structure and medium-sized internodal gaps; devoid of much foliage, her sparse leaves allow light to penetrate down and fill out lower flowers.
Fugue State is an adaptable plant that lends herself well to different training methods; however, there's not always the need to go too crazy, and some light pruning of lower branches and lower fluff, in combination with a few sessions of leaf tucking, really spreads her bud mass across her full structure very nicely indeed. Generally we've seen good mould and pest resistance; however, if she grows a really dominant, thick main cola, please take steps to ensure really good airflow and lower humidity, as mould could be an issue in a really thick and dense main cola. Fugue State progresses very quickly over the last few weeks: she will go from a lot of white pistils on the flowers, where you might be wondering how much longer she will take, to done! Quickly. Fugue State *may* also need support in these later stages.
Overall, Fugue State is a really nice variety: not difficult to grow, productive, with high-class flowers that satisfy the full spectrum of growers. She gives commercial yields of a connoisseur headstash, just like the amnesia we started with.
Full Stock List
Reserva – Double Grape, ManBearAlienPig, Toofless Alien
Medical – Canna-Cheese 1:1
Originals – Hubbabubbasmelloscope, Sour Crack, Toof Decay
Artisanals – 3 Bears OG, 4 Assed Monkey (Limited!!), Crème De La Chem, Northern Cheese Haze, Skywalker, Sour Stomper
Merch – New Sticker Pack in Stock!!
Mephisto Swag Pack – We chose 11 of our favorite Mephisto swag stickers and combined them in a single pack for purchase.
submitted by stan_mephisto to MephHeads

Technical: The Path to Taproot Activation

Taproot! Everybody wants to have it, somebody wants to make it, nobody knows how to get it!
(If you are asking why everybody wants it, see: Technical: Taproot: Why Activate?)
(Pedants: I mostly elide over lockin times)
Briefly, Taproot is that neat new thing that gets us:
So yes, let's activate taproot!

The SegWit Wars

The biggest problem with activating Taproot is PTSD from the previous softfork, SegWit. Pieter Wuille, one of the authors of the current Taproot proposal, has consistently held the position that he will not discuss activation, and will accept whatever activation process is imposed on Taproot. Other developers have expressed similar opinions.
So what happened with SegWit activation that was so traumatic? SegWit used the BIP9 activation method. Let's dive into BIP9!

BIP9 Miner-Activated Soft Fork

Basically, BIP9 has a bunch of parameters:
  1. A bit: which bit of the block header nVersion field miners set to signal readiness for the softfork.
  2. A timeout: a deadline; if the softfork has not reached its signalling threshold by then, the deployment fails.
Now there are other parameters (name, starttime) but they are not anywhere near as important as the above two.
A number that is not a parameter is 95%. Basically, activation of a BIP9 softfork is considered to actually succeed if at least 95% of blocks in a 2016-block retarget period (about 2 weeks) have the specified bit in the nVersion set. If fewer than 95% have this bit set before the timeout, then the upgrade fails and never goes into the network. This is not a parameter: it is a constant defined by BIP9, and developers using BIP9 activation cannot change it.
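In code, the check is simple. Here is a minimal Python sketch of the rule just described (BIP9 evaluates signalling per 2016-block retarget period, with a mainnet threshold of 1916 blocks, i.e. 95%; the block data here is hypothetical):

```python
WINDOW = 2016       # blocks per retarget period (~2 weeks)
THRESHOLD = 1916    # 95% of 2016, fixed by BIP9 for mainnet

def signals(n_version: int, bit: int) -> bool:
    """A block signals if nVersion's top bits are 001 and the deployment bit is set."""
    return (n_version >> 29) == 0b001 and (n_version >> bit) & 1 == 1

def locks_in(window_versions: list, bit: int) -> bool:
    """True if this retarget period reaches the 95% signalling threshold."""
    assert len(window_versions) == WINDOW
    return sum(signals(v, bit) for v in window_versions) >= THRESHOLD
```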
So, first some simple questions and their answers:

The Great Battles of the SegWit Wars

SegWit not only fixed transaction malleability, it also created a practical softforkable blocksize increase that also rebalanced weights so that the cost of spending a UTXO is about the same as the cost of creating UTXOs (and spending UTXOs is "better" since it limits the size of the UTXO set that every fullnode has to maintain).
So SegWit was written, the activation was decided to be BIP9, and then... miner signalling stalled at below 75%.
Thus were the Great SegWit Wars started.

BIP9 Feature Hostage

If you are a miner with at least 5% global hashpower, you can hold a BIP9-activated softfork hostage.
You might even secretly want the softfork to actually push through. But you might want to extract concession from the users and the developers. Like removing the halvening. Or raising or even removing the block size caps (which helps larger miners more than smaller miners, making it easier to become a bigger fish that eats all the smaller fishes). Or whatever.
With BIP9, you can hold the softfork hostage. You just hold out and refuse to signal. You tell everyone you will signal, if and only if certain concessions are given to you.
This ability by miners to hold a feature hostage was enabled because of the miner-exit allowed by the timeout on BIP9. Prior to that, miners were considered little more than expendable security guards, paid for the risk they take to secure the network, but not special in the grand scheme of Bitcoin.

Covert ASICBoost

ASICBoost was a novel way of optimizing SHA256 mining, by taking advantage of the structure of the 80-byte header that is hashed in order to perform proof-of-work. The details of ASICBoost are out of scope here, but you can read about it elsewhere.
Here is a short summary of the two types of ASICBoost, relevant to the activation discussion:
  1. Overt ASICBoost, which manipulates unused bits of the block header nVersion field.
  2. Covert ASICBoost, which manipulates the ordering of transactions within a block.
Now, "overt" means "obvious", while "covert" means hidden. Overt ASICBoost is obvious because nVersion bits that are not currently in use for BIP9 activations are usually 0 by default, so setting those bits to 1 makes it obvious that you are doing something weird (namely, Overt ASICBoost). Covert ASICBoost is non-obvious because the order of transactions in a block are up to the miner anyway, so the miner rearranging the transactions in order to get lower power consumption is not going to be detected.
Unfortunately, while Overt ASICBoost was compatible with SegWit, Covert ASICBoost was not. This is because, pre-SegWit, only the block header Merkle tree committed to the transaction ordering. However, with SegWit, another Merkle tree exists, which commits to transaction ordering as well. Covert ASICBoost would require more computation to manipulate two Merkle trees, obviating the power benefits of Covert ASICBoost anyway.
Now, miners want to use ASICBoost (indeed, about 60-70% of current miners probably use Overt ASICBoost nowadays; if you have a Bitcoin fullnode running you will see logs with lots of "60 of last 100 blocks had unexpected versions", which is exactly what you would see with the nVersion manipulation that Overt ASICBoost does). But remember: ASICBoost was, at the time, a novel improvement. Not all miners had ASICBoost hardware. Those who did, did not want it known that they had ASICBoost hardware, and wanted to do Covert ASICBoost!
But Covert ASICBoost is incompatible with SegWit, because SegWit actually has two Merkle trees of transaction data, and Covert ASICBoost works by fudging around with transaction ordering in a block, and recomputing two Merkle Trees is more expensive than recomputing just one (and loses the ASICBoost advantage).
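A minimal sketch shows where the extra work comes from. Grinding transaction order means recomputing a Merkle root per candidate ordering; under SegWit there are two trees (txids and wtxids) to recompute instead of one. This is a simplified illustration with placeholder transaction ids, not Bitcoin Core's exact byte handling:

```python
import hashlib

def dsha(b: bytes) -> bytes:
    """Bitcoin's double SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                       # Bitcoin duplicates an odd last hash
            level.append(level[-1])
        level = [dsha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txids  = [dsha(bytes([i])) for i in range(4)]         # placeholder ids
wtxids = [dsha(bytes([i + 100])) for i in range(4)]   # second tree under SegWit

# Pre-SegWit: one merkle_root() call per candidate ordering.
# Post-SegWit: two calls per ordering, erasing Covert ASICBoost's power saving.
pre_segwit  = merkle_root(txids)
post_segwit = (merkle_root(txids), merkle_root(wtxids))
```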
Of course, those miners that wanted Covert ASICBoost did not want to openly admit that they had ASICBoost hardware, they wanted to keep their advantage secret because miners are strongly competitive in a very tight market. And doing ASICBoost Covertly was just the ticket, but they could not work post-SegWit.
Fortunately, due to the BIP9 activation process, they could hold SegWit hostage while covertly taking advantage of Covert ASICBoost!

UASF: BIP148 and BIP8

When the incompatibility between Covert ASICBoost and SegWit was realized, still, activation of SegWit stalled, and miners were still not openly claiming that ASICBoost was related to non-activation of SegWit.
Eventually, a new proposal was created: BIP148. With this rule, 3 months before the end of the SegWit timeout, nodes would reject blocks that did not signal SegWit. Thus, 3 months before SegWit timeout, BIP148 would force activation of SegWit.
This proposal was not accepted by Bitcoin Core, due to the shortening of the timeout (it effectively times out 3 months before the initial SegWit timeout). Instead, a fork of Bitcoin Core was created which added the patch to comply with BIP148. This was claimed as a User Activated Soft Fork, UASF, since users could freely download the alternate fork rather than sticking with the developers of Bitcoin Core.
Now, BIP148 effectively is just a BIP9 activation, except at its (earlier) timeout, the new rules would be activated anyway (instead of the BIP9-mandated behavior that the upgrade is cancelled at the end of the timeout).
BIP148 was actually inspired by the BIP8 proposal (the link here is a historical version; BIP8 has been updated recently, precisely in preparation for Taproot activation). BIP8 is basically BIP9, but at the end of timeout, the softfork is activated anyway rather than cancelled.
This removed the ability of miners to hold the softfork hostage. At best, they can delay the activation, but not stop it entirely by holding out as in BIP9.
Of course, this implies risk that not all miners have upgraded before activation, leading to possible losses for SPV users, as well as again re-pressuring miners to signal activation, possibly without the miners actually upgrading their software to properly impose the new softfork rules.

BIP91, SegWit2X, and The Aftermath

BIP148 inspired countermeasures, possibly from the Covert ASICBoost miners, possibly from concerned users who wanted to offer concessions to miners. To this day, the common name for BIP148 - UASF - remains an emotionally-charged rallying cry for parts of the Bitcoin community.
One of these was SegWit2X. This was brokered in a deal between some Bitcoin personalities at a conference in New York, and thus part of the so-called "New York Agreement" or NYA, another emotionally-charged acronym.
The text of the NYA was basically:
  1. Set up a new activation threshold at 80% signalled at bit 4 (vs bit 1 for SegWit).
    • When this 80% signalling was reached, miners would require that bit 1 for SegWit be signalled to achieve the 95% activation needed for SegWit.
  2. If the bit 4 signalling reached 80%, increase the block weight limit from the SegWit 4000000 to the SegWit2X 8000000, 6 months after bit 1 activation.
The first item above was coded in BIP91.
Unfortunately, if you read the BIP91, independently of NYA, you might come to the conclusion that BIP91 was only about lowering the threshold to 80%. In particular, BIP91 never mentions anything about the second point above, it never mentions that bit 4 80% threshold would also signal for a later hardfork increase in weight limit.
Because of this, even though there are claims that NYA (SegWit2X) reached 80% dominance, a close reading of BIP91 shows that the 80% dominance was only for SegWit activation, without necessarily a later 2x capacity hardfork (SegWit2X).
This ambiguity of bit 4 (NYA says it includes a 2x capacity hardfork, BIP91 says it does not) has continued to be a thorn in later blocksize debates. Economically speaking, Bitcoin futures between SegWit and SegWit2X showed strong economic dominance in favor of SegWit (SegWit2X futures were traded at a fraction of the value of SegWit futures: I personally made a tidy but small amount of money betting against SegWit2X in the futures market), so suggesting that NYA achieved 80% dominance even in mining is laughable, but the NYA text that ties bit 4 to SegWit2X still exists.
Historically, BIP91 triggered, which caused SegWit to activate before the BIP148 shorter timeout. BIP148 proponents continue to hold to this day that it was the BIP148 shorter timeout and no-compromises-activate-on-August-1 stance that made miners flock to BIP91 as a face-saving tactic that actually removed the second clause of NYA. NYA supporters keep pointing to the bit 4 text in the NYA and the historical activation of BIP91 as a failed promise by Bitcoin developers.

Taproot Activation Proposals

There are two primary proposals I can see for Taproot activation:
  1. BIP8.
  2. Modern Softfork Activation.
We have discussed BIP8: roughly, it has a bit and a timeout; if 95% of miners signal the bit it activates, and at the end of the timeout it activates anyway. (EDIT: BIP8 has had recent updates: at the end of the timeout it can now activate or fail. For the most part, in the text below, "BIP8" means BIP8-and-activate-at-timeout, and "BIP9" means BIP8-and-fail-at-timeout.)
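The difference is easiest to see as a toy state machine (a sketch of the behaviour described above, not Bitcoin Core's actual code):

```python
def outcome(any_window_hit_95pct: bool, timed_out: bool, lockinontimeout: bool) -> str:
    if any_window_hit_95pct:
        return "ACTIVE"                              # miners signalled in time
    if timed_out:
        return "ACTIVE" if lockinontimeout else "FAILED"
    return "STARTED"                                 # still within the deployment window

print(outcome(False, True, lockinontimeout=False))   # FAILED -> BIP9 behaviour
print(outcome(False, True, lockinontimeout=True))    # ACTIVE -> BIP8 activate-at-timeout
```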
So let's take a look at Modern Softfork Activation!

Modern Softfork Activation

This is a more complex activation method, composed of BIP9 and BIP8 as subcomponents.
  1. First have a 12-month BIP9 (fail at timeout).
  2. If the above fails to activate, have a 6-month discussion period during which users and developers and miners discuss whether to continue to step 3.
  3. Have a 24-month BIP8 (activate at timeout).
The total above is 42 months, if you are counting: a 3.5-year worst-case activation.
The logic here is that if there are no problems, BIP9 will work just fine anyway. If there are problems, the 6-month discussion period should weed them out. Finally, miners cannot hold the feature hostage, since the 24-month BIP8 period will happen regardless.
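Here is a minimal sketch of that three-phase schedule (illustrative pseudologic only; I assume month-granularity bookkeeping rather than real block heights, and the function and state names are made up):

    # Toy timeline for Modern Softfork Activation (months, not block heights).

    def activation_phase(month, bip9_activated, community_agrees, bip8_activated):
        """bip9_activated / bip8_activated: did miners reach the threshold?
        community_agrees: outcome of the 6-month discussion period."""
        if bip9_activated or bip8_activated:
            return "ACTIVE"
        if month < 12:
            return "BIP9_SIGNALLING"      # phase 1: 12-month BIP9, fail at timeout
        if month < 18:
            return "DISCUSSION"           # phase 2: 6-month discussion period
        if not community_agrees:
            return "ABANDONED"
        if month < 42:
            return "BIP8_SIGNALLING"      # phase 3: 24-month BIP8
        return "ACTIVE"                   # BIP8 timeout: activates regardless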

PSA: Being Resilient to Upgrades

Software is very brittle.
Anyone who has been using software for a long time has experienced something like this:
  1. You hear a new version of your favorite software has a nice new feature.
  2. Excited, you install the new version.
  3. You find that the new version has subtle incompatibilities with your current workflow.
  4. You are sad and downgrade to the older version.
  5. You find out that the new version has changed your files in incompatible ways that the old version cannot work with anymore.
  6. You tearfully reinstall the newer version and figure out how to recover your lost productivity now that you have to adapt to a new workflow.
If you are a technically-competent user, you might codify your workflow into a bunch of programs. Then you upgrade one of the external pieces of software you are using, and find that it has a subtle incompatibility with your current workflow, which is based on a bunch of simple programs you wrote yourself. And if those simple programs are the basis of some important production system, you have just screwed up, because you upgraded software on an important production system.
And one of the issues with new softfork activation is that if not enough people (users and miners) upgrade to the newest Bitcoin software, the security of the new softfork rules is at risk.
Upgrading software of any kind is always a risk, and the more software you build on top of the software-being-upgraded, the greater you risk your tower of software collapsing while you change its foundations.
So if you have some complex Bitcoin-manipulating system with Bitcoin somewhere at the foundations, consider running two Bitcoin nodes:
  1. One is a "stable-version" Bitcoin node. Once it has synced, set it up to connect=x.x.x.x to the second node below (so that your ISP bandwidth is only spent on the second node). Use this node to run all your software: it is a stable version that you do not change for long periods of time. Enable txindex, disable pruning, whatever your software needs (see the sample configs after this list).
  2. The other is an "always-up-to-date" Bitcoin node. Keep its storage down with pruning (initially sync it off the "stable-version" node). You cannot use blocksonly if your "stable-version" node needs to send transactions, but otherwise this "always-up-to-date" node can be kept as a low-resource node, so you can run both nodes on the same machine.
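As a rough illustration, the two bitcoin.conf files might look like this (a sketch only; the port number and values are example assumptions, so adjust them to your setup and check your Bitcoin Core version's documentation for the exact options):

    # "stable-version" node's bitcoin.conf: runs your software, rarely upgraded.
    connect=127.0.0.1:8444   # only peer: the local always-up-to-date node (example port)
    txindex=1                # full transaction index, if your software needs it
    prune=0                  # keep all blocks

    # "always-up-to-date" node's bitcoin.conf: talks to the network, upgrade this one.
    port=8444                # non-default P2P port so both nodes share one machine (example)
    prune=550                # prune close to the minimum allowed size (MiB)
    #blocksonly=1            # only if the stable node never needs to relay transactions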
When a new Bitcoin version comes out, you just upgrade the "always-up-to-date" node. This protects you: if a future softfork activates, you will only receive valid Bitcoin blocks and transactions. Since this node has nothing running on top of it and is just a special peer of the "stable-version" node, any incompatibilities it introduces do not affect your system software.
Your "stable-version" Bitcoin node remains the same version until you are ready to actually upgrade it and are prepared to rewrite most of the software you have running on top of it due to version compatibility problems.
When upgrading the "always-up-to-date" node, you can bring it down safely and start it again later. Your "stable-version" node will keep running, disconnected from the network but otherwise still available for whatever queries. You do need some system to stop the "always-up-to-date" node if for any reason the "stable-version" node goes down (otherwise, if the "always-up-to-date" node advances its pruning window past what your "stable-version" node has synced, the "stable-version" node cannot catch up afterwards), but if you are technically competent enough to need this setup, you are technically competent enough to write such a trivial monitor program (EDIT: gmax notes you can also adjust the pruning window via RPC commands to help with this).
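Such a monitor could be as simple as the following sketch (an assumption-laden example: the -datadir paths are hypothetical, and it simply stops the pruned node whenever the stable node stops answering RPC):

    #!/usr/bin/env python3
    # Toy watchdog: stop the pruned node if the stable node goes down.
    import subprocess, time

    STABLE = ["bitcoin-cli", "-datadir=/data/stable"]        # example paths
    PRUNED = ["bitcoin-cli", "-datadir=/data/up-to-date"]

    def node_alive(cli):
        """True if the node answers a getblockcount RPC."""
        return subprocess.run(cli + ["getblockcount"],
                              capture_output=True).returncode == 0

    while True:
        if not node_alive(STABLE) and node_alive(PRUNED):
            # Stop the pruned node so its pruning window cannot advance
            # past what the stable node has synced.
            subprocess.run(PRUNED + ["stop"], capture_output=True)
        time.sleep(60)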
This recommendation is from gmaxwell on IRC, by the way.
submitted by almkglor to Bitcoin [link] [comments]

[ CryptoCurrency ] Comparison between Avalanche, Cosmos and Polkadot

[ 🔴 DELETED 🔴 ] Topic originally posted in CryptoCurrency by xSeq22x [link]
A frequent question I see being asked is how Cosmos, Polkadot and Avalanche compare? Whilst there are similarities there are also a lot of differences. This article is not intended to be an extensive in-depth list, but rather an overview based on some of the criteria that I feel are most important.
For better formatting see https://medium.com/ava-hub/comparison-between-avalanche-cosmos-and-polkadot-a2a98f46c03b
https://preview.redd.it/lg16iwk2dhq51.png?width=428&format=png&auto=webp&s=6c899ee69800dd6c5e2900d8fa83de7a43c57086

Overview

Cosmos

Cosmos is a heterogeneous network of many independent parallel blockchains, each powered by classical BFT consensus algorithms like Tendermint. Developers can easily build custom application specific blockchains, called Zones, through the Cosmos SDK framework. These Zones connect to Hubs, which are specifically designed to connect zones together.
The vision of Cosmos is to have thousands of Zones and Hubs that are Interoperable through the Inter-Blockchain Communication Protocol (IBC). Cosmos can also connect to other systems through peg zones, which are specifically designed zones that each are custom made to interact with another ecosystem such as Ethereum and Bitcoin. Cosmos does not use Sharding with each Zone and Hub being sovereign with their own validator set.
For a more in-depth look at Cosmos and provide more reference to points made in this article, please see my three part series — Part One, Part Two, Part Three
https://youtu.be/Eb8xkDi_PUg

Polkadot

Polkadot is a heterogeneous blockchain protocol that connects multiple specialised blockchains into one unified network. It achieves scalability through a sharding infrastructure with multiple blockchains running in parallel, called parachains, that connect to a central chain called the Relay Chain. Developers can easily build custom application specific parachains through the Substrate development framework.
The relay chain validates the state transition of connected parachains, providing shared state across the entire ecosystem. If the Relay Chain must revert for any reason, then all of the parachains would also revert. This is to ensure that the validity of the entire system can persist, and no individual part is corruptible. The shared state makes it so that the trust assumptions when using parachains are only those of the Relay Chain validator set, and no other. Interoperability is enabled between parachains through Cross-Chain Message Passing (XCMP) protocol and is also possible to connect to other systems through bridges, which are specifically designed parachains or parathreads that each are custom made to interact with another ecosystem such as Ethereum and Bitcoin. The hope is to have 100 parachains connect to the relay chain.
For a more in-depth look at Polkadot and provide more reference to points made in this article, please see my three part series — Part One, Part Two, Part Three
https://youtu.be/_-k0xkooSlA

Avalanche

Avalanche is a platform of platforms, ultimately consisting of thousands of subnets to form a heterogeneous interoperable network of many blockchains, that takes advantage of the revolutionary Avalanche Consensus protocols to provide a secure, globally distributed, interoperable and trustless framework offering unprecedented decentralisation whilst being able to comply with regulatory requirements.
Avalanche allows anyone to create their own tailor-made application specific blockchains, supporting multiple custom virtual machines such as EVM and WASM, written in popular languages like Go (with others coming in the future) rather than lightly used, poorly-understood languages like Solidity. This virtual machine can then be deployed on a custom blockchain network, called a subnet, which consists of a dynamic set of validators working together to achieve consensus on the state of a set of many blockchains, where complex rulesets can be configured to meet regulatory compliance.
Avalanche was built with serving financial markets in mind. It has native support for easily creating and trading digital smart assets with complex custom rule sets that define how the asset is handled and traded to ensure regulatory compliance can be met. Interoperability is enabled between blockchains within a subnet as well as between subnets. Like Cosmos and Polkadot, Avalanche is also able to connect to other systems through bridges, through custom virtual machines made to interact with another ecosystem such as Ethereum and Bitcoin.
For a more in-depth look at Avalanche and provide more reference to points made in this article, please see here and here
https://youtu.be/mWBzFmzzBAg

Comparison between Cosmos, Polkadot and Avalanche

A frequent question I see being asked is how Cosmos, Polkadot and Avalanche compare. Whilst there are similarities there are also a lot of differences. This article is not intended to be an extensive in-depth list, but rather an overview based on some of the criteria that I feel are most important. For a more in-depth view I recommend reading the articles for each of the projects linked above and coming to your own conclusions. I want to stress that it's not a case of one platform being the killer of all other platforms, far from it. There won't be one platform to rule them all, and too often tribalism has plagued this space. Blockchains are going to completely revolutionise most industries and have a profound effect on the world we know today. It's still very early in this space, with most adoption limited to speculation and trading, mainly due to the limitations of current blockchains and the current iteration of Ethereum, which all three of these platforms hope to address. For those who just want a quick summary, see the image at the bottom of the article. With that said, let's have a look.

Scalability

Cosmos

Each Zone and Hub in Cosmos is capable of around 1000 transactions per second, with bandwidth being the bottleneck in consensus. Cosmos aims to have thousands of Zones and Hubs all connected through IBC. There is no limit on the number of Zones / Hubs that can be created.

Polkadot

Parachains in Polkadot are capable of up to around 1500 transactions per second. A portion of the parachain slots on the Relay Chain will be designated as part of the parathread pool: the performance of one parachain slot is split between many parathreads, which offer lower performance and compete amongst themselves in a per-block auction to have their transactions included in the next relay chain block. The number of parachains is limited by the number of validators on the relay chain; they hope to be able to achieve 100 parachains.

Avalanche

Avalanche is capable of around 4500 transactions per second per subnet. This is based on modest hardware requirements, chosen to ensure maximum decentralisation, of just 2 CPU cores and 4 GB of memory, with a validator set of over 2,000 nodes. Performance is CPU-bound; if higher performance is required, more specialised subnets can be created with higher minimum requirements to achieve 10,000+ tps in a subnet. Avalanche aims to have thousands of subnets (each with multiple virtual machines / blockchains), all interoperable with each other. There is no limit on the number of subnets that can be created.

Results

All three platforms offer vastly superior performance to the likes of Bitcoin and Ethereum 1.0. Avalanche, with its higher transactions per second, no limit on the number of subnets / blockchains that can be created, and consensus that can scale to potentially millions of validators all participating, scores ✅✅✅. Polkadot claims to offer more tps than Cosmos, but is limited in the number of parachains (around 100), whereas with Cosmos there is no limit on the number of hubs / zones that can be created. Cosmos is limited to a fairly small validator set of around 200 before performance degrades, whereas Polkadot hopes to reach 1000 validators in the relay chain (albeit only a small number of validators are assigned to each parachain). Thus Cosmos and Polkadot both score ✅✅
https://preview.redd.it/ththwq5qdhq51.png?width=1000&format=png&auto=webp&s=92f75152c90d984911db88ed174ebf3a147ca70d

Decentralisation

Cosmos

Tendermint consensus is limited to around 200 validators before performance starts to degrade. Whilst there is the Cosmos Hub, it is one of many hubs in the network; there is no central hub, and no limit on the number of zones / hubs that can be created.

Polkadot

Polkadot has 1000 validators in the relay chain, and these are split up into small groups that validate each parachain (a minimum of 14 per parachain). The relay chain is a central point of failure, as all parachains connect to it, and the number of parachains is limited by the number of validators (they hope to achieve 100 parachains). Due to the limited number of parachain slots available, significant sums of DOT will need to be purchased to win an auction to lease a slot for up to 24 months at a time, likely restricting parachains to those with enough funds. Parathreads are, however, an alternative for those that require less and more varied performance, or that cannot secure a parachain slot.

Avalanche

Avalanche consensus can scale to tens of thousands of validators, even potentially millions of validators, all participating in consensus through repeated sub-sampling. The more validators, the faster the network becomes, as the load is split between them. There are modest hardware requirements, so anyone can run a node, and there is no limit on the number of subnets / virtual machines that can be created.
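To give a feel for the repeated sub-sampling idea, here is a heavily simplified toy simulation in the spirit of the Snowball protocol from the Avalanche family (my own sketch with made-up parameters K, ALPHA and BETA; not Ava Labs' implementation):

    # Toy repeated sub-sampling in the spirit of Snowball (illustrative only).
    import random
    from dataclasses import dataclass

    @dataclass
    class Validator:
        pref: str                     # this validator's current preference

    K, ALPHA, BETA = 20, 15, 11       # sample size, quorum size, confidence threshold

    def decide(my_pref, network):
        """Repeatedly sample K random validators; adopt the quorum preference,
        and decide once one preference wins BETA consecutive samples.
        (In the real protocol every peer runs this loop, so preferences converge.)"""
        confidence = 0
        while confidence < BETA:
            sample = random.sample(network, K)
            counts = {}
            for v in sample:
                counts[v.pref] = counts.get(v.pref, 0) + 1
            winner, votes = max(counts.items(), key=lambda kv: kv[1])
            if votes >= ALPHA:
                confidence = confidence + 1 if winner == my_pref else 1
                my_pref = winner
            else:
                confidence = 0        # no quorum this round: reset
        return my_pref

    # Example: 2000 validators, 70% already prefer "A" -> decide() returns "A".
    peers = [Validator("A" if i < 1400 else "B") for i in range(2000)]
    print(decide("B", peers))

Note that each node only ever talks to a small random sample per round, which is why the validator set can grow so large without slowing consensus down.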

Results

Avalanche offers unparalleled decentralisation, using its revolutionary consensus protocols that can scale to millions of validators all participating in consensus at the same time. There is no limit to the number of subnets and virtual machines that can be created, and they can be created by anyone for a small fee, so it scores ✅✅✅. Cosmos is limited to 200 validators, but has no limit on the number of zones / hubs that can be created, which anyone can create, and scores ✅✅. Polkadot hopes to accommodate 1000 validators in the relay chain (albeit these are split amongst the parachains). The number of parachains is limited and may be cost-prohibitive for many, and the relay chain is ultimately a single point of failure. Whilst I am definitely not saying it is centralised, and it is more decentralised than many others, in comparison between the three it scores ✅
https://preview.redd.it/lv2h7g9sdhq51.png?width=1000&format=png&auto=webp&s=56eada6e8c72dbb4406d7c5377ad15608bcc730e

Latency

Cosmos

Tendermint consensus as used in Cosmos reaches finality within 6 seconds. Cosmos consists of many Zones and Hubs that connect to each other. Communication between two zones could pass through many hubs along the way, which can add to latency depending on the path taken, as explained in part two of the articles on Cosmos. It does not need to wait for an extended period of time with risk of rollbacks.

Polkadot

Polkadot uses a hybrid consensus protocol consisting of a block production protocol, BABE, and a finality gadget called GRANDPA that works to agree on a chain, out of many possible forks, by following a simpler fork-choice rule. Rather than voting on every block, it reaches agreement on chains: as soon as more than 2/3 of validators attest to a chain containing a certain block, all blocks leading up to that one are finalised at once.
If an invalid block is detected after it has been finalised, then the relay chain would need to be reverted along with every parachain. This is particularly important when connecting to external blockchains, as those do not share the state of the relay chain and thus cannot be rolled back. The longer the time period, the more secure the network is, as there is more time for additional checks to be performed and reported, but at the expense of finality. Finality is reached within 60 seconds between parachains, but for external ecosystems like Ethereum, whose state obviously cannot be rolled back like a parachain's, finality will need to be much longer (60 minutes was suggested in the whitepaper), as discussed in more detail in part three.
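As a toy illustration of the "finalise whole chains at once" idea (my own simplification, not Polkadot's actual GRANDPA implementation):

    # Toy chain-finality rule in the spirit of GRANDPA (illustrative only).
    from collections import Counter

    def finalised_prefix(chains, n_validators):
        """chains: validator id -> list of block hashes from genesis to head.
        Returns the longest chain prefix attested by more than 2/3 of validators;
        finalising a block finalises all of its ancestors at once."""
        threshold = 2 * n_validators / 3
        prefix, depth = [], 0
        while True:
            at_depth = Counter(c[depth] for c in chains.values() if len(c) > depth)
            if not at_depth:
                return prefix
            block, support = at_depth.most_common(1)[0]
            if support <= threshold:      # need strictly more than 2/3
                return prefix
            prefix.append(block)
            depth += 1

    # Example: 4 validators, 3 of them (>2/3) extend the same chain.
    votes = {1: ["g", "a", "b"], 2: ["g", "a", "b"], 3: ["g", "a"], 4: ["g", "x"]}
    print(finalised_prefix(votes, 4))     # ['g', 'a'] -- 'b' has only 2/4 support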

Avalanche

Avalanche consensus achieves finality within 3 seconds, with most transactions finalising in under 1 second, immutable and completely irreversible. Any subnet can connect directly to another without having to go through multiple hops, and any VM can talk to another VM within the same subnet as well as in external subnets. It does not need to wait for an extended period of time with risk of rollbacks.

Results

With regards to performance, far too much emphasis is put on tps as a metric; the other equally important metric, if not more important with regards to finance, is latency. Throughput measures the amount of data that can be handled at any given time, whereas latency is the amount of time it takes for an action to complete. It is pointless saying you can process more transactions per second than VISA when it takes 60 seconds for a transaction to complete. Low latency also greatly increases general usability and customer satisfaction; nowadays everyone expects card payments and online payments to happen instantly. Avalanche achieves the best results, scoring ✅✅✅; Cosmos comes in second with 6-second finality ✅✅; and Polkadot, with 60-second finality (which may be 60 minutes for external blockchains), scores ✅
https://preview.redd.it/qe8e5ltudhq51.png?width=1000&format=png&auto=webp&s=18a2866104590f81a818690337f9121161dda890

Shared Security

Cosmos

Every Zone and Hub in Cosmos has its own validator set and different trust assumptions. Cosmos is researching a shared security model where a Hub can validate the state of connected zones for a fee, but this has not been released yet. Once available, this will make shared security optional rather than mandatory.

Polkadot

Shared security is mandatory in Polkadot, which uses a shared-state infrastructure between the Relay Chain and all of the connected parachains. If the Relay Chain must revert for any reason, then all of the parachains would also revert. Every parachain makes the same trust assumptions, and as such the relay chain validates state transitions and enables seamless interoperability between them. In return for this benefit, they have to purchase DOT and win an auction for one of the available parachain slots.
However, parachains can’t just rely on the relay chain for their security, they will also need to implement censorship resistance measures and utilise proof of work / proof of stake for each parachain as well as discussed in part three, thus parachains can’t just rely on the security of the relay chain, they need to ensure sybil resistance mechanisms using POW and POS are implemented on the parachain as well.

Avalanche

A subnet in Avalanche consists of a dynamic set of validators working together to achieve consensus on the state of a set of many blockchains, where complex rulesets can be configured to meet regulatory compliance. So unlike Cosmos, where each zone / hub has its own validators, a subnet can validate a single or many virtual machines / blockchains with a single validator set. Shared security is optional.

Results

Shared security is mandatory in Polkadot and a key design decision in its infrastructure. The relay chain validates the state transition of all connected parachains and thus scores ✅✅✅. Subnets in Avalanche can validate the state of either a single or many virtual machines. Each subnet can have its own token and shares a validator set, where complex rulesets can be configured to meet regulatory compliance. It scores ✅✅. Every Zone and Hub in Cosmos has its own validator set / token, but research is underway to have a hub validate the state transition of connected zones; as this is still early in the research phase, it scores ✅ for now.
https://preview.redd.it/0mnvpnzwdhq51.png?width=1000&format=png&auto=webp&s=8927ff2821415817265be75c59261f83851a2791

Current Adoption

Cosmos

The Cosmos project started in 2016 with an ICO held in April 2017. There are currently around 50 projects building on the Cosmos SDK; a full list can be seen here by filtering for Cosmos SDK. Not all of the projects will necessarily connect using the native Cosmos SDK and IBC, and some, such as Binance Chain, have forked parts of the Cosmos SDK and utilise Tendermint consensus but have said they will connect in the future.

Polkadot

The Polkadot project started in 2016 with an ICO held in October 2017. There are currently around 70 projects building on Substrate; a full list can be seen here by filtering for Substrate Based. As with Cosmos, not all projects built using Substrate will necessarily connect to Polkadot, and parachains and parathreads are not currently implemented in either the live network or the test network (Kusama) as of the time of this writing.

Avalanche

Avalanche in comparison started much later, with Ava Labs being founded in 2018. Avalanche held its ICO in July 2020. Due to the much shorter time it has been in development, the number of confirmed projects is smaller, with around 14 projects currently building on Avalanche. Due to the customisability of the platform, though, many virtual machines can be used within a subnet, making it incredibly easy to port projects over. As an example, it will launch with the Ethereum Virtual Machine, which enables byte-for-byte compatibility, and all the tooling like Metamask, Truffle etc. will work, so projects can easily move over to benefit from the performance, decentralisation and low gas fees offered. In the future, Cosmos and Substrate virtual machines could be implemented on Avalanche.

Results

Whilst it’s still early for all 3 projects (and the entire blockchain space as a whole), there is currently more projects confirmed to be building on Cosmos and Polkadot, mostly due to their longer time in development. Whilst Cosmos has fewer projects, zones are implemented compared to Polkadot which doesn’t currently have parachains. IBC to connect zones and hubs together is due to launch Q2 2021, thus both score ✅✅✅. Avalanche has been in development for a lot shorter time period, but is launching with an impressive feature set right from the start with ability to create subnets, VMs, assets, NFTs, permissioned and permissionless blockchains, cross chain atomic swaps within a subnet, smart contracts, bridge to Ethereum etc. Applications can easily port over from other platforms and use all the existing tooling such as Metamask / Truffle etc but benefit from the performance, decentralisation and low gas fees offered. Currently though just based on the number of projects in comparison it scores ✅.
https://preview.redd.it/rsctxi6zdhq51.png?width=1000&format=png&auto=webp&s=ff762dea3cfc2aaaa3c8fc7b1070d5be6759aac2

Enterprise Adoption

Cosmos

Cosmos enables permissioned and permissionless zones, which can connect to each other, with the ability to have full control over who validates the blockchain. For permissionless zones, each zone / hub can have its own token, and they control who validates.

Polkadot

With Polkadot, the state transition is performed by a small, randomly assigned group of validators from the relay chain, with the possibility that state is rolled back if an invalid transaction on any of the other parachains is found. This may pose a problem for enterprises that need complete control over who performs validation for regulatory reasons. In addition, due to the limited number of parachain slots available, enterprises would have to acquire and lock up large amounts of a highly volatile asset (DOT), with the possibility that they are outbid in future auctions and find they can no longer have their parachain validated, while parathreads do not provide the guaranteed performance such applications require to function.

Avalanche

Avalanche enables permissioned and permissionless subnets, and complex rulesets can be configured to meet regulatory compliance. For example, a subnet can be created where it is mandatory that all validators are from a certain legal jurisdiction, or hold a specific licence and are regulated by the SEC, etc. Subnets are also able to scale to tens of thousands of validators, even potentially millions of nodes, all participating in consensus, so every enterprise can run their own node rather than only a small number. Enterprises do not have to hold large amounts of a highly volatile asset, but instead pay a fee in AVAX for the creation of the subnets and blockchains, which is burnt.

Results

Avalanche provides the customisability to run private permissioned blockchains as well as permissionless ones, where the enterprise is in control of who validates the blockchain, with the ability to use complex rulesets to meet regulatory compliance; thus it scores ✅✅✅. Cosmos is also able to run permissioned and permissionless zones / hubs, so enterprises have full control over who validates a blockchain, and scores ✅✅. Polkadot requires locking up large amounts of a highly volatile asset, with the possibility of being outbid by competitors and, if guaranteed performance is required, being unable to run the application and having to migrate away. The relay chain validates the state transition and can roll back the parachain should an invalid block be detected on another parachain; thus it scores ✅.
https://preview.redd.it/7phaylb1ehq51.png?width=1000&format=png&auto=webp&s=d86d2ec49de456403edbaf27009ed0e25609fbff

Interoperability

Cosmos

Cosmos will connect Hubs and Zones together through its IBC protocol (due for release in Q1 2021). Connecting to blockchains outside of the Cosmos ecosystem would either require the connected blockchain to fork its code to implement IBC or, more likely, a custom "Peg Zone" will be created specifically to work with the particular blockchain it is trying to bridge to, such as Ethereum. Each Zone and Hub has different trust levels, and connectivity between two zones can have different trust depending on which path it takes (this is discussed more in this article). Finality time is low at 6 seconds, but depending on the number of hops, this can increase significantly.

Polkadot

Polkadot’s shared state means each parachain that connects shares the same trust assumptions, of the relay chain validators and that if one blockchain needs to be reverted, all of them will need to be reverted. Interoperability is enabled between parachains through Cross-Chain Message Passing (XCMP) protocol and is also possible to connect to other systems through bridges, which are specifically designed parachains or parathreads that each are custom made to interact with another ecosystem such as Ethereum and Bitcoin. Finality time between parachains is around 60 seconds, but longer will be needed (initial figures of 60 minutes in the whitepaper) for connecting to external blockchains. Thus limiting the appeal of connecting two external ecosystems together through Polkadot. Polkadot is also limited in the number of Parachain slots available, thus limiting the amount of blockchains that can be bridged. Parathreads could be used for lower performance bridges, but the speed of future blockchains is only going to increase.

Avalanche

A subnet can validate multiple virtual machines / blockchains, and all blockchains within a subnet share the same trust assumptions / validator set, enabling cross-chain interoperability. Interoperability is also possible between any other subnet, with the hope that Avalanche will consist of thousands of subnets. Each subnet may have a different trust level, but as the primary network consists of all validators, it can be used as a source of trust if required. As Avalanche supports many virtual machines, bridges to other ecosystems are created by running the connected virtual machine. There will be an Ethereum bridge using the EVM shortly after mainnet. Finality time is much faster at sub 3 seconds (with most happening under 1 second), with no chance of rolling back, so it is more appealing when connecting to external blockchains.

Results

All three systems are able to perform interoperability within their ecosystem, transferring assets as well as data, and can use bridges to connect to external blockchains. Cosmos has different trust levels between its zones and hubs, which can create issues depending on the path taken, with additional latency added. Polkadot provides the same trust assumptions for all connected parachains, but has long finality and a limited number of parachain slots available. Avalanche provides the same trust assumptions for all blockchains within a subnet, and different trust levels between subnets; however, as the primary network consists of all validators, it can be used for trust. Avalanche also has a much faster finality time, with no limitation on the number of blockchains / subnets / bridges that can be created. Overall, all three blockchains excel at interoperability within their ecosystem and each score ✅✅.
https://preview.redd.it/l775gue3ehq51.png?width=1000&format=png&auto=webp&s=b7c4b5802ceb1a9307bd2a8d65f393d1bcb0d7c6

Tokenomics

Cosmos

The ATOM token is the native token of the Cosmos Hub. It is commonly mistaken as the token used throughout the Cosmos ecosystem, whereas it is just used for one of many hubs in Cosmos, each with their own token. Currently ATOM has little utility, as IBC is not released and the hub has no connections to other zones / hubs. Once IBC is released, zones may prefer to connect to a different hub instead, in which case ATOM is not used. ATOM is not a fixed-cap supply token; supply will continuously increase, with yearly inflation of around 10% depending on the % staked. The current market cap for ATOM as of the time of this writing is $1 billion, with 203 million circulating supply. Rewards can be earnt through staking to offset the dilution caused by inflation. Delegators can also get slashed and lose a portion of their ATOM should their validator misbehave.
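As a rough worked example of how staking rewards offset inflation dilution (illustrative numbers only: the ~10% inflation and 203 million supply come from the text above, while the 70% staked fraction is my own assumption):

    # Toy dilution arithmetic for an inflationary staking token (illustrative).
    supply = 203_000_000      # circulating supply
    inflation = 0.10          # ~10% yearly inflation
    staked_fraction = 0.70    # assumed fraction of supply staked

    new_tokens = supply * inflation
    # Newly minted tokens go to stakers, so a staker's nominal yield is:
    staking_yield = inflation / staked_fraction                 # ~14.3%
    # A non-staker's holdings shrink relative to total supply:
    non_staker_dilution = 1 - supply / (supply + new_tokens)    # ~9.1%
    print(f"staker yield ~{staking_yield:.1%}, non-staker dilution ~{non_staker_dilution:.1%}")

In other words, stakers roughly keep pace with (or beat) inflation, while holders who do not stake are steadily diluted.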

Polkadot

Polkadot’s native token is DOT and it’s used to secure the Relay Chain. Each parachain needs to acquire sufficient DOT to win an auction on an available parachain lease period of up to 24 months at a time. Parathreads have a fixed fee for registration that would realistically be much lower than the cost of acquiring a parachain slot and compete with other parathreads in a per-block auction to have their transactions included in the next relay chain block. DOT isn’t a fixed capped supply token and supply will continuously increase with a yearly inflation of around 10% depending on the % staked. The current market cap for DOT as of the time of this writing is $4.4 Billion with 852 million circulating supply. Delegators can also get slashed and lose their DOT (potentially 100% of their DOT for serious attacks) should the validator misbehave.

Avalanche

AVAX is the native token of the primary network in Avalanche. Every validator of any subnet also has to validate the primary network and stake a minimum of 2000 AVAX. Unlike other consensus methods, there is no limit to the number of validators, so this can cater for tens of thousands, even potentially millions, of validators. As every validator validates the primary network, it can be a source of trust for interoperability between subnets as well as for connecting to other ecosystems, thus increasing the amount of AVAX paid in transaction fees. There is no slashing in Avalanche, so there is no risk of losing your AVAX when selecting a validator; instead, the rewards earnt for staking can be slashed should the validator misbehave. Because Avalanche does not slash stakes directly, it is technically possible for someone to both stake AND deliver tokens for something like a flash loan, under the invariant that all staked tokens are returned, thus being able to make a profit with staked tokens outside of staking itself.
There will also be a separate subnet for Athereum, which is a 'spoon', or friendly fork, of Ethereum, benefiting from the Avalanche consensus protocol and the applications of the Ethereum ecosystem. Its native token ATH will be airdropped to ETH holders, and potentially to AVAX holders as well. This can be done for other blockchains too.
Transaction fees on the primary network for all three of its blockchains, as well as subscription fees for creating a subnet and blockchain, are paid in AVAX and are burnt, creating deflationary pressure. AVAX has a fixed capped supply of 720 million tokens, creating scarcity, rather than an unlimited supply which continuously increases at a compounded rate each year like the others. Initially there will be 360 million tokens minted at mainnet, with vesting periods between 1 and 10 years and tokens gradually unlocking each quarter. The circulating supply is 24.5 million AVAX, with tokens gradually released each quarter. The current market cap of AVAX is around $100 million.

Results

Avalanche’s AVAX with its fixed capped supply, deflationary pressure, very strong utility, potential to receive air drops and low market cap, means it scores ✅✅✅. Polkadot’s DOT also has very strong utility with the need for auctions to acquire parachain slots, but has no deflationary mechanisms, no fixed capped supply and already valued at $3.8 billion, therefore scores ✅✅. Cosmos’s ATOM token is only for the Cosmos Hub, of which there will be many hubs in the ecosystem and has very little utility currently. (this may improve once IBC is released and if Cosmos hub actually becomes the hub that people want to connect to and not something like Binance instead. There is no fixed capped supply and currently valued at $1.1 Billion, so scores ✅.
https://preview.redd.it/zb72eto5ehq51.png?width=1000&format=png&auto=webp&s=0ee102a2881d763296ad9ffba20667f531d2fd7a
All three are excellent projects and have similarities as well as many differences. Just to reiterate, this article is not intended to be an extensive in-depth list, but rather an overview based on some of the criteria that I feel are most important. For a more in-depth view I recommend reading the articles for each of the projects linked above and coming to your own conclusions; you may have different criteria which are important to you, and score them differently. There won't be one platform to rule them all, with some use cases better suited to one platform over another, and it's not a zero-sum game. Blockchain is going to completely revolutionise industries and the Internet itself. The more projects researching and delivering breakthrough technology the better, each learning from the others and pushing each other to reach that goal earlier. The current market is a tiny speck of what's in store in terms of value and adoption, and it's going to be exciting to watch it unfold.
https://preview.redd.it/fwi3clz7ehq51.png?width=1388&format=png&auto=webp&s=c91c1645a4c67defd5fc3aaec84f4a765e1c50b6
xSeq22x your post has been copied because one or more comments in this topic have been removed. This copy will preserve the unmoderated topic. If you would like to opt-out, please send a message using [this link].
submitted by anticensor_bot to u/anticensor_bot [link] [comments]

How EpiK Protocol “Saved the Miners” from Filecoin with the E2P Storage Model?


https://preview.redd.it/n5jzxozn27v51.png?width=2222&format=png&auto=webp&s=6cd6bd726582bbe2c595e1e467aeb3fc8aabe36f
On October 20, Eric Yao, Head of EpiK China, and Leo, Co-Founder & CTO of EpiK, visited the Deep Chain Online Salon and discussed "How EpiK Protocol 'saved the miners' eliminated by Filecoin by launching the E2P storage model". The following is a transcript of the sharing.
Sharing Session
Eric: Hello, everyone, I'm Eric. I graduated from the School of Information Science, Tsinghua University. My Master's research was on data storage and big data computing, and I published a number of papers at top industry conferences.
Since 2013, I have invested in Bitcoin, Ethereum, Ripple, Dogecoin, EOS and other well-known blockchain projects, and have settled into the chain circle as an early technology-focused investor and industry observer with 2 years of blockchain experience. I am also a blockchain community initiator and technology evangelist.
Leo: Hi, I'm Leo, the CTO of EpiK. Before I was involved in founding EpiK, I spent 3 to 4 years working on blockchain: public chains, wallets, browsers, decentralized exchanges, task distribution platforms, smart contracts, etc., and I have made some great products. EpiK is an answer to the question we have been asking for years about how blockchain should land, and we hope that EpiK is fortunate enough to be an answer for you as well.
Q & A
Deep Chain Finance:
First of all, let me ask Eric: on October 15, Filecoin's mainnet launched, which aroused everyone's attention, but at the same time the calls for forks within Filecoin have never stopped. The EpiK Protocol is one of them. What I want to know is: what kind of project is EpiK Protocol? For what reason did you choose to fork in the first place? And what are the differences between the forked project and Filecoin itself?
Eric:
First of all, let me answer the first question, what kind of project is EpiK Protocol.
With the Fourth Industrial Revolution already upon us, comprehensive intelligence is one of the core goals of this stage, and the key to comprehensive intelligence is how to make machines understand what humans know and learn new knowledge based on what they already know. Building knowledge graphs at scale is a key step towards full intelligence.
In order to solve the many challenges of building large-scale knowledge graphs, the EpiK Protocol was born. EpiK Protocol is a decentralized, hyper-scale knowledge graph that organizes and incentivizes knowledge through decentralized storage technology, decentralized autonomous organizations, and generalized economic models. Members of the global community will expand the horizons of artificial intelligence towards a smarter future by organizing all areas of human knowledge into a knowledge graph that will be shared and continuously updated as the eternal knowledge vault of humanity.
And then, for what reason was the fork chosen in the first place?
EpiK’s project founders are all senior blockchain industry practitioners and have been closely following the industry development and application scenarios, among which decentralized storage is a very fresh application scenario.
However, during Filecoin's development, the team found that, due to certain design mechanisms and historical reasons, Filecoin had deviated from the original intention of the project. For example, the overly harsh penalty mechanism actually weakens security, and computing power competition has led to a computing power monopoly by large miners, who monopolize the packaging rights and can inflate ("brush") their computing power by uploading useless data themselves.
These problems will cause the data environment on Filecoin to get worse and worse, leading to a lack of real value in the on-chain data, high data redundancy, and difficulty commercializing the project.
In response to the above problems, the project proposes to introduce multiple roles and a decentralized collaboration platform (a DAO) to ensure the high value of on-chain data through a reasonable economic model and incentive mechanism, and to store the high-value data, the knowledge graph, on the blockchain through decentralized storage. This largely solves both the lack of value of on-chain data and the monopoly of large miners' computing power.
Finally, what differences exist between the forked project and Filecoin itself?
Building on the above issues, EpiK's design is very different from Filecoin's. First of all, EpiK is more focused in terms of business model, and it faces a different market and track from the cloud storage market where Filecoin sits, because decentralized storage has no advantage over professional centralized cloud storage in terms of storage cost and user experience.
EpiK focuses on building a decentralized knowledge graph, which reduces data redundancy and safeguards the value of the data on the distributed storage chain while preventing the knowledge graph from being tampered with by a few people, thus making the commercialization of the entire project reasonable and feasible.
From the perspective of ecosystem construction, EpiK treats miners more kindly and solves Filecoin's pain points to a large extent. Firstly, it changes Filecoin's storage collateral and commitment collateral into a one-time collateral.
Miners participating in EpiK Protocol are only required to pledge 1000 EPK per miner, and only once before mining, not in each sector.
To put 1000 EPK in perspective, you only need to participate in pre-mining for about 50 days to earn the tokens used for pledging. The EPK pre-mining campaign is currently underway, running from early September to December, with a daily release of 50,000 ERC-20 standard EPK; the pre-mining nodes whose applications are approved divide these tokens according to each day's mining ratio, and the tokens can be exchanged 1:1 for mainnet tokens after launch. This move will continue to expand the number of miners eligible to participate in EPK mining.
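A rough sanity check of the "about 50 days" figure, assuming (hypothetically) that a node earns an equal share of the daily release among some number of active nodes:

    # Toy estimate: days of pre-mining needed to earn the 1000 EPK pledge.
    daily_release = 50_000      # EPK released per day during pre-mining
    active_nodes = 2_500        # hypothetical number of approved nodes
    pledge = 1_000              # one-time pledge per miner

    daily_share = daily_release / active_nodes   # 20 EPK/day in this scenario
    days_needed = pledge / daily_share           # = 50 days
    print(f"~{days_needed:.0f} days to earn the pledge")

The real figure depends on how many nodes share the daily release and on each node's mining ratio; the numbers above are chosen only to match the 50-day claim.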
Secondly, EpiK has a more lenient penalty mechanism, different from Filecoin's official consensus, storage and contract penalties, because data in the protocol can only be uploaded by field experts: the "Expert to Person" (E2P) model. Every piece of data is backed up across miners, which means that if one or more miners go offline, it does not have much impact on the network; a miner who fails to submit the proof of spacetime in time due to being offline only forfeits the effective computing power of that sector, not the pledged coins.
If the miner can re-submit the proof of spacetime within 28 days, he regains the power.
Unlike Filecoin's 32 GB sectors, EpiK's encapsulated sectors are smaller, only 8 MB each, which largely solves Filecoin's sector space wastage problem, and all miners have the opportunity to complete encapsulation quickly, which is very friendly to miners with little computing power.
The data volume and quality constraints will also ensure that no large gap in effective computing power opens up between large and small miners.
Finally, unlike Filecoin’s P2P data uploading model, EpiK changes the data uploading and maintenance to E2P uploading, that is, field experts upload and ensure the quality and value of the data on the chain, and at the same time introduce the game relationship between data storage roles and data generation roles through a rational economic model to ensure the stability of the whole system and the continuous high-quality output of the data on the chain.
Deep Chain Finance:
Eric, on the eve of Filecoin's mainnet launch, issues such as Filecoin's pre-collateral aroused a lot of controversy among the miners. In your opinion, what kind of impact will Filecoin's launch have on itself and on the whole distributed storage ecosystem? Do you think the current confusing FIL prices are reasonable, and what should the normal price of FIL be?
Eric:
The Filecoin mainnet has launched and many potential problems have been exposed, such as the aforementioned high pre-collateral, the storage resource waste and computing power monopoly caused by unreasonable sector encapsulation, and the harsh penalty mechanism.
These problems are quite serious and will greatly affect the development of the Filecoin ecosystem; here are two examples to illustrate. Consider the computing power monopoly of big miners. Once big miners have monopolized computing power, a very delicate state emerges: when a miner stores a file for an ordinary user, there is no way to verify on-chain whether what he stored was uploaded by someone else or by himself. I can fake another identity and upload data for myself, which means that when any miner chooses which data to store, he has only one goal: to inflate his computing power as fast as possible.
In terms of computing power, there is no difference between storing other people's data and storing my own. When I store someone else's data, I do not know that data; it is somewhere in the world, and the bandwidth quality between me and its owner may not be good enough.
The best option is to store my own local data, which makes economic sense, and that results in no one storing anyone else's data on the chain at all. Miners only store their own data, because that is the most economical for them, and the network then has essentially no storage utility: no one is providing storage for the mass of retail users.
The harsh penalty mechanism will also severely deplete miners' profits, because DDoS attacks are a very common technique for attackers, and a big miner can earn a very high profit in a short period of time by attacking other participants; this is profitable for all big miners.
As things stand, the vast majority of miners are actually not very well maintained, so they are not well protected against even low-grade DDoS attacks. The penalty regime is grim for them.
The contradiction between an unreasonable system and real demand will inevitably push the system to evolve in a more reasonable direction, so there will be many forked projects that are more reasonable in terms of mechanism, attracting Filecoin miners and diverting storage power.
Since each project is in the decentralized storage track, their demands on miners are similar or even mutually compatible, so miners will tend towards the forked projects with better economic benefits and business scenarios, filtering out the projects with real value on the ground.
As for the chaotic FIL price: FIL is a project that has gone through several years and carries too many expectations, so one can only say that the current situation has its own reasons for existing. There is no way to predict a reasonable price for FIL, because in the long run one has to consider whether the project can land commercially and the actual value of the on-chain data. In other words, we need to keep observing whether Filecoin becomes a computing power game or a real value carrier.
Deep Chain Finance:
Leo, we just mentioned that the pre-collateral issue of Filecoin caused dissatisfaction among miners, and after Filecoin launched its mainnet, the second round of space race test coins were directly turned into real coins, and the official selling of FIL hit the market; many miners said they were betrayed. What I want to know is: EpiK's main motto is "save the miners eliminated by Filecoin". How does EpiK deal with the various problems of Filecoin, and how will it achieve this "saving"?
Leo:
Originally Filecoin’s tacit approval of the computing power makeup behavior was to declare that the official directly chose to abandon the small miners. And this test coin turned real coin also hurt the interests of the loyal big miners in one cut, we do not know why these low-level problems, we can only regret.
EpiK didn’t do it to fork Filecoin, but because EpiK to build a shared knowledge graph ecology, had to integrate decentralized storage in, so the most hardcore Filecoin’s PoRep and PoSt decentralized verification technology was chosen. In order to ensure the quality of knowledge graph data, EpiK only allows community-voted field experts to upload data, so EpiK naturally prevents miners from making up computing power, and there is no reason for the data that has no value to take up such an expensive decentralized storage resource.
With the inability to make up computing power, the difference between big miners and small miners is minimal when the amount of knowledge graph data is small.
We can’t say that we can save the big miners, but we are definitely the optimal choice for the small miners who are currently in the market to be eliminated by Filecoin.
Deep Chain Finance:
Let me ask Eric: according to the EpiK protocol, EpiK adopts the E2P model, under which only field experts who have been voted in can upload data. This is very different from Filecoin's P2P model, which allows individuals to upload data as they wish. In your opinion, what are the advantages of the E2P model? If only voted experts can upload data, does that mean the EpiK protocol is not available to everyone?
Eric:
First, let me explain the advantages of the E2P model over the P2P model.
There are five roles in the DAO ecosystem: miners, coin holders, field experts, bounty hunters and gateways. These five roles share the EPK generated every day once the main network launches.
Miners receive 75% of the EPK, field experts receive 9%, and voting users share 1%.
The other 15% of the EPK fluctuates based on the daily traffic on the network; this 15% is partly a game between the miners and the field experts.
The following describes the relationship between these two roles.
The first group of field experts is selected by the Foundation. They cover different areas of knowledge (a wide range, including not only serious subjects but also home, food, travel, etc.). This group of field experts can recommend the next group, and a recommended expert only needs to receive 100,000 EPK in votes to become a field expert.
The field expert's role is to submit high-quality data to the miners, who are responsible for encapsulating this data into blocks.
Network activity is judged by the amount of EPK pledged across the whole network for daily traffic (1 EPK pledged = 10 MB/day of traffic); a higher percentage indicates higher data demand, which requires the miners to increase bandwidth quality.
If data demand decreases, field experts are instead required to provide higher-quality data. This is similar to a library: more visitors means more seats are needed, i.e. paying the miners to upgrade the bandwidth.
When there are fewer visitors, more money is needed to buy better-quality books to attract visitors, i.e. money goes to bounty hunters and field experts to generate more high-quality knowledge graph data. The game between miners and field experts is the most important game in the ecosystem, unlike the game between the authorities and big miners in the Filecoin ecosystem.
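A toy reading of the 1 EPK = 10 MB/day rule (the pledged amount below is my own illustrative number, not project data):

    # Toy estimate of network traffic demand from pledged EPK (illustrative).
    pledged_epk = 120_000            # hypothetical EPK pledged for traffic
    mb_per_epk_day = 10              # protocol rule: 1 EPK = 10 MB/day

    daily_traffic_mb = pledged_epk * mb_per_epk_day   # 1,200,000 MB/day
    daily_traffic_tb = daily_traffic_mb / 1_000_000   # ~1.2 TB/day
    print(f"pledges cover ~{daily_traffic_tb:.1f} TB of traffic per day")

The higher this pledged total, the more bandwidth miners are being asked to serve, which is how the 15% floating reward tilts between miners and field experts.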
The game relationship between data producers and data storers, together with a more rational economic model, will inevitably lead the E2P model to generate stored on-chain data of much higher quality than the P2P model, with better bandwidth quality for data access, resulting in greater business value and better landing scenarios.
I will then answer the question of whether this means the EpiK protocol is not universally accessible.
The E2P model only constrains the quality of the data generated and stored, not the roles in the ecosystem. On the contrary, with the introduction of the DAO model, the variety of roles in the EpiK ecosystem is not limited, and it includes roles for ordinary people (anyone competent can take on bounty-hunter tasks), giving everyone a logical way to participate in the system.
For example, a miner with computing power can provide storage, a person with knowledge of a certain domain can apply to become an expert (this includes history, technology, travel, comics, food, etc.), and a person willing to label and correct data can become a bounty hunter.
Efficient support tools from the project will lower the barriers to entry for the various roles, allowing different people to do their part in the system and together contribute to the ongoing generation of a high-quality decentralized knowledge graph.
Deep Chain Finance:
Leo, some time ago EpiK released a technical whitepaper and an economic whitepaper, explaining the EpiK concept from the perspectives of technology and economic model respectively. What I would like to ask is: what are the shortcomings of the current distributed storage projects, and how will the EpiK protocol improve on them?
Leo:
Distributed storage can easily be confused with systems like Ali's OceanDB, but in the blockchain field we should focus on decentralized storage first.
There is a big problem with the decentralized storage on the market now, which is "why not eat meat porridge?" (a Chinese idiom for being out of touch with what ordinary users actually need, akin to "let them eat cake").
How to understand this? By its technical principles, decentralized storage is not cheaper than centralized storage; if it were, the centralized storage being compared against would have to be rubbish.
So what incentive does the average user have to spend more money storing data on decentralized storage?
Is it safer?
Miners on decentralized storage can shut down at any time, so it is by no means safer than keeping a copy with both Ali and Amazon.
More private?
There is no difference between storing encrypted data on decentralized storage and storing it encrypted on Amazon.
Faster?
The scattered bandwidth of decentralized storage nodes simply doesn't compare to the fibre in a centralized server room. This is the root problem of the business model: no one is using it, no one is buying it, so what is the big vision?
EpiK's goal is to guide all community participants in jointly building and sharing field knowledge graph data, which is the best way for machines to understand human knowledge. The more knowledge graph data there is, the more knowledge a robot has, and the more intelligent it becomes, exponentially so. In other words, EpiK uses decentralized storage technology to capture the value of exponentially growing data with linearly growing hardware costs, and that is where the buy-in for EPK comes from.
Organized data is worth far more than organized hard drives, and there will be demand for EPK when robots demand intelligence.
Deep Chain Finance:
Let me ask Leo: how many forked projects does Filecoin have so far, roughly? Do you think there will be more or fewer waves of forks after the mainnet launch? Have the requirements of miners at large changed when it comes to participation?
Leo:
We don’t have specific statistics, now that the main network launches, we feel that forking projects will increase, there are so many restricted miners in the market that they need to be organized efficiently.
However, we currently see that most forked projects are simply modifying the parameters of Filecoin’s economy model, which is undesirable, and this level of modification can’t change the status quo of miners making up computing power, and the change to the market is just to make some of the big miners feel more comfortable digging up, which won’t help to promote the decentralized storage ecology to land.
We need more reasonable landing scenarios so that idle mining resources can be turned into effective productivity, pitching a 100x coin instead of committing to one Fomo sentiment after another.
Deep Chain Finance:
How far along is the EpiK Protocol project, Eric? What other big moves are coming in the near future?
Eric:
The development of the EpiK Protocol is divided into 5 major phases:
  1. Phase I: the test network "Obelisk".
  2. Phase II: Main Network 1.0 "Rosetta".
  3. Phase III: Main Network 2.0 "Hammurabi".
  4. Phase IV: enriching the knowledge graph toolkit.
  5. Phase V: enriching the knowledge graph application ecosystem.
We are currently in the Phase I test network, "Obelisk". Anyone can sign up to take part in the testnet pre-mining and earn ERC20 EPK tokens, which will be exchangeable one-to-one for mainnet tokens after launch.
We have recently launched the ERC20 EPK on Uniswap; you can buy and sell it freely there, or download our EpiK mobile wallet.
In addition, we will soon launch the EpiK Bounty platform, and we welcome all community members to do tasks together to build the EpiK community. At the same time, we are also pushing forward with listings on centralized exchanges.
Users’ Questions
User 1:
Some KOLs have said that Filecoin has already consumed the next few years of its value, so it is bound to plunge. What do you think?
Eric:
First of all, market judgments have to be matched to cycles. If you are not optimistic about FIL, the first thing to work out is whether you are pessimistic about the project's economic model or about the distributed storage track as a whole.
We are very confident in the distributed storage track itself. It will certainly go through cycles of growth and decline, and that is how the better projects get selected.
Since the existing pool of miners and the computing power already deployed are fixed, and since EpiK miners and FIL miners are compatible, miners can at any time switch to whichever project is more promising and more economically viable.
As for the claim that Filecoin has consumed the next few years of its value and must therefore plunge: the plunge is not a prediction; in this industry you have to keep learning, iterating and making value judgments. Market sentiment up or down is only one aspect, and there are far more important factors - the big washout in March this year, for example, can at most be said to have slowed the FIL community's development. Prices are, in the end, unpredictable.
User2:
Actually, in the end, if there are no applications and no one really uploads data, the market value will drop. So what are the real-world applications of EpiK?
Leo: The best and most direct application of EpiK's knowledge graph is question-and-answer systems, which can take the form of an intelligent legal advisor, an intelligent medical advisor, an intelligent chef, an intelligent tour guide, an intelligent game strategy guide, and so on.
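To make that idea concrete, here is a minimal sketch of how a Q&A system can sit on top of a knowledge graph stored as (subject, relation, object) triples. The schema, data and function names are hypothetical illustrations, not EpiK's actual stack:
```python
# Minimal sketch: answering questions from a knowledge graph of
# (subject, relation, object) triples. Illustrative only; the schema
# and data here are hypothetical, not EpiK's actual format.

triples = [
    ("ibuprofen", "treats", "headache"),
    ("ibuprofen", "treats", "fever"),
    ("ibuprofen", "max_daily_dose_otc", "1200 mg"),
]

def answer(subject, relation):
    """Return every object linked to `subject` by `relation`."""
    return [o for s, r, o in triples if s == subject and r == relation]

# An "intelligent medical advisor" style query:
print(answer("ibuprofen", "treats"))             # ['headache', 'fever']
print(answer("ibuprofen", "max_daily_dose_otc")) # ['1200 mg']
```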
submitted by EpiK-Protocol to u/EpiK-Protocol [link] [comments]

Please recommend me a Seedbox

Are you OK with direct message offers from vendors?
NO
What are your main reasons for getting a seedbox?
I want a SaaS solution. I work as an SE so I hate fixing stuff when I get home. Also as an easy way to protect myself from DMCA letters, etc.
Do you have any specific requirements?
Plex, and whatever software would allow for auto-downloads of newly released episodes, etc.
I prefer to just direct-stream with Plex. While transcoding would be nice, I probably don't want to pay for it.
Are you looking for a shared or dedicated solution?
Shared.
Are you looking for managed or unmanaged solution?
Either. Although managed would make my life easier. Root would be cool but I really don't need it.
Please describe your Seedbox experience:
I ran a local self-hosted box at one point without any issues. Wasn't the smoothest workflow but I got tired of putting time into it.
Currently with a provider or used one before?
Just DIY.
What is your Linux experience?
A boatload and a half. A BSD variant is fine too.
What is your monthly budget?
This is hard because it depends what gets bundled into the service. <$50. Cheaper options are highly preferred.
Payment preferences or requirements?
Card. Bitcoin.
A big no on paypal and bank account transfers.
Do you need support for public trackers?
Yes. :( My old private tracker shut down since I last used it.
Routing: Tell us your continent:
N. America.
What kind of connection speeds do you need?
Download to seedbox can really be anything that isn't dialup speeds. I can wait.
Streaming from the seedbox to local needs to handle 1080p and 4K (the former being more common).
How much monthly bandwidth is needed?
1TB? That's probably the absolute upper limit because I don't even hit that with streaming services. 500GB would probably be fine.
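(A quick back-of-the-envelope check on that figure, assuming ballpark direct-stream bitrates of roughly 8 Mbit/s for 1080p and 25 Mbit/s for 4K; the viewing hours are made-up examples:)
```python
# Rough monthly bandwidth for direct streaming from a seedbox.
# Bitrates are ballpark assumptions: ~8 Mbit/s (1080p), ~25 Mbit/s (4K).

def monthly_gb(hours, mbit_per_s):
    return hours * 3600 * mbit_per_s / 8 / 1000  # Mbit -> MB -> GB

total = monthly_gb(40, 8) + monthly_gb(10, 25)   # 40 h 1080p + 10 h 4K
print(f"~{total:.0f} GB/month")                  # ~256 GB/month, well under 1TB
```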
How much disk space do you need?
2TB+. If I can expand over time without losing current data, that would be a MASSIVE plus.
List some features you are looking for:
Plex, and whatever software would allow for auto-downloads of newly released episodes, etc. I'm a big fan of Transmission but it's not a hard requirement.
Site access over Tor only would be nice. Or a VPN to get access into the box. 2FA is a HUGE plus. But I also have a feeling this is far outside the scope of what seedboxes generally offer.
Anything else you think we should know?
I have on-site storage that can be used. Not a data-hoarder when it comes to media. I can always download again, but I still would like a decent amount of space.
submitted by Environmental-Cap766 to seedboxes [link] [comments]

[Hiring] Looking to hire a Mac OS and IOS app developer for two different tasks.

Task 1:
I am looking for someone who can design/write an app, or find me one, that will actually work on macOS Big Sur on my iMac. I am looking to essentially have an app that IS my iPhone when I launch it: I want to be able to use my keyboard as the input when typing and my camera as the iPhone's camera while the app is running. I have tried apps like iPadian and such, but they all share a flaw in that some apps only work on an actual iPhone. I want to be able to launch this app and see exactly what I would see on my iPhone. I broke my iPhone, so this would have to be done using a backup of the iPhone.
Task 2:
I am looking for somebody to design an app that would allow me to see which devices are pulling the most bandwidth and data on my home network. This would be for my iPhone. I would then like to be able to throttle a device if I deem it is using too much, or suspend the device altogether. I would also like to be able to allocate bandwidth amounts to certain devices if, say, I needed one to download faster than all of the others.
For both tasks:
I am willing to pay up to $50 US if the apps work correctly with minimal bugs. I am open to negotiating on pricing too. I am happy to discuss more details if you are interested and can do the work. Considering my phone is broken, I have no way to log in to send payment via bitcoin or other crypto, so I would have to pay via PayPal or something equivalent that doesn't require two-factor authentication.
submitted by PenguinAgent to forhire [link] [comments]

Thanks to all who submitted questions for Shiv Malik in the GAINS AMA yesterday, it was great to see so much interest in Data Unions! You can read the full transcript here:


Gains x Streamr AMA Recap

https://preview.redd.it/o74jlxia8im51.png?width=1236&format=png&auto=webp&s=93eb37a3c9ed31dc3bf31c91295c6ee32e1582be
Thanks to everyone in our community who attended the GAINS AMA yesterday with Shiv Malik. We were excited to see that so many people attended, and we were happily overwhelmed by the number of questions we got from you on Twitter and Telegram. We decided to do a little recap of the session for anyone who missed it, and to archive some points we haven't previously discussed with our community. Happy reading, and thanks to Alexandre and Henry for having us on their channel!
What is the project about in a few simple sentences?
At Streamr we are building a real-time network for tomorrow’s data economy. It’s a decentralized, peer-to-peer network which we are hoping will one day replace centralized message brokers like Amazon’s AWS services. On top of that one of the things I’m most excited about are Data Unions. With Data Unions anyone can join the data economy and start monetizing the data they already produce. Streamr’s Data Union framework provides a really easy way for devs to start building their own data unions and can also be easily integrated into any existing apps.
Okay, sounds interesting. Do you have a concrete example you could give us to make it easier to understand?
The best example of a Data Union is the first one that has been built out of our stack. It's called Swash and it's a browser plugin.
You can download it here: http://swashapp.io/
And basically it helps you monetize the data you already generate (day in day out) as you browse the web. It's the sort of data that Google already knows about you. But this way, with Swash, you can actually monetize it yourself. The more people that join the union, the more powerful it becomes and the greater the rewards are for everyone as the data product sells to potential buyers.
Very interesting. What stage is the project/product at? It's live, right?
Yes. It's live. And the Data Union framework is in public beta. The Network is on course to be fully decentralized at some point next year.
How much can a regular person browsing the Internet expect to make for example?
So that's a great question. The answer is that no one quite knows yet. We do know that this sort of data (consumer insights) is worth hundreds of millions and really isn't available in high quality. So with a union of a few million people, everyone could be getting 20-50 dollars a year. But it'll take a few years at least to realise that growth. Of course Swash is just one data union amongst many possible others (which are now starting to get built out on our platform!)
With Swash, I believe they now have 3,000 members. They need to get to 50,000 before they become really viable but they are yet to do any marketing. So all that is organic growth.
I assume the data is anonymized btw?
Yes. And in fact there are a few privacy-protecting tools that Swash supplies to its users.
How does Swash compare to Brave?
So Brave really is about consent for people's attention and getting paid for that. They don't sell your data as such.
Swash can of course be a plugin with Brave and therefore you can make passive income browsing the internet. Whilst also then consenting to advertising if you so want to earn BAT.
Of course it's Streamr that is powering Swash. And we're looking at powering other DUs - say for example mobile applications.
The holy grail might be having already existing apps and platforms out there, integrating DU tech into their apps so people can consent (or not) to having their data sold - and then getting a cut of that revenue when it does sell.
The other thing to recognise is that the big tech companies monopolise data on a vast scale - data that we of course produce for them. That is stifling innovation.
Take for example a competitor map app. To effectively compete with Google maps or Waze, they need millions of users feeding real time data into it.
Without that - it's like Google maps used to be - static and a bit useless.
Right, so how do you convince these big tech companies that are producing these big apps to integrate with Streamr? Does it mean they wouldn't be able to monetize data as well on their end if it becomes more available through an aggregation of individuals?
If a map application does manage to scale to that level then inevitably Google buys them out - that's what happened with Waze.
But if you have a data union which bundles together the raw location data of millions of people then any application builder can come along and license that data for their app. This encourages all sorts of innovation and breaks the monopoly.
We're currently having conversations with mobile network operators to see if they want to pilot this new approach to data monetization. And that's even more exciting. Just be explicit with users: do you want to sell your data? If yes, then which data points do you want to sell?
The mobile network operator (T-Mobile, for example) then organises the sale of the data of those who consent, and everyone gets a cut.
Streamr, in this example, provides the backend to port and bundle the data, and also the token and payment rail for the payments.
So for big companies (mobile operators in this case), it's less logistics, handing over the implementation to you, and simply taking a cut?
It's a vision that we'll be able to talk about more concretely in a few weeks' time 😁
Compared to having to make sense of that data themselves (in the past) and selling it themselves
Sort of.
We provide the backend to port the data and the template smart contracts to distribute the payments.
They get to focus on finding buyers for the data and ensuring that the data that is being collected from the app is the kind of data that is valuable and useful to the world.
(Through our sister company TX, we also help build out the applications for them and ensure a smooth integration).
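To illustrate what those template contracts actually do, here is the proportional-split idea as a minimal Python sketch. Purely illustrative: the real logic lives in on-chain smart contracts, and the fee level and names below are assumed example values, not Streamr's actual parameters:
```python
# Illustrative Data Union payout: one admin fee, equal member shares.
# Not Streamr's contract code; the 5% fee is an assumed example value.

def split_revenue(amount_data, members, admin_fee=0.05):
    fee = amount_data * admin_fee
    share = (amount_data - fee) / len(members)
    payouts = {member: share for member in members}
    payouts["admin"] = fee
    return payouts

print(split_revenue(1000.0, ["alice", "bob", "carol"]))
# each member gets ~316.67 DATA, the admin gets 50.0
```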
The other thing to add is that the reason why this vision is working is that the current data economy is under attack. Not just from privacy laws such as GDPR, but also from Google shutting down cookies, bidstream data being investigated by the FTC (for example) and Apple making changes to iOS 14 to make third-party data sharing more explicit for users.
All this means that the only real places for thousands of multinationals to buy the sort of consumer insights they need to make good business decisions will be Google/FB etc., SDKs, or this method: overt, rich consent from the consumer in return for a cut of the earnings.
A couple of questions to get a better feel about Streamr as a whole now and where it came from. How many people are in the team? For how long have you been working on Streamr?
We are around 35 people with one office in Zug, Switzerland and another one in Helsinki. But there are team members all over the globe, we’ve people in the US, Spain, the UK, Germany, Poland, Australia and Singapore. I joined Streamr back in 2017 during the ICO craze (but not for that reason!)
And did you raise funds so far? If so, how did you handle them? Are you planning to do any future raises?
We did an ICO back in Sept/Oct 2017 in which we raised around 30 million CHF. The funds give us enough runway, around five to six years, to finalize our roadmap. We've also simultaneously opened up a sister consultancy business, TX, which helps enterprise clients implement the Streamr stack. We've got no plans to raise more!
What is the token use case? How did you make sure it captures the value of the ecosystem you're building?
The token is used for payments on the Marketplace (for Data Union products, for example) and also by the broker nodes in the Network (we haven't talked much about the P2P network, but it's our project's secret sauce).
The broker nodes will be paid in DATAcoin for providing bandwidth. We are currently working together with BlockScience on our token economics. We've just started the second phase of their consultancy process and will soon be able to share more on the Streamr Network's token economics.
But if you want to sum up the Network in a sentence or two: imagine the BitTorrent network being run by nodes who get paid to do so, except that instead of passing around static files, it's real-time data streams.
That of course means it's really well suited for the IoT economy.
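To make that BitTorrent analogy concrete, here is a toy sketch of a broker node that forwards real-time messages to subscribers while metering the bandwidth it would later be paid for. This is my own illustration, not Streamr's node implementation:
```python
# Toy broker node: forward real-time messages and meter bytes relayed,
# the quantity such a node would be paid for. Illustration only.

from collections import defaultdict

class BrokerNode:
    def __init__(self):
        self.subscribers = defaultdict(list)  # stream id -> callbacks
        self.bytes_relayed = 0                # basis for payment

    def subscribe(self, stream_id, deliver):
        self.subscribers[stream_id].append(deliver)

    def relay(self, stream_id, payload):
        for deliver in self.subscribers[stream_id]:
            deliver(payload)
            self.bytes_relayed += len(payload)

node = BrokerNode()
node.subscribe("helsinki/tram-42/gps", lambda p: print("got", p))
node.relay("helsinki/tram-42/gps", b'{"lat": 60.17, "lon": 24.94}')
print(node.bytes_relayed, "bytes relayed")
```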
Well, let's continue with questions from Twitter and this one comes at the perfect time. Can Streamr Network be used to transfer data from IOT devices? Is the network bandwidth sufficient? How is it possible to monetize the received data from a huge number of IOT devices? From u/ EgorCypto
Yes, IoT devices are a perfect use case for the Network. When it comes to the network’s bandwidth and speed - the Streamr team just recently did extensive research to find out how well the network scales.
The result was that it is on par with centralized solutions. We ran experiments with network sizes between 32 and 2048 nodes, and in the largest network of 2048 nodes, 99% of deliveries happened within 362 ms globally.
To put these results in context, PubNub, a centralized message brokering service, promises to deliver messages within 250 ms — and that’s a centralized service! So we're super happy with those results.
Here's a link to the paper:
https://medium.com/streamrblog/streamr-network-performance-and-scalability-whitepaper-adb461edd002
While we're on the technical side, second question from Twitter: Can you be sure that valuable data is safe and not shared with service providers? Are you using any encryption methods? From u/ CryptoMatvey
Yes, the messages in the Network are encrypted. Currently all nodes are still run by the Streamr team. This will change in the Brubeck release, our last milestone on the roadmap, which adds end-to-end encryption and automatic key exchange mechanisms, ensuring that node operators cannot access any confidential data.
If, BTW, you want to get very technical: the encryption algorithms we are using are AES (AES-256-CTR) for encryption of data payloads, RSA (PKCS #1) for securely exchanging the AES keys, and ECDSA (secp256k1) for data signing (the same curve as Bitcoin and Ethereum).
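A hedged sketch of that cipher suite using Python's `cryptography` package; the parameter choices (key sizes, what gets signed) are my assumptions for illustration, not Streamr's actual key-management code:
```python
# Sketch of the suite above: AES-256-CTR for payloads, RSA (PKCS #1 v1.5)
# to wrap the AES key, ECDSA over secp256k1 for signing. Illustrative only.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# AES-256-CTR encryption of a data payload
aes_key, nonce = os.urandom(32), os.urandom(16)
encryptor = Cipher(algorithms.AES(aes_key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(b'{"temperature": 21.4}') + encryptor.finalize()

# RSA (PKCS #1 v1.5) to hand the AES key to a subscriber
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped_key = rsa_key.public_key().encrypt(aes_key, padding.PKCS1v15())

# ECDSA on secp256k1 (the Bitcoin/Ethereum curve) to sign the payload
ec_key = ec.generate_private_key(ec.SECP256K1())
signature = ec_key.sign(ciphertext, ec.ECDSA(hashes.SHA256()))

print(len(ciphertext), len(wrapped_key), len(signature))
```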
Last question from Twitter, less technical now :) In their AMA ad, they say that Streamr has three unions, Swash, Tracey and MyDiem. Why does Tracey help fisherfolk in the Philippines monetize their catch data? Do they only work with this country or do they plan to expand? From u/ alej_pacedo
So yes, Tracey is one of the first Data Unions on top of the Streamr stack. Currently we are working together with the WWF-Philippines and the UnionBank of the Philippines on doing a first pilot with local fishing communities in the Philippines.
WWF is interested in the catch data to protect wildlife and make sure that no overfishing happens. And at the same time the fisherfolk are incentivized to record their catch data by being able to access micro loans from banks, which in turn helps them make their business more profitable.
So far, we have lots of interest from other places in South East Asia which would like to use Tracey, too. In fact TX have already had explicit interest in building out the use cases in other countries and not just for sea-food tracking, but also for many other agricultural products.
(I think they had a call this week about a use case involving cows 😂)
I recall that late last year the Streamr Data Union framework was launched into private beta, and the public beta was recently released. What are the differences? Any added new features? By u/ Idee02
The main difference will be that the DU 2.0 release will be more reliable and also more transparent since the sidechain we are using for micropayments is also now based on blockchain consensus (PoA).
Are there plans in the pipeline for Streamr to focus on the consumer-facing products themselves or will the emphasis be on the further development of the underlying engine?by u/ Andromedamin
We're all about what's under the hood. We want third party devs to take on the challenge of building the consumer facing apps. We know it would be foolish to try and do it all!
As a project, how far along do you consider the project to be toward fully developed (in % of progress plz)? by u/ Hash2T
We're about 60% through I reckon!
What tools does Streamr offer developers so that they can create their own DApps and monetize data? What is Streamr's architecture? How do the Ethereum blockchain, the Streamr Network and the Streamr Core applications interact? By u/ CryptoDurden
We'll be releasing the Data Union framework a few weeks from now, and I think DApp builders will be impressed with what they find.
We all know that Blockchain has many disadvantages as well,
So why did Streamr choose blockchain as a combination for its technology?
What's your plan to merge Blockchain with your technologies to make it safer and more convenient for your users? By u/ noonecanstopme
So we're not a blockchain ourselves - that's important to note. The P2P network only uses blockchain tech for the payments. Why on earth, for example, would you want to store every single piece of info on a blockchain? You should only store what you want to store, and that should probably happen off-chain.
So we think we got the mix right there.
What are the requirements for node setup? by u/ John097
Good q - we're still working on that but those specs will be out in the next release.
How does the STREAMR team ensure good data is entered into the blockchain by participants? By u/ kartika84
Another great Q there! From the product buying end, this will be done by reputation. But ensuring the quality of the data as it passes through the network - if that is what you also mean - is all about getting the architecture right. In a decentralised network, that's not easy as data points in streams have to arrive in the right order. It's one of the biggest challenges but we think we're solving it in a really decentralised way.
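For context on that ordering problem: a standard technique is a per-stream reorder buffer keyed by sequence number, releasing messages only once every earlier one has arrived. A simplified sketch of the general idea (not Streamr's actual gap-handling code):
```python
# Reorder buffer: release messages strictly in sequence order even when
# the network delivers them out of order. General technique, simplified.

class ReorderBuffer:
    def __init__(self):
        self.next_seq = 0   # next sequence number we may release
        self.pending = {}   # out-of-order messages waiting for a gap to fill

    def receive(self, seq, msg):
        """Buffer `msg`; return all messages now deliverable in order."""
        self.pending[seq] = msg
        ready = []
        while self.next_seq in self.pending:
            ready.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return ready

buf = ReorderBuffer()
print(buf.receive(1, "second"))  # [] -- still waiting for seq 0
print(buf.receive(0, "first"))   # ['first', 'second']
```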
What are the requirements for integrating applications with Data Union? What role does the DATA token play in this case? By u/ JP_Morgan_Chase
There are no specific requirements as such, just that your application needs to generate some kind of real-time data. Data Union members and administrators are both paid in DATA by data buyers coming from the Streamr marketplace.
Regarding security and legality, how does STREAMR guarantee that the data uploaded by a given user belongs to him and he can monetize and capitalize on it? By u/ kherrera22
So that's the sort of million dollar question for anyone involved in a digital industry. Within our system there are ways of ensuring that, but in the end the negotiation of data licensing will still, in many ways, be done human to human and via legal licenses rather than smart contracts, at least when it comes to sizeable data products. There are more answers to this but it's a long one!
Okay thank you all for all of those!
The AMA took place in the GAINS Telegram group 10/09/20. Answers by Shiv Malik.
submitted by thamilton5 to streamr [link] [comments]

How much bandwidth does Bitcoin mining use?

If you are mining with a pool, the traffic is negligible - about 10 MB per day. What you do need is good connectivity, so that you receive new work from the pool as quickly as possible. A solo miner needs only around 15 kbps inbound and 12 kbps outbound on average, which is why the occasionally quoted recommendation of 1 Mbps per miner is too high by a few orders of magnitude.
Running a full node is a different story. One bitcoin-qt (0.7.2) bug report described a node continuously using close to 200 KB/s of outgoing traffic on port 8333, enough to slow a home connection; the bandwidth goes into relaying blocks and transactions to peers. The new block data itself is throttled to a paltry 1.7 KB/s (1 MB per 10 minutes).
And is mining a waste of energy? Spending energy to secure and operate a payment system is hardly a waste - like any other payment service, Bitcoin entails processing costs.
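Those figures are easy to sanity-check with a little arithmetic:
```python
# Sanity-checking the figures above.
solo_kbps = 15 + 12                              # inbound + outbound, kbit/s
solo_mb_per_day = solo_kbps * 86400 / 8 / 1000   # kbit/s -> MB/day
block_kb_per_s = 1_000_000 / (10 * 60) / 1000    # 1 MB block every 10 min

print(f"solo miner: ~{solo_mb_per_day:.0f} MB/day")  # ~292 MB/day
print(f"block data: ~{block_kb_per_s:.1f} KB/s")     # ~1.7 KB/s
```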
