Who Is Gavin Andresen? One of Bitcoin's Earliest Developers
In 2015, Andresen joined MIT's Digital Currency Initiative. He also served as Chief Scientist of the now-failed Bitcoin Foundation and is an advisor to Coinbase and Zcash.
History with Bitcoin
Andresen and Satoshi Nakamoto communicated often through Bitcointalk beginning in early 2010. In April 2011, Andresen told Satoshi that he had agreed to meet with the CIA to discuss Bitcoin. Satoshi disappeared shortly after, though it is not clear whether Gavin's meeting had anything to do with Satoshi's departure.
From mid-2010 until April 2014, Andresen maintained control of the Bitcoin Core GitHub repository. On April 8, 2014, he stepped down, and Wladimir van der Laan took over as Lead Maintainer of the Bitcoin Core repo.
Contributions to Bitcoin
Multisig (Pay to script hash)
Andresen authored BIP 16, which defined Pay to Script Hash (P2SH). P2SH lets bitcoins be sent to the hash of a script rather than directly to a public key; the spender reveals the full redeem script when the coins are spent. Its most common use is multisignature addresses, which require signatures from multiple keys to spend.
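As a minimal sketch of the idea, the snippet below builds a 2-of-3 CHECKMULTISIG redeem script of the kind typically wrapped in a P2SH address. The public keys are made-up placeholders, and the helper name is illustrative, not from any Bitcoin library.

```python
# Sketch: assemble a 2-of-3 multisig redeem script (placeholder keys, not real ones).
OP_2, OP_3 = 0x52, 0x53
OP_CHECKMULTISIG = 0xAE

def p2sh_multisig_redeem_script(pubkeys: list, m: int) -> bytes:
    """Build an m-of-n CHECKMULTISIG redeem script (the common 2-of-3 case here)."""
    assert m == 2 and len(pubkeys) == 3, "sketch covers only 2-of-3"
    script = bytes([OP_2])
    for pk in pubkeys:
        script += bytes([len(pk)]) + pk   # push each 33-byte compressed pubkey
    script += bytes([OP_3, OP_CHECKMULTISIG])
    return script

# Placeholder compressed public keys (0x02/0x03 prefix + 32 bytes).
keys = [bytes([0x02]) + bytes(32),
        bytes([0x03]) + bytes(32),
        bytes([0x02]) + b"\x01" * 32]
redeem = p2sh_multisig_redeem_script(keys, 2)
print(len(redeem))  # 105 bytes: OP_2 + three 34-byte pushes + OP_3 + OP_CHECKMULTISIG
```

The P2SH output script itself then commits only to HASH160 of this redeem script (`OP_HASH160 <hash> OP_EQUAL`), which is what keeps multisig addresses short.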
Payment Protocol (BIP 70)
Andresen and Mike Hearn published BIP 70, also known as the Payment Protocol, which added a new form of communication between merchants and customers. BIP 70 made Bitcoin payments more secure and seamless, and helped protect against man-in-the-middle attacks.
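BIP 70 defines its messages as protocol buffers; the sketch below models the core PaymentDetails fields with plain dataclasses instead. The field names follow the BIP 70 text, while the helper function and the expiry check are illustrative assumptions, not part of the spec's wire format.

```python
# Sketch of BIP 70's PaymentDetails message as a plain dataclass (the real
# protocol serializes this as a protobuf and wraps it in a signed PaymentRequest).
from dataclasses import dataclass, field

@dataclass
class Output:
    amount: int          # satoshis requested
    script: bytes        # scriptPubKey the customer should pay to

@dataclass
class PaymentDetails:
    network: str = "main"
    outputs: list = field(default_factory=list)
    time_: int = 0            # unix time the request was created
    expires: int = 0          # unix time after which the request is invalid
    memo: str = ""
    payment_url: str = ""     # where the wallet POSTs its Payment message

def is_expired(details: PaymentDetails, now: int) -> bool:
    """A wallet should refuse to pay a request past its expiry."""
    return details.expires != 0 and now > details.expires

req = PaymentDetails(outputs=[Output(50_000, b"\x76\xa9")],
                     time_=1_000, expires=2_000, memo="coffee")
print(is_expired(req, 1_500), is_expired(req, 3_000))  # False True
```

Signing the serialized details with an X.509 certificate is what lets the customer's wallet verify it is paying the merchant it thinks it is, which is how the protocol defeats man-in-the-middle address substitution.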
Andresen has long been critical of Bitcoin's capacity limit of roughly three transactions per second. He teamed up with Mike Hearn in an attempt to fork Bitcoin and created Bitcoin XT. XT implemented BIP 101, which proposed an immediate block size increase to 8 MB, to be doubled every two years.
Other Bitcoin developers were critical of Bitcoin XT and BIP 101. Some feared that larger blocks could force small miners and nodes to shut down, further centralizing Bitcoin. Chinese mining pools, which controlled most of the network's hash rate at the time, also rejected the proposal, and XT failed to reach consensus.
After Bitcoin XT failed to achieve Andresen's goal of raising the block size limit, he teamed up with another group of developers to create Bitcoin Classic, which proposed a more modest 2 MB block size increase via hard fork.
Bitcoin Classic received initial support from a number of miners, whose backing is required to activate any block size increase via hard fork. A few weeks after Classic's release, however, the majority of Bitcoin miners agreed to run only Bitcoin Core-compatible software.
Andresen is one of the most controversial figures in Bitcoin. Much of the controversy stems from his desire to increase Bitcoin’s block size limit.
While most Bitcoin developers agree that Bitcoin must scale to accommodate more users, there is much debate about how to do so. Andresen's proposals all aim to increase Bitcoin's capacity through a block size increase via hard fork. Other Bitcoin developers argue that a block size increase is unsafe and that proposals like Segregated Witness can increase Bitcoin's capacity more efficiently without a hard fork.
Quotes
I am willing to run SPV mode because I trust that my customers aren’t going to double spend against me.
On presenting hard fork risks the same as soft fork risks:
It doesn’t matter if the software is obsolete because of hard or soft fork, the difference in risks between those two cases will not be understood by the typical full node or SPV node user.
I think 2MB is absurdly small.
Peter is hung up on the decentralization/privacy aspects of Bitcoin
I don’t plan on saving a significant number of Bitcoins as a store of value. I like to invest in people who are doing productive things that grow our economy and make the world a better place, so when Bitcoins replace dollars ;) I’ll lend them to people by buying bonds or stocks.
In my heart of hearts I still believe that going back to “no hard-coded maximum block size” would work out just fine.
Miners and merchants and wallet services and a small fraction of super-security-conscious people are more than enough for a robust, stable network; I don’t think we need more incentives to run full nodes.
Peter Todd is a very large part of the “Bitcoin Core moves forward too slowly” problem.
Peter is trustworthy.
Most ordinary folks should NOT be running a full node. We need full nodes that are always on, have more than 8 connections (if you have only 8 then you are part of the problem, not part of the solution), and have a high-bandwidth connection to the Internet. So: if you’ve got an extra virtual machine with enough memory in a data center, then yes, please, run a full node.
Maybe I don’t communicate clearly enough, but much of the fear, uncertainty, and doubt about the block size issue comes from people like Peter who want to know exactly how technical problems that MIGHT come up three years from now will be solved TODAY.
Mmm. If you want my respect, write some code.