
Dragonchain Great Reddit Scaling Bake-Off Public Proposal

Dragonchain Public Proposal TL;DR:

Dragonchain has demonstrated more than double Reddit’s entire daily volume (votes, comments, and posts, per Reddit’s 2019 Year in Review) in a 24-hour demo on an operational network. Every single transaction on Dragonchain is decentralized immediately through 5 levels of Dragon Net, and then secured with combined proof on Bitcoin, Ethereum, Ethereum Classic, and Binance Chain, via Interchain. At the time, in January 2020, the entire cost of the demo was approximately $25K on a single system (transaction fees locked at $0.0001/txn). With current fees (lowest fee $0.0000025/txn), this would cost as little as $625.
Watch Joe walk through the entire proposal and answer questions on YouTube.
This proposal is also available on the Dragonchain blog.

Hello Reddit and Ethereum community!

I’m Joe Roets, Founder & CEO of Dragonchain. When the team and I first heard about The Great Reddit Scaling Bake-Off we were intrigued. We believe we have the solutions Reddit seeks for its community points system and we have them at scale.
For your consideration, we have submitted our proposal below. The team at Dragonchain and I welcome and look forward to your technical questions, philosophical feedback, and fair criticism, to build a scaling solution for Reddit that will empower its users. Because our architecture is unlike other blockchain platforms out there today, we expect to receive many questions while people try to grasp our project. I will answer all questions here in this thread on Reddit, and I've answered some questions in the stream on YouTube.
We have seen good discussions so far in the competition. We hope that Reddit’s scaling solution will emerge from The Great Reddit Scaling Bake-Off and that Reddit will have great success with the implementation.

Executive summary

Dragonchain is a robust open source hybrid blockchain platform that has stood the test of time since our inception in 2014. We have continued to evolve to harness the scalability of private nodes, yet take full advantage of the security of public decentralized networks, like Ethereum. We have a live, operational, and fully functional Interchain network integrating Bitcoin, Ethereum, Ethereum Classic, and ~700 independent Dragonchain nodes. Every transaction is secured to Ethereum, Bitcoin, and Ethereum Classic. Transactions are immediately usable on chain, and the first decentralization is seen within 20 seconds on Dragon Net. Security increases further to the public networks ETH, BTC, and ETC within 10 minutes to 2 hours. Smart contracts can be written in any executable language, offering full freedom to existing developers. We invite any developer to watch the demo, play with our SDKs, review our open source code, and help us move forward. Dragonchain specializes in scalable loyalty & rewards solutions and has built a decentralized social network on chain, with very affordable transaction costs. This experience can be combined with the insights Reddit and the Ethereum community have gained in the past couple of months to roll out the solution at a rapid pace.

Response and PoC

In The Great Reddit Scaling Bake-Off post, Reddit has asked for a series of demonstrations, requirements, and other considerations. In this section, we will attempt to answer all of these requests.

Live Demo

A live proof of concept showing hundreds of thousands of transactions
On Jan 7, 2020, Dragonchain hosted a 24-hour live demonstration during which a quarter of a billion (250 million+) transactions executed fully on an operational network. Every single transaction on Dragonchain is decentralized immediately through 5 levels of Dragon Net, and then secured with combined proof on Bitcoin, Ethereum, Ethereum Classic, and Binance Chain, via Interchain. This means that every single transaction is secured by, and traceable to these networks. An attack on this system would require a simultaneous attack on all of the Interchained networks.
24 hours in 4 minutes (YouTube)
The demonstration was of a single business system, and any user is able to scale this further by running multiple systems simultaneously. Our goal for the event was to demonstrate a consistent capacity greater than that of Visa over an extended time period.
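As a rough sanity check, the demo's sustained rate works out as follows (the transaction count is from the text; the ~1,700 tx/s Visa average baseline is a commonly cited outside figure, not from the proposal):

```python
# Sanity check of the demo's sustained throughput.
demo_transactions = 250_000_000          # 250M+ transactions over the 24-hour demo
seconds_per_day = 24 * 60 * 60

tps = demo_transactions / seconds_per_day
visa_average_tps = 1_700                 # commonly cited average, not a peak (assumption)

print(f"Demo sustained rate: {tps:,.0f} tx/s")   # ≈ 2,894 tx/s
print(f"Exceeds Visa average: {tps > visa_average_tps}")
```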
Tooling to reproduce our demo is available here:
https://github.com/dragonchain/spirit-bomb

Source Code

Source code (for on & off-chain components as well as tooling used for the PoC). The source code does not have to be shared publicly, but if Reddit decides to use a particular solution it will need to be shared with Reddit at some point.

Scaling

How it works & scales

Architectural Scaling

Dragonchain’s architecture attacks the scalability issue from multiple angles. Dragonchain is a hybrid blockchain platform, wherein every transaction is protected on a business node to the requirements of that business or purpose. A business node may be held completely private or may be exposed and replicated to any degree desired.
Every node has its own blockchain and is independently scalable. Dragonchain established Context Based Verification as its consensus model. Every transaction is immediately usable on a trust basis, and in time is provable to an increasing level of decentralized consensus. A transaction will have a level of decentralization to independently owned and deployed Dragonchain nodes (~700 nodes) within seconds, and full decentralization to BTC and ETH within minutes or hours. Level 5 nodes (Interchain nodes) function to secure all transactions to public or otherwise external chains such as Bitcoin and Ethereum. These nodes scale the system by aggregating multiple blocks into a single Interchain transaction on a cadence. This timing is configurable based upon average fees for each respective chain. For detailed information about Dragonchain’s architecture, and Context Based Verification, please refer to the Dragonchain Architecture Document.
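One way to picture the Level 5 aggregation step described above: many block hashes accumulated during one cadence window are folded into a single digest that one Interchain transaction can carry. The Merkle-tree sketch below is illustrative only and is not Dragonchain's actual implementation:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(block_hashes: list) -> bytes:
    """Fold many block hashes into one digest for a single Interchain
    transaction (an illustrative sketch, not Dragonchain's code)."""
    if not block_hashes:
        raise ValueError("no blocks to aggregate")
    level = block_hashes[:]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last hash on odd counts
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Aggregate a batch of (fake) L1 block hashes from one cadence window.
batch = [sha256(f"block-{i}".encode()) for i in range(6)]
anchor = merkle_root(batch)
print(anchor.hex())  # the single payload ledgered to BTC/ETH/ETC
```

Because the cadence is configurable per chain, the batch size (and hence the amortized per-transaction fee) can be tuned against each chain's average fees.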

Economic Scaling

An interesting feature of Dragonchain’s network consensus is its economics and scarcity model. Since Dragon Net nodes (L2-L4) are independent staking nodes, deployment to cloud platforms would allow any of these nodes to scale to take on a large percentage of the verification work. This is great for scalability, but not good for the economy, because there is no scarcity, and pricing would develop a downward spiral and result in fewer verification nodes. For this reason, Dragonchain uses TIME as scarcity.
TIME is calculated as the number of Dragons held, multiplied by the number of days held. TIME influences the user’s access to features within the Dragonchain ecosystem. It takes into account both the Dragon balance and length of time each Dragon is held. TIME is staked by users against every verification node and dictates how much of the transaction fees are awarded to each participating node for every block.
TIME also dictates the transaction fee itself for the business node. TIME is staked against a business node to set a deterministic transaction fee level (see transaction fee table below in Cost section). This is very interesting in a discussion about scaling because it guarantees independence for business implementation. No matter how much traffic appears on the entire network, a business is guaranteed to not see an increased transaction fee rate.
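The TIME arithmetic described above is straightforward; the sketch below simply encodes the stated formula (Dragons held multiplied by days held, summed per lot):

```python
def time_score(dragon_balance: float, days_held: float) -> float:
    """TIME for a single lot: Dragons held multiplied by days held."""
    return dragon_balance * days_held

def total_time(positions) -> float:
    """Total TIME across lots, since each Dragon accrues for as long
    as that particular Dragon is held."""
    return sum(time_score(amount, days) for amount, days in positions)

# e.g. 10,000 Dragons held 90 days plus 5,000 Dragons held 30 days
print(total_time([(10_000, 90), (5_000, 30)]))  # 1050000
```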

Scaled Deployment

Dragonchain uses Docker and Kubernetes to allow the use of best-practice traditional system scaling. Dragonchain offers managed nodes with an easy-to-use web-based console interface. The user may also deploy a Dragonchain node within their own datacenter or favorite cloud platform. Users have deployed Dragonchain nodes on-premises and on Amazon AWS, Google Cloud, MS Azure, and other hosting platforms around the world. Any executable code, anything you can write, can be written into a smart contract. This flexibility is what allows us to say that developers with no blockchain experience can use any code language to access the benefits of blockchain. Customers have used NodeJS, Python, Java, and even BASH shell script to write smart contracts on Dragonchain.
With Docker containers, we achieve better separation of concerns, faster deployment, higher reliability, and lower response times.
We chose Kubernetes for its self-healing features, ability to run multiple services on one server, and its large and thriving development community. It is resilient, scalable, and automated. OpenFaaS allows us to package smart contracts as Docker images for easy deployment.
Contract deployment time is now bounded only by the size of the Docker image being deployed but remains fast even for reasonably large images. We also take advantage of Docker’s flexibility and its ability to support any language that can run on x86 architecture. Any image, public or private, can be run as a smart contract using Dragonchain.
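Because a contract is just a container image, the contract body can be ordinary code in any language. The toy Python handler below illustrates the idea; the input/output framing and field names are assumptions for illustration, not the actual Dragonchain/OpenFaaS contract interface:

```python
import json

def handle(transaction: dict) -> dict:
    """Toy contract logic: award a point for each upvote payload.
    Field names here are hypothetical, for illustration only."""
    payload = transaction.get("payload", {})
    if payload.get("action") == "upvote":
        return {"points_awarded": 1, "post": payload.get("post_id")}
    return {"points_awarded": 0}

# A contract packaged as a Docker image would receive transactions from the
# platform; here we just invoke the handler directly.
example = {"payload": {"action": "upvote", "post_id": "abc123"}}
print(json.dumps(handle(example)))
```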

Flexibility in Scaling

Dragonchain’s architecture considers interoperability and integration as key features. From inception, we had a goal to increase adoption via integration with real business use cases and traditional systems.
We envision the ability for Reddit, in the future, to be able to integrate alternate content storage platforms or other financial services along with the token.
  • LBRY - to allow users to deploy content natively to LBRY
  • MakerDAO - to allow users to lend small amounts backed by their Reddit community points
  • STORJ/SIA - to allow decentralized on-chain storage of portions of content
These integrations, or any others, are relatively easy to implement on Dragonchain with an Interchain implementation.

Cost

Cost estimates (on-chain and off-chain)
For the purpose of this proposal, we assume that all transactions are on chain (posts, replies, and votes).
On the Dragonchain network, transaction costs are deterministic/predictable. By staking TIME on the business node (as described above) Reddit can reduce transaction costs to as low as $0.0000025 per transaction.
Dragonchain Fees Table

Getting Started

How to run it
Building on Dragonchain is simple and requires no blockchain experience. Spin up a business node (L1) in our managed environment (AWS), run it in your own cloud environment, or on-prem in your own datacenter. Clear documentation will walk you through the steps of spinning up your first Dragonchain Level 1 Business node.
Getting started is easy...
  1. Download Dragonchain’s dctl
  2. Input three commands into a terminal
  3. Build an image
  4. Run it
More information can be found in our Get started documents.

Architecture
Dragonchain is an open source hybrid platform. Through Dragon Net, each chain combines the power of a public blockchain (like Ethereum) with the privacy of a private blockchain.
Dragonchain organizes its network into five separate levels. A Level 1, or business node, is a totally private blockchain only accessible through the use of public/private keypairs. All business logic, including smart contracts, can be executed on this node directly and added to the chain.
After creating a block, the Level 1 business node broadcasts a version stripped of sensitive private data to Dragon Net. Three Level 2 Validating nodes validate the transaction based on guidelines determined from the business. A Level 3 Diversity node checks that the level 2 nodes are from a diverse array of locations. A Level 4 Notary node, hosted by a KYC partner, then signs the validation record received from the Level 3 node. The transaction hash is ledgered to the Level 5 public chain to take advantage of the hash power of massive public networks.
Dragon Net can be thought of as a “blockchain of blockchains”, where every level is a complete private blockchain. Because an L1 can send to multiple nodes on a single level, proof of existence is distributed among many places in the network. Eventually, proof of existence reaches level 5 and is published on a public network.
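The key privacy move in this pipeline is that the L1 node broadcasts a proof of the block, not the block's private payload. A minimal sketch of that stripping step (field names are illustrative, not the real wire format):

```python
import hashlib, json

SENSITIVE_FIELDS = {"payload"}  # business data never leaves the L1 node

def strip_for_broadcast(block: dict) -> dict:
    """Replace private fields with a hash of the full block, so Dragon Net
    can verify existence without seeing the data (a sketch of the idea)."""
    public = {k: v for k, v in block.items() if k not in SENSITIVE_FIELDS}
    blob = json.dumps(block, sort_keys=True).encode()
    public["proof_hash"] = hashlib.sha256(blob).hexdigest()
    return public

block = {"block_id": 42, "timestamp": 1578355200,
         "payload": {"user": "alice", "action": "vote"}}
broadcast = strip_for_broadcast(block)
assert "payload" not in broadcast        # private data stays on the L1 node
assert len(broadcast["proof_hash"]) == 64
```

Anyone holding the original block can recompute the hash and match it against the decentralized record, all the way up to the Level 5 public-chain proof.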

API Documentation

APIs (on chain & off)

SDK Source

Nobody’s Perfect

Known issues or tradeoffs
  • Dragonchain is open source and even though the platform is easy enough for developers to code in any language they are comfortable with, we do not have so large a developer community as Ethereum. We would like to see the Ethereum developer community (and any other communities) become familiar with our SDK’s, our solutions, and our platform, to unlock the full potential of our Ethereum Interchain. Long ago we decided to prioritize both Bitcoin and Ethereum Interchains. We envision an ecosystem that encompasses different projects to give developers the ability to take full advantage of all the opportunities blockchain offers to create decentralized solutions not only for Reddit but for all of our current platforms and systems. We believe that together we will take the adoption of blockchain further. We currently have additional Interchain with Ethereum Classic. We look forward to Interchain with other blockchains in the future. We invite all blockchains projects who believe in decentralization and security to Interchain with Dragonchain.
  • While we have only ~700 nodes, compared to roughly 8,000 Ethereum and 10,000 Bitcoin nodes, we harness those 18,000+ public nodes via Interchain to achieve extremely high levels of security. See Dragonchain metrics.
  • Some may consider the centralization of Dragonchain’s business nodes an issue at first glance; however, the model is by design, to protect business data. We do not consider this a drawback, as these nodes can make any, none, or all data public. Depending upon the implementation, every subreddit could have control of its own business node, opening potential business and enterprise offerings and bringing new alternative revenue streams to Reddit.

Costs and resources

Summary of cost & resource information for both on-chain & off-chain components used in the PoC, as well as cost & resource estimates for further scaling. If your PoC is not on mainnet, make note of any mainnet caveats (such as congestion issues).
Every transaction on the PoC system had a transaction fee of $0.0001 (one-hundredth of a cent USD). At 256MM transactions, the demo cost $25,600. With current operational fees, the same demonstration would cost $640 USD.
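Both figures follow directly from multiplying the transaction count by the per-transaction fee:

```python
fee_2020 = 0.0001         # $/txn, locked rate during the Jan 2020 demo
fee_now  = 0.0000025      # lowest current fee cited in the text
txns = 256_000_000

print(f"Demo cost at 2020 fees:     ${txns * fee_2020:,.2f}")  # $25,600.00
print(f"Same volume at current fee: ${txns * fee_now:,.2f}")   # $640.00
```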
For the demonstration, to achieve throughput to mimic a worldwide payments network, we modeled several clients in AWS and 4-5 business nodes to handle the traffic. The business nodes were tuned to handle higher throughput by adjusting memory and machine footprint on AWS. This flexibility is valuable to implementing a system such as envisioned by Reddit. Given that Reddit’s daily traffic (posts, replies, and votes) is less than half that of our demo, we would expect that the entire Reddit system could be handled on 2-5 business nodes using right-sized containers on AWS or similar environments.
Verification was accomplished on the operational Dragon Net network with over 700 independently owned verification nodes running around the world at no cost to the business other than paid transaction fees.

Requirements

Scaling

This PoC should scale to the numbers below with minimal costs (both on & off-chain). There should also be a clear path to supporting hundreds of millions of users.
Over a 5 day period, your scaling PoC should be able to handle:
  • 100,000 point claims (minting & distributing points)
  • 25,000 subscriptions
  • 75,000 one-off points burning
  • 100,000 transfers
During Dragonchain’s 24 hour demo, the above required numbers were reached within the first few minutes.
Reddit’s total activity is 9000% more than Ethereum’s total transaction level. Even if you do not include votes, it is still 700% more than Ethereum’s current volume. Dragonchain has demonstrated that it can handle 250 million transactions a day, and its architecture allows for multiple systems to work at that level simultaneously. In our PoC, we demonstrated double the full capacity of Reddit, and every transaction was proven all the way to Bitcoin and Ethereum.
Reddit Scaling on Ethereum

Decentralization

Solutions should not depend on any single third-party provider. We prefer solutions that do not depend on specific entities such as Reddit or another provider, and solutions with no single point of control or failure in off-chain components, but recognize there are numerous trade-offs to consider.
Dragonchain’s architecture calls for a hybrid approach. Private business nodes hold the sensitive data while the validation and verification of transactions for the business are decentralized within seconds and secured to public blockchains within 10 minutes to 2 hours. Nodes could potentially be controlled by owners of individual subreddits for more organic decentralization.
  • Billing is currently centralized - there is a path to federation and decentralization of a scaled billing solution.
  • Operational multi-cloud
  • Operational on-premises capabilities
  • Operational deployment to any datacenter
  • Over 700 independent Community Verification Nodes with proof of ownership
  • Operational Interchain (Interoperable to Bitcoin, Ethereum, and Ethereum Classic, open to more)

Usability

Scaling solutions should have a simple end user experience.

Users shouldn't have to maintain any extra state/proofs, regularly monitor activity, keep track of extra keys, or sign anything other than their normal transactions
Dragonchain and its customers have demonstrated extraordinary usability as a feature in many applications, where users do not need to know that the system is backed by a live blockchain. Lyceum is one of these examples, where the progress of academy courses is being tracked, and successful completion of courses is rewarded with certificates on chain. Our @Save_The_Tweet bot is popular on Twitter. When used with one of the following hashtags - #please, #blockchain, #ThankYou, or #eternalize the tweet is saved through Eternal to multiple blockchains. A proof report is available for future reference. Other examples in use are DEN, our decentralized social media platform, and our console, where users can track their node rewards, view their TIME, and operate a business node.

Transactions complete in a reasonable amount of time (seconds or minutes, not hours or days)
All transactions are immediately usable on chain by the system. A transaction begins the path to decentralization at the conclusion of a 5-second block when it gets distributed across 5 separate community run nodes. Full decentralization occurs within 10 minutes to 2 hours depending on which interchain (Bitcoin, Ethereum, or Ethereum Classic) the transaction hits first. Within approximately 2 hours, the combined hash power of all interchained blockchains secures the transaction.

Free to use for end users (no gas fees, or fixed/minimal fees that Reddit can pay on their behalf)
With transaction pricing as low as $0.0000025 per transaction, it may be considered reasonable for Reddit to cover transaction fees for users.
All of Reddit's Transactions on Blockchain (month)
Community points can be earned by users and distributed directly to their Reddit account in batch (as per Reddit’s minting plan), and users can withdraw rewards to their Ethereum wallet whenever they wish. Withdrawal fees can be paid by either the user or Reddit. This model has been operating inside the Dragonchain system since 2018, and many security and financial compliance features can optionally be added. We feel that this capability greatly enhances the user experience: it is seamless for a regular user without cryptocurrency experience, yet flexible for a tech-savvy user. Currency or token transactions would occur on the Reddit network, verified to BTC and ETH, and would incur the $0.0000025 transaction fee. To estimate this fee, we use the monthly active Reddit user count (per Statista) with a 60% adoption rate and an estimated average of 10 transactions per month, resulting in an approximate $720 cost across the system. Reddit could feasibly incur all associated internal network charges (mining/minting, transfer, burn), as these are very low and controllable fees.
Reddit Internal Token Transaction Fees

Reddit Ethereum Token Transaction Fees
When we consider further the Ethereum fees that might be incurred, we have a few choices for a solution.
  1. Offload all Ethereum transaction fees (user withdrawals) to interested users as they wish to withdraw tokens for external use or sale.
  2. Cover Ethereum transaction fees by aggregating them on a timed schedule. Users would request withdrawal (from Reddit or individual subreddits), and they would be transacted on the Ethereum network every hour (or some other schedule).
  3. Use a combination of the above, where users cover aggregated fees.
  4. Integrate with alternate Ethereum roll up solutions or other proposals to aggregate minting and distribution transactions onto Ethereum.
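Option 2 above amounts to a simple batching step: collect withdrawal requests over a window, collapse them per address, and submit one scheduled batch to Ethereum. A minimal sketch (not production code, and omitting the actual Ethereum submission):

```python
from collections import defaultdict

def aggregate_withdrawals(requests):
    """Collapse a window of (address, amount) withdrawal requests into one
    total per Ethereum address, so a single scheduled batch pays one set of
    gas fees (a sketch of option 2, not production code)."""
    totals = defaultdict(int)
    for address, amount in requests:
        totals[address] += amount
    return dict(totals)

# Requests accumulated over one hourly window (hypothetical addresses).
window = [("0xAAA", 50), ("0xBBB", 20), ("0xAAA", 30)]
batch = aggregate_withdrawals(window)
print(batch)  # {'0xAAA': 80, '0xBBB': 20}
```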

Bonus Points

Users should be able to view their balances & transactions via a blockchain explorer-style interface
From interfaces for users who have no knowledge of blockchain technology to users who are well versed in blockchain terms such as those present in a typical block explorer, a system powered by Dragonchain has flexibility on how to provide balances and transaction data to users. Transactions can be made viewable in an Eternal Proof Report, which displays raw data along with TIME staking information and traceability all the way to Bitcoin, Ethereum, and every other Interchained network. The report shows fields such as transaction ID, timestamp, block ID, multiple verifications, and Interchain proof. See example here.
Node payouts within the Dragonchain console are listed in chronological order and can be further seen in either Dragons or USD. See example here.
In our social media platform, Dragon Den, users can see, in real-time, their NRG and MTR balances. See example here.
A new influencer app powered by Dragonchain, Raiinmaker, breaks down data into a user friendly interface that shows coin portfolio, redeemed rewards, and social scores per campaign. See example here.

Exiting is fast & simple
Withdrawing funds on Dragonchain’s console requires three clicks; however, withdrawal scenarios with more enhanced security features, per Reddit’s discretion, are attainable.

Interoperability

Compatibility with third party apps (wallets/contracts/etc) is necessary.
We offer proven interoperability at scale that surpasses the required specifications. Our entire platform consists of interoperable blockchains connected to each other and to traditional systems. APIs are well documented. Third-party permissions are possible with a simple smart contract, without the end user being aware. There is no need to learn any specialized proprietary language. Any code base (not subsets) is usable within a Docker container. Dragonchain is interoperable with any blockchain or traditional APIs. We’ve witnessed relatively complex systems built by engineers with no blockchain or cryptocurrency experience. We’ve also demonstrated the creation of smart contracts within minutes, built with BASH shell and Node.js. Please see our source code and API documentation.

Scaling solutions should be extensible and allow third parties to build on top of it
Open source and extensible.
APIs should be well documented and stable

Documentation should be clear and complete
For full documentation, explore our docs, SDKs, GitHub repos, architecture documents, original Disney documentation, and other links or resources provided in this proposal.

Third-party permissionless integrations should be possible & straightforward
Smart contracts are Docker based, can be written in any language, use full languages (not subsets), and can therefore be integrated with any system including traditional system APIs. Simple is better. Learning an uncommon or proprietary language should not be necessary.
Advanced knowledge of mathematics, cryptography, or L2 scaling should not be required. Compatibility with common utilities & toolchains is expected.
Dragonchain business nodes and smart contracts leverage Docker to allow the use of literally any language or executable code. No proprietary language is necessary. We’ve witnessed relatively complex systems built by engineers with no blockchain or cryptocurrency experience. We’ve also demonstrated the creation of smart contracts within minutes built with BASH shell and Node.js.

Bonus

Bonus Points: Show us how it works. Do you have an idea for a cool new use case for Community Points? Build it!

TIME

Community points could also be awarded to Reddit users based upon TIME, whereby the longer someone is part of a subreddit, the more community points they naturally accrue, even if not actively commenting or sharing new posts. A daily login could be required for these community points to be credited. This rewards readers too, and incentivizes readers who browse the website often to create a Reddit account. This concept could also be leveraged to provide some level of reputation based upon duration and consistency of contribution to a community subreddit.
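A duration-based award like the one described could be sketched as follows; the daily-login gate and the points-per-day rate are illustrative assumptions, not a specified design:

```python
from datetime import date

def membership_points(joined: date, today: date,
                      points_per_day: float = 1.0,
                      logged_in_today: bool = True) -> float:
    """Award points for duration of subreddit membership, gated on a daily
    login as the text suggests (rate and gate are hypothetical parameters)."""
    if not logged_in_today:
        return 0.0
    days = (today - joined).days
    return days * points_per_day

# A reader who joined 30 days ago and logged in today:
print(membership_points(date(2020, 1, 1), date(2020, 1, 31)))  # 30.0
```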

Dragon Den

Dragonchain has already built a social media platform that harnesses community involvement. Dragon Den is a decentralized community built on the Dragonchain blockchain platform. Dragon Den is Dragonchain’s answer to fake news, trolling, and censorship. It incentivizes the creation and evaluation of quality content within communities. It could be likened to being a shareholder of a subreddit, or of Reddit in its entirety. The more your subreddit thrives, the more rewarding it will be. Den is currently in a public beta and in active development, though the real token economy is not live yet. There are different tokens for various purposes. Two tokens are Lair Ownership Rights (LOR) and Lair Ownership Tokens (LOT). LOT is a non-fungible token for ownership of a specific Lair. LOT will only be created and converted from LOR.
Energy (NRG) and Matter (MTR) work jointly. Your MTR determines how much NRG you receive in a 24-hour period. Providing quality content, or evaluating content will earn MTR.

Security. Users have full ownership & control of their points.
All community points awarded based upon any type of activity or gift, are secured and provable to all Interchain networks (currently BTC, ETH, ETC). Users are free to spend and withdraw their points as they please, depending on the features Reddit wants to bring into production.

Balances and transactions cannot be forged, manipulated, or blocked by Reddit or anyone else
Users can withdraw their balance to their ERC20 wallet, directly through Reddit. Reddit can cover the fees on their behalf, or the user covers this with a portion of their balance.

Users should own their points and be able to get on-chain ERC20 tokens without permission from anyone else
Through our console users can withdraw their ERC20 rewards. This can be achieved on Reddit too. Here is a walkthrough of our console, though this does not show the quick withdrawal functionality, a user can withdraw at any time. https://www.youtube.com/watch?v=aNlTMxnfVHw

Points should be recoverable to on-chain ERC20 tokens even if all third-parties involved go offline
If necessary, signed transactions from the Reddit system (e.g. Reddit + Subreddit) can be sent to the Ethereum smart contract for minting.

A public, third-party review attesting to the soundness of the design should be available
To our knowledge, at least two large corporations, including a top 3 accounting firm, have conducted positive reviews. These reviews have never been made public, as Dragonchain did not pay or contract for these studies to be released.

Bonus points
Public, third-party implementation review available or in progress
See above

Compatibility with HSMs & hardware wallets
For the purpose of this proposal, all tokenization would be on the Ethereum network using standard token contracts and as such, would be able to leverage all hardware wallet and Ethereum ecosystem services.

Other Considerations

Minting/distributing tokens is not performed by Reddit directly
This operation can be automated by a smart contract on Ethereum. Subreddits can, if desired, have a role to play.

One off point burning, as well as recurring, non-interactive point burning (for subreddit memberships) should be possible and scalable
This is possible and scalable with interaction between the Dragonchain-based Reddit system and the Ethereum token contract(s).

Fully open-source solutions are strongly preferred
Dragonchain is fully open source (see section on Disney release after conclusion).

Conclusion

Whether it is today, or in the future, we would like to work together to bring secure flexibility to the highest standards. It is our hope to be considered by Ethereum, Reddit, and other integrative solutions so we may further discuss the possibilities of implementation. In our public demonstration, 256 million transactions were handled in our operational network on chain in 24 hours, for a cost of approximately $25,600; at current fees, the same run would cost about $640. Dragonchain’s interoperable foundation provides the atmosphere necessary to implement a frictionless community points system. Thank you for your consideration of our proposal. We look forward to working with the community to make something great!

Disney Releases Blockchain Platform as Open Source

The team at Disney created the Disney Private Blockchain Platform. The system was a hybrid interoperable blockchain platform for ledgering and smart contract development geared toward solving problems with blockchain adoption and usability. All objective evaluation would consider the team’s output a success. We released a list of use cases that we explored in some capacity at Disney, and our input on blockchain standardization as part of our participation in the W3C Blockchain Community Group.
https://lists.w3.org/Archives/Public/public-blockchain/2016May/0052.html

Open Source

In 2016, Roets proposed to release the platform as open source to spread the technology outside of Disney, as others within the W3C group were interested in the solutions that had been created inside of Disney.
Following a long process, step by step, the team met requirements for release. Among the requirements, the team had to:
  • Obtain VP support and approval for the release
  • Verify ownership of the software to be released
  • Verify that no proprietary content would be released
  • Convince the organization that there was a value to the open source community
  • Convince the organization that there was a value to Disney
  • Offer the plan for ongoing maintenance of the project outside of Disney
  • Itemize competing projects
  • Verify no conflict of interest
  • Preferred license
  • Change the project name to not use the name Disney, any Disney character, or any other associated IP - proposed Dragonchain - approved
  • Obtain legal approval
  • Approval from corporate, parks, and other business units
  • Approval from multiple Disney patent groups
  • Copyright holder defined by Disney (Disney Connected and Advanced Technologies)
  • Trademark searches conducted for the selected name Dragonchain
  • Obtain IT security approval
  • Manual review of OSS components conducted
  • OWASP Dependency and Vulnerability Check Conducted
  • Obtain technical (software) approval
  • Offer management, process, and financial plans for the maintenance of the project.
  • Meet list of items to be addressed before release
  • Remove all Disney project references and scripts
  • Create a public distribution list for email communications
  • Remove Roets’ direct and internal contact information
  • Create public Slack channel and move from Disney slack channels
  • Create proper labels for issue tracking
  • Rename internal private Github repository
  • Add informative description to Github page
  • Expand README.md with more specific information
  • Add information beyond current “Blockchains are Magic”
  • Add getting started sections and info on cloning/forking the project
  • Add installation details
  • Add uninstall process
  • Add unit, functional, and integration test information
  • Detail how to contribute and get involved
  • Describe the git workflow that the project will use
  • Move to public, non-Disney git repository (Github or Bitbucket)
  • Obtain Disney Open Source Committee approval for release
On top of meeting the above criteria, as part of the process, the maintainer of the project had to receive the codebase on their own personal email and create accounts for maintenance (e.g. Github) with non-Disney accounts. Given the fact that the project spanned multiple business units, Roets was individually responsible for its ongoing maintenance. Because of this, he proposed in the open source application to create a non-profit organization to hold the IP and maintain the project. This was approved by Disney.
The Disney Open Source Committee approved the application known as OSSRELEASE-10, and the code was released on October 2, 2016. Disney decided to not issue a press release.
Original OSSRELEASE-10 document

Dragonchain Foundation

The Dragonchain Foundation was created on January 17, 2017. https://den.social/l/Dragonchain/24130078352e485d96d2125082151cf0/dragonchain-and-disney/
submitted by j0j0r0 to ethereum [link] [comments]

A better anti-reorg algorithm using first-seen times to punish secret/dishonest mining

Bitcoin currently allows a malicious miner with at least 51% of the network hashrate to arbitrarily rewrite blockchain history. This means that a transaction is never truly final: a miner with a hashrate majority can reverse it, making double-spend attempts possible. Bitcoin SV's miners have repeatedly threatened to perform this attack against exchanges using BCH by mining a secret, hidden chain which they only publish after they have withdrawn funds in a different currency from the exchange. It would be nice if we could prevent these secret-mining re-org attacks.
Yesterday, I came up with a new algorithm for making secret re-org attacks very expensive and difficult to pull off. This new algorithm is designed to avoid the permanent chainsplit vulnerabilities of ABC 0.18.5 while being more effective at punishing malicious behavior.
The key to the new algorithm is to punish exactly the behavior that indicates malice. First, publishing a block after another block at the same height has arrived on the network suggests malice or poor performance, and the likelihood of malice increases as the delay increases. A good algorithm would penalize blocks in proportion to how much later they were published after the competing block. Second, building upon a block that was intentionally delayed is also a sign of malice. Therefore, a good algorithm would discount the work done by blocks based not only on their own delays, but the delays that were seen earlier in that chain as well. Since the actions at the start of the fork are more culpable (as they generate the split), we want to weight those blocks more heavily than later blocks.
I wrote up an algorithm that implements these features. When comparing two chains, you look at the PoW done since the fork block, and divide that PoW by a penalty score. The penalty score for each chain is calculated as the sum of the penalty scores for each block. Each block's penalty score is equal to the apparent time delay of that block relative to its sibling or cousin[1], divided by 120 seconds[2], and further divided by the square[3] of that block's height[4] from the fork.[5]
This algorithm has some desirable properties:
  1. It provides smooth performance. There are no corners or sharp changes in its incentive structure or penalty curve.
  2. It converges over very long time scales. Eventually, if one chain has more hashrate than the other and that is sustained indefinitely, the chain with the most hashrate will win by causing the chain penalty score for the slower (less-PoW) chain to grow.
  3. The long-term convergence means that variation in observed times early in the fork will not cause permanent chainsplits.
  4. Long-term convergence means that nodes can follow the standard most-PoW rule during initial block download and get the same results unless an attack is underway, in which case the node will only temporarily disagree.
  5. Over intermediate time scales (e.g. hours to weeks), the penalty given to secret-mining deep-reorg chains is very large and difficult to overcome even with a significant hashrate advantage. The penalty increases the longer the attack chain is kept secret. This makes attack attempts ineffective unless they are published within about 20 minutes of the attack starting.
  6. Single-block orphan race behavior is identical to existing behavior unless one of the blocks has a delay of at least 120 seconds, in which case that chain would require a total of 3 blocks to win (or more) instead of just 2.
  7. As the algorithm strongly punishes hidden chains, finalization becomes much safer as long as you prevent finalization from happening while there are known competitive alternate chains. However, this algorithm is still effective without finalization.
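To make the comparison rule concrete, here is a minimal Python sketch (the function names and the exact normalization — adding 1 to the penalty so that an undelayed chain reduces to the plain most-PoW rule — are my assumptions; the real simulator also applies the interpolation, pseudoheight, and PoW-equalization refinements described in the notes at the end of this post):

```python
def chain_penalty(delays, block_interval=120.0):
    """Penalty for one side of a fork.

    delays[i] is the apparent publication delay, in seconds, of the
    block at height i+1 past the fork, relative to its sibling or
    cousin on the competing chain.
    """
    return sum(delay / block_interval / height ** 2
               for height, delay in enumerate(delays, start=1))

def chain_score(work_since_fork, delays):
    # Work is discounted by the accumulated penalty; the +1 keeps an
    # undelayed chain on the plain most-PoW rule (a sketch assumption).
    return work_since_fork / (1.0 + chain_penalty(delays))

# A promptly published honest chain vs. a chain mined in secret for
# 20 minutes: the attacker has more raw PoW but a far lower score.
honest = chain_score(10.0, [0, 0, 0, 0])
secret = chain_score(15.0, [1200, 1000, 800, 600])
```

Note how the first (most culpable) block dominates the penalty: 1200/120/1² contributes 10, while the fourth block contributes under 1.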
I wrote up this algorithm into a Python sim yesterday and have been playing around with it since. It seems to perform quite well. For example, if the attacker has 1.5x as much hashrate as the defenders (who had 100% of the hashrate before the fork), mine in secret for 20 minutes before publishing, and if finalization is enabled after 10 blocks when there's at least a 2x score advantage, then the attacker gets an orphan rate of 49.3% on their blocks and is only able to cause a >= 10 block reorg in 5.2% of cases, and none of those happen blindly, as the opposing chain shows up when most transactions have about 2 confirmations. If the attacker waits 1 hour before publishing, the attack is even less effective: 94% of their blocks are orphaned, 95.6% of their attempts fail, 94.3% of the attacks end with defenders successfully finalizing, and only 0.6% of attack attempts result in a >= 10 block reorg.
The code for my algorithm and simulator can be found on my antiReorgSim Github repository. If you guys have time, I'd appreciate some review and feedback. To run it:
git clone https://github.com/jtoomim/antiReorgSim.git
cd antiReorgSim
python reorgsim.py  # use pypy if you have it, as it's 30x faster
Thanks! Special thanks to Jonald Fyookball and Mark Lundeberg for reviewing early versions of the code and the ideas. I believe Jonald is working on a Medium post based on some of these concepts. Keep an eye out for it.
Edit: I'm working on an interactive HTML visualization using Dash/Python! Here's a screenshot from a preliminary version in which convergence (or attacker victory, if you prefer) happens after 88.4 hours. In this scenario, the attacker wins because of the rule in Note 5.
Edit 2: An alpha website version of the simulator is up! The code is all server-side for the simulation, so it might get overloaded if too many people hit it at the same time, but it might be fine. Feel free to play around with it!
Note 1: This time delay is calculated by finding the best competing chain's last block with less work than this one and the first block with more work than this one and interpolating the time-first-seen between the two. The time at which the block was fully downloaded and verified is used as time-first-seen, not the time at which the header was received nor the block header's timestamp.
Note 2: An empirical constant, intended to be similar to worst-case block propagation times.
Note 3: A semi-empirical constant; this balances the effect of early blocks against late blocks. The motivation for squaring is that late blocks gain an advantage for two multiplicative reasons: First, there are more late blocks than early blocks. Second, the time deltas for late blocks are larger. Both of these factors are linear versus time, so canceling them out can be done by dividing by height squared. This way, the first block has about as much weight as the next 4 blocks; the first two blocks have as much weight as the next 9 blocks; and the first n blocks have about as much weight as the next (n+1)² blocks. Any early advantage can be overcome eventually by a hashrate majority, so over very long time scales (e.g. hours to weeks), this rule is equivalent to the simple Satoshi most-PoW rule, as long as the hashrate on each chain is constant. However, over intermediate time scales, the advantage to the first-seen blocks is large enough that the hashrate will likely not remain constant, and hashrate will likely switch over to whichever chain has the best score and looks the most honest.
Note 4: The calculation doesn't actually use height, as that would be vulnerable to DAA manipulation. Instead, the calculation uses pseudoheight, which uses the PoW done and the fork block's difficulty to calculate what the height would be if all blocks had the fork block's difficulty.
Note 5: If one chain has less PoW than the other, the shorter chain's penalty is calculated as if enough blocks had been mined at the last minute to make them equal in PoW, but these fictional blocks do not contribute to the actual PoW of that chain.
submitted by jtoomim to btc [link] [comments]

Bitcoin Witness: use the world's most secure, immutable, and decentralised public database to attest to the integrity of your files

About me

I have only ever done basic web development before, but over the last 4-6 months I have been spending my time learning JavaScript, Vue.js and a few blockchain technologies. I have finally finished the first release of Bitcoin Witness. I am aware that similar services already exist, but my focus has been on simplifying the user experience and on making it scalable and free for anyone to use. More info on the app is below. I would love your feedback on the app, along with ideas and suggestions to take into the roadmap.

About Bitcoin Witness

https://bitcoinwitness.com is a free service that allows you to take any file and have its fingerprint witnessed in a bitcoin transaction. The service then lets you download a proof file that can be used as verifiable evidence that your file's fingerprint matches the fingerprint witnessed in the bitcoin transaction. The verification can be done using open source software even if our website no longer exists in the future.

Protecting your data

We do not store your file's data; in fact, your file's data is never even sent to our servers. Instead, your file is analysed locally in the browser to generate a SHA256 hash, which is your file's fingerprint.
The only data we do store is the file name, the fingerprint (hash), and the proof file generated by the app. This is so you can access and download proofs in the future. Anyone can retrieve a proof by presenting the original file at any time.
As you witness files, their fingerprints are also stored in your local cache so that you can easily retrieve the proof files when you load Bitcoin Witness on that device. It is recommended that you download proofs once they are available to remove any reliance on our service.

How it works

Bitcoin Witness uses the Chainpoint protocol for many of its operations. Chainpoint is a layer-two decentralised network that runs atop (and supports the scaling of) bitcoin. Currently there are ~6500 community-run Chainpoint nodes. Chainpoint nodes receive hashes and aggregate them together in a Merkle tree, and the root of this tree is then included in a bitcoin transaction.
Your file's fingerprint becomes part of a tree that is initially secured and witnessed in a Chainpoint calendar block (a decentralised database maintained by Chainpoint nodes) before being witnessed in a bitcoin transaction (the most secure decentralised database in the world).
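As an illustration of the aggregation step, here is a toy Merkle tree in Python (plain SHA-256 over concatenated digests; the actual Chainpoint operation list is richer than this sketch, and the file names are made up):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaf hashes down to a single root hash."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                 # odd level: duplicate the last hash
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Three file fingerprints aggregated into one root; only this root
# needs to be committed to a calendar block / bitcoin transaction.
fingerprints = [sha256(name.encode()) for name in ("file-a", "file-b", "file-c")]
root = merkle_root(fingerprints)
```

However many hashes the nodes aggregate, a single on-chain commitment covers them all, which is what makes the service free to scale.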

Steps performed to witness your file

The end-to-end process for witnessing your file and retrieving a downloadable proof takes around 90 minutes. This is because we wait for 6 bitcoin block confirmations before the proof file is made available.
The steps to witness a file are as follows:
1. Generate the file's fingerprint
When you select a file, it is processed locally in the browser using the SHA256 algorithm to generate its fingerprint. We call it a fingerprint because processing the same file with this algorithm in the future will always produce the same hash value (fingerprint), while any modification to the file produces a completely different hash value.
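The fingerprinting step is just SHA-256 over the file's bytes; a quick sketch (hashing in-memory bytes here, whereas the app hashes the selected file in the browser):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 hex digest: the 'fingerprint' of a file's bytes."""
    return hashlib.sha256(data).hexdigest()

same  = fingerprint(b"my document v1")
again = fingerprint(b"my document v1")  # identical input, identical fingerprint
other = fingerprint(b"my document v2")  # any change gives an unrelated digest
```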
2. Combine the file's fingerprint with entropy from NIST
The National Institute of Standards and Technology (NIST) randomness beacon generates full-entropy bit strings and posts them in blocks every minute. The published values include a cryptographic link to all previous values to prevent retroactive changes.
Your file's fingerprint is hashed together with this random value to prove that the file was witnessed after that value was generated.
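In other words, the beacon value is mixed into the hash, so the result could not have been computed before that minute's value was published. A sketch of the idea (plain concatenate-and-hash; the exact Chainpoint operation may differ, and the beacon value below is made up):

```python
import hashlib

file_fingerprint = hashlib.sha256(b"some file bytes").hexdigest()
nist_value = "0123abcd"  # hypothetical beacon output for one minute
combined = hashlib.sha256((file_fingerprint + nist_value).encode()).hexdigest()
# 'combined' depends on the beacon value, proving the file was
# witnessed no earlier than that value's publication time.
```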
3. Witness the file in the Chainpoint calendar
Chainpoint nodes aggregate your hash with other hashes in the network to create a Merkle tree and generate a partial proof.
After ~12 seconds we retrieve a proof which includes the NIST value, timestamp information, and the other hashes in the tree required to verify your file's fingerprint against the anchor hash of a Chainpoint calendar block.
4. Witness the file in the bitcoin blockchain
The anchoring hash of the calendar block is then sent in the OP_RETURN of a Bitcoin transaction. As a result, this value is included in the raw transaction body, allowing the transaction ID and the Merkle path from that transaction to the Bitcoin block’s Merkle root to be calculated.
After 6 confirmations (~60 minutes) the final proof file is made available which contains all the Merkle path information required to verify your proof.

Steps to verify a file was witnessed by Bitcoin

The easiest way to verify a file has been witnessed is to visit https://bitcoinwitness.com and upload the proof file or the original file. Bitcoin Witness performs the verification processes and returns the relevant information about when the file was witnessed.
That said, the benefit of the service is that even if the Bitcoin Witness app does not exist in the future, people can still verify the file's integrity (don't trust us, trust bitcoin).
There are 2 steps to verify that your file was witnessed. Together they establish that neither your original file nor the downloaded proof file has been modified since the time of the bitcoin transaction / block. These steps are outlined below and can be performed using open source software.
1. Verify your file has not been modified
Generate a SHA256 hash of your file and check that the hash value generated matches the “hash” value in the proof file you are about to verify.
There are plenty of free online tools that let you generate a hash of your file, and you can check the “hash” value in the proof file by opening it in a text editor.
2. Verify the proof file has not been modified
Re-run the operations set out in the proof file and then validate that the hash value produced at the end of the operations matches the Merkle root value in the bitcoin block.
The Chainpoint Parse library is open source software that can be used to re-run the operations in the proof file. The result can be verified to match the bitcoin Merkle root using any block explorer.
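Conceptually, re-running the proof means folding the file's hash through the listed sibling hashes and comparing the result to the Merkle root in the block. A generic sketch (the real Chainpoint proof format encodes these operations differently):

```python
import hashlib

def verify_path(leaf: bytes, path, expected_root: bytes) -> bool:
    """path is a list of (sibling_hash, side) pairs, side 'L' or 'R'."""
    h = leaf
    for sibling, side in path:
        pair = sibling + h if side == "L" else h + sibling
        h = hashlib.sha256(pair).digest()
    return h == expected_root

# Two-leaf example so the proof can be checked end to end.
a = hashlib.sha256(b"proof file hash").digest()
b = hashlib.sha256(b"someone else's hash").digest()
root = hashlib.sha256(a + b).digest()
ok = verify_path(a, [(b, "R")], root)    # valid proof
bad = verify_path(a, [(b, "L")], root)   # wrong ordering, fails
```

Any block explorer can then confirm that the root you computed really is the Merkle root of the referenced bitcoin block.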

Future Vision and Roadmap

Today marks the release of the first version of the Bitcoin Witness app, which can be found at https://bitcoinwitness.com. The immediate focus is on some additional features users have already suggested.
The broader vision and roadmap for Bitcoin Witness is to remove the need to trust organisations and each other with our data, and instead trust bitcoin. We want to enable a world where people can make claims about data, and bitcoin's immutable ledger can be used to verify those claims. The current version lets people claim: “This data has not been modified since that point in time.” An example of a future claim might be: “I was in possession of this data at that point in time.”

Support us and get involved

This has been a fun learning experience. I would love it if you could all test out the app and give me feedback on the app, the user experience, and any roadmap items I should think about. I welcome any comments here, or you can join our telegram.
For regular updates you can follow our twitter.
submitted by gaskills to Bitcoin [link] [comments]

CEO Block Classroom-White Paper

A white paper is an internationally recognized official document. The white paper of a blockchain project is the announcement in which the project team shows the market its development prospects, business model, technical strength, and team capabilities. Take "Bitcoin: A Peer-to-Peer Electronic Cash System" as an example. On October 31, 2008, Satoshi Nakamoto released the first white paper of the blockchain industry — "Bitcoin: A Peer-to-Peer Electronic Cash System". It describes the prospects, principles, and technology of Bitcoin in twelve sections: introduction, transactions, timestamp server, proof-of-work, network, incentive, reclaiming disk space, simplified payment verification, combining and splitting value, privacy, calculations, and conclusion. This white paper laid the foundation for people to accept Bitcoin, and also provided theoretical support for Bitcoin to enter people's lives. Although there is no clear regulation stipulating a standard of information disclosure for blockchain projects, current blockchain project white papers contain three main parts: project introduction, fund-raising plan, and team plan.
submitted by CEO_Global to u/CEO_Global [link] [comments]

Satoshi: "Any needed rules and incentives can be enforced with this consensus mechanism"

We have [constructed] a system for electronic transactions without relying on trust.1
In [the white paper], we propose[d] a solution to the double-spending problem using a peer-to-peer distributed timestamp server to generate computational proof of the chronological order of transactions. The system is secure as long as honest nodes collectively control more CPU power than any cooperating group of attacker nodes.2
We started with the usual framework of coins made from digital signatures, which provides strong control of ownership, but is incomplete without a way to prevent double-spending.
To solve this, we proposed a peer-to-peer network using proof-of-work to record a public history of transactions that quickly becomes computationally impractical for an attacker to change if honest nodes control a majority of CPU power.
The network is robust in its unstructured simplicity.
Any needed rules and incentives can be enforced with this consensus mechanism.3
Mmmm. I don't know if I'm comfortable with that. You're saying there's no effort to identify and exclude nodes that don't cooperate? I suspect this will lead to trouble and possible DOS attacks.
There is no reliance on identifying anyone. As you've said, it's futile and can be trivially defeated with sock puppets.
The credential that establishes someone as real is the ability to supply [hash] power.4
Until.... until what? How does anybody know when a transaction has become irrevocable? Is "a few" blocks three? Thirty? A hundred? Does it depend on the number of nodes? Is it logarithmic or linear in number of nodes?
Section 11 calculates the worst case under attack. Typically, 5 or 10 blocks is enough for that. If you're selling something that doesn't merit a network-scale attack to steal it, in practice you could cut it closer.5
Redditor's note: The consensus mechanism also includes, for example, checking that every transaction is itself "valid" rather than counterfeit, but this is fully implied in the quotes above. This is likely why Satoshi focused only on the most fundamental parts in the final section of the Bitcoin white paper.
submitted by fruitsofknowledge to btc [link] [comments]

Detailed explanation of BitMEX pending order strategy

article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++) https://www.fmz.com/bbs-topic/2710
BitMEX has become the platform of choice for leveraged cryptocurrency trading, but its API restrictions are strict and often leave automated traders confused. This article shares some tips on using the API on the FMZ quantitative trading platform, mainly for market-making strategies.

1. Features of BitMEX

Its most significant advantage is very active liquidity, especially on the Bitcoin perpetual contract: the volume traded per minute often exceeds one million, or even ten million, US dollars. BitMEX also pays a rebate on pending (maker) orders; it is not much, but it has attracted a large number of market makers, which makes the order book very deep — the best bid and ask often carry more than a million dollars' worth of pending orders. Because of this, the price often fluctuates around the minimum tick size of $0.50.

2.BitMEX API frequency limit

The request frequency of the REST API is limited to 300 requests every 5 minutes, which averages out to one per second; this limit is very strict compared with other trading platforms. After the limit is exceeded, 'Rate limit exceeded' is returned. If you keep exceeding the limit, the IP may be disabled for one hour, and multiple disables in a short time result in a week-long ban. With each API response, BitMEX returns header data showing the current number of remaining requests. In practice, if the API is used properly, you will not exceed the frequency limit and generally do not need to check it.
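If you do want a safety margin, the budget can be tracked client-side with a sliding window. A hedged sketch of the accounting only (not FMZ API code; the class and its names are mine):

```python
import time

class RateBudget:
    """Track a 300-requests-per-5-minutes budget on the client side."""
    def __init__(self, limit=300, window=300.0):
        self.limit, self.window = limit, window
        self.stamps = []                       # send times inside the window

    def ready(self, now=None):
        now = time.time() if now is None else now
        # Drop timestamps that have aged out of the window.
        self.stamps = [t for t in self.stamps if now - t < self.window]
        return len(self.stamps) < self.limit

    def record(self, now=None):
        self.stamps.append(time.time() if now is None else now)

budget = RateBudget()
# Before each REST call: if budget.ready(): send it and budget.record();
# otherwise sleep or skip this cycle.
```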

3.Use websocket to get the market quote

The BitMEX REST API is restrictive, so the official recommendation is to use the websocket protocol, which pushes more data types than the average exchange. Pay attention to the following points:
If depth data is pushed for a long time it can drift from the real order book — most likely some depth changes are dropped from the push — but in general, thanks to the excellent liquidity, you can subscribe to "ticker" or "trades" instead. The order-detail push misses a lot and is almost unusable. Account information is pushed with a significant delay, so the REST API is preferable for it. When the market is very volatile, push delays can reach a few seconds. The following code uses the websocket protocol to obtain market and account information in real time, mainly for market-making strategies. It should be run inside the main() function.
var ticker = {price:0, buy:0, sell:0, time:0} //Ticker information: the latest price, "buy one" price, "sell one" price, update time
//Account information: position, buying and selling price, buying and selling quantity, order status, order ids
var info = {position:0, buyPrice:0, sellPrice:0, buyAmount:0, sellAmount:0, buyState:0, sellState:0, buyId:0, sellId:0}
var buyListId = [] //Global variables, pre-emptive buying id list, described below
var sellListId = []
var APIKEY = 'your api id' //Fill in the BitMEX API ID here. Note that it is not the key; it is required for websocket protocol authentication.
var expires = parseInt(Date.now() / 1000) + 10
var signature = exchange.HMAC("sha256", "hex", "GET/realtime" + expires, "{{secretkey}}") //The secretkey is substituted automatically at the bottom level and does not need to be filled in.
var bitmexClient = Dial("wss://www.bitmex.com/realtime", 60)
var auth = JSON.stringify({args: [APIKEY, expires, signature], op: "authKeyExpires"}) //Authentication information; without it you cannot subscribe to the account
bitmexClient.write(auth)
bitmexClient.write('{"op": "subscribe", "args": ["position","execution","trade:XBTUSD"]}') //Subscribe to positions, order execution and perpetual contract real-time trades
while(true){
    var data = bitmexClient.read()
    if(data){
        bitmexData = JSON.parse(data)
        if('table' in bitmexData && bitmexData.table == 'trade'){
            data = bitmexData.data
            ticker.price = parseFloat(data[data.length-1].price) //The latest transaction price; several trades may be pushed at once, taking the last one is ok
            //You can derive the "buy one" and "sell one" price from the direction of the latest trade, without subscribing to the depth.
            if(data[data.length-1].side == 'Buy'){
                ticker.sell = parseFloat(data[data.length-1].price)
                ticker.buy = parseFloat(data[data.length-1].price) - 0.5
            }else{
                ticker.buy = parseFloat(data[data.length-1].price)
                ticker.sell = parseFloat(data[data.length-1].price) + 0.5
            }
            ticker.time = new Date(data[data.length-1].timestamp) //Update time, can be used to measure the delay
        }else if(bitmexData.table == 'position'){
            var position = parseInt(bitmexData.data[0].currentQty)
            if(position != info.position){
                Log('Position change: ', position, info.position, '#[email protected]') //Log position changes and push to WeChat; remove the trailing @ to not push
                info.position = position
            }
            info.position = parseInt(bitmexData.data[0].currentQty)
        }
    }
}

4. Placing order skills

BitMEX officially recommends using "bulk ordering" and "order modification" to place orders. "Bulk ordering" executes faster because the real-time auditing, risk checking, margin calculation, and commission calculation are done once per batch; accordingly, a bulk-order request counts as only one tenth of a normal request against the frequency limit. Our order operations should therefore use "bulk ordering" and "order modification" to minimize API usage. Querying order status also consumes API budget; order status can instead be inferred from position changes or from order-modification failures.
"Bulk ordering" does not limit the number of orders (though it can't be too many); in fact, a single order can also be placed through the "bulk ordering" interface. Because orders can be modified, we can "pre-order" some orders at prices that deviate greatly from the market; these orders will not execute, but when we need to place a real order we only need to modify the price and quantity of an already placed order. A failed modification can also serve as a signal that the order has been executed.
The following is the specific implementation code:
// Cancel all orders and reset global variables
function cancelAll(){
    exchange.IO("api", "DELETE", "/api/v1/order/all", "symbol=XBTUSD") //Call the IO extension to cancel all orders
    info = {position:0, buyPrice:0, sellPrice:0, buyAmount:0, sellAmount:0, buyState:0, sellState:0, buyId:0, sellId:0}
    buyListId = []
    sellListId = []
}
//Place alternate ("pre-ordered") orders
function waitOrders(){
    var orders = []
    if(buyListId.length < 4){
        //When there are not enough alternates left, place another "bulk"
        for(var i = 0; i < 7; i++){
            //Due to BitMEX restrictions, the price cannot deviate too much and the order quantity cannot be too small;
            //the "execInst" parameter guarantees the order will only ever execute as a maker trade.
            orders.push({symbol:'XBTUSD', side:'Buy', orderQty:100, price:ticker.buy-400+i, execInst:'ParticipateDoNotInitiate'})
        }
    }
    if(sellListId.length < 4){
        for(var i = 0; i < 7; i++){
            orders.push({symbol:'XBTUSD', side:'Sell', orderQty:100, price:ticker.buy+400+i, execInst:'ParticipateDoNotInitiate'})
        }
    }
    if(orders.length > 0){
        var param = "orders=" + JSON.stringify(orders)
        var ids = exchange.IO("api", "POST", "/api/v1/order/bulk", param) //Bulk orders submitted here
        for(var i = 0; i < ids.length; i++){
            //Record the returned order ids in buyListId / sellListId
            //(this loop body was garbled in the original post)
        }
    }
}
//Modify an order; on failure, branch on the error text.
//(The opening of this function was garbled in the original post and is
//reconstructed here from context.)
function amendOrders(order, direction, group, price, amount, id){
    var ret = exchange.IO("api", "PUT", "/api/v1/order/bulk", "orders=" + JSON.stringify(order))
    var err = ret ? '' : GetLastError()
    //A position-related error: refresh the position over REST
    if(err.includes('position')){
        var pos = exchange.GetPosition()
        if(pos.length > 0){
            info.position = pos[0].Type == 0 ? pos[0].Amount : -pos[0].Amount
        }else{
            info.position = 0
        }
    }
    //Unknown error, cannot be modified: cancel all orders and reset once
    else if(err.includes('Invalid orderID')){
        cancelAll()
        Log('Invalid orderID, reset once')
    }
    //Exceeded the frequency limit: sleep, then keep trying
    else if(err.includes('Rate limit exceeded')){
        Sleep(2000)
        return
    }
    //The account is banned: cancel all orders and sleep a long time awaiting recovery
    else if(err.includes('403 Forbidden')){
        cancelAll()
        Log('403, reset once')
        Sleep(5*60*1000)
    }else{
        //Order modified successfully
        if(direction == 'buy'){
            info.buyState = 1
            info.buyPrice = price
            info.buyAmount = amount
        }else{
            info.sellState = 1
            info.sellPrice = price
            info.sellAmount = amount
        }
    }
}
//Round to the $0.50 tick size
function fixSize(num){
    if(num >= _N(num,0)+0.75){
        num = _N(num,0)+1
    }else if(num >= _N(num,0)+0.5){
        num = _N(num,0)+0.5
    }else{
        num = _N(num,0)
    }
    return num
}
//Trading function
function trade(){
    waitOrders() //Check whether alternates need replacing
    var buyPrice = fixSize(ticker.buy-5) //For demonstration purposes only; write your own pricing logic
    var sellPrice = fixSize(ticker.sell+5)
    var buyAmount = 500
    var sellAmount = 500
    //When there is no working order, modify one of the alternates
    if(info.buyState == 0 && buyListId.length > 0){
        info.buyId = buyListId.shift()
        amendOrders([{orderID:info.buyId, price:buyPrice, orderQty:buyAmount}], 'buy', group, buyPrice, buyAmount, info.buyId)
    }
    if(info.sellState == 0 && sellListId.length > 0){
        info.sellId = sellListId.shift()
        amendOrders([{orderID:info.sellId, price:sellPrice, orderQty:sellAmount}], 'sell', group, sellPrice, sellAmount, info.sellId)
    }
    //Working orders whose price needs to change
    if(buyPrice != info.buyPrice && info.buyState == 1){
        amendOrders([{orderID:info.buyId, price:buyPrice, orderQty:buyAmount}], 'buy', group, buyPrice, buyAmount)
    }
    if(sellPrice != info.sellPrice && info.sellState == 1){
        amendOrders([{orderID:info.sellId, price:sellPrice, orderQty:sellAmount}], 'sell', group, sellPrice, sellAmount)
    }
}

5. Others

BitMEX's servers are hosted on Amazon's infrastructure in Dublin, Ireland. If you run your strategy on an AWS cloud server in Dublin, the ping is under 1 ms; even so, push delays during overload cannot be avoided. In addition, do not log in to the account through a server or proxy located in the United States or other places where cryptocurrency trading is not allowed — due to regulation, the account will be banned.
The code in this article has been modified from my personal strategy and is not guaranteed to be completely correct; it is for reference only. The market-data code should be executed in the main function, trading-related code should be placed before the main function, and the trade() function should be called on each pushed market quote.
submitted by FmzQuant to BitMEX [link] [comments]

Merkle Trees and Mountain Ranges - Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments

Original link: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012715.html
Unedited text and originally written by:

Peter Todd pete at petertodd.org
Tue May 17 13:23:11 UTC 2016
Previous message: [bitcoin-dev] Bip44 extension for P2SH/P2WSH/...
Next message: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments
Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
# Motivation

UTXO growth is a serious concern for Bitcoin's long-term decentralization. To
run a competitive mining operation potentially the entire UTXO set must be in
RAM to achieve competitive latency; your larger, more centralized, competitors
will have the UTXO set in RAM. Mining is a zero-sum game, so the extra latency
of not doing so if they do directly impacts your profit margin. Secondly,
having possession of the UTXO set is one of the minimum requirements to run a
full node; the larger the set the harder it is to run a full node.

Currently the maximum size of the UTXO set is unbounded as there is no
consensus rule that limits growth, other than the block-size limit itself; as
of writing the UTXO set is 1.3GB in the on-disk, compressed serialization,
which expands to significantly more in memory. UTXO growth is driven by a
number of factors, including the fact that there is little incentive to merge
inputs, lost coins, dust outputs that can't be economically spent, and
non-btc-value-transfer "blockchain" use-cases such as anti-replay oracles and
timestamping.

We don't have good tools to combat UTXO growth. Segregated Witness proposes to
give witness space a 75% discount, in part to make reducing the UTXO set size
by spending txouts cheaper. While this may change wallets to more often spend
dust, it's hard to imagine an incentive sufficiently strong to discourage most,
let alone all, UTXO growing behavior.

For example, timestamping applications often create unspendable outputs due to
ease of implementation, and because doing so is an easy way to make sure that
the data required to reconstruct the timestamp proof won't get lost - all
Bitcoin full nodes are forced to keep a copy of it. Similarly anti-replay
use-cases like using the UTXO set for key rotation piggyback on the uniquely
strong security and decentralization guarantee that Bitcoin provides; it's very
difficult - perhaps impossible - to provide these applications with
alternatives that are equally secure. These non-btc-value-transfer use-cases
can often afford to pay far higher fees per UTXO created than competing
btc-value-transfer use-cases; many users could afford to spend $50 to register
a new PGP key, yet would rather not spend $50 in fees to create a standard two
output transaction. Effective techniques to resist miner censorship exist, so
without resorting to whitelists blocking non-btc-value-transfer use-cases as
"spam" is not a long-term, incentive compatible, solution.

A hard upper limit on UTXO set size could create a more level playing field in
the form of fixed minimum requirements to run a performant Bitcoin node, and
make the issue of UTXO "spam" less important. However, making any coins
unspendable, regardless of age or value, is a politically untenable economic
change.


# TXO Commitments

With a merkle tree committing to the state of all transaction outputs, both
spent and unspent, we can provide a method of compactly proving the current
state of an output. This lets us "archive" less frequently accessed parts of the
UTXO set, allowing full nodes to discard the associated data while still
providing a mechanism to spend those archived outputs by proving to those nodes
that the outputs are in fact unspent.

Specifically, TXO commitments propose using a Merkle Mountain Range¹ (MMR), a
type of deterministic, indexable, insertion ordered merkle tree, which allows
new items to be cheaply appended to the tree with minimal storage requirements,
just log2(n) "mountain tips". Once an output is added to the TXO MMR it is
never removed; if an output is spent its status is updated in place. Both the
state of a specific item in the MMR, as well the validity of changes to items
in the MMR, can be proven with log2(n) sized proofs consisting of a merkle path
to the tip of the tree.

At an extreme, with TXO commitments we could even have no UTXO set at all,
entirely eliminating the UTXO growth problem. Transactions would simply be
accompanied by TXO commitment proofs showing that the outputs they wanted to
spend were still unspent; nodes could update the state of the TXO MMR purely
from TXO commitment proofs. However, the log2(n) bandwidth overhead per txin is
substantial, so a more realistic implementation is to have a UTXO cache for
recent transactions, with TXO commitments acting as an alternate for the (rare)
event that an old txout needs to be spent.

Proofs can be generated and added to transactions without the involvement of
the signers, even after the fact; there's no need for the proof itself to be
signed and the proof is not part of the transaction hash. Anyone with access to
TXO MMR data can (re)generate missing proofs, so minimal, if any, changes are
required to wallet software to make use of TXO commitments.


## Delayed Commitments

TXO commitments aren't a new idea - the author proposed them years ago in
response to UTXO commitments. However it's critical for small miners' orphan
rates that block validation be fast, and so far it has proven difficult to
create (U)TXO implementations with acceptable performance; updating and
recalculating cryptographically hashed merkleized datasets is inherently more
work than not doing so. Fortunately if we maintain a UTXO set for recent
outputs, TXO commitments are only needed when spending old, archived, outputs.
We can take advantage of this by delaying the commitment, allowing it to be
calculated well in advance of it actually being used, thus changing a
latency-critical task into a much easier average throughput problem.

Concretely each block B_i commits to the TXO set state as of block B_{i-n}, in
other words what the TXO commitment would have been n blocks ago, if not for
the n block delay. Since that commitment only depends on the contents of the
blockchain up until block B_{i-n}, the contents of any block after are
irrelevant to the calculation.
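
The indexing rule can be stated as a one-liner (the function name is mine, not the post's):

```javascript
// With an n-block delay, block i commits to the TXO set state as of block
// i - n, so the digest being committed never depends on the latest n blocks.
function committedStateHeight(blockHeight, delay) {
  return blockHeight >= delay ? blockHeight - delay : null; // no commitment in the first n blocks
}
```

A node therefore has n full block intervals to compute each digest in the background.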


## Implementation

Our proposed high-performance/low-latency delayed commitment full-node
implementation needs to store the following data:

1) UTXO set

Low-latency K:V map of txouts definitely known to be unspent. Similar to
the existing UTXO implementation, but with the key difference that old,
unspent, outputs may be pruned from the UTXO set.


2) STXO set

Low-latency set of transaction outputs known to have been spent by
transactions after the most recent TXO commitment, but created prior to the
TXO commitment.


3) TXO journal

FIFO of outputs that need to be marked as spent in the TXO MMR. Appends
must be low-latency; removals can be high-latency.


4) TXO MMR list

Prunable, ordered list of TXO MMR's, mainly the highest pending commitment,
backed by a reference counted, cryptographically hashed object store
indexed by digest (similar to how git repos work). High-latency ok. We'll
cover this in more detail later.


### Fast-Path: Verifying a Txout Spend In a Block

When a transaction output is spent by a transaction in a block we have two
cases:

1) Recently created output

Output created after the most recent TXO commitment, so it should be in the
UTXO set; the transaction spending it does not need a TXO commitment proof.
Remove the output from the UTXO set and append it to the TXO journal.

2) Archived output

Output created prior to the most recent TXO commitment, so there's no
guarantee it's in the UTXO set; transaction will have a TXO commitment
proof for the most recent TXO commitment showing that it was unspent.
Check that the output isn't already in the STXO set (double-spent), and if
not add it. Append the output and TXO commitment proof to the TXO journal.

In both cases recording an output as spent requires no more than two key:value
updates, and one journal append. The existing UTXO set requires one key:value
update per spend, so we can expect new block validation latency to be within 2x
of the status quo even in the worst case of 100% archived output spends.
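
A sketch of those two cases in code (store names, the journal format, and the skipped proof verification are all illustrative simplifications, not the post's specification):

```javascript
const utxoSet = new Map();  // outpoint -> txout (recent, definitely unspent)
const stxoSet = new Set();  // archived outpoints spent since the last commitment
const txoJournal = [];      // FIFO of spends pending application to the TXO MMR

// Returns false on a detected double-spend or a missing proof.
// Real proof verification against the committed MMR root is omitted here.
function applySpend(outpoint, txoProof) {
  if (utxoSet.has(outpoint)) {              // case 1: recently created output
    utxoSet.delete(outpoint);
    txoJournal.push({ outpoint });          // one K:V update + one journal append
    return true;
  }
  if (!txoProof) return false;              // case 2 requires a TXO commitment proof
  if (stxoSet.has(outpoint)) return false;  // already in STXO set: double-spend
  stxoSet.add(outpoint);                    // second K:V update
  txoJournal.push({ outpoint, txoProof });
  return true;
}
```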


### Slow-Path: Calculating Pending TXO Commitments

In a low-priority background task we flush the TXO journal, recording the
outputs spent by each block in the TXO MMR, and hashing MMR data to obtain the
TXO commitment digest. Additionally this background task removes STXO's that
have been recorded in TXO commitments, and prunes TXO commitment data no longer
needed.

Throughput for the TXO commitment calculation will be worse than the existing
UTXO only scheme. This impacts bulk verification, e.g. initial block download.
That said, TXO commitments provide other possible tradeoffs that can mitigate
the impact of slower validation throughput, such as skipping validation of old
history, as well as fraud proof approaches.


### TXO MMR Implementation Details

Each TXO MMR state is a modification of the previous one with most information
shared, so we can space-efficiently store a large number of TXO commitment
states, where each state is a small delta of the previous state, by sharing
unchanged data between states; cycles are impossible in merkleized data
structures, so simple reference counting is sufficient for garbage collection.
Data no longer needed can be pruned by dropping it from the database, and
unpruned by adding it again. Since everything is committed to via cryptographic
hash, we're guaranteed that regardless of where we get the data, after
unpruning we'll have the right data.
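
A toy version of such a reference-counted, content-addressed store (all names hypothetical; a real implementation would re-hash unpruned data to check it matches its digest key):

```javascript
// Git-like object store for TXO MMR nodes: content-addressed by digest,
// reference counted by parents. Pruning drops the bytes; unpruning restores
// them, and the hash key guarantees restored data is the right data.
class NodeStore {
  constructor() { this.objs = new Map(); }              // digest -> {data, refs}
  put(digest, data) { this.objs.set(digest, { data, refs: 1 }); }
  addRef(digest) { this.objs.get(digest).refs += 1; }
  release(digest) {                                     // refcount GC: no cycles possible
    const o = this.objs.get(digest);
    if (--o.refs === 0) this.objs.delete(digest);
  }
  prune(digest) { this.objs.get(digest).data = null; }          // keep entry, drop bytes
  unprune(digest, data) { this.objs.get(digest).data = data; }  // re-add the same bytes
}
```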

Let's look at how the TXO MMR works in detail. Consider the following TXO MMR
with two txouts, which we'll call state #0:

      0
     / \
    a   b

If we add another entry we get state #1:

        1
       / \
      0   \
     / \   \
    a   b   c

Note how 100% of the state #0 data was reused in commitment #1. Let's
add two more entries to get state #2:

            2
           / \
          2   \
         / \   \
        /   \   \
       /     \   \
      0       2   \
     / \     / \   \
    a   b   c   d   e

This time part of state #1 wasn't reused - it wasn't a perfect binary
tree - but we've still got a lot of re-use.

Now suppose state #2 is committed into the blockchain by the most recent block.
Future transactions attempting to spend outputs created as of state #2 are
obliged to prove that they are unspent; essentially they're forced to provide
part of the state #2 MMR data. This lets us prune that data, discarding it,
leaving us with only the bare minimum data we need to append new txouts to the
TXO MMR, the tips of the perfect binary trees ("mountains") within the MMR:

            2
           / \
          2   \
               \
                \
                 \
                  \
                   \
                    e

Note that we're glossing over some nuance here about exactly what data needs to
be kept; depending on the details of the implementation the only data we need
for nodes "2" and "e" may be their hash digest.

Adding another three more txouts results in state #3:

                    3
                   / \
                  /   \
                 /     \
                /       \
               /         \
              /           \
             /             \
            2               3
                           / \
                          /   \
                         /     \
                        3       3
                       / \     / \
                      e   f   g   h

Suppose recently created txout f is spent. We have all the data required to
update the MMR, giving us state #4. It modifies two inner nodes and one leaf
node:

                    4
                   / \
                  /   \
                 /     \
                /       \
               /         \
              /           \
             /             \
            2               4
                           / \
                          /   \
                         /     \
                        4       3
                       / \     / \
                      e  (f)  g   h

Spending an archived txout requires the transaction to provide the merkle
path to the most recently committed TXO, in our case state #2. If txout b is
spent that means the transaction must provide the following data from state #2:

            2
           /
          2
         /
        /
       /
      0
       \
        b

We can add that data to our local knowledge of the TXO MMR, unpruning part of
it:

                    4
                   / \
                  /   \
                 /     \
                /       \
               /         \
              /           \
             /             \
            2               4
           /               / \
          /               /   \
         /               /     \
        0               4       3
         \             / \     / \
          b           e  (f)  g   h

Remember, we haven't _modified_ state #4 yet; we just have more data about it.
When we mark txout b as spent we get state #5:

                    5
                   / \
                  /   \
                 /     \
                /       \
               /         \
              /           \
             /             \
            5               4
           /               / \
          /               /   \
         /               /     \
        5               4       3
         \             / \     / \
         (b)          e  (f)  g   h

Secondly by now state #3 has been committed into the chain, and transactions
that want to spend txouts created as of state #3 must provide a TXO proof
consisting of state #3 data. The leaf nodes for outputs g and h, and the inner
node above them, are part of state #3, so we prune them:

                    5
                   / \
                  /   \
                 /     \
                /       \
               /         \
              /           \
             /             \
            5               4
           /               /
          /               /
         /               /
        5               4
         \             / \
         (b)          e  (f)

Finally, let's put this all together, by spending txouts a, c, and g, and
creating three new txouts i, j, and k. State #3 was the most recently committed
state, so the transactions spending a and g are providing merkle paths up to
it. This includes part of the state #2 data:

                    3
                   / \
                  /   \
                 /     \
                /       \
               /         \
              /           \
             /             \
            2               3
           / \               \
          /   \               \
         /     \               \
        0       2               3
       /       /               /
      a       c               g

After unpruning we have the following data for state #5:

                    5
                   / \
                  /   \
                 /     \
                /       \
               /         \
              /           \
             /             \
            5               4
           / \             / \
          /   \           /   \
         /     \         /     \
        5       2       4       3
       / \     /       / \     /
      a  (b)  c       e  (f)  g

That's sufficient to mark the three outputs as spent and add the three new
txouts, resulting in state #6:

                                6
                               / \
                              /   \
                             /     \
                            /       \
                           /         \
                          6           \
                         / \           \
                        /   \           \
                       /     \           \
                      /       \           \
                     /         \           \
                    /           \           \
                   /             \           \
                  6               6           \
                 / \             / \           \
                /   \           /   \           6
               /     \         /     \         / \
              6       6       4       6       6   \
             / \     /       / \     /       / \   \
           (a) (b) (c)      e  (f) (g)      i   j   k

Again, state #4 related data can be pruned. In addition, depending on how the
STXO set is implemented, we may also be able to prune data related to spent
txouts after that state, including inner nodes where all txouts under them have
been spent (more on pruning spent inner nodes later).


### Consensus and Pruning

It's important to note that pruning behavior is consensus critical: a full node
that is missing data due to pruning it too soon will fall out of consensus, and
a miner that fails to include a merkle proof that is required by the consensus
is creating an invalid block. At the same time many full nodes will have
significantly more data on hand than the bare minimum so they can help wallets
make transactions spending old coins; implementations should strongly consider
separating the data that is, and isn't, strictly required for consensus.

A reasonable approach for the low-level cryptography may be to actually treat
the two cases differently, with the TXO commitments committing to what data
does and does not need to be kept on hand by the UTXO expiration rules. On the
other hand, leaving that uncommitted allows for certain types of soft-forks
where the protocol is changed to require more data than it previously did.


### Consensus Critical Storage Overheads

Only the UTXO and STXO sets need to be kept on fast random access storage.
Since STXO set entries can only be created by spending a UTXO - and are smaller
than a UTXO entry - we can guarantee that the peak size of the UTXO and STXO
sets combined will always be less than the peak size of the UTXO set alone in
the existing UTXO-only scheme (though the combined size can be temporarily
higher than what the UTXO set size alone would be when large numbers of
archived txouts are spent).

TXO journal entries and unpruned entries in the TXO MMR have log2(n) maximum
overhead per entry: a unique merkle path to a TXO commitment (by "unique" we
mean that no other entry shares data with it). On a reasonably fast system the
TXO journal will be flushed quickly, converting it into TXO MMR data; the TXO
journal will never be more than a few blocks in size.

Transactions spending non-archived txouts are not required to provide any TXO
commitment data; we must have that data on hand in the form of one TXO MMR
entry per UTXO. Once spent however the TXO MMR leaf node associated with that
non-archived txout can be immediately pruned - it's no longer in the UTXO set
so any attempt to spend it will fail; the data is now immutable and we'll never
need it again. Inner nodes in the TXO MMR can also be pruned if all leaves under
them are fully spent; detecting this is easy: the TXO MMR is a merkle-sum tree,
with each inner node committing to the sum of the unspent txouts under it.

When an archived txout is spent the transaction is required to provide a merkle
path to the most recent TXO commitment. As shown above that path is sufficient
information to unprune the necessary nodes in the TXO MMR and apply the spend
immediately, reducing this case to the TXO journal size question (non-consensus
critical overhead is a different question, which we'll address in the next
section).

Taking all this into account the only significant storage overhead of our TXO
commitments scheme when compared to the status quo is the log2(n) merkle path
overhead; as long as less than 1/log2(n) of the UTXO set consists of active,
non-archived UTXO's, we've come out ahead, even in the unrealistic case where
all storage available is equally fast. In the real world that isn't yet the
case - even SSDs are significantly slower than RAM.


### Non-Consensus Critical Storage Overheads

Transactions spending archived txouts pose two challenges:

1) Obtaining up-to-date TXO commitment proofs

2) Updating those proofs as blocks are mined

The first challenge can be handled by specialized archival nodes, not unlike
how some nodes make transaction data available to wallets via bloom filters or
the Electrum protocol. There's a whole variety of options available, and the
the data can be easily sharded to scale horizontally; the data is
self-validating allowing horizontal scaling without trust.

While miners and relay nodes don't need to be concerned about the initial
commitment proof, updating that proof is another matter. If a node aggressively
prunes old versions of the TXO MMR as it calculates pending TXO commitments, it
won't have the data available to update the TXO commitment proof to be against
the next block, when that block is found; the child nodes of the TXO MMR tip
are guaranteed to have changed, yet aggressive pruning would have discarded that
data.

Relay nodes could ignore this problem if they simply accept the fact that
they'll only be able to fully relay the transaction once, when it is initially
broadcast, and won't be able to provide mempool functionality after the initial
relay. Modulo high-latency mixnets, this is probably acceptable; the author has
previously argued that relay nodes don't need a mempool² at all.

For a miner though not having the data necessary to update the proofs as blocks
are found means potentially losing out on transactions fees. So how much extra
data is necessary to make this a non-issue?

Since the TXO MMR is insertion ordered, spending a non-archived txout can only
invalidate the upper nodes of the archived txout's TXO MMR proof (if this
isn't clear, imagine a two-level scheme, with per-block TXO MMRs, committed
by a master MMR for all blocks). The maximum number of relevant inner nodes
changed is log2(n) per block, so if there are n non-archival blocks between the
most recent TXO commitment and the pending TXO MMR tip, we have to store
log2(n)*n inner nodes - on the order of a few dozen MB even when n is a
(seemingly ridiculously high) year worth of blocks.
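
A quick back-of-the-envelope check of that figure, assuming 32-byte digests and one block per 10 minutes on average:

```javascript
// Worst-case inner-node storage for a year-long commitment delay.
const blocksPerYear = 365 * 24 * 6;                                // 52,560 blocks
const maxInnerNodesPerBlock = Math.ceil(Math.log2(blocksPerYear)); // log2(n) = 16
const totalBytes = blocksPerYear * maxInnerNodesPerBlock * 32;     // ~27 MB
```

which is indeed a few dozen MB.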

Archived txout spends on the other hand can invalidate TXO MMR proofs at any
level - consider the case of two adjacent txouts being spent. To guarantee
success requires storing full proofs. However, they're limited by the blocksize
limit, and additionally are expected to be relatively uncommon. For example, if
1% of 1MB blocks was archival spends, our hypothetical year long TXO commitment
delay is only a few hundred MB of data with low-IO-performance requirements.


## Security Model

Of course, a TXO commitment delay of a year sounds ridiculous. Even the slowest
imaginable computer isn't going to need more than a few blocks of TXO
commitment delay to keep up ~100% of the time, and there's no reason why we
can't have the UTXO archive delay be significantly longer than the TXO
commitment delay.

However, as with UTXO commitments, TXO commitments raise issues with Bitcoin's
security model by allowing miners to profitably mine transactions without
bothering to validate prior history. At the extreme, if there were no commitment
delay at all, then at the cost of some extra network bandwidth "full" nodes
could operate and even mine blocks completely statelessly by expecting all
transactions to include "proof" that their inputs are unspent; a TXO commitment
proof for a commitment you haven't verified isn't a proof that a transaction
output is unspent, it's a proof that some miners claimed the txout was unspent.

At one extreme, we could simply implement TXO commitments in a "virtual"
fashion, without miners actually including the TXO commitment digest in their
blocks at all. Full nodes would be forced to compute the commitment from
scratch, in the same way they are forced to compute the UTXO state, or total
work. Of course a full node operator who doesn't want to verify old history can
get a copy of the TXO state from a trusted source - no different from how you
could get a copy of the UTXO set from a trusted source.

A more pragmatic approach is to accept that people will do that anyway, and
instead assume that sufficiently old blocks are valid. But how old is
"sufficiently old"? First of all, if your full node implementation comes "from
the factory" with a reasonably up-to-date minimum accepted total-work
thresholdⁱ - in other words it won't accept a chain with less than that amount
of total work - it may be reasonable to assume any Sybil attacker with
sufficient hashing power to make a forked chain meeting that threshold with,
say, six months worth of blocks has enough hashing power to threaten the main
chain as well.

That leaves public attempts to falsify TXO commitments, done out in the open by
the majority of hashing power. In this circumstance the "assumed valid"
threshold determines how long the attack would have to go on before full nodes
start accepting the invalid chain, or at least, newly installed/recently reset
full nodes. The minimum age that we can "assume valid" is a tradeoff between
political/social/technical concerns; we probably want at least a few weeks to
guarantee the defenders a chance to organise themselves.

With this in mind, a longer-than-technically-necessary TXO commitment delayʲ
may help ensure that full node software actually validates some minimum number
of blocks out-of-the-box, without taking shortcuts. However this can be
achieved in a wide variety of ways, such as the author's prev-block-proof
proposal³, fraud proofs, or even a PoW with an inner loop dependent on
blockchain data. Like UTXO commitments, TXO commitments are also potentially
very useful in reducing the need for SPV wallet software to trust third parties
providing them with transaction data.

i) Checkpoints that reject any chain without a specific block are a more
common, if uglier, way of achieving this protection.

j) A good homework problem is to figure out how the TXO commitment could be
designed such that the delay could be reduced in a soft-fork.


## Further Work

While we've shown that TXO commitments certainly could be implemented without
increasing peak IO bandwidth/block validation latency significantly with the
delayed commitment approach, we're far from being certain that they should be
implemented this way (or at all).

1) Can a TXO commitment scheme be optimized sufficiently to be used directly
without a commitment delay? Obviously it'd be preferable to avoid all the above
complexity entirely.

2) Is it possible to use a metric other than age, e.g. priority? While this
complicates the pruning logic, it could use the UTXO set space more
efficiently, especially if your goal is to prioritise bitcoin value-transfer
over other uses (though if "normal" wallets nearly never need to use TXO
commitments proofs to spend outputs, the infrastructure to actually do this may
rot).

3) Should UTXO archiving be based on a fixed size UTXO set, rather than an
age/priority/etc. threshold?

4) By fixing the problem (or possibly just "fixing" the problem) are we
encouraging/legitimising blockchain use-cases other than BTC value transfer?
Should we?

5) Instead of TXO commitment proofs counting towards the blocksize limit, can
we use a different miner fairness/decentralization metric/incentive? For
instance it might be reasonable for the TXO commitment proof size to be
discounted, or ignored entirely, if a proof-of-propagation scheme (e.g.
thinblocks) is used to ensure all miners have received the proof in advance.

6) How does this interact with fraud proofs? Obviously furthering dependency on
non-cryptographically-committed STXO/UTXO databases is incompatible with the
modularized validation approach to implementing fraud proofs.


# References

1) "Merkle Mountain Ranges",
Peter Todd, OpenTimestamps, Mar 18 2013,
https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md

2) "Do we really need a mempool? (for relay nodes)",
Peter Todd, bitcoin-dev mailing list, Jul 18th 2015,
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009479.html

3) "Segregated witnesses and validationless mining",
Peter Todd, bitcoin-dev mailing list, Dec 23rd 2015,
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012103.html

--
https://petertodd.org 'peter'[:-1]@petertodd.org
submitted by Godballz to CryptoTechnology

Polish Bank Verifies Documents With Ethereum Blockchain

Alior Bank, a bank based in Warsaw, Poland, is using the public Ethereum blockchain to authenticate its clients’ documents, according to a report by Forbes on June 17.

According to the report, when a client at Alior receives a document, they can now verify its authenticity by following a website link to its spot on the public blockchain. This means that customers can verify that the document in question was in fact issued, in the exact wording provided, when the bank claims. The blockchain technology lead at Alior, Piotr Adamczyk, explains:

“We know exactly in which block of Ethereum the document with a given hash is published. If we know the block number, we also know the timestamp [...] We know that the document was published some time ago and hasn’t been changed in that time [if the hash stored on the blockchain is identical to the hash calculated from the document], so we can prove it hasn’t been replaced on our servers.”

Alior reportedly developed this blockchain solution in response to changing regulations in Poland, where the Office of Competition and Consumer Protection ruled in 2017 that website pages do not constitute a “durable medium” necessary for issuing customer documents — the issue being that website pages are too easily changed, making them not suitably durable.

Thus, Alior came up with a blockchain solution that provides online documentation through a suitably durable medium. Moreover, Alior management reportedly believes it is the first bank to use a public, as opposed to a private, blockchain for customer service. Blockchain strategy lead at Alior Tomasz Sienicki commented:

“We want people to verify that we did everything right and we don’t conceal anything. If we say the documents are actually verified and authentic, everybody can check it and confirm [...] That’s not possible using a private blockchain.”
submitted by Bitcoin_Exchange7 to u/Bitcoin_Exchange7

Detailed explanation of BitMEX pending order strategy

article originally from FMZ.COM ( A place you can create your own trading bot by Python, JavaScript and C++)
BitMEX has become the platform of choice for cryptocurrency leverage trading, but its strict API trading restrictions leave many automated traders confused. This article shares some tips on using the BitMEX API on the FMZ quantitative trading platform, mainly for market making strategies.

1. Features of BitMEX

The most significant advantage is the very active liquidity, especially on the Bitcoin perpetual contract, where turnover per minute often exceeds one million or even ten million US dollars. BitMEX pays a rebate on pending (maker) orders; although it is small, it has attracted a large number of market makers, which makes the order book depth very rich: the best bid and ask often carry more than a million dollars' worth of pending orders. Because of this, the price often fluctuates around the minimum tick of $0.50.

2.BitMEX API frequency limit

The REST API is limited to 300 requests every 5 minutes — roughly one per second — which is very strict compared to other trading platforms. When the limit is exceeded, 'Rate limit exceeded' is returned; keep exceeding it and the IP may be banned for an hour, and repeated bans in a short time can lead to a week-long ban. Each BitMEX response includes headers reporting the number of remaining requests. In practice, if the API is used properly the limit is not hit and the headers rarely need checking.
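
One way to stay under that limit client-side is to budget requests over a sliding 5-minute window; the bookkeeping below is my own sketch, not part of BitMEX's API:

```javascript
// Client-side budget for the documented 300-requests-per-5-minutes REST limit.
class RestBudget {
  constructor(limit = 300, windowMs = 5 * 60 * 1000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.stamps = [];                // timestamps of requests in the current window
  }
  tryRequest(now = Date.now()) {
    this.stamps = this.stamps.filter(t => now - t < this.windowMs);
    if (this.stamps.length >= this.limit) return false; // back off instead of risking a ban
    this.stamps.push(now);
    return true;
  }
}
```

Before each REST call, check tryRequest() and sleep if it returns false.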

3.Use websocket to get the market quote

The BitMEX REST API is quite restrictive; the official recommendation is to rely on the websocket protocol, which pushes more data types than the average exchange. Pay attention to the following points:

- If the depth data hasn't been pushed for too long it can become wrong and no longer correspond to the real depth; presumably some depth changes are omitted from the push. In general, because liquidity is excellent, you can subscribe to "ticker" or "trades" instead.
- The order details push omits a great deal and is almost unusable.
- Account information is pushed with a significant delay; prefer the REST API for it.
- When the market is very volatile, push delay can reach a few seconds.

The following code uses the websocket protocol to obtain market and account information in real time, mainly for market-making strategies. It should be run inside the main() function.
var ticker = {price:0, buy:0, sell:0, time:0} // Ticker information: latest price, "buy one" (bid) price, "sell one" (ask) price, update time
// Account information: position, buy/sell price, buy/sell quantity, order state, order ids
var info = {position:0, buyPrice:0, sellPrice:0, buyAmount:0, sellAmount:0, buyState:0, sellState:0, buyId:0, sellId:0}
var buyListId = [] // Global variables: list of pre-placed buy order ids, described below
var sellListId = []
var APIKEY = 'your api id' // Fill in the BitMEX API ID here; note it is not the secret key. Required for websocket authentication.
var expires = parseInt(Date.now() / 1000) + 10
var signature = exchange.HMAC("sha256", "hex", "GET/realtime" + expires, "{{secretkey}}") // The secret key is substituted automatically and does not need to be filled in
var bitmexClient = Dial("wss://www.bitmex.com/realtime", 60)
var auth = JSON.stringify({args: [APIKEY, expires, signature], op: "authKeyExpires"}) // Authentication; without it the account channels cannot be subscribed
bitmexClient.write(auth)
bitmexClient.write('{"op": "subscribe", "args": ["position","execution","trade:XBTUSD"]}') // Subscribe to positions, order executions, and perpetual contract trades
while(true){
    var data = bitmexClient.read()
    if(data){
        bitmexData = JSON.parse(data)
        if('table' in bitmexData && bitmexData.table == 'trade'){
            data = bitmexData.data
            ticker.price = parseFloat(data[data.length-1].price) // Latest trade price; several trades may be pushed at once, taking one is fine
            // Infer the bid and ask from the direction of the latest trade, without subscribing to depth
            if(data[data.length-1].side == 'Buy'){
                ticker.sell = parseFloat(data[data.length-1].price)
                ticker.buy = parseFloat(data[data.length-1].price)-0.5
            }else{
                ticker.buy = parseFloat(data[data.length-1].price)
                ticker.sell = parseFloat(data[data.length-1].price)+0.5
            }
            ticker.time = new Date(data[data.length-1].timestamp) // Update time; can be used to gauge the delay
        }else if(bitmexData.table == 'position'){
            var position = parseInt(bitmexData.data[0].currentQty)
            if(position != info.position){
                Log('Position change: ', position, info.position, '#[email protected]') // Log the position change and push it to WeChat; remove the @ to disable the push
                info.position = position
            }
            info.position = parseInt(bitmexData.data[0].currentQty)
        }
    }
}

4. Placing order skills

BitMEX officially recommends placing orders through "bulk ordering" and "order amendment". Because BitMEX audits, risk-checks, margin-calculates, and commits a bulk order as a single batch, it executes faster, and a bulk request is counted at only one tenth of the normal rate-limit cost. Order operations should therefore use bulk placement and amendment wherever possible to minimize API usage. Querying order status also consumes rate limit; order status can instead be inferred from position changes or from a failed amendment.
"Bulk ordering" does not strictly limit the number of orders per request (though it cannot be excessive), and a single order can also be sent through the bulk interface. Because orders can be amended, we can "pre-place" standby orders at prices far from the market; these orders will not execute, but when we actually want to place an order we only need to amend the price and quantity of one of them. When an amendment fails, the failure can also serve as a signal that the order has been executed.
The following is the specific implementation code:
// Cancel all orders and reset the global state
function cancelAll(){
    exchange.IO("api", "DELETE", "/api/v1/order/all", "symbol=XBTUSD")  // cancel via the IO extension
    info = {position:0, buyPrice:0, sellPrice:0, buyAmount:0, sellAmount:0, buyState:0, sellState:0, buyId:0, sellId:0}
    buyListId = []
    sellListId = []
}

// Place standby orders far from the market
function waitOrders(){
    var orders = []
    if(buyListId.length < 4){  // when the standby orders run low, place another bulk
        for(var i = 0; i < 7; i++){
            // BitMEX restrictions: the price cannot deviate too far and the quantity cannot be too small;
            // the execInst parameter guarantees maker-only (post-only) execution
            orders.push({symbol:'XBTUSD', side:'Buy', orderQty:100, price:ticker.buy-400+i, execInst:'ParticipateDoNotInitiate'})
        }
    }
    if(sellListId.length < 4){
        for(var i = 0; i < 7; i++){
            orders.push({symbol:'XBTUSD', side:'Sell', orderQty:100, price:ticker.buy+400+i, execInst:'ParticipateDoNotInitiate'})
        }
    }
    if(orders.length > 0){
        var param = "orders=" + JSON.stringify(orders)
        var ids = exchange.IO("api", "POST", "/api/v1/order/bulk", param)  // submit the bulk order
        for(var i = 0; i < ids.length; i++){  // record the standby order ids by side
            if(ids[i].side == 'Buy'){
                buyListId.push(ids[i].orderID)
            }else{
                sellListId.push(ids[i].orderID)
            }
        }
    }
}

// Amend standby orders into live orders; a failed amendment doubles as an execution signal
function amendOrders(order, direction, price, amount){
    var param = "orders=" + JSON.stringify(order)
    var ret = exchange.IO("api", "PUT", "/api/v1/order/bulk", param)
    if(!ret){
        var err = GetLastError()
        // The order was already executed, so the amendment failed: refresh the position
        // (the matched error text here is an assumption; adjust it to the actual message)
        if(err.includes('Invalid ordStatus')){
            var pos = exchange.GetPosition()
            if(pos && pos.length > 0){
                info.position = pos[0].Type == 0 ? pos[0].Amount : -pos[0].Amount
            }else{
                info.position = 0
            }
        }
        // Unknown order id cannot be amended: cancel all orders and reset once
        else if(err.includes('Invalid orderID')){
            cancelAll()
            Log('Invalid orderID, reset once')
        }
        // Rate limit exceeded: sleep, then the caller can try again
        else if(err.includes('Rate limit exceeded')){
            Sleep(2000)
            return
        }
        // The account is banned: cancel all orders and sleep for a long time awaiting recovery
        else if(err.includes('403 Forbidden')){
            cancelAll()
            Log('403, reset once')
            Sleep(5*60*1000)
        }
    }else{
        // Amendment succeeded: record the live order
        if(direction == 'buy'){
            info.buyState = 1
            info.buyPrice = price
            info.buyAmount = amount
        }else{
            info.sellState = 1
            info.sellPrice = price
            info.sellAmount = amount
        }
    }
}

// Round to BitMEX's 0.5 tick size
function fixSize(num){
    if(num >= _N(num,0) + 0.75){
        num = _N(num,0) + 1
    }else if(num >= _N(num,0) + 0.5){
        num = _N(num,0) + 0.5
    }else{
        num = _N(num,0)
    }
    return num
}

// Trading function
function trade(){
    waitOrders()  // top up the standby orders if needed
    var buyPrice = fixSize(ticker.buy - 5)  // for demonstration only; write your own pricing logic
    var sellPrice = fixSize(ticker.sell + 5)
    var buyAmount = 500
    var sellAmount = 500
    // No live order yet: promote a standby order by amending it
    if(info.buyState == 0 && buyListId.length > 0){
        info.buyId = buyListId.shift()
        amendOrders([{orderID:info.buyId, price:buyPrice, orderQty:buyAmount}], 'buy', buyPrice, buyAmount)
    }
    if(info.sellState == 0 && sellListId.length > 0){
        info.sellId = sellListId.shift()
        amendOrders([{orderID:info.sellId, price:sellPrice, orderQty:sellAmount}], 'sell', sellPrice, sellAmount)
    }
    // A live order exists but the price moved: amend it again
    if(buyPrice != info.buyPrice && info.buyState == 1){
        amendOrders([{orderID:info.buyId, price:buyPrice, orderQty:buyAmount}], 'buy', buyPrice, buyAmount)
    }
    if(sellPrice != info.sellPrice && info.sellState == 1){
        amendOrders([{orderID:info.sellId, price:sellPrice, orderQty:sellAmount}], 'sell', sellPrice, sellAmount)
    }
}

5. Others

BitMEX's servers are hosted on AWS in Dublin, Ireland. If you run the strategy on an AWS server in Dublin, ping is under 1 ms, but push delays can still occur when the exchange is overloaded, and that cannot be solved on our side. Also, never access the account through a server or proxy located in the United States or other jurisdictions that do not allow cryptocurrency trading; due to regulation, such accounts will be banned.
The code in this article has been adapted from my personal strategy and is not guaranteed to be completely correct; it is for reference only. To use it, the market-data code should run in the main function, the trading-related functions should be defined before main, and trade() should be called on each pushed market quote.
This article originally appeared on FMZ.COM (a platform where you can create your own trading bot in Python, JavaScript, and C++).
submitted by FmzQuant to CryptoCurrencyTrading

World History Timeline of Events Leading up to Bitcoin - In the Making

A (live/editable) timeline of historical events directly or indirectly related to the creation of Bitcoin and Cryptocurrencies
*still workin' on this so check back later and more will be added, if you have any suggested dates/events feel free to lemme know...
This timeline includes dates pertaining to:
Ancient Bartering – first recorded in Egypt (resources, services...) – doesn’t scale
Tally sticks were used, making notches in bones or wood, as a form of money of account
9000-6000 BC Livestock considered the first form of currency
c3200 BC Clay tablets used in Uruk (Iraq) for accounting (believed to be the earliest form of writing)
3000 BC Grain is used as a currency, measured out in Shekels
3000 BC Banking developed in Mesopotamia
3000 BC? Punches used to stamp symbols on coins were a precursor to the printing press and modern coins
? BC Since ancient Persia and all the way up until the invention and expansion of the telegraph Homing Pigeons were used to carry messages
2000 BC Merchants in Assyria, India and Sumeria lent grain to farmers and traders as a precursor to banks
1700 BC In Babylon at the time of Hammurabi, in the 18th century BC, there are records of loans made by the priests of the temple.
1200 BC Shell money first used in China
1000-600 BC Crude metal coins first appear in China
640 BC Precious metal coins – Gold & Silver first used in ancient Lydia and coastal Greek cities featuring face to face heads of a bull and a lion – first official minted currency made from electrum, a mixture of gold and silver
600-500 BC Atbash Cipher
A substitution Cipher used by ancient Hebrew scholars mapping the alphabet in reverse, for example, in English an A would be a Z, B a Y etc.
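The reverse-alphabet mapping described above is simple enough to sketch in a few lines of JavaScript (the function name is my own; this is an illustration, not part of the timeline's sources):

```javascript
// Atbash: map each letter to its mirror in the alphabet (A<->Z, B<->Y, ...),
// preserving case and leaving non-letters untouched.
function atbash(text) {
    return text.replace(/[a-z]/gi, function (ch) {
        var base = ch === ch.toLowerCase() ? 97 : 65;            // 'a' or 'A'
        return String.fromCharCode(base + 25 - (ch.charCodeAt(0) - base));
    });
}
// atbash("ABC") returns "ZYX"; applying atbash twice returns the original text.
```

Because the mapping is its own inverse, the same function both encrypts and decrypts.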
400 BC Skytale used by Sparta
474 BC Hundreds of gold coins from this era were discovered in Rome in 2018
350 BC Greek hydraulic semaphore system, an optical communication system developed by Aeneas Tacticus.
c200 BC Polybius Square
??? Wealthy stored coins in temples, where priests also lent them out
??? Rome was the first to create banking institutions apart from temples
118 BC First banknote in the form of 1 foot sq pieces of white deerskin
100-1 AD Caesar Cipher
193 Aureus, a gold coin of ancient Rome, minted by Septimius Severus
324 Solidus, pure gold coin, minted under Constantine’s rule, lasted until the late 8th century
600s Paper currency first developed in Tang Dynasty China during the 7th century, although true paper money did not appear until the 11th century, during the Song Dynasty, 960–1279
c757–796 Silver pennies based on the Roman denarius became the staple coin of Mercia in Great Britain around the time of King Offa
806 First paper banknotes used in China but isn’t widely accepted in China until 960
1024 The first series of standard government notes were issued in 1024 with denominations like 1 guàn (貫, or 700 wén), 1 mín (緡, or 1000 wén), up to 10 guàn. In 1039 only banknotes of 5 guàn and 10 guàn were issued, and in 1068 a denomination of 1 guàn was introduced which became forty percent of all circulating Jiaozi banknotes.
1040 The first movable type printer was invented in China and made of porcelain
? Some of the earliest forms of long distance communication were drums used by Native Africans and smoke signals used by Native Americans and Chinese
1088 Movable type in Song Dynasty China
1120 By the 1120s the central government officially stepped in and produced their own state-issued paper money (using woodblock printing)
1150 The Knights Templar issued bank notes to pilgrims. Pilgrims deposited their valuables with a local Templar preceptory before embarking, received a document indicating the value of their deposit, then used that document upon arrival in the Holy Land to retrieve their funds in an amount of treasure of equal value.
1200s-1300s During the 13th century bankers from north Italy, collectively known as Lombards, gradually replace the Jews in their traditional role as money-lenders to the rich and powerful. – Florence, Venice and Genoa - The Bardi and Peruzzi Families dominated banking in 14th century Florence, establishing branches in many other parts of Europe
1200 By the time Marco Polo visited China, the Chinese had moved from coins to paper money, and it was Polo who introduced the concept to Europe. An inscription warned, "All counterfeiters will be decapitated." Before paper, the Chinese used circular coins with a rectangular hole in the middle, so several coins could be strung together on a rope. Merchants who became rich enough found their strings of coins too heavy to carry around easily. To solve this problem, coins were often left with a trustworthy person, and the merchant was given a slip of paper recording how much money he had with that person. Marco Polo's account of paper money during the Yuan Dynasty is the subject of a chapter of his book, The Travels of Marco Polo, titled "How the Great Kaan Causeth the Bark of Trees, Made Into Something Like Paper, to Pass for Money All Over his Country."
1252 Florin minted in Florence, becomes the hard currency of its day helping Florence thrive economically
1340 Double-entry bookkeeping - The clerk keeping the accounts for the Genoese firm of Massari painstakingly fills in the ledger for the year 1340.
1397 Medici Bank established
1450 Johannes Gutenberg builds the printing press – printed words no longer just for the rich
1455 Paper money disappears from China
1466 Polyalphabetic Cipher
1466 Rotating cipher disks – Vatican – greatest crypto invention in 1000 yrs – the first system to challenge frequency analysis
1466 First known mechanical cipher machine
1472 The oldest bank still in existence founded, Banca Monte dei Paschi di Siena, headquartered in Siena, Italy
1494 Double-entry bookkeeping system codified by Luca Pacioli
1535 Wampum, a string of clamshell beads used as currency by Native Americans, is first documented.
1553 Vigenere Cipher
1557 Phillip II of Spain managed to burden his kingdom with so much debt (as the result of several pointless wars) that he caused the world's first national bankruptcy — as well as the world's second, third and fourth, in rapid succession.
1577 Newspaper in Korea
1586 The Babington Plot
1590 Cabinet Noir was established in France. Its mission was to open, read and reseal letters, and great expertise was developed in the restoration of broken seals. In the knowledge that mail was being opened, correspondents began to develop systems to encrypt and decrypt their letters. The breaking of these codes gave birth to modern systematic scientific code breaking.
1600s Promissory banknotes began in London
1600s By the early 17th century banking begins also to exist in its modern sense - as a commercial service for customers rather than kings. – In the late 17th century cheques slowly gain acceptance
The total of the money left on deposit by a bank's customers is a large sum, only a fraction of which is usually required for withdrawals. A proportion of the rest can be lent out at interest, bringing profit to the bank. When the customers later come to realize this hidden value of their unused funds, the bank's profit becomes the difference between the rates of interest paid to depositors and demanded from debtors.
The transformation from moneylenders into private banks is a gradual one during the 17th and 18th centuries. In England it is achieved by various families of goldsmiths who early in the period accept money on deposit purely for safe-keeping. Then they begin to lend some of it out. Finally, by the 18th century, they make banking their business in place of their original craft as goldsmiths.
1605 Newspaper in Straussburg
c1627 Great Cipher
1637 Wampum is declared as legal tender in the U.S. (where we got the slang word “clams” for money)
1656 Johan Palmstruch establishes the Stockholm Banco
1661 Paper Currency reappears in Europe, soon became common - The goldsmith-bankers of London began to give out the receipts as payable to the bearer of the document rather than the original depositor
1661 Palmstruch issues credit notes which can be exchanged, on presentation to his bank, for a stated number of silver coins
1666 Stockholms Banco, the predecessor to the Central Bank of Sweden issues the first paper money in Europe. Soon went bankrupt for printing too much money.
1667 Palmstruch issues more notes than his bank can afford to redeem with silver and winds up in disgrace, facing a death penalty (commuted to imprisonment) for fraud.
1668 Bank of Sweden – today the 2nd oldest surviving bank
1694 The first central bank established in the UK, the Bank of England, was also the first bank to initiate the permanent issue of banknotes
Served as model for most modern central banks.
The modern banknote rests on the assumption that money is determined by a social and legal consensus. A gold coin's value is simply a reflection of the supply and demand mechanism of a society exchanging goods in a free market, as opposed to stemming from any intrinsic property of the metal. By the late 17th century, this new conceptual outlook helped to stimulate the issue of banknotes.
1700s Throughout the commercially energetic 18th century there are frequent further experiments with bank notes - deriving from a recognized need to expand the currency supply beyond the availability of precious metals.
1710 Physiocracy
1712 First commercial steam engine
1717 Master of the Royal Mint Sir Isaac Newton established a new mint ratio between silver and gold that had the effect of driving silver out of circulation (bimetalism) and putting Britain on a gold standard.
1735 Classical Economics – markets regulate themselves when free of intervention
1744 Mayer Amschel Rothschild, Founder of the Rothschild Banking Empire, is Born in Frankfurt, Germany
Mayer Amschel Rothschild extended his banking empire across Europe by carefully placing his five sons in key positions. They set up banks in Frankfurt, Vienna, London, Naples, and Paris. By the mid 1800’s they dominated the banking industry, lending to governments around the world and people such as the Vanderbilts, Carnegies, and Cecil Rhodes.
1745 There was a gradual move toward the issuance of fixed-denomination notes in England; standardized printed notes ranging from £20 to £1,000 were being printed.
1748 First recorded use of the word buck for a dollar, stemming from the Colonial period in America when buck skins were commonly traded
1757 Colonial Scrip Issued in US
1760s Mayer Amschel Rothschild establishes his banking business
1769 First steam powered car
1775-1938 US Diplomatic Codes & Ciphers by Ralph E Weber used – problems were security and distribution
1776 American Independence
1776 Adam Smith’s Invisible Hand theory helped bankers and money-lenders limit government interference in the banking sector
1781 The Bank of North America, a private bank, became the US nation's first de facto central bank. When shares in the bank were sold to the public, the Bank of North America became the country's first initial public offering. It lasted less than ten years.
1783 First steamboat
1791 Congress Creates the First US Bank – A Private Company, Partly Owned by Foreigners – to Handle the Financial Needs of the New Central Government. First Bank of the United States, a National bank, chartered for a term of twenty years, it was not renewed in 1811.
Previously, the 13 states had their own banks, currencies and financial institutions, which had an average lifespan of about 5 years.
1792 First optical telegraph invented where towers with telescopes were dispersed across France 12-25 km apart, relaying signals according to positions of arms extended from the top of the towers.
1795 Thomas Jefferson invents the Jefferson Disk Cipher or Wheel Cipher
1797 to 1821 Restriction Period by England of trading banknotes for silver during Napoleonic Wars
1797 Currency Crisis
Although the Bank was originally a private institution, by the end of the 18th century it was increasingly being regarded as a public authority with civic responsibility toward the upkeep of a healthy financial system.
1799 First paper machine
1800 Banque de France – France’s central bank opens to try to improve financing of the war
1800 Invention of the battery
1801 Rothschild Dynasty begins in Frankfurt, Holy Roman Empire – an international banking family established through his 5 sons, who set themselves up in London, Paris, Frankfurt, Vienna, and Naples
1804 Steam locomotive
1807 Internal combustion engine and automobile
1807 Robert Fulton expands water transportation and trade with the workable steamboat.
1809 Telegraphy
1811 First powered printing press, also first to use a cylinder
1816 The Privately Owned Second Bank of the US was Chartered – It Served as the Main Depository for Government Revenue, Making it a Highly Profitable Bank – charter not renewed in 1836
1816 The first working telegraph was built using static electricity
1816 Gold becomes the official standard of value in England
1820 Industrial Revolution
c1820 Neoclassical Economics
1821 British gov introduces the gold standard - With governments issuing the bank notes, the inherent danger is no longer bankruptcy but inflation.
1822 Charles Babbage, considered the "father of the computer", begins building the first programmable mechanical computer.
1832 Andrew Jackson Campaigns Against the 2nd Bank of the US and Vetoes Bank Charter Renewal
Andrew Jackson was skeptical of the central banking system and believed it gave too few men too much power and caused inflation. He was also a proponent of gold and silver and an outspoken opponent of the 2nd National Bank. The Charter expired in 1836.
1833 President Jackson Issues Executive Order to Stop Depositing Government Funds Into Bank of US
By September 1833, government funds were being deposited into state chartered banks.
1833-1837 Manufactured “boom” created by central bankers – money supply Increases 84%, Spurred by the 2nd Bank of the US
The total money supply rose from $150 million to $267 million
1835 Jackson Escapes Assassination. Assassin misfired twice.
1837-1862 The “Free Banking Era” there was no formal central bank in the US, and banks issued their own notes again
1838 First Telegram sent using Morse Code across 3 km, in 1844 he sent a message across 71 km from Washington DC to Baltimore.
1843 Ada Lovelace published the first algorithm for computing
1844 Modern central bank of England established - from this point only the Bank of England could issue banknotes; previously commercial banks could issue their own, and those were the primary form of currency throughout England
The Bank of England was restricted to issuing new banknotes only if they were 100% backed by gold, or up to £14 million in government debt.
1848 Communist Manifesto
1850 The first undersea telegraphic communications cable connected France and England, after latex produced from the sap of the Palaquium gutta tree was proposed in 1845 as insulation for the underwater cables.
1852 Many countries in Europe build telegram networks, however post remained the primary means of communication to distant countries.
1855 In England fully printed notes that did not require the name of the payee and the cashier's signature first appeared
1855 The printing telegraph made it possible for a machine with 26 alphabetic keys to print the messages automatically and was soon adopted worldwide.
1856 Belgian engineer Charles Bourseul proposed telephony
1856 The Atlantic Telegraph company was formed in London to stretch a commercial telegraph cable across the Atlantic Ocean, completed in 1866.
1860 The Pony Express was founded, able to deliver mail of wealthy individuals or government officials from coast to coast in 10 days.
1861 The East coast was connected to the West when Western Union completed the transcontinental telegraph line, putting an end to the unprofitable Pony Express.
1862-1863 First US banknotes - Lincoln Overrules Debt-Based Money and Issues Greenbacks to Fund Civil War
Bankers would only lend the government money under certain conditions and at high interest rates, so Lincoln issued his own currency – “greenbacks” – through the US Treasury, and made them legal tender. His soldiers went on to win the war, followed by great economic expansion.
1863 to 1932 “National Banking Era” Commercial banks in the United States had legally issued banknotes before there was a national currency; however, these became subject to government authorization from 1863 to 1932
1864 Friedrich Wilhelm Raiffeisen founded the first rural credit union in Heddesdorf (now part of Neuwied) in Germany. By the time of Raiffeisen's death in 1888, credit unions had spread to Italy, France, the Netherlands, England, Austria, and other nations
1870 Long-distance telegraph lines connected Britain and India.
c1871 Marginalism - The doctrines of marginalism and the Marginal Revolution are often interpreted as a response to the rise of the worker's movement, Marxian economics and the earlier (Ricardian) socialist theories of the exploitation of labour.
1871 Carl Menger’s Principles of Economics – Austrian School
1872 Marx’s Das Capital
1872 Australia becomes the first nation to be connected to the rest of the world via submarine telegraph cables.
1876 Alexander Graham Bell patented the telephone, first called the electric speech machine – revolutionized communication
1877 Thomas Edison – Phonograph
1878 Western Union, the leading telegraph provider of the U.S., begins to lose out to the telephone technology of the National Bell Telephone Company.
1881 President James Garfield, Staunch Proponent of “Honest Money” Backed by Gold and Silver, was Assassinated
Garfield opposed fiat currency (money that was not backed by any physical object). He had the second shortest Presidency in history.
1882 First description of the one-time pad
1886 First gas powered car
1888 Ballpoint pen
1892 Cinematograph
1895 System of wireless communication using radio waves
1896 First successful intercontinental telegram
1898 Polyethylene
1899 Nickel-cadmium battery
1907 Banking Panic of 1907
The New York Stock Exchange dropped dramatically as everyone across the nation tried to get their money out of the banks at the same time. This banking panic spurred debate over banking reform. JP Morgan and others gathered to project an image of concern and stability in the face of the panic, which eventually led to the formation of the Federal Reserve. The founders of the Federal Reserve pretended that bankers opposed its formation in order to mislead the public into believing the Federal Reserve would regulate bankers, when in fact it gave private bankers even more power, only in a less transparent way.
1908 St Mary’s Bank – first credit union in US
1908 JP Morgan Associate and Rockefeller Relative Nelson Aldrich Heads New National Monetary Commission
Senate Republican leader, Nelson Aldrich, heads the new National Monetary Commission that was created to study the cause of the banking panic. Aldrich had close ties with J.P. Morgan and his daughter married John D. Rockefeller.
1910 Bankers Meet Secretly on Jekyll Island to Draft Federal Reserve Banking Legislation
Over the course of a week, some of the nation’s most powerful bankers met secretly off the coast of Georgia, drafting a proposal for a private Central Banking system.
1913 Federal Reserve Act Passed
Two days before Christmas, while many members of Congress were away on vacation, the Federal Reserve Act was passed, creating the Central banking system we have today, originally with gold backed Federal Reserve Notes. It was based on the Aldrich plan drafted on Jekyll Island and gave private bankers supreme authority over the economy. They are now able to create money out of nothing (and loan it out at interest), make decisions without government approval, and control the amount of money in circulation.
1913 Income tax established -16th Amendment Ratified
Taxes ensured that citizens would cover the payment of debt due to the Central Bank, the Federal Reserve, which was also created in 1913. The 16th Amendment stated: “The Congress shall have power to lay and collect taxes on incomes, from whatever source derived, without apportionment among the several States, and without regard to any census or enumeration.”
1914 November, Federal Reserve Banks Open
JP Morgan and Co. Profits from Financing both sides of War and Purchasing Weapons
J.P. Morgan and Co. made a deal with the Bank of England to give them a monopoly on underwriting war bonds for the UK and France. They also invested in the suppliers of war equipment to Britain and France.
1914 WWI
1917 Teletype cipher
1917 The one-time pad
1917 Zimmerman Telegram intercepted and decoded by Room 40, the cryptanalysis department of the British Military during WWI.
1918 GB returns to gold standard post-war but it didn’t work out
1919 First rotor machine, an electro-mechanical stream ciphering and decrypting machine.
1919 Founding of The Cipher Bureau, Poland’s intelligence and cryptography agency.
1919-1929 The Black Chamber, a forerunner of the NSA, was the first U.S. cryptanalytic organization. Worked with the telegraph company Western Union to illegally acquire foreign communications of foreign embassies and representatives. It was shut down in 1929 as funding was removed after it was deemed unethical to intercept private domestic radio signals.
1920s Department stores, hotel chains and service stations begin offering customers charge cards
1921-1929 The “Roaring 20’s” – The Federal Reserve Floods the Economy with Cash and Credit
From 1921 to 1929 the Federal Reserve increased the money supply by $28 billion, almost a 62% increase over an eight-year period.[3] This artificially created another “boom”.
1927 Quartz clock
1928 First experimental Television broadcast in the US.
1929 Federal Reserve Contracts the Money Supply
In 1929, the Federal Reserve began to pull money out of circulation as loans were paid back. They created a “bust” which was inevitable after issuing so much credit in the years before. The Federal Reserve’s actions triggered the banking crisis, which led to the Great Depression.
1929 October 24, “Black Thursday”, Stock Market Crash
The most devastating stock market crash in history. Billions of dollars in value were consolidated into the private banker’s hands at the expense of everyone else.
1930s The Great Depression marked the end of the gold standard
1931 German Enigma machines attained and reconstructed.
1932 Turbo jet engine patented
1933 SEC founded - passed the Glass–Steagall Act, which separated investment banking and commercial banking. This was to avoid more risky investment banking activities from ever again causing commercial bank failures.
1933 FM Radio
1933 Germany begins Telex, a network of teleprinters sending and receiving text based messages. Post WWII Telex networks began to spread around the world.
1936 Austrian engineer Paul Eisler invented Printed circuit board
1936 Beginning of the Keynesian Revolution
1937 Typex, British encryption machines which were upgraded versions of Enigma machines.
1906 Teletypewriters
1927 Founding of highly secret and unofficial Signal Intelligence Service, SIS, the U.S. Army’s codebreaking division.
1937 Made illegal for Americans to own gold
1938 Z1 built by Konrad Zuse is the first freely programmable computer in the world.
1939 WWII – decline of the gold standard which greatly restricted policy making
1939-45 Codetalkers - The Navajo code is the only spoken military code never to have been deciphered - "Were it not for the Navajos, the Marines would never have taken Iwo Jima."—Howard Connor
1940 Modems
1942 Deciphering Japanese coded messages leads to a turning point victory for the U.S. in WWII.
1943 At Bletchley Park, Alan Turing and team build a specialized cipher-breaking machine called Heath Robinson.
1943 Colossus computer built in London to crack the German Lorenz cipher.
1944 Bretton Woods – convenient after the US had most of the gold
1945 Manhattan Project – Atom Bomb
1945 Transatlantic telephone cable
1945 Claude E. Shannon published "A mathematical theory of cryptography", commonly accepted as the starting point for development of modern cryptography.
C1946 Crypto Wars begin and last to this day
1946 Charg-it card created by John C Biggins
1948 Atomic clock
1948 Claude Shannon writes a paper that establishes the mathematical basis of information theory
1949 Info theorist Claude Shannon asks “What does an ideal cipher look like?” – one time pad – what if the keys are not truly random
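Shannon's ideal cipher, the one-time pad, is just a byte-wise XOR with a key as long as the message: XORing again with the same key recovers the plaintext, and the scheme is only secure if the key is truly random and never reused. A minimal sketch (names are my own):

```javascript
// One-time pad sketch: XOR each message byte with the matching key byte.
// Decryption is the same operation with the same key. Security collapses
// if the key is reused or is not truly random.
function xorBytes(data, key) {
    if (key.length < data.length) throw new Error("key must be at least as long as the message");
    var out = [];
    for (var i = 0; i < data.length; i++) {
        out.push(data[i] ^ key[i]);
    }
    return out;
}

var message = [72, 105];                  // "Hi" as bytes
var key = [199, 42];                      // must come from a true random source, used once
var cipher = xorBytes(message, key);
var recovered = xorBytes(cipher, key);    // [72, 105] again
```

With a truly random single-use key, every ciphertext is equally likely for every plaintext, which is exactly the property Shannon proved makes the cipher unbreakable.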
1950 First credit card released by the Diners Club, able to be used in 20 restaurants in NYC
1951 NSA, National Security Agency founded and creates the KL-7, an off-line rotor encryption machine
1952 First thermonuclear weapon
1953 First videotape recorder
1953 Term “Hash” first used meaning to “chop” or “make a mess” out of something
1954 Atomic Energy Act (no mention of crypto)
1957 The NSA begins producing ROMOLUS encryption machines, soon to be used by NATO
1957 First PC – IBM
1957 First Satellite – Sputnik 1
1958 Western Union begins building a nationwide Telex network in the U.S.
1960s Machine readable codes were added to the bottom of cheques in MICR format, which speeded up the clearing and sorting process
1960s Financial organizations were beginning to require strong commercial encryption on the rapidly growing field of wired money transfer.
1961 Electronic clock
1963 June 4, Kennedy Issued an Executive Order (11110) that Authorized the US Treasury to Issue Silver Certificates, Threatening the Federal Reserve’s Monopoly on Money
This government issued currency would bypass the governments need to borrow from bankers at interest.
1963 Electronic calculator
1963 Nov. 22, Kennedy Assassinated
1963 Johnson Reverses Kennedy’s Banking Rule and Restores Power to the Federal Reserve
1964 8-Track
1964 LAN, Local Area Networks adapters
1965 Moore’s Law – Gordon Moore (later a co-founder of Intel) observes that the number of components per integrated circuit doubles every year and projects that this rate of growth will continue for at least another decade. In 1975 he revised it to every two years.
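The observation above is easy to put into numbers: doubling every two years multiplies component counts by 32 per decade, versus 1024 under the original yearly doubling. A tiny illustrative helper (the function name is my own):

```javascript
// Moore's observation as arithmetic: doubling every `periodYears` years
// multiplies component counts by 2^(years / periodYears).
function mooreFactor(years, periodYears) {
    return Math.pow(2, years / periodYears);
}
// mooreFactor(10, 1) is 1024 (original yearly doubling over a decade);
// mooreFactor(10, 2) is 32 (the 1975 two-year revision over a decade).
```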
1967 First ATM installed at Barclay’s Bank in London
1968 Cassette Player introduced
1969 First connections of ARPANET, predecessor of the internet, are made – SF, SB, UCLA, Utah – built by ARPA (now DARPA) to stay ahead of the Soviets. Other networks were being built around the world, CERN in Europe among them, but connecting them was very hard.
1970s Stagflation – unemployment + inflation, which Keynesian theory could not explain
1970s Business/commercial applications for crypto emerge – prior to this time it was used almost exclusively by the military – ATMs first got people thinking about commercial applications of cryptography, with data being sent over telephone lines
1970s The public developments of the 1970s broke the near monopoly on high quality cryptography held by government organizations.
Use of checks increased in the ’70s – bringing about ACH
One way functions...
A few companies began selling access to private networks – but they weren’t allowed to connect to the internet – businesses and universities using ARPANET had no commercial traffic – the internet was used for research, not for commerce or advertising
1970 Railroads threatened by the growing popularity of air travel. Penn Central Railroad declares bankruptcy resulting in a $3.2 billion bailout
1970 Conjugate coding used in an attempt to design “money physically impossible to counterfeit”
1971 The US officially removes the gold standard
1971 Email invented
1971 First microcomputer on a chip
1971 Lockheed Bailout - $1.4 billion – Lockheed was a major government defense contractor
1972 First programmable word processor
1972 First video game console
1973 SWIFT established
1973 Ethernet invented, standardized in ‘83
1973 Mobile phone
1973 First commercial GUI – Xerox Alto
1973 First touchscreen
1973 Emails made up more than ¾ of ARPANET’s packets – people had to keep a map of the network by their desk – the naming problem this created eventually led to DNS (introduced in 1983)
1974 A protocol for packet network intercommunication – TCP/IP – Cerf and Kahn
1974 Franklin National Bank Bailout - $1.5 billion (valued at that time) - At the time, it was the largest bank failure in US history
1975 New York City Bailout - $9.4 billion – NYC was overextended
1975 DES, Data Encryption Standard, developed at IBM, seeking secure electronic communications for banks and large financial organizations. DES was the first publicly accessible cipher to be 'blessed' by a national agency such as the NSA. Its release stimulated an explosion of public and academic interest in cryptography, meant that commercial uses of high quality encryption would become common, and serious problems of export control began to arise.
1975 Digital camera
1975 Altair 8800 sparks the microprocessor revolution
1976 Jamaica Accords formally ratify the end of the Bretton Woods system (which had lasted roughly 30 years) – by the ’80s all major nations were using floating currencies
1976 New Directions in Cryptography published by Diffie & Hellman – this terrified Fort Meade – previously this technique was classified, now it’s public
1976 Apple I Computer – Steve Wozniak
1976 Asymmetric key cryptography published by Whitfield Diffie and Martin Hellman. New Directions in Cryptography introduced a radically new method of distributing cryptographic keys, contributing much to solving key distribution, one of the fundamental problems of cryptography. It brought about the almost immediate public development of asymmetric key algorithms – where people can have two keys, public and private
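The Diffie-Hellman idea fits in a few lines of Python. This is a toy sketch only: the prime, generator, and private keys below are made-up illustrative values, while real deployments use 2048-bit-plus primes or elliptic curves.

```python
# Toy Diffie-Hellman key exchange over a tiny prime (illustrative only).
p = 23          # public prime modulus
g = 5           # public generator

a = 6           # Alice's private key (kept secret)
b = 15          # Bob's private key (kept secret)

A = pow(g, a, p)   # Alice publishes A = g^a mod p
B = pow(g, b, p)   # Bob publishes   B = g^b mod p

# Each side combines the other's public value with its own secret.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)

assert shared_alice == shared_bob  # both sides derive the same secret
```

An eavesdropper sees only p, g, A, and B; recovering a or b from them is the discrete logarithm problem, which is what makes the exchange secure at real key sizes.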
1977 Diffie & Hellman receive letter from NSA employee JA Meyer that they’re violating Federal Laws comparable to arms export – this raises the question, “Can the government prevent academics from publishing on crypto?”
1977 DES considered insecure
1977 First handheld electronic game
1977 RSA public key encryption invented
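The RSA math can likewise be shown with deliberately tiny primes. This sketch shows only the key arithmetic – real RSA uses primes hundreds of digits long plus proper padding schemes – and requires Python 3.8+ for the modular-inverse form of `pow`.

```python
# Toy RSA key generation, encryption, and decryption (tiny primes).
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e mod phi

m = 42                    # "message" encoded as a number < n
c = pow(m, e, n)          # encrypt with the public key (e, n)
recovered = pow(c, d, n)  # decrypt with the private key (d, n)

assert recovered == m
```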
1978 McEliece Cryptosystem invented, first asymmetric encryption algorithm to use randomization in the encryption process
1980s Large data centers began being built to store files and give users a better faster experience – companies rented space from them - Data centers would not only store data but scour it to show people what they might want to see and in some cases, sell data
1980s Reaganomics and Thatcherism
1980 A decade of intense bank failures begins; the FDIC reports that 1,600 were either closed or received financial assistance from 1980 to 1994
1980 Chrysler Bailout – lost over $1 billion due to major hubris on the part of its executives - $1.5 billion one of the largest payouts ever made to a single corporation.
1980 Protocols for public key cryptosystems – Ralph Merkle
1980 Flash memory invented – public in ‘84
1981 “Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms” – Chaum
1981 EFTPOS, Electronic funds transfer at point of sale is created
1981 IBM Personal Computer
1982 “The Ethics of Liberty” Murray Rothbard
1982 Commodore 64
1982 CD
1983 Satellite TV
1983 First built in hard drive
1983 C++
1983 Stereolithography
1983 Blind signatures for untraceable payments – Chaum
Mid 1980s Use of ATMs becomes more widespread
1984 Continental Illinois National Bank and Trust bailed out due to overly aggressive lending styles – the bank’s downfall could be directly traced to risk taking and a lack of due diligence on the part of bank officers – $9.5 billion in 2008 money
1984 Macintosh Computer - the first mass-market personal computer that featured a graphical user interface, built-in screen and mouse
1984 CD Rom
1985 Zero-Knowledge Proofs first proposed
1985 300,000 simultaneous telephone conversations over single optical fiber
1985 Elliptic Curve Cryptography
1987 ARPANET had connected over 20,000 computers by this time
1988 First private networks email servers connected to NSFNET
1988 The Crypto Anarchists Manifesto – Timothy C May
1988 ISDN, Integrated Services Digital Network
1989 Savings & Loan Bailout – after the widespread failure of savings and loan institutions, Congress enacted and President George H. W. Bush signed the Financial Institutions Reform, Recovery, and Enforcement Act – a taxpayer bailout of about $200 billion
1989 First commercial emails sent
1989 Digicash - Chaum
1989 Tim Berners-Lee and Robert Cailliau built the prototype system which became the World Wide Web, WWW
1989 First ISPs – companies with no network of their own which connected people to a local network and to the internet – to connect, your computer placed a phone call through a modem, which translated between digital and analog signals – dial-up was used because phone lines already formed an extensive network across the U.S., though they weren’t designed for the high-pitched, fast-changing sounds needed to transmit large amounts of data
1990s Cryptowars really heat up...
1990s Some countries started to change their laws to allow "truncation"
1990s Encryption export controls became a matter of public concern with the introduction of the personal computer. Phil Zimmermann's PGP cryptosystem and its distribution on the Internet in 1991 was the first major 'individual level' challenge to controls on export of cryptography. The growth of electronic commerce in the 1990s created additional pressure for reduced restrictions. Shortly afterward, Netscape's SSL technology was widely adopted as a method for protecting credit card transactions using public key cryptography.
1990 NSFNET replaced Arpanet as backbone of the internet with more than 500k users
Early 90s Dial up provided through AOL and Compuserve
People were leery of using credit cards on the internet
1991 “How to Time-Stamp a Digital Document” – Haber & Stornetta
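The core of the Haber-Stornetta scheme – linking each timestamped record to the hash of the previous one, so back-dating any document would require rewriting the entire chain – can be sketched as follows. The `add_record` helper and its field names are illustrative, not taken from the paper.

```python
# Sketch of linked timestamping: each record commits to its predecessor.
import hashlib
import json

def add_record(chain, document: bytes, timestamp: int):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"doc": hashlib.sha256(document).hexdigest(),
              "time": timestamp,
              "prev": prev_hash}
    # The record's own hash covers the doc hash, the time, and the link.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record

chain = []
add_record(chain, b"contract v1", 681000000)
add_record(chain, b"contract v2", 681000500)
assert chain[1]["prev"] == chain[0]["hash"]  # records are chained
```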
1991 Phil Zimmermann releases the public key encryption program Pretty Good Privacy (PGP) along with its source code, which quickly appears on the Internet. He distributed a freeware version of PGP when he felt threatened by legislation then under consideration by the US Government that would require backdoors to be included in all cryptographic products developed within the US. Expanded the market to include anyone wanting to use cryptography on a personal computer (before only military, governments, large corporations)
1991 WWW (Tim Berners-Lee) – made public in ’93 – flattened the “tree” structure of the internet using hypertext – the reason for http:// URLs – later HTTPS for more security
1992 Erwise – first internet browser with a graphical interface
1992 Congress passed a law allowing for commercial traffic on NSFNET
1992 Cypherpunks – Eric Hughes, Timothy C May and John Gilmore – online privacy and safety from government – cypherpunks write code so it can be spread and not shut down (in my earlier chapter)
1993 Mosaic – popularized surfing the web ‘til Netscape Navigator in ’94 – whose code was later used in Firefox
1993 A Cypherpunks Manifesto – Eric Hughes
1994 World’s first online cyberbank, First Virtual, opened for business
1994 Bluetooth
1994 First DVD player
1994 Stanford Federal Credit Union becomes the first financial institution to offer online internet banking services to all of its members in October 1994
1994 Internet only used by a few
1994 Cybercash
1994 Secure Sockets Layer (SSL) encryption protocol released by Netscape, making secure financial transactions on the web practical
1994 One of the first online purchases was made, a Pizza Hut pepperoni pizza with mushrooms and extra cheese
1994 Cyphernomicon published – on the social implications of cryptography that governments can’t do anything about
1994-1999 Social Networking – GeoCities (combining creators and users) – had 19M users by ’99 – 3rd most popular site after AOL and Yahoo – GeoCities purchased by Yahoo for $3.6B but took a hit after the dotcom bubble popped and never recovered – Yahoo finally shut GeoCities down in 2009
1995-2000 Dotcom bubble – survivors such as Google and Amazon (and later Facebook) went on to draw over 600M visitors a year
1995 DVD
1995 MP3 – the term is coined for MP3 files; the underlying development stretches back into the ’70s, and the format itself was developed throughout the ’90s
1995 NSFNET shut down and handed everything over to the ISPs
1995 NSA publishes the SHA1 hash algorithm as part of its Digital Signature Standard.
1996, 2000 President Bill Clinton signs Executive Order 13026, transferring commercial encryption from the Munitions List to the Commerce Control List. This order permitted the United States Department of Commerce to implement rules that greatly simplified the export of proprietary and open source software containing cryptography, which it did in 2000 – the successful cracking of DES likely helped gather both political and technical support for more advanced encryption in the hands of ordinary citizens – the NSA considers AES strong enough to protect information classified at the Top Secret level
1996 e-gold
1997 WAP, Wireless Access Point
1997 NSA researchers publish “How to Make a Mint: The Cryptography of Anonymous Electronic Cash”
1997 Adam Back – Hashcash – used proof-of-work; each stamp could only be used once
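Hashcash-style proof-of-work can be sketched in a few lines: increment a counter until the hash of (challenge + counter) starts with enough zero digits. Finding the counter is costly; checking it is instant. The three-hex-digit (12-bit) difficulty and the challenge string below are made-up demo values so this runs in milliseconds; Hashcash itself used SHA-1.

```python
# Hashcash-style proof-of-work sketch (toy difficulty).
import hashlib

def mint(challenge: str, zero_hex_digits: int = 3) -> int:
    """Search for a counter whose stamp hash has the required zero prefix."""
    target = "0" * zero_hex_digits
    counter = 0
    while True:
        digest = hashlib.sha1(f"{challenge}:{counter}".encode()).hexdigest()
        if digest.startswith(target):
            return counter
        counter += 1

def verify(challenge: str, counter: int, zero_hex_digits: int = 3) -> bool:
    """Verification is a single hash, regardless of how hard minting was."""
    digest = hashlib.sha1(f"{challenge}:{counter}".encode()).hexdigest()
    return digest.startswith("0" * zero_hex_digits)

stamp = mint("adam@example.com")
assert verify("adam@example.com", stamp)
```

The asymmetry between minting and verifying is the whole point: it made mass emailing (spam) expensive while leaving individual senders unaffected, and the same mechanism later became Bitcoin's mining puzzle.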
1997 Nick Szabo – smart contracts “Formalizing and Securing Relationships on Public Networks”
1998 OSI, the Open Source Initiative, founded
1998 Wei Dai – B-money – decentralized database to record txs
1998 Bitgold
1998 First backdoor created by hackers from Cult of the Dead Cow
1998 Peter Thiel co-founds Confinity, which merged with Elon Musk’s X.com in 2000 to become PayPal
1998 Nick Szabo says crypto can protect land titles even if thugs take it by force – said it could be done with a timestamped database
1999 Much of the Glass-Steagall Act repealed – this saw US retail banks embark on big rounds of mergers and acquisitions and also engage in investment banking activities
1999 Milton Friedman says, “I think that the Internet is going to be one of the major forces for reducing the role of government. The one thing that's missing, but that will soon be developed, is a reliable e-cash - a method whereby on the Internet you can transfer funds from A to B without A knowing B or B knowing A.”
1999 European banks began offering mobile banking with the first smartphones
1999 The Financial Services Modernization Act Allows Banks to Grow Even Larger
Many economists and politicians have recognized that this legislation played a key part in the subprime mortgage crisis of 2007.
1999-2001 Napster, P2P file sharing – one of the fastest growing businesses in history – bankrupted by copyright infringement lawsuits from musicians and the recording industry

submitted by crypto_jedi_ninja to r/Bitcoin


Bitcoin block header fields (the timestamp-related portion):

Timestamp – current block timestamp as seconds since 1970-01-01T00:00 UTC – updated every few seconds – 4 bytes
Bits – current target in compact format – updated when the difficulty is adjusted – 4 bytes
Nonce – 32-bit number (starts at 0) – incremented each time a hash is tried – 4 bytes

The body of the block contains the transactions. These are hashed only indirectly, through the Merkle root. Because transactions aren't hashed directly, hashing a block with 1 ...

To form a distributed timestamp server as a peer-to-peer network, bitcoin uses a proof-of-work system. This work is often called bitcoin mining. The signature is discovered rather than provided by knowledge. This process is energy intensive; electricity can consume more than 90% of operating costs for miners. A data center in China, planned mostly for bitcoin mining, is expected to require up ...
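Those header fields feed Bitcoin's proof-of-work: the 80-byte serialized header is hashed twice with SHA-256, and the result must fall below the current target. A rough sketch of the serialization and hashing, using dummy field values rather than a real block:

```python
# Sketch of Bitcoin block-header serialization and double-SHA-256 hashing.
import hashlib
import struct

def header_bytes(version, prev_hash, merkle_root, timestamp, bits, nonce):
    """Serialize the six header fields into the 80-byte wire format."""
    return (struct.pack("<L", version)      # 4-byte little-endian version
            + prev_hash[::-1]               # 32-byte prev-block hash, reversed
            + merkle_root[::-1]             # 32-byte Merkle root, reversed
            + struct.pack("<LLL", timestamp, bits, nonce))  # 4 bytes each

header = header_bytes(2, b"\x00" * 32, b"\xab" * 32,
                      1603664722, 0x1d00ffff, 12345)
assert len(header) == 80                    # headers are always 80 bytes

# Miners vary the nonce (and timestamp) and re-hash until the
# double-SHA-256 digest is below the target encoded in "bits".
block_hash = hashlib.sha256(hashlib.sha256(header).digest()).digest()
```

Because only the fixed-size header is hashed, the mining effort is independent of how many transactions the block body carries – they enter only through the Merkle root.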
