![blocktrades update.png](https://images.hive.blog/DQmSihw8Kz4U7TuCQa98DDdCzqbqPFRumuVWAbareiYZW1Z/blocktrades%20update.png)

Below are a few highlights of the Hive-related programming issues worked on by the BlockTrades team since my last post.

# [Hived (blockchain node software)](https://gitlab.syncad.com/hive/hive)

### Hived nodes can now directly exchange compressed blocks over p2p network

Hive nodes can now exchange blockchain blocks directly in compressed form. Previously, each node received new blocks in uncompressed form, then compressed them locally before storing them in its block_log file.

In our most recent changes, the p2p network protocol has been enhanced so that when a node connects to a new peer, the two nodes inform each other of whether they can accept compressed blocks (only nodes compiled with the new code can accept compressed blocks; older ones can only accept uncompressed ones). New nodes will therefore send blocks in compressed form to other new nodes, saving the recipients from having to compress the blocks locally. In other words, only one node in the network has to compress a block, and every other new node that receives that compressed block benefits from the work done by that node.

Eventually, once the network is running only new nodes, all blocks exchanged over the network will be compressed. This not only reduces network traffic, it also reduces the overall CPU cost of supporting compressed blocks, because each block will only need to be compressed by one node in the network (i.e. the witness that produces the block).

The update to the p2p negotiation protocol was also designed to allow for interoperability between nodes in the face of further enhancements to the compression algorithms, where we could once again have old nodes and new nodes that compress by different methods.
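To make the negotiation idea concrete, here is a minimal sketch of the capability handshake described above. All names here (`peer_capabilities`, `prepare_block_for_peer`, etc.) are illustrative assumptions, not the actual hived p2p API: a node forwards a stored compressed block as-is to peers that advertised compression support, and decompresses only for legacy peers.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical capability record exchanged during the p2p handshake.
struct peer_capabilities
{
  bool supports_compressed_blocks = false;  // true only for new-code nodes
};

// Hypothetical in-memory block representation.
struct block_message
{
  std::vector<uint8_t> payload;
  bool compressed = false;
};

// Decide what form to send a block in: if the remote peer advertised
// compressed-block support during the handshake, forward the stored
// (already compressed) block unchanged; otherwise fall back to the
// uncompressed legacy format.
block_message prepare_block_for_peer(const block_message& stored_block,
                                     const peer_capabilities& remote)
{
  if (remote.supports_compressed_blocks)
    return stored_block;                // no recompression work needed

  block_message legacy = stored_block;  // old peer: send uncompressed
  legacy.compressed = false;
  // ... real code would decompress legacy.payload here ...
  return legacy;
}
```

The key property is that the expensive compression step happens once (at the producing witness), while the per-peer decision is just a cheap capability check.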
### Finished up storage of block ids in the block_log.artifacts file

We completed the changes for storing block ids in the block_log.index file (now renamed block_log.artifacts to avoid confusion with the older-format files) and integrated those changes with the full_block/full_transaction code discussed in my previous post. This optimization reduces the amount of CPU work that hived nodes need to do. It also speeds up get_block API calls and sync speeds between peers.

We changed our initial algorithm for creating the new block_log.artifacts file from an existing compressed block_log, reducing the time to generate these files on a magnetic hard drive (HDD) from 126 minutes down to 72 minutes. Note that this task is now bottlenecked only by the IO time needed to read the block_log file backwards, so on a set of 4x-raided NVMe drives it takes as little as 7.5 minutes.

### New [compress_block_log](https://gitlab.syncad.com/hive/hive/-/blob/develop/programs/util/compress_block_log.cpp) tool

We also have a new utility for compressing/uncompressing a block_log file. This tool should be especially useful for existing hived nodes that will likely want to compress their existing block_log file to save disk space and serve up compressed blocks more efficiently on the p2p network (of course, there’s also the option to just grab a compressed block_log from someone else instead of compressing your own).

### Optimizations to Hive’s transaction and block processing speed

When looking at the performance of the Hive network, it is useful to consider two different performance metrics: 1) the speed at which transactions can be added to the blockchain (transaction performance) and 2) the speed at which data about the blockchain can be served to Hive client applications (block explorers, hive.blog, gaming apps, etc.). These two metrics can also be viewed as the “write” speed and the “read” speed of the blockchain network.
Transactions write data to the blockchain, and API calls allow apps to read this data. Much of our time this hardfork was spent improving the read speed of the network: mainly through the creation of the HAF framework, but the creation of the block_log.artifacts file is another example of this effort, since it helps speed up `get_block` API calls. Read speed is very important because it puts an upper limit on how many people can see what is going on in the network, and in a blockchain network more people are reading data than writing data, especially when that network supports social media functionality.

Our work during the last month, however, has focused on the “write” speed: the ability to process more transactions without breaking a sweat or slowing down the ability to read the data being written.

This year, during periods when nodes were receiving high bursts of transaction traffic (typically due to bots playing Splinterlands), we’ve seen hived nodes put under stress when a bunch of transactions build up that can’t be included in a block (either because of a lack of time to process the transactions or because the block was simply full). These non-included transactions were then re-applied to the node’s state after the block was generated so that they could be added to future blocks, but previously this reapplication delayed the sending of the newly generated block to other nodes.

So in one example of a flow change made as part of the recent optimizations, new blocks are now sent to the p2p network for distribution prior to the reapplication of these non-included transactions. The upshot is that nodes can handle much larger bursts of transaction traffic, because we can ensure timely delivery of new blocks to other nodes even when we have a lot of non-included transactions due to burst traffic.
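The reordering described above can be sketched as follows. This is an illustrative model, not actual hived code; `broadcast_block`, `finish_block_generation`, and the event log are invented names used only to show how moving the broadcast ahead of transaction reapplication keeps block propagation latency independent of the size of the pending-transaction backlog.

```cpp
#include <string>
#include <vector>

// Hypothetical freshly generated block.
struct generated_block
{
  int block_num = 0;
};

// Records the order of operations so the flow change is observable.
std::vector<std::string> event_log;

void broadcast_block(const generated_block& b)
{
  event_log.push_back("broadcast block " + std::to_string(b.block_num));
}

void reapply_pending_transaction(const std::string& tx)
{
  // Re-apply a non-included transaction to node state so it remains a
  // candidate for a future block.
  event_log.push_back("reapply " + tx);
}

// New ordering: peers receive the block first, with minimal latency,
// even when a traffic burst left many transactions outside the block.
// (Previously, broadcast_block ran only after the reapplication loop.)
void finish_block_generation(const generated_block& b,
                             const std::vector<std::string>& pending)
{
  broadcast_block(b);
  for (const std::string& tx : pending)
    reapply_pending_transaction(tx);
}
```

The design point is simply that the reapplication loop's cost grows with burst size, while the broadcast does not, so the latency-critical step goes first.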
After all our recent transaction/block optimizations, we ran benchmarks exposing the new code to transaction levels 20x higher than current traffic. These tests were performed both with 128K blocks (2x current size), where lots of transactions don’t get included due to the limit on block size, and with 1MB blocks (16x current size), where all transactions typically did get included in the larger blocks. In both cases, the new nodes responded without even a hint of a hiccup.

### Completed optimization of OBI (one-block irreversibility) protocol

We resumed work on OBI optimizations at the beginning of this week and completed them as of yesterday. Currently we’re writing tests to verify the performance of the optimizations.

### Hived testing

We continued creating tests for the latest changes and found a few more bugs related to the new transaction serialization of assets. In a lot of cases, this is just a continuation of the long-term task of creating a complete test suite for existing hived API calls. Now that we have tests for more API calls, it’s easier for us to detect when a change breaks something, but we still don’t have a comprehensive set of tests yet.

In related work, we’re adding more transaction functionality to the CLI wallet to enable it to serve as a better testing tool, but this task isn’t on the critical path for the hardfork, so we will pick it up again after the hardfork is past (or sooner, if a dev comes free in the meantime).

# [Hive Application Framework (HAF)](https://gitlab.syncad.com/hive/haf)

We’ve begun re-examining the scripts for backing up HAF databases and HAF apps in light of the changes made to HAF since those scripts were created. We still need to check on the status of this task.

# [HAF-based block explorer](https://gitlab.syncad.com/hive/haf_block_explorer)

We’re currently creating new tables in the block explorer’s database schema to track votes for witnesses.
These tables will be used to serve up various forms of voting data for the new “witness” page of the block explorer. We’ll probably also create something similar later for a “DHF proposals” page on the explorer.

# [HAF-based balance tracker application](https://gitlab.syncad.com/hive/balance_tracker)

While working on the HAF-based block explorer, we found it would be useful to have vest balances for all accounts in the block explorer’s database, so rather than rewrite such code, we’re incorporating the balance_tracker prototype app into the block explorer as a sub-app.

Originally the balance tracker was created as a simple “tutorial” app to show new HAF devs how to create a HAF app, so it wasn’t fully tested for accuracy. Now that we’re going to deploy it for production use, we undertook such testing, and we found and fixed a few bugs caused by some of the more esoteric events that occurred during the history of the blockchain (for example, the splitting of vests at hardfork 1).

# [HAF-based hivemind (social media middleware server used by web sites)](https://gitlab.syncad.com/hive/hivemind)

We fixed some bugs, did some code refactoring, and worked on CI changes for HAF-based hivemind. The last remaining big task is to create and test a docker image for it to ease deployment. This will hopefully be done in the next week.

# Some upcoming tasks

* Test enhancements to the one-block irreversibility (OBI) algorithm and merge them to develop by this weekend.
* Merge in the RC cost rationalization code (there is a small possibility this change doesn’t get into the HF if we encounter too many issues in the next couple of days).
* Continue testing and performance analysis of hived on the mirrornet.
* Test the hived release candidate on a production API node to confirm real-world performance improvements.
* Tag the hived release candidate early this coming week (probably Monday or Tuesday).
* Finish testing and dockerizing HAF-based hivemind and use the docker image in CI tests.
* Test and update deployment documentation for hived and HAF apps.
* Complete work for the get_block API supplied via a HAF app.
* Continue work on the HAF-based block explorer.
* Collect benchmarks for a hafah app operating in “irreversible block mode” and compare them to a hafah app operating in “normal” mode (low priority).
* Document all major code changes since the last hardfork.

# When hardfork 26?

We found a few more errors related to the changes to transaction serialization of assets than we initially expected, so we’re going to take a more comprehensive approach to make sure we’ve eliminated any potential problems from this change, and this will slightly delay the hardfork date. A formal announcement of the date should follow shortly from the hive.io account.