![blocktrades update.png](https://images.hive.blog/DQmSihw8Kz4U7TuCQa98DDdCzqbqPFRumuVWAbareiYZW1Z/blocktrades%20update.png)

Below are highlights of some of the Hive-related programming issues worked on by the BlockTrades team since my last post.

# Hived (blockchain node software) work

### Mirrornet (testnet that mirrors traffic from mainnet) to test p2p code

We implemented and tested the change I mentioned last post that allows a node to request multiple transactions from an idle peer. This resolved the bottleneck where transactions weren't getting distributed in a timely manner on the mirrornet when only a few peers were connected to a node.

We also experimented with various programmatic and server-level settings and were able to dramatically decrease the latency of p2p traffic. On the mirrornet, we're now seeing negative Block Time Offsets on all nodes in the network (that's a good thing).

I did come up with an idea for one further enhancement to reduce block latency, but I've decided it is best to postpone that work until after the hardfork, as our latency times are already looking very good at this point.

Details of these changes can be found here: https://gitlab.syncad.com/hive/hive/-/merge_requests/437

### Further optimization of OBI (one-block irreversibility) protocol

While analyzing the OBI protocol implementation, we realized we could make a further optimization: detect the case where a block on a fork has received enough votes to become irreversible, and immediately switch to that fork. Previously, a node would not switch forks until it received enough blocks on the other fork to make it the longer chain.

To see the benefits of this optimization, consider a network split in which 16 block producers remain interconnected on one side of the split and 5 block producers are on the other side. If the block producers on the minority side generate 5 of the next 6 blocks, then when the network reconnects, they wouldn't switch to the irreversible fork approved by the other 16 block producers until 6 more blocks were produced (which would trigger the normal fork-switching logic, since that fork would then be the longest chain). With the new optimization, the minority block producers will typically rejoin the majority fork after just one block gets produced and approved by the majority block producers.

### Command-Line Interface (CLI) wallet changes

In addition to creating new tests and fixing small issues uncovered by those tests, we made changes to the CLI wallet to support generating both "legacy" transactions and the new style, where assets are represented using NAIs (network asset identifiers) instead of strings (e.g. "HIVE"). These changes should be helpful for API library developers when they update their libraries to support NAI-based transactions. Next we need to modify hived itself to support binary serialization of the new transaction format using NAIs, but this is expected to be done quickly (probably just a day or two).

Another feature we're looking to add to the CLI wallet is the ability to write a transaction to a file in the new formats, which may later be useful as another means of generating transactions for cold wallets.
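As an illustration of the two serialization styles, here is a minimal Python sketch that converts a legacy asset string into its NAI form. The NAIs and precisions shown for HIVE, HBD, and VESTS are the standard mainnet identifiers, but the helper function itself is hypothetical and not part of the CLI wallet:

```python
from decimal import Decimal

# Mainnet NAIs and precisions for the three core assets.
NAI_TABLE = {
    "HIVE":  ("@@000000021", 3),
    "HBD":   ("@@000000013", 3),
    "VESTS": ("@@000000037", 6),
}

def legacy_to_nai(asset: str) -> dict:
    """Convert a legacy asset string like '1.000 HIVE' to its NAI form
    (illustrative helper, not CLI wallet code)."""
    amount, symbol = asset.split(" ")
    nai, precision = NAI_TABLE[symbol]
    # The NAI form stores the amount as an integer string scaled by precision.
    scaled = int(Decimal(amount).scaleb(precision))
    return {"amount": str(scaled), "precision": precision, "nai": nai}

print(legacy_to_nai("1.000 HIVE"))
# -> {'amount': '1000', 'precision': 3, 'nai': '@@000000021'}
```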
### Compressed block logs

We've merged in all the changes associated with maintaining block logs in compressed form. After many optimizations, we were able to achieve a compression ratio of 49% without adding much in the way of CPU overhead.

The way this works currently, hived automatically compresses new blocks using the zstd library as they are written to the node's local block_log file. In a future version of hived, we're planning to exchange blocks between nodes directly in their compressed form, which would have two benefits: 1) blocks would only be compressed once, rather than by every node, and 2) p2p traffic would be reduced. But with the hardfork rapidly approaching, we decided to postpone this enhancement to a later date.

We also added a new utility program called `compress_block_log` that can be used to compress or uncompress an existing block_log. It also replaces the functionality of the now-obsolete utility `truncate_blocklog`.
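The underlying idea is a simple zstd round trip on each serialized block. Here's a minimal Python sketch using the `zstandard` package; it illustrates zstd compression itself, not hived's actual block_log storage format:

```python
import zstandard as zstd

def compress_block(raw_block: bytes, level: int = 3) -> bytes:
    """Compress one serialized block with zstd (illustrative only)."""
    return zstd.ZstdCompressor(level=level).compress(raw_block)

def decompress_block(compressed: bytes) -> bytes:
    """Recover the original serialized block."""
    return zstd.ZstdDecompressor().decompress(compressed)

raw = b"example serialized block data " * 100  # stand-in for a real block
packed = compress_block(raw)
assert decompress_block(packed) == raw
print(f"compressed to {len(packed) / len(raw):.0%} of original size")
```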
# Hive Application Framework (HAF)

We updated some of the database roles used to manage a HAF database and fixed some permissions issues (we've lowered permissions where possible for various roles). In particular, the "hive" role has been renamed to "hive_app_admin" to reflect that this role's purpose is to install new HAF apps, and it now has only the privileges necessary for that task: it can only read data in the "hive" schema that contains blockchain data, and it can create new schemas to store any data required by the HAF application being installed.

We've also made various improvements (still ongoing) to the scripts used to set up a HAF database and build docker images.

# HAF account history app (aka hafah)

We completed tests associated with the postgREST server version of hafah, and these changes should be merged into develop in the next couple of days, along with changes to the benchmarking scripts.

With the postgREST server, hafah now supports two distinct APIs: the "legacy" API, which uses the same syntax as the account history plugin (where API names are embedded in the JSON body), and a new "direct" API (where API names are embedded directly into the URL). These two APIs support the exact same set of methods, but the direct API has better relative performance, especially on calls where the database query time doesn't dominate. The legacy API, as the name suggests, exists to let legacy applications move to the new API at their own pace. It's also worth noting in passing that even with the "legacy" API, performance is significantly better than that of the old account history plugin.
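As a sketch of the difference between the two calling styles, here is how a client might invoke the same method both ways using Python's `requests`. The JSON-RPC method name is a real account history API method, but the endpoint URL, the direct-style path (written here following postgREST's usual `/rpc/<function>` convention), and its parameter names are assumptions for illustration:

```python
import requests

HAFAH_URL = "https://api.hive.blog"  # hypothetical hafah deployment

# "Legacy" style: the API name travels inside the JSON-RPC body.
legacy = requests.post(HAFAH_URL, json={
    "jsonrpc": "2.0",
    "method": "account_history_api.get_ops_in_block",
    "params": {"block_num": 5000000, "only_virtual": False},
    "id": 1,
}).json()

# "Direct" style: the API name is embedded in the URL itself
# (path and parameter names assumed, not taken from hafah docs).
direct = requests.post(
    f"{HAFAH_URL}/rpc/get_ops_in_block",
    json={"block_num": 5000000, "only_virtual": False},
).json()
```

Because the direct style skips JSON-RPC dispatch, its per-call overhead is lower, which is why the difference is most visible on calls where the database query itself is cheap.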

# Hivemind (social media middleware server used by web sites)

We're currently working on the port of hivemind to a HAF-based app. We completed our first test of a full sync to the head block. We fixed a memory leak discovered during this process, and we're now analyzing a problem that occurred when hivemind switched from massive sync to live sync mode.

Concurrently with the above testing, we're also making modifications to store hivemind's data directly in the HAF database using a hivemind-specific schema (the version currently being tested still stores hivemind data in a separate database).

# Some upcoming tasks

* Modify hived to process transactions containing either NAI-based assets or legacy assets.
* Merge in new RC cost rationalization code.
* Analyze performance issues observed with postponed transactions when spamming the network with a massive number of transactions.
* Finish dockerization and CI improvements for HAF and HAF apps.
* Update hafah benchmarking code to benchmark the new "direct" API.
* Collect benchmarks for a hafah app operating in "irreversible block mode" and compare them to a hafah app operating in "normal" mode.
* Test postgREST-based hafah on a production server (api.hive.blog).
* Complete and benchmark HAF-based hivemind, then deploy and test it on our API node.
* Complete enhancements to the one-block irreversibility (OBI) algorithm and test them.
* Test the updated blockchain converter for the mirrornet.

# When hardfork 26?

We discovered some new tasks during this last work period that we decided to take on before the hardfork (enhancement of OBI, binary serialization for transactions containing NAIs, and a few others), but some previous tasks also completed faster than expected, so I think we're still on track for the date mentioned in my last post (around the end of June), barring any as-yet-unknown issues uncovered during further testing.

See: 8th update of 2022 on BlockTrades work on Hive software by @blocktrades