![blocktrades update.png](https://images.hive.blog/DQmSihw8Kz4U7TuCQa98DDdCzqbqPFRumuVWAbareiYZW1Z/blocktrades%20update.png)

Below are a few highlights of the Hive-related programming issues worked on by the BlockTrades team since my last post.

# [Hived (blockchain node software)](https://gitlab.syncad.com/hive/hive)

The work on this hardfork, together with the associated work on HAF-based projects, represents the largest amount of labor ever focused on Hive’s development. We have 12 people from the BlockTrades team (including myself) working full-time on these projects.

This hardfork has also taken the most time of any Hive hardfork to complete, and the sheer amount of work done to the hived blockchain software shows in the number of merge requests (these are made to add features or fix bugs in the hived code base). Over the lifetime of Hive, 484 merge requests have been merged into the hive code repository so far, and more than half of them have been merged since the last hardfork.

Our work during this last week and a half has focused on creating new tests and finding and fixing bugs in hived, so work on HAF-related projects has temporarily slowed (but nonetheless continued).

## Automated testing (continuous integration tests)

Our automated tests have helped us identify a lot of bugs, but there are costs associated both with updating the tests to account for new hived behavior and with dealing with occasional weaknesses in the tests themselves that get exposed (sometimes test failures are false positives, especially if there is some inadvertent race condition in the test itself). So when a test fails, it can still be a challenging job at times to identify the true root of the failure: the software, the test, or even some issue with the hardware where the test is running.

Sometimes, however, time-sensitive automated tests turn up quite strange problems in hived. For example, in the past couple of days this led us to discover that testnet nodes were starting up about two to six seconds slower than they should (depending on hardware speed and load) because of a longstanding bug that erroneously assumed the testnet had missed 69 million blocks at startup: since the testnet starts with no blocks and the genesis block time was back in 2016, it considered all those potentially-generated blocks to be missed blocks, then looped through all 69 million of them to potentially report them as missed by individual witnesses. Computers are fast, but doing work 69 million times still requires a little time (see the first sketch below).

As an example of a more serious bug caught by the tests, we found that the new operation used by one-block irreversibility (OBI) to allow witnesses to approve blocks had shifted the operation ids of all the virtual operations, breaking the filtering of virtual operations by account_history API calls. Our ultimate solution to this dilemma was to re-use the id of an existing-but-never-used-or-deployed operation for reporting a witness for double production, thereby avoiding the shift of the virtual operation ids by one. But without our automated tests, this is the kind of bug that could easily have sneaked through simple testnet-based tests.
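To make the startup-scan bug concrete, here is a minimal sketch of the pattern involved. The names and the fix shown are hypothetical simplifications, not the actual hived code:

```cpp
#include <cstdint>

constexpr int64_t BLOCK_INTERVAL_SECONDS = 3;

// Buggy pattern: on an empty testnet whose genesis timestamp is years in
// the past, num_missed comes out around 69 million, so this loop alone
// adds several seconds to node startup.
void charge_missed_blocks(int64_t head_block_time, int64_t current_time)
{
    int64_t num_missed =
        (current_time - head_block_time) / BLOCK_INTERVAL_SECONDS;
    for (int64_t slot = 0; slot < num_missed; ++slot)
    {
        // look up the witness scheduled for this slot and record a miss
        // (elided) -- cheap once, expensive 69 million times
        (void)slot;
    }
}

// One plausible fix: slots before the first real block were never actually
// "missed", so skip the accounting entirely while the chain is empty.
void charge_missed_blocks_fixed(int64_t head_block_time,
                                int64_t current_time,
                                uint32_t head_block_num)
{
    if (head_block_num == 0)
        return;  // empty chain: no witness ever missed a slot
    charge_missed_blocks(head_block_time, current_time);
}
```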
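The operation-id shift is also easy to illustrate. Roughly speaking, an operation’s id comes from its position in a variant-like type list, so inserting a new operation in the middle renumbers everything after it. The sketch below uses `std::variant` with made-up operation structs (the reused `report_over_production_operation` slot follows the description above):

```cpp
#include <variant>

// Made-up operation types standing in for hived's real ones.
struct transfer_operation {};
struct vote_operation {};
struct report_over_production_operation {};  // defined long ago, never used
struct fill_order_operation {};              // a virtual operation

// An operation's id is its position in the list (0, 1, 2, 3, ...).
using operation = std::variant<
    transfer_operation,
    vote_operation,
    report_over_production_operation,
    fill_order_operation>;

int main()
{
    operation op = fill_order_operation{};
    // op.index() == 3 here. Inserting a brand-new OBI approval operation
    // before fill_order_operation would bump this to 4, invalidating any
    // client that filters virtual operations by id. Recycling the dead
    // report_over_production slot for the new operation keeps every
    // existing id stable.
    return static_cast<int>(op.index());
}
```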
## Mirrornet testing (a testnet that replicates traffic from the mainnet)

We’ve also been testing OBI on the mirrornet. The mirrornet has been critical to testing OBI, because it would take weeks to create automated tests that properly simulate a network with a lot of traffic and frequent network interruptions that cause forks. But with the mirror network, we were able to easily construct test scenarios with 4-5 nodes, each hosting a group of witnesses, then temporarily disable the network connections between those nodes to observe OBI behavior under fragmented network conditions, all in a single day.

OBI itself performed well under these testing conditions, but we did get a chance to observe various other behaviors that we can probably improve in the future (e.g. cases where the new data provided by OBI could allow more resilience on a fragmented network). And we may make another optimization to OBI itself to reduce unnecessary OBI-related traffic during forking conditions: currently a scheduled witness casts an approval vote for every block it adds to its head block when switching forks, but such votes are probably not useful for blocks in the distant past when switching to a “long” fork of more than 20 blocks (one of the tests we ran created forks of several hundred blocks). A sketch of this idea appears at the end of this section.

We also found another bug during mirrornet testing that was extremely subtle: when we updated to a later version of the open-source boost library, it brought in a new behavior for multi-index containers (these containers are used throughout hived to track blockchain state information, and in the p2p layer as well). Now the modify call on a multi-index container will erase the object being modified if the lambda function doing the modification throws an exception (previously this only caused the modify to be considered a failure, and the object stayed in the container). This unexpected change in behavior started showing up as random trash in objects that held references to deleted objects in these containers, and eventually caused the mirrornet node with the highest activity (the one mirroring traffic from the mainnet) to crash. We already have one workable solution to this new behavior, and we’ll examine other options tomorrow before making a final decision on how to handle it.
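To show the container behavior in isolation, here is a small self-contained demonstration using an illustrative struct rather than hived’s actual state objects. With recent Boost versions, the element is erased when the modifier throws, so the container’s size drops to zero:

```cpp
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/member.hpp>
#include <iostream>
#include <stdexcept>

namespace bmi = boost::multi_index;

struct account { int id; long balance; };

using account_index = boost::multi_index_container<
    account,
    bmi::indexed_by<
        bmi::ordered_unique<bmi::member<account, int, &account::id>>>>;

int main()
{
    account_index idx;
    idx.insert(account{1, 100});

    auto it = idx.find(1);
    try
    {
        idx.modify(it, [](account& a) {
            a.balance += 50;
            throw std::runtime_error("modifier failed");  // simulated error
        });
    }
    catch (const std::runtime_error&)
    {
        // With recent Boost the element has now been erased, leaving any
        // code that still holds references or iterators to it dangling.
        std::cout << "size after throwing modify: " << idx.size() << '\n';
    }

    // One possible workaround: do the fallible work before calling
    // modify(), so the lambda only performs assignments and cannot throw.
}
```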
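And here is a rough sketch of one possible shape of the fork-switch optimization mentioned above: suppress OBI approval votes for blocks that are already far behind the new head. The names and the 20-block window are illustrative assumptions, not a committed design:

```cpp
#include <cstdint>
#include <vector>

struct block_info { uint32_t block_num; /* id, signature, ... */ };

constexpr uint32_t APPROVAL_WINDOW = 20;  // roughly one witness schedule

void broadcast_fork_approvals(const std::vector<block_info>& new_fork,
                              uint32_t new_head_num)
{
    for (const auto& b : new_fork)
    {
        // Approvals for blocks deep in the past on a long fork add
        // traffic without meaningfully helping irreversibility.
        if (new_head_num - b.block_num >= APPROVAL_WINDOW)
            continue;
        // sign and broadcast an OBI approval vote for b (elided)
    }
}
```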
## Near to a release candidate

Despite the bugs discovered and the testing challenges, getting all the tests created so far to pass has given us the confidence to merge in all the outstanding feature branches for hardfork 26. As of today, we’ve merged all the performance optimizations, including support for OBI and resource credit rationalization, into the develop branch of the hive repository, and we’re now doing what can be viewed as final testing of this branch prior to tagging a release candidate.

# [Hive Application Framework (HAF)](https://gitlab.syncad.com/hive/haf)

While working on the block explorer and the balance tracker apps, we discovered it would be useful to have HAF create a table tracking the block number at which each hardfork was triggered. We’ll implement this new table soon.

# [HAF-based block explorer](https://gitlab.syncad.com/hive/haf_block_explorer)

Current work here is focused on optimizing the queries associated with the tables recently added to the block explorer (the witness table, witness vote tables, and vesting balances for accounts).

# [HAF-based balance tracker application](https://gitlab.syncad.com/hive/balance_tracker)

We added support for a few more operations, including newly added virtual operations, to allow the balance_tracker to correctly compute vests for accounts. This is an iterative process: we add support for more operations, then compare the computed balances against a replay of the blockchain that dumps each account’s balances at each block.

# [HAF-based hivemind (social media middleware server used by web sites)](https://gitlab.syncad.com/hive/hivemind)

Two of our Hive developers are currently working on HAF-based hivemind. One is focused on continuous integration and docker support; the other is writing tests, fixing bugs, and making optimizations.

# Some upcoming tasks

* Final testing and performance analysis of the hived release candidate on the mirrornet.
* Test the hived release candidate on a production API node to confirm real-world performance improvements.
* Tag the hived release candidate ASAP (hopefully Friday, if not then Monday).
* Finish testing and dockerizing HAF-based hivemind and use the docker image in CI tests.
* Test and update deployment documentation for hived and HAF apps.
* Complete work on the get_block API supplied via a HAF app.
* Continue work on the HAF-based block explorer.
* Collect benchmarks for a hafah app operating in “irreversible block mode” and compare them to a hafah app operating in “normal” mode (low priority).
* Document all major code changes since the last hardfork.

# When hardfork 26?

A formal announcement of the date should follow shortly from the hive.io account, but the base assumption is that it will be no less than 2 weeks after the release candidate has been tagged and documentation has been provided to the witnesses and exchanges that need to deploy the new software.
