9th update of 2022 on BlockTrades work on Hive software
![blocktrades update.png](https://images.hive.blog/DQmSihw8Kz4U7TuCQa98DDdCzqbqPFRumuVWAbareiYZW1Z/blocktrades%20update.png)
Below are a few highlights of the Hive-related programming issues worked on by the BlockTrades team since my last post.
# Hived (blockchain node software) work
### Mirrornet (testnet that mirrors traffic from mainnet)
We continued to make updates to the mirrornet code to fix some issues related to transaction expiration and duplicate transactions, and we're now launching a new mirrornet network to resume testing.
### Further optimization of OBI (one-block irreversibility) protocol
The optimizations to the OBI protocol are mostly done, but the developer responsible for this work is currently tied up with another Hive-related coding task, so the changes still need to be fully tested.
### Switching from legacy assets to NAI assets is going to delay the hardfork
As I mentioned in my last post, one of the last planned changes to hived was to support assets specified using “network asset identifiers” (NAIs) instead of strings. When we originally planned this change, it looked fairly simple: when a node received a transaction, it would first try to decode it with NAI assets, and if that failed, fall back to processing the transaction as if the assets were in string format.
Unfortunately, we found that internally the binary form of a transaction is discarded in favor of its decoded form as a C++ object, and only much later packed back into binary form. To support the two formats, there would now be several places in the code where we would need to pack the transaction in both formats to see which one matched the format it was originally signed in, and this would add significant CPU overhead.
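To make the overhead concrete, here's a rough C++ sketch of the issue (the names below, such as `pack_nai`, `pack_legacy`, and `signed_transaction`, are hypothetical stand-ins rather than hived's actual serialization API): once the original bytes have been discarded, any code that later needs to know exactly how the transaction was encoded when it was signed has to re-serialize it in both formats and compare.

```cpp
// Rough sketch (not hived's actual code) of why supporting two wire formats is
// costly once the original binary form of a transaction has been thrown away.
// All names here are hypothetical stand-ins for the real serialization routines.
#include <cstdint>
#include <stdexcept>
#include <vector>

using raw_bytes = std::vector<uint8_t>;
struct signed_transaction { /* decoded C++ representation of a transaction */ };

// Stand-ins for the two serializers; the real ones live inside hived.
raw_bytes pack_nai(const signed_transaction&)    { return {0x01}; }
raw_bytes pack_legacy(const signed_transaction&) { return {0x02}; }

enum class asset_format { nai, legacy };

// With the original bytes discarded, any code that later needs the exact bytes
// that were signed (e.g. for signature verification) has to serialize the
// transaction in BOTH formats and check which one matches, paying the packing
// cost twice at every such call site.
asset_format detect_signed_format(const signed_transaction& tx,
                                  const raw_bytes& original_bytes)
{
  if (pack_nai(tx) == original_bytes)
    return asset_format::nai;
  if (pack_legacy(tx) == original_bytes)
    return asset_format::legacy;
  throw std::runtime_error("transaction does not match either serialization");
}
```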
Fortunately, there is a better solution to this problem, and it fits well with one of the optimization changes that I mentioned in my last post that we were planning to make after the hardfork related to the new “compressed block” feature we recently added.
Currently, hived compresses blocks when it writes them to the blocklog (the file that stores all the blockchain's blocks), but it can only share blocks over the p2p network in uncompressed form. This leads to unnecessary compression and decompression of blocks during normal operation, and it also means that more data has to be exchanged over the p2p network (because the blocks aren't shared in compressed form).
So we were already planning to introduce a change after the HF to allow nodes to exchange blocks in either uncompressed or compressed form (nodes would negotiate when they connect whether they understand each other's compression formats or whether they will need to exchange blocks uncompressed). To do this efficiently, we planned to retain the binary form of each block as it was received from a peer (in addition to the unpacked object that exists today).
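As a rough illustration of the idea (the class and function names below are hypothetical, not hived's real types), a block received from a peer can be wrapped in an object that keeps the original bytes alongside the lazily decoded C++ object, so the node decodes a block at most once and compresses it at most once, no matter how many peers it forwards the block to or whether it writes it to the blocklog:

```cpp
// Illustrative sketch only; these are not hived's actual classes. The idea is
// to keep the exact bytes a block arrived in next to its decoded C++ object,
// so the node never re-packs or re-compresses a block it already holds in the
// needed form.
#include <cstdint>
#include <memory>
#include <optional>
#include <vector>

using raw_bytes = std::vector<uint8_t>;
struct signed_block { /* decoded C++ representation of a block */ };

// Stand-ins for the real (de)serialization and compression routines.
raw_bytes    compress_block(const raw_bytes& uncompressed) { return uncompressed; }
signed_block unpack_block(const raw_bytes&)                { return {}; }

class cached_block
{
public:
  explicit cached_block(raw_bytes received_uncompressed)
    : uncompressed_(std::move(received_uncompressed)) {}

  // Decode lazily, and only once.
  const signed_block& decoded()
  {
    if (!decoded_)
      decoded_ = std::make_unique<signed_block>(unpack_block(uncompressed_));
    return *decoded_;
  }

  // Reuse the bytes we already have when forwarding the block to peers or
  // writing the blocklog; compress at most once, regardless of how many
  // compression-capable peers ask for it.
  const raw_bytes& uncompressed_bytes() const { return uncompressed_; }
  const raw_bytes& compressed_bytes()
  {
    if (!compressed_)
      compressed_ = compress_block(uncompressed_);
    return *compressed_;
  }

private:
  raw_bytes                     uncompressed_;
  std::optional<raw_bytes>      compressed_;
  std::unique_ptr<signed_block> decoded_;
};
```

In practice a node would also need to handle blocks that arrive already compressed, but the caching principle is the same.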
But we decided to implement this optimization now instead of after the HF, because it will yield substantial performance benefits by eliminating a lot of the object copying, packing, unpacking, compression, and decompression done by the current code; plus it will help with the change to NAI serialization (which can only be done via a HF).
Preliminary analysis shows that the performance benefits of this change are quite large and should dramatically increase the number of transactions that a hived node can process in a block.
But it does require reasonably large changes to both the p2p and block processing code, so I think it is only prudent to allow more time not only to implement the changes, but also to perform extensive testing of them. My best guess right now is that this will push the HF out to late July.
# Hive Application Framework (HAF)
HAF development is still quite active, and a lot of the current work is focused on creating docker images for the HAF server and associated HAF apps to simplify deployment and management.
We've also added some additional blockchain information to the “default” HAF tables, based on needs identified while developing the HAF-based block explorer.
# HAF account history app (aka hafah)
We found and fixed a few more issues and annoyances with the HAF account history application, and added some minor features (e.g. a get_version API call to report the current version of the account history app).
We also updated the benchmarking code to test various aspects of hafah operation. These benchmarks can now test just the internal SQL calls (to measure the SQL time without the overhead of the web servers), the python-based web server, and the postgREST server (including testing both the legacy API and the new higher-performance “direct URL” API).
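One way to picture the web-server-layer measurements: time a request from the outside and compare across servers. Below is a minimal C++/libcurl sketch of a single timed call (the endpoint URL, port, and request body are assumptions for illustration, not the project's actual benchmark harness, which drives many calls across many endpoints):

```cpp
// Minimal example of timing one JSON-RPC call against a locally running API
// server; endpoint and request body are hypothetical.
#include <chrono>
#include <cstddef>
#include <iostream>
#include <string>
#include <curl/curl.h>

// Discard the response body; we only care about the round-trip time here.
static size_t discard(char*, size_t size, size_t nmemb, void*) { return size * nmemb; }

int main()
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL* curl = curl_easy_init();
  if (!curl) return 1;

  // Hypothetical local hafah instance and a legacy-style account history request.
  const std::string url  = "http://127.0.0.1:8080/";
  const std::string body =
    R"({"jsonrpc":"2.0","method":"account_history_api.get_account_history",)"
    R"("params":{"account":"blocktrades","start":-1,"limit":100},"id":1})";

  curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
  curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);

  const auto start = std::chrono::steady_clock::now();
  const CURLcode rc = curl_easy_perform(curl);
  const auto stop  = std::chrono::steady_clock::now();

  if (rc == CURLE_OK)
    std::cout << "response time: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
              << " ms\n";
  else
    std::cerr << "request failed: " << curl_easy_strerror(rc) << "\n";

  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return rc == CURLE_OK ? 0 : 1;
}
```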
We also did more benchmarking to firmly convince ourselves that the postgREST web server performs better than the python-based one (one of our devs had benchmarks that suggested otherwise, but this appears to have been a fluke). In the end, we found that postgREST, as expected, did much better (on average, response times were about 3x better). We also double-checked that both servers returned the same responses, to be sure that there wasn't some functional difference responsible for the improved performance.
At this point, there are no known issues with hafah functionality or performance, so I expect we’ll resume real world testing in a production environment this upcoming week.
One further issue I've been thinking about lately is that we should be able to use hafah to implement the get_block API as well, which would further lower the API processing load on hived nodes and, I think, allow for faster response times, but I haven't had a chance to look at this closely yet.
# HAF-based hivemind (social media middleware server used by web sites)
We completed the work to store hivemind’s tables directly into a hivemind-specific schema in a HAF database (i.e. the same method used by other HAF apps) and we’ve been testing this new version with full syncs to the headblock (i.e. to live sync mode).
During this testing process, we've found and fixed several bugs along the way, but the testing is time-consuming (it takes almost 54 hours to reach live sync, which is pretty good performance-wise for processing 6 years of blockchain history, but is a long time when you just want to find out if there's a bug that regular testing didn't expose).
Anyways, despite the long test times, I think we're close to completing this task, at least to the point where it should be a more-than-adequate replacement for the current production version of hivemind (we may be able to make further optimizations later based on the HAF-based design approach).
# HAF-based Hive block explorer
We have a preliminary version of the HAF-based Hive block explorer completed, and I'll share some screenshots and a technical description of it in a separate post later this week. One of the ideas behind this design is to really decentralize the tech behind Hive block explorers so that any HAF server operator can easily run one.
# Some upcoming tasks
* Modify hived to process transactions containing either NAI-based assets or legacy assets. As mentioned in the hived section of this post, this task has morphed into wrapping the existing block and transaction objects with smarter objects that retain the binary form of those objects (and other metadata that is costly to recompute). This task is currently in progress, and I think we'll have a preliminary version ready for testing next week.
* Merge in new RC cost rationalization code.
* Finish dockerization and CI improvements for HAF and HAF apps.
* Collect benchmarks for a hafah app operating in “irreversible block mode” and compare to a hafah app operating in “normal” mode.
* Test postgREST-based hafah running inside a Docker container on a production server (api.hive.blog).
* Complete and benchmark HAF-based hivemind, then deploy and test on our API node.
* Test enhancements to one-block irreversibility (OBI) algorithm.
* Continue testing using updated blockchain converter for mirrornet.
# When hardfork 26?
Due to the unexpected decision to optimize hived p2p and block processing before the hardfork rather than after, the hardfork date has been shifted to late July to ensure we have adequate time to test these changes.
On the positive side, I'm pretty certain this will be the last update we make to the hardfork timing, as these should be the last changes we need for the hardfork.
As a side note, there have been several requests for a summary of the changes being made as part of the hardfork, and I'll try to put together a list of those changes soon (including non-hardfork changes, since many of those are actually the most significant ones).