![blocktrades update.png](https://images.hive.blog/DQmSihw8Kz4U7TuCQa98DDdCzqbqPFRumuVWAbareiYZW1Z/blocktrades%20update.png)

Below is a list of some of the Hive-related programming issues worked on by the BlockTrades team during the past month. Given how much time has passed since the last report and how much work has been done, I’ll just cover a few highlights here.

# Hive Application Framework: framework for building robust and scalable Hive apps

We put a lot of effort into creating new user documentation for HAF (e.g. documentation for HAF app developers). Despite all that work, I expect the docs will still need further improvement as we get feedback from users trying out HAF for the first time.

As a supplement to the written documentation, we’ve also created scripts to simplify setup of a HAF server (either in a docker container or running “bare-metal” on a computer). Here’s a look at the options to the top-level “one-step” script for setting up a HAF server:

```
./scripts/setup_haf_instance.sh parameters:
--help
Usage: ./scripts/setup_haf_instance.sh [OPTION[=VALUE]]...
One-step setup for a HAF server instance
OPTIONS:
  --host=VALUE            Optionally specify a PostgreSQL host location
                          (defaults to /var/run/postgresql)
  --port=NUMBER           Optionally specify a PostgreSQL operating port
                          (defaults to 5432)
  --hived-data-dir=PATH   Optionally specify a path where hived node will
                          store its data. For faster setup, put a recent
                          blockchain/block_log and block_log.index file in
                          this directory before running this script.
  --hived-option=OPTION   Optionally specify a hived option to be passed to
                          the automatically spawned hived process (this
                          option can be repeated to pass multiple hived
                          options).
  --option-file=FILE      Optionally specify a file containing other options
                          specific to this script's arguments. This file
                          cannot contain another --option-file option within
                          it.
  --haf-database-store=DIRECTORY_PATH
                          Optionally specify a directory where Postgres SQL
                          data specific to the HAF database will be stored.
  --branch=branch         Optionally specify a branch to checkout and build.
  --help                  Display this help screen and exit
```

We’ve continued to improve the build-and-test system for HAF, and it now uses the aforementioned scripts to set up test scenarios. We also made more optimizations and fixed some minor bugs exposed during testing.

I’m planning to write an overview post for HAF developers early next week that should serve as a jumping-off point for anyone interested in using HAF to design their app (in my opinion, that should be anyone building a new Hive-based app).

### Balance tracker (example HAF app)

We created an API and a web interface for the balance_tracker (an example HAF app) using the PostgREST web server, as an experiment in creating a HAF app that only requires SQL and Javascript coding. For more details on this app, see my previous posts.

We finished optimizing this application, then created a second version that uses a Python-based web server so we could compare its performance against the PostgREST web server. After fully optimizing the SQL queries that both web servers rely on, we found that the PostgREST server performs better than the Python-based one (at first, this fact was obscured by the slowness of the unoptimized SQL queries).

These benchmark results suggest that we may want to make PostgREST the recommended server for HAF apps, so next we’re planning to experiment with replacing the Python-based web server used in the HAF account history app to see if we can improve its performance in a similar way.

### HAF account history app (aka hafah)

We’ve continued to work on hafah, the account history app that will eliminate the need for API nodes to run a resource-intensive account history hived node.
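As context for hafah’s workload, account history data is served over Hive’s JSON-RPC interface; a request to the account history API looks roughly like this sketch (the account name, limit, and API node URL are illustrative, not part of hafah itself):

```shell
# Shape of a JSON-RPC account history request, as answered today by a
# hived account history node (and, going forward, by hafah).
# The account name and limit are illustrative values.
REQUEST='{"jsonrpc":"2.0","method":"account_history_api.get_account_history","params":{"account":"blocktrades","start":-1,"limit":10},"id":1}'
echo "$REQUEST"
# To send it to a public API node (URL illustrative):
#   curl -s https://api.hive.blog -d "$REQUEST"
```

The same request shape applies whether the backend is a hived node or a HAF-based replacement, which is what makes apples-to-apples benchmarking between the two possible.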
We set up a build-and-test system for hafah, and we fixed some bugs exposed while verifying the data it produces against the data produced by an account history hived node. We also completed some benchmarks that show superior performance by hafah operating on a live-synced HAF database compared to a hived-based account history node.

One of the next things we plan to examine is the performance of a version of hafah that only operates on irreversible block data (this mode of operation is useful for exchanges and other financial apps). We expect hafah working only on irreversible blocks to be faster than standard hafah, but we want to quantify the extent of the performance advantage.

# Hived (blockchain node software) work

We made some major optimizations in the way that hived serializes API results to JSON, and this has a decent impact on the performance of some of the API calls (including the account history API calls that are being compared against hafah’s performance).

We also continued to make improvements to the testing system. Hived can now report its “state” to testtools to allow for more testing scenarios. We added tests for comment options, and we created more tests for the account history plugin to help us verify the functionality of hafah.

We continued work on RC cost analysis, but this work is temporarily on hold until we track down why we’ve seen an increase in block time offsets under heavy traffic conditions: our analysis so far indicates that this slowdown isn’t related to the time required to process transactions in the blockchain thread, and it currently dominates and obscures the real costs of processing these operations. I suspect this problem is the result of a known issue with the use of boost-based locks that are incompatible with the multitasking model used by the peer-to-peer layer.
These locks were introduced when the p2p code was first lifted out of BitShares and reused in the Steem code, but they haven’t caused any real bottlenecking until chain activity increased to a very high level. We have a tentative solution for this issue, and we’ll start on it in a week or so, once a dev frees up to work on it.

With respect to protocol changes (hardfork changes), we added the ability for an account to replace its signing keys twice per hour (previously it was limited to once per hour). This change was requested to allow for more secure key changing.

# What’s next?

* Make a few more improvements to the one-step script for installing HAF
* Finish the write-up of the “Getting started with HAF” post
* Add support for filtering of operations by sql_serializer to allow for smaller HAF server databases
* Collect benchmarks for hafah operating in irreversible block mode
* Finish conversion of hivemind to a HAF-based app
* Fix the locking/task incompatibility issue between the blockchain and p2p threads, and see if this fixes the block time offset increase under heavy transaction loading
* Complete work on resource credit rationalization after the block time offset issue is resolved

# When hardfork 26?

We have only a few remaining tasks to complete for hardfork 26, and I’m somewhat optimistically hoping we can complete them by the end of the month. In the meantime, we’ve set up a first testnet that can be used to test @howo’s RC delegations (and, incidentally, many of the other changes we’ve made so far).

Once we’ve completed all the changes for this hardfork, we’ll launch a final testnet for a month, with a planned hardfork time at the end of that month, assuming no problems are discovered. Realistically, this means the earliest possible hardfork date would be the end of March, but it wouldn’t surprise me if that date pushes into April.
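For anyone who wants to try HAF in the meantime, the one-step setup script described earlier can be invoked along these lines (the port, paths, and branch name below are illustrative examples, not recommendations):

```shell
# Illustrative invocation of the one-step HAF setup script.
# For faster setup, place a recent block_log and block_log.index in the
# hived data directory before running it.
./scripts/setup_haf_instance.sh \
    --port=5432 \
    --hived-data-dir=/home/hived/datadir \
    --haf-database-store=/home/hived/haf_db_store \
    --branch=develop
```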

See: First update of 2022 on BlockTrades work on Hive software by @blocktrades