![blocktrades update.png](https://images.hive.blog/DQmSihw8Kz4U7TuCQa98DDdCzqbqPFRumuVWAbareiYZW1Z/blocktrades%20update.png)

Below is a list of some of the Hive-related programming issues worked on by the BlockTrades team during the past work period:

# Hived work (blockchain node software)

### Updating RC cost calculation code

We continued to analyze the resource credits code. We wrote some code to dump RC costs during a replay of the blockchain and created some graphs to analyze how the RC resource pools change over the blockchain's operational history.

Our most significant finding so far is that the execution-time costs are totally inaccurate right now, largely because signature verification time wasn't accounted for at all. To get better estimates of real-world execution-time costs, we'll probably create a tool that measures execution times for various operations during replay, as a starting point for updating them to accurate values, with a separate cost calculation that accounts for signature costs based on the number of signatures used by the transaction containing the operations.

### Testing via new "mirror net" technology identified a new bug (now fixed)

As mentioned a while back, we've been developing a tool that takes an existing block_log (e.g. a block_log from the mainnet) as a starting point for launching a testnet that more closely matches the configuration of the mainnet. This technology is conceptually similar to the idea behind the older tinman/gatlin code, but is designed for higher performance.

The new mirror net code is already proving its worth: while testing it, we found a bug in the hived code whereby the reward balance could go negative. For more on this bug, see the fix and associated issue: https://gitlab.syncad.com/hive/hive/-/merge_requests/306

In the longer term, we'll be integrating this technology into our build-and-test system (continuous integration system) for various advanced test scenarios.
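As a rough illustration of the RC cost idea described earlier (per-operation execution costs plus a separate per-signature verification charge for the containing transaction), here is a minimal sketch. The function name and all constants are hypothetical, chosen only for illustration; real values would come from the replay measurements described above.

```python
# Hypothetical sketch of an execution-time cost model that charges
# signature verification separately, per signature in the transaction.
# All constants are invented; real costs would be measured during replay.

# Invented per-operation execution costs, in microseconds.
OP_EXEC_COST_US = {
    "vote_operation": 30,
    "transfer_operation": 50,
    "comment_operation": 120,
}

SIG_VERIFY_COST_US = 200  # invented cost of verifying one signature

def transaction_exec_cost_us(operations, num_signatures):
    """Total execution-time cost of a transaction: the sum of per-operation
    costs plus a per-signature verification charge for the transaction."""
    op_cost = sum(OP_EXEC_COST_US[op] for op in operations)
    return op_cost + num_signatures * SIG_VERIFY_COST_US

# Example: one vote plus one transfer, signed with a single signature.
cost = transaction_exec_cost_us(["vote_operation", "transfer_operation"], 1)
```

The key point the sketch captures is that signature cost scales with the number of signatures on the transaction, independently of which operations it contains.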
### Finished testing and merged command-line interface (CLI) wallet improvements into the develop branch

* Completed improvements for offline use of the CLI wallet: https://gitlab.syncad.com/hive/hive/-/merge_requests/265
* Added a default value for the server-rpc-endpoint option: https://gitlab.syncad.com/hive/hive/-/merge_requests/273

Other hived-related work:

* Finished testing and merged fixes for the sql_serializer and account history plugins: https://gitlab.syncad.com/hive/hive/-/merge_requests/289 https://gitlab.syncad.com/hive/hive/-/merge_requests/294
* Merged changes for HBD limits for HF26: https://gitlab.syncad.com/hive/hive/-/merge_requests/297
* Removed the obsolete `SKIP_BY_TX_ID` compile option: https://gitlab.syncad.com/hive/hive/-/merge_requests/301
* Fixed a problem with the faketime library on some platforms: https://gitlab.syncad.com/hive/hive/-/merge_requests/303
* Updated some API pattern tests based on bug fixes: https://gitlab.syncad.com/hive/hive/-/merge_requests/299
* Improved testtools robustness when there is a temporary communication interruption to the nodes being tested: https://gitlab.syncad.com/hive/hive/-/merge_requests/302
* Updated to a newer clang-tidy linter (now uses the default one on Ubuntu 20): https://gitlab.syncad.com/hive/hive/-/merge_requests/300
* Compile all targets with the boost > 1.70 available on Ubuntu 20.04: https://gitlab.syncad.com/hive/hive/-/merge_requests/307

# Hive Application Framework: framework for building robust and scalable Hive apps

Much of our work during the last period continued to focus on app framework development and testing. We continued the code cleanup associated with the new HAF repo (this is the repo mentioned last week that contains the components common to all HAF-based applications; it was created to better manage version compatibility among HAF components and prerequisite applications such as hived).
A lot of documentation was added and/or updated, more testing was done by 3rd-party testers (like me) to ensure the instructions are clear and accurate on "clean systems" (i.e. not the developer's computer), fixes were made for build and test compatibility on both Ubuntu 18 and 20 (although Ubuntu 20 is still the recommended platform for any HAF-related development), etc.

A new API call, `hive.connect`, was added which handles database inconsistencies that can potentially arise if the connection between hived and the postgres server is broken, allowing serialization to be smoothly resumed: https://gitlab.syncad.com/hive/haf/-/merge_requests/13

We also added system tests for the sql_serializer to the new HAF repo: https://gitlab.syncad.com/hive/haf/-/merge_requests/17 With this addition, we have a full test suite for all the components contained in the HAF repo.

### Optimizing HAF-based account history app (Hafah)

This week we continued performance testing and optimization of Hafah. We added a library that lets us track memory usage, and the latest incarnation with memory optimizations was able to sync all the way to the head block using only 4GB of virtual memory while configured to use 7 threads for sending/receiving data. Previously, using this many threads required nearly 128GB of memory, so this was a substantial improvement.

Further improvements were also made to the sync process which, at least in the 5M-block scenario we tested, reduced sync time from 280s to 180s. This improvement still needs to be benchmarked with a full sync to the head block, but we will likely see similar performance improvements for a full sync.
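The resume-after-disconnect behavior that `hive.connect` enables can be modeled simply: rows for a block only become durable when that block commits, so if the connection drops mid-block, the staged rows are discarded and the serializer can safely restart from the block after the last committed one. The sketch below is an in-memory illustration of that pattern under those assumptions; the class and function names are hypothetical, and this is not HAF's actual implementation.

```python
# In-memory model of resumable block serialization: rows are staged per
# block and only become durable when the block commits, so a dropped
# connection loses at most the in-flight block.

class BrokenConnection(Exception):
    pass

class BlockSink:
    def __init__(self):
        self.committed = []   # durable state: (block_num, rows) pairs
        self.last_block = 0   # highest fully committed block
        self._pending = None  # staged rows for the in-flight block

    def write_block(self, block_num, rows, fail=False):
        self._pending = (block_num, rows)  # staged, not yet durable
        if fail:
            self._pending = None           # staged rows are discarded
            raise BrokenConnection("connection to postgres lost")
        self.committed.append(self._pending)
        self.last_block = block_num
        self._pending = None

def sync(sink, blocks):
    """Resume serialization from the block after the last committed one."""
    for num, rows in blocks:
        if num <= sink.last_block:
            continue  # already durable, skip it
        sink.write_block(num, rows)

# Usage: block 2 fails mid-write, then a re-sync resumes cleanly from it.
sink = BlockSink()
sink.write_block(1, ["row-a"])
try:
    sink.write_block(2, ["row-b"], fail=True)  # connection drops
except BrokenConnection:
    pass
sync(sink, [(1, ["row-a"]), (2, ["row-b"]), (3, ["row-c"])])
```

Because the failed block left no durable rows behind, the re-sync neither duplicates block 1 nor loses block 2.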
### Benchmarking the multi-threaded jsonrpc server for HAF apps (using Hafah as our "test" app)

Preliminary benchmarking of a multi-threaded jsonrpc server showed a 2-3x improvement in API performance for an experimental version of Hafah synced to 5M blocks (measured relative to the original single-threaded jsonrpc server), but we still need to repeat these benchmarks with a version of Hafah synced to the current head block.

# Hivemind (social media middleware app used by social media frontends like hive.blog)

As mentioned last week, a bug was detected during our production testing that caused some notifications to show as being from 1970. We've fixed this bug, and the new version with the fix is now being tested in production. Assuming no problems, we'll tag an official release with the bug fix in the next couple of days.

# Work in progress and upcoming work

* In progress: experiment with generating the "impacted accounts" table directly from the sql_serializer to see if it is faster than our current method, where Hafah generates this data on demand as it needs it. This task also requires creating a simplified form of Hafah. In fact, the new Hafah would be so simplified on the indexer side that we're considering adding functionality to it to maintain a per-block account balance history, so that something does real work in the indexing portion of the code. This would also be useful as a template for future work on 2nd-layer tokens.
* Release a final official version of hivemind with postgres 10 support, then update hivemind CI to start testing with postgres 12 instead of 10.
* Run tests to compare results between the account history plugin and HAF-based account history apps.
* Finish setup of continuous integration testing for the HAF account history app.
* Finish conversion of hivemind to a HAF-based app. Once we're further along with HAF-based hivemind, we'll test it using the fork-inducing tool.
* Fix RC cost estimations for execution time, as described in the hived section of this post.
* Deploy a new version of condenser that displays the RC level for the signed-in account.
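The "notifications from 1970" bug mentioned in the Hivemind section above is the classic symptom of a missing or zero timestamp being rendered as the Unix epoch. The snippet below illustrates the symptom and one defensive pattern; the guard function is illustrative only, not Hivemind's actual fix.

```python
from datetime import datetime, timezone

# A timestamp field that defaults to 0 renders as the Unix epoch (1970).
raw_ts = 0  # e.g. an unset field that silently defaulted to zero
shown = datetime.fromtimestamp(raw_ts, tz=timezone.utc)
# shown.year is 1970 -- the "notifications from 1970" symptom.

# Illustrative guard: treat a zero/None timestamp as "unknown" so the
# frontend can show nothing rather than an epoch date.
def notification_time(ts):
    if not ts:
        return None  # caller displays "unknown" instead of 1970
    return datetime.fromtimestamp(ts, tz=timezone.utc)
```

A guard like this only hides the symptom, of course; the underlying fix is to make sure the timestamp is populated in the first place.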