![blocktrades update.png](https://images.hive.blog/DQmSihw8Kz4U7TuCQa98DDdCzqbqPFRumuVWAbareiYZW1Z/blocktrades%20update.png)

Below is a list of Hive-related programming issues worked on by the BlockTrades team during the last week or so:

# Hived work (blockchain node software)

We’re continuing to test and make fixes as a precursor to tagging a second release candidate for hived.

We’ve created a new python-based library, currently called “testtools”, for creating test scenarios for hived and hived’s CLI wallet. It replaces the beempy library that was previously used for this purpose, in order to speed up test execution. For now, the primary purpose of this library is testing hived, but it may have more general applicability as a library for communicating with hived, in which case we will rename it to something more appropriate later: https://gitlab.syncad.com/hive/hive/-/merge_requests/242

We created some unit-test-based stress tests for the new recurrent transfers functionality. Initially we found some surprising results in terms of memory usage, but this was ultimately traced to a misconfiguration of the hived instance (it was configured with the deprecated chainbase account history plugin, which is known to consume too much memory). With that plugin replaced by the rocksdb-account-history plugin, memory consumption and general performance were fine. We also fixed some minor issues with the recurrent transfer operation: https://gitlab.syncad.com/hive/hive/-/merge_requests/246

We’ve added a few new network API calls to hived for getting the peer count, getting connected peers, adding peers, and setting allowed peers. These functions were primarily added to facilitate testing scenarios (e.g. testing forking logic), but they can be useful to node operators as well: https://gitlab.syncad.com/hive/hive/-/merge_requests/244

We’ve added support for building with boost 1.70 (tested on Ubuntu 18 and 20).
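For node operators curious about the new network calls, they are reachable the same way as any other hived API: a JSON-RPC 2.0 request over HTTP. Here is a minimal sketch; the method name `network_node_api.get_connected_peers` and the endpoint/port are assumptions for illustration — check the merge request above for the actual names exposed by your build.

```python
import json

# Build a JSON-RPC 2.0 request body for hived. The method names used
# below are illustrative assumptions, not confirmed API identifiers.
def rpc_payload(method, params=None, request_id=1):
    return {
        "jsonrpc": "2.0",
        "method": method,
        "params": params if params is not None else {},
        "id": request_id,
    }

# Example: ask a node for its currently connected peers.
payload = rpc_payload("network_node_api.get_connected_peers")
body = json.dumps(payload).encode()

# To actually send it (requires a running hived with the API enabled):
# import urllib.request
# req = urllib.request.Request("http://127.0.0.1:8090", data=body,
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read())
```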
We also modified the fc library to enable a simplified logging syntax. For example, instead of:

`ilog("my variable=${my_variable}",("my_variable",my_variable));`

you can simply use:

`ilog("my variable=${my_variable}",(my_variable));`

Note that the older syntax is still required when you need to call a function on the variable to get the value to log. The two syntaxes can be mixed and matched in a single log statement.

During our testing of the fix for the longstanding “duplicate operations in account history” bug, we found that this problem could also arise when the value of the last irreversible block was “undone” as part of the shutdown of hived (i.e. when a node operator presses Ctrl-C to shut down the node). On a subsequent start, with the last irreversible block set to an earlier block, the code would re-add the operations from the already-processed blocks. To fix this, we’re making sure the irreversible block number no longer gets reverted by the database state undo operation.

Once the above issue is fixed and tested in replay mode in conjunction with a full sync of hivemind, we’ll tag a second release candidate for the testnet (probably Thursday or Friday). Barring any unexpected issues during testnet testing, I expect this will be our last release candidate before the official release, based on testing results so far.

# Hivemind (2nd layer applications + social media middleware)

Last week we made final fixes and ran performance tests in preparation for a new release of hivemind for API node operators later this week.

## Changing back to using pip for hivemind installation

We recently found that our current installation methodology for hivemind could lead to unexpected package versioning issues, so we’re switching back to using pip (the python package installer) and pinning the versions of the packages that hivemind uses.
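Pinning means installing from a requirements file with exact `package==version` lines and then verifying the installed set matches. A minimal sketch of such a verification step, using only the standard library — the package names and versions here are illustrative, not hivemind’s actual pins:

```python
from importlib.metadata import version, PackageNotFoundError

# Illustrative pins -- not hivemind's actual requirements list.
PINNED = {
    "aiohttp": "3.7.4",
    "sqlalchemy": "1.3.24",
}

def check_pins(pins, get_version=version):
    """Return (package, expected, installed) triples that don't match."""
    mismatches = []
    for name, expected in pins.items():
        try:
            installed = get_version(name)
        except PackageNotFoundError:
            installed = None
        if installed != expected:
            mismatches.append((name, expected, installed))
    return mismatches
```

pip itself enforces the pins at install time (`pip install -r requirements.txt`); a check like the above just confirms afterwards that the environment wasn’t altered by some other install.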
## Performance testing and optimization for hivemind

While testing the develop branch of hivemind on our production API node (https://api.hive.blog), we noticed a slowdown in the query `bridge_get_ranked_post_by_created_for_tag` (it went from an average of 64ms to nearly 2s). This problem was ultimately traced to insufficient statistics being collected for the tags_ids column in the hive_posts table. The collected statistics weren’t sufficient to model the probability distribution of the tags used by posts, which resulted in the query planner selecting an under-performing query plan.

What’s interesting here is that this was a latent performance issue that could have occurred on any API node that collected an unlucky statistical sample (it wasn’t really a master-vs-develop branch issue). We fixed the issue by increasing the statistics target for this column from 100 to 1000: https://gitlab.syncad.com/hive/hivemind/-/merge_requests/503

## Hivemind memory consumption

We’re still researching potential ways to decrease the amount of memory consumed by the hivemind sync process over time. We’ve reduced memory consumption somewhat, but further reductions look possible.

## Postgres 13 vs Postgres 10 for hivemind

During our search for a solution to the statistics problem above (before we realized that increasing the statistics target was the best fix), we also tried upgrading our SQL database from postgres 10 to postgres 13, to see if it would select a better query plan. The upgrade had no impact on that problem, but we found another slowdown during hive sync (the indexer that adds data from the blockchain to the database) tied to postgres 13. It occurs because the postgres 13 planner incorrectly estimates the cost of updating rshare totals during “live sync” and decides to do a just-in-time (jit) optimization, which adds 100ms to the query time (`update_posts_rshares` normally averages around 3ms).
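For reference, postgres exposes standard planner settings for controlling when jit compilation kicks in; a sketch of the knobs involved (the threshold value below is illustrative — postgres’s default `jit_above_cost` is 100000):

```sql
-- Raise the estimated cost a query must exceed before the planner
-- may use JIT (illustrative value, effectively disabling JIT):
SET jit_above_cost = 1000000000;

-- Or turn JIT off entirely for the session:
SET jit = off;
```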
We confirmed this was the issue by raising the cost threshold required before the planner is allowed to employ jit optimization (effectively disabling jit for the query). With jit disabled, performance was slightly better on postgres 13 than on 10. Once we move to 13, we’ll need to select a long-term solution for this issue (either improve the cost estimation or simply disable jit for this query), but that’s an issue for a later day.

## Functional testing and fixes for hivemind

While working on fixes to community-related API calls, we also improved our mock testing capabilities to verify the changes (mock testing allows us to inject “fake” data into an existing hivemind data set for testing purposes).
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/496
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/499
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/501

# Modular hivemind (Application framework for Hive apps)

We’re currently using the prototype of our modular hivemind framework to build a sample application that will support the account history API. Hopefully we’ll be able to perform a full test of this sample application sometime next week.

# Condenser wallet

We’ve been doing some condenser wallet testing and bug fixing. We fixed a bug in the new feature by @quochuy that generates a CSV file of a user’s transaction history. The fix has been deployed to https://wallet.hive.blog: https://gitlab.syncad.com/hive/wallet/-/merge_requests/106

# Testnet

We’ve had a few brave souls do some testing on the testnet, but I’d like to see a lot more, especially from users supporting Hive API libraries and Hive-based apps. But everyone is welcome to play around on the testnet and try to break things.
As a regular Hive user, you can log in with your normal credentials via https://testblog.openhive.network (a hive.blog-like testing site) or https://testnet.peakd.com/ (a peakd-like testing site). You can also browse the testnet with this block explorer: https://test.ausbit.dev/

Going forward, the testnet should be the preferred vehicle for initial testing of Hive apps. Testing new features now, before the hardfork, also helps us identify areas where we may want to change API responses, etc., before there’s an “official” API response that must be changed later.

# Planned date for hardfork 25

I’m still projecting that the hardfork will be in the last week of June.

See: 13th update of 2021 on BlockTrades work on Hive software by @blocktrades