This is the 2nd update of 2021 on the Hive programming work being done by the BlockTrades development team. We’ve recently been having meetings to decide how best to approach some of the larger [upcoming tasks (modular hivemind framework, smart contract platform, etc)](https://hive.blog/hive-139531/@blocktrades/roadmap-for-hive-related-work-by-blocktrades-in-the-next-6-months) and what things we need to prototype first.

While we’re waiting for those prototypes to be built, we have a number of programmers free to work on smaller tasks, so we’re prioritizing a lot of “low-hanging fruit” tasks that are relatively easy to do and still provide useful benefits. This means we’re reviewing many of the existing open issues in Hive’s gitlab repository, closing completed issues that weren’t properly marked as done, deciding which open issues we can complete quickly, and discussing possible implementations of these issues with other Hive programmers.

Below are some of the tasks we completed during this period on various Hive projects:

# Hived work (blockchain node software)

To avoid potential performance problems, the ability to vote on expired DHF proposals was disabled:
https://gitlab.syncad.com/hive/hive/-/merge_requests/161

New tests were created to check a hived node’s behavior when replaying the node and when stopping and restarting the node (this work was done a while ago, but only recently merged into the development branch):
https://gitlab.syncad.com/hive/hive/-/merge_requests/89

One of the prototyping projects going on right now is the plugin for directly writing data from hived to hivemind. The work for that is going on here:
https://gitlab.syncad.com/hive/hive/-/commits/km_live_postgres_dump/

As part of the above task, there’s been some debate about how we might want to introduce new data computed in hived into hivemind.
Generally speaking, I hope we won’t need to introduce many new types of data that aren’t already there, but we know of at least some accounting information, for example, that isn’t currently shared with hivemind. I think the simplest method for this is to have hived generate some new virtual operations when it processes a block, since all virtual operations are already being passed to hivemind. Virtual operations are essentially “side effects” of user-generated operations that report how those operations changed the internal state of the blockchain.

However, some of the other Hive programmers are concerned about the potential performance impact of generating these additional operations (especially if we later find the need to generate many more, which could also make it harder to keep the code simple). So we’re going to do some performance testing with a prototype that BlockTrades has already written that generates these additional operations, measuring it against a version that doesn’t.

It’s also worth noting that we could conceivably make generation of many virtual operations optional, configuring a hived node to only generate the virtual operations that a given hivemind configuration needs.
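To make the “side effects” idea concrete, here is a minimal sketch in Python (hived itself is C++, and its real operation types and logic differ). The function name `process_transfer` and the `balance_change_operation` virtual operation type are invented for illustration; the point is only that applying a user operation can emit extra records describing the resulting state changes, so a consumer like hivemind doesn’t have to recompute them:

```python
# Hypothetical sketch: a user-generated operation produces virtual
# operations reporting its side effects on blockchain state.
# All names here are illustrative, not hived's actual API.

def process_transfer(state, transfer):
    """Apply a transfer to account balances and return virtual
    operations describing the resulting state changes."""
    state[transfer["from"]] -= transfer["amount"]
    state[transfer["to"]] = state.get(transfer["to"], 0) + transfer["amount"]
    # Each state change is reported as a virtual operation, so a
    # downstream consumer sees the side effect without reimplementing
    # the blockchain's accounting logic.
    return [
        {"type": "balance_change_operation",
         "account": transfer["from"], "delta": -transfer["amount"]},
        {"type": "balance_change_operation",
         "account": transfer["to"], "delta": transfer["amount"]},
    ]
```

Making such operations optional would then just be a matter of filtering which virtual operation types a given hived node emits, based on what its paired hivemind configuration needs.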
The issue for this task is: https://gitlab.syncad.com/hive/hive/-/issues/111

# Hivemind (2nd layer microservice for social media)

Here’s a list of some of the code changes that were merged into hivemind this work period:

Improvements to the code for upgrading a hivemind database to a new version of the database schema:
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/457

A fix to user notifications:
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/457

Improvements to interrupting both the fast sync and live sync indexing processes (interrupting with CTRL-C and restarting):
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/435

Improved testing of mute and follow_mute operations:
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/433

Speedup of the get_follow_list API call:
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/440

Increase the amount of version information available via API about an API node’s hivemind version (see issue https://gitlab.syncad.com/hive/hivemind/-/issues/124 for more details):
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/462

Add code to measure time consumed after initial sync (time to “vacuum analyze” tables to build table statistics that improve SQL query planning, to generate indexes needed by API calls, etc):
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/464

We also continued to work on extending the coverage of hivemind tests over Hivemind’s extensive API.
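The post-sync timing change mentioned above can be sketched generically. This is not hivemind’s actual code; the step names and the `execute` callback are invented, but the shape is the same: run each maintenance statement (such as a `VACUUM ANALYZE` or an index build) and record how long it took:

```python
import time

# Illustrative sketch of timing post-sync maintenance steps.
# Step names and the execute() callback are hypothetical.

def run_timed_steps(steps, execute):
    """Run each (name, sql) step via execute() and return a dict
    mapping step name to elapsed seconds."""
    timings = {}
    for name, sql in steps:
        start = time.perf_counter()
        execute(sql)
        timings[name] = time.perf_counter() - start
    return timings
```

Collecting these timings separately makes it easy to see whether the statistics-gathering or the index creation dominates the time between the end of initial sync and the node being ready to serve API queries.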
# Condenser and web wallet (software that powers hive.blog and similar sites)

We made some fixes to the hive web wallet to handle issues related to the display of transfers on old accounts that didn’t have much recent activity:
https://gitlab.syncad.com/hive/wallet/-/merge_requests/85
https://gitlab.syncad.com/hive/wallet/-/merge_requests/87
https://gitlab.syncad.com/hive/wallet/-/merge_requests/90

# Near-term work plans and work in progress

On the hived side, we continue to work on the governance changes discussed in our six-month roadmap post.

On the hivemind side, since we’re now working on directly injecting the blockchain data into hivemind, we also need to change the hivemind indexer (which takes the raw blockchain data and builds the auxiliary tables used to answer hivemind API queries). Previously the indexer made API calls to hived to get the raw blockchain data, so that code now has to be changed to read the data from the block and operations tables created by the hived plugin.

We’re also going to experiment with using the block and operations data stored in hivemind by the hived plugin to serve up the get_account_history API data. I believe we can tremendously speed up the performance of these API calls this way and allow full querying of the history of a user’s operations with extensive filtering capabilities. The issue for this task is: https://gitlab.syncad.com/hive/hivemind/-/issues/132

Both of the above tasks are tightly related to the design of the modular hivemind framework and serve as prototypes for its functionality. Modular hivemind, in turn, will likely serve as the basis for our smart contract platform.
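The filtered account-history idea above can be illustrated with a small Python sketch. The row layout (`block_num`, `type`, `accounts`) is invented here, not hivemind’s actual schema, and in production this would be a SQL query against the operations table with appropriate indexes rather than an in-memory scan, but the filtering semantics are the same:

```python
# Hypothetical sketch of get_account_history-style filtering over an
# operations table. The row layout is illustrative, not hivemind's
# real schema.

def filter_account_history(operations, account, op_types=None, limit=10):
    """Return the most recent operations touching `account`, optionally
    restricted to the given set of operation types."""
    matching = [
        op for op in operations
        if account in op["accounts"]
        and (op_types is None or op["type"] in op_types)
    ]
    # Account history APIs typically return the newest items first.
    matching.sort(key=lambda op: op["block_num"], reverse=True)
    return matching[:limit]
```

Pushing this filtering into SQL is what should make the big speedup possible: the database can use indexes to select only the matching rows, instead of the client paging through a user’s entire history and discarding most of it.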