Below is a list of some Hive-related programming issues worked on by the BlockTrades team during the past few weeks:

# Hived work (blockchain node software)

## Code cleanup and investigation of support for Ubuntu 20

Separate from our hardfork 25 changes, we've been doing some general code cleanup to make the code more maintainable, and we also started a task to get the code to compile cleanly under Ubuntu 20 without requiring "fiddling" with the build process.

## HF25 changes (these are described in detail in our Hive roadmap post)

### Expiration of old governance votes

We've completed and written tests for all changes related to handling expiration of governance votes (votes for witnesses and Hive Fund proposals).

### Curation rewards calculation changes

We completed analysis and implementation of the voting-window curation changes for HF25. We're also writing some new tests for the curation calculator, as we found the current tests were inadequate (our changes only triggered one failure among the existing tests). During this process, we also discovered an error in the implementation of the square-root function for unsigned 128-bit integers used by the current curation algorithm, but we're removing use of this square-root function in the new curation calculation as part of our removal of the convergent-linear curve code (this is the code that weakened small votes).

The new curation reward algorithm works as follows:

* first window (first day, 24 hours): linear rewards (equal weight to all voters in that window)
* second window (24 hours to 72 hours/3 days): reward weight/2
* third window (remaining votes after 72 hours): reward weight/8

Under the new algorithm, anyone voting within the first 24 hours of the post receives the same proportional rewards. In other words, for any given voting strength, the voter will get the same percentage return-on-investment as any other voter during that period. Voters voting during the second and third windows receive a smaller proportional curation reward (and voters who voted during the first 24-hour window receive a little more reward when voters vote during the 2nd or 3rd window). Note that if no one votes during the first window, then 2nd-window voters will receive the same amount of curation as if they had voted during the first window.

The basic idea behind the new algorithm is to encourage voters to find good content, while putting them on an equal footing with voting bots. Under the current algorithm that we're replacing, voting bots have an advantage because there's only a short window of time in which to cast a vote for optimal curation rewards. Note that author rewards are not affected by this change: it only affects how curation rewards are distributed among voters.
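As a rough illustration of the window weighting described above, here's a minimal Python sketch. This is not the actual hived implementation (which is written in C++); the function name and the use of exact fractions are assumptions made for clarity:

```python
# Hypothetical sketch of the three HF25 curation windows described above.
# Not the actual hived (C++) code; names and types are illustrative.
from fractions import Fraction

def curation_window_weight(vote_age_hours: float) -> Fraction:
    """Return the reward-weight multiplier for a vote cast when the
    post is vote_age_hours old, per the windows listed in this post."""
    if vote_age_hours < 24:     # first window: full (linear) weight
        return Fraction(1)
    if vote_age_hours < 72:     # second window: weight / 2
        return Fraction(1, 2)
    return Fraction(1, 8)       # third window: weight / 8

# Example: a vote at hour 30 carries half the weight of a first-day vote.
assert curation_window_weight(30) == Fraction(1, 2)
```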
### Hive to HBD conversion operation

The only code we're still working on in hived for hardfork 25 is the new operation that allows users to convert liquid Hive to HBD. We're still researching some issues associated with the existing code that computes the median price for Hive, but we expect to have the conversion code completed and tested this week.

### Hardfork code freeze in middle of this month

We expect to do a code freeze on 4/15 (the middle of this month) so that witnesses can launch a testnet and begin evaluating and testing the code changes for hardfork 25.

### Testnet to operate for at least one month

Barring any problems, we expect the testnet to operate for at least one month, after which we'll begin final prep for HF25. This will allow time for Hive API libraries and frontend web sites to make changes to provide notifications related to vote expiration, and to enable use of the new Hive→HBD conversion operation and the recurrent payments and RC delegation functionality implemented by @howo. Strictly speaking, most of this frontend functionality can be implemented after the hardfork executes without causing any problems, so the primary reason for this time interval is to allow for testing and evaluation of the performance of the new algorithms and features.

# Modular hivemind (application framework for 2nd layer apps)

## Syncing modular hivemind from SQL account history plugin

We were able to successfully sync a hivemind instance from the data injected by the SQL account history plugin (with the syncing taking place as the SQL data was injected by the plugin), but we encountered a problem at the end, when some of the indexes were being recreated as hivemind exited full-sync mode and entered live-sync mode. We're investigating this issue now and expect a fix in the next couple of days.

## Performance measurements for hivemind sync with SQL account history plugin

Despite the issue when exiting full-sync mode, we were able to collect some useful performance measurements.

With the regular hivemind sync, where we first do a hived replay to fill hivemind's database and then do a hivemind sync in which indexes and foreign keys are dropped automatically and rebuilt at the end, the hivemind sync process took 50983s (hivemind sync) + 4047s (index creation) + 1998s (foreign key creation) = 57028s.

With the modified version of hivemind sync, where indexes are created before the sync begins, the hivemind sync took 67947s (the regular version was 10919s, or about 3.03 hours, faster).

Despite the increased time for the modified version, it allows for an overall decrease in the time to fully sync a hivemind node using the SQL account history plugin, because the hivemind sync can be started while the hived node is being replayed to fill hivemind's database. With the "regular" method, the total time would be hived replay (~8 hours) + hivemind full sync (15.84 hours) = 23.84 hours. With the modified method, it looks like we can get this down to just the modified hivemind sync time (18.87 hours). Note that all these times should ultimately be compared with the existing time to do a hivemind sync without the SQL account history plugin (90+ hours). So it seems possible we could be looking at a 4x or better speedup in the time to do a full hivemind sync of a new node with the SQL account history plugin, if I haven't messed up anywhere in my assumptions (I didn't want to delay this report any longer, so it's not been "peer-reviewed").

# Hivemind (social media middleware)

## Significantly reduced memory usage by hivemind process

We fixed an issue in hivemind where a dictionary was used as a cache for post ids, and this dictionary was progressively consuming more memory as the blockchain grew in size. We spotted this issue during some of our performance testing of hivemind syncing using the data provided by the new SQL account history plugin (but the problem exists in all current hivemind deployments). Unfortunately, we haven't had a chance to measure the exact memory savings from the fix yet, as we were focused on other tasks, but I should have those numbers for our next report.
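The report doesn't spell out the exact fix, but the usual remedy for an unbounded dict cache is to cap its size with least-recently-used eviction. Here's a hypothetical Python sketch of that technique; the class name, key shape, and default size are all invented for illustration and are not hivemind's actual code:

```python
# Hypothetical bounded replacement for an ever-growing post-id dict cache.
# Not hivemind's actual fix; all names and sizes here are invented.
from collections import OrderedDict

class BoundedPostIdCache:
    """Maps (author, permlink) -> post id, evicting the least recently
    used entries once max_entries is exceeded, so memory stays flat."""

    def __init__(self, max_entries: int = 1_000_000):
        self.max_entries = max_entries
        self._data: OrderedDict = OrderedDict()

    def get(self, key):
        if key in self._data:
            self._data.move_to_end(key)  # mark as recently used
            return self._data[key]
        return None

    def put(self, key, post_id):
        self._data[key] = post_id
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict oldest entry
```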
## Testing hivemind syncing on a low-end server

We've set up a low-end computer (8GB of RAM and only a conventional hard disk drive, no SSD) to see what the minimum requirements for a full hivemind node are, and to see if we can lower those requirements. In our tests, the hivemind process did manage to finish the full sync, but it hit some problems during creation of indexes, so we'll be digging into this issue further in the upcoming week.

## Testing performance of hivemind with Postgres 13

Currently postgres version 10 is the recommended version of the database for use with hivemind, but we ran some tests this week to check whether hivemind is compatible with the latest version of postgres (version 13) and to measure the relative performance. It's all good news: no code changes were required to support postgres 13, and as an extra bonus, we saw a 5% speedup in full sync time in our test. We haven't yet made any comprehensive tests of API response times tied to SQL query speed, but the signs are good that we can expect only performance improvements and no regressions.

## Miscellaneous hivemind bug fixes and documentation

We fixed a couple of small bugs reported by users and frontend devs, such as a pagination issue with a community-related API call and an issue with community name validation:
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/488
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/489

And we have an open merge request for the code to generate OpenAPI documentation for the various hivemind API methods:
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/486

# Another progress update soon

I'll probably be putting out another progress update early next week, after we have more performance numbers. I kept delaying this one hoping to include those numbers, but we were caught up in too many tasks, small issues spoiled a bunch of our measurement attempts, and then the Easter holiday hit.

[EDIT] Several people have misinterpreted the function of the weights in the new algorithm, and this has led to a misapprehension about curation rewards for late voters. The weights are used to allocate the rewards between the curators. So if there are few strong voters in the first period and most vote in a later period, the late voters will receive roughly the same rewards as if they had voted in the early period. The total post reward amount (author + curation rewards) will be calculated based on total rshares with the new algorithm, not the weights. In pretty much every case, this new algorithm is designed to be MORE favorable to late voters than the current algorithm, so if this leads to people only voting in the first 24 hours, it's only due to misinformation. We'll be presenting more data later to explain how the new algorithm distributes curation rewards.
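To make the point in the edit above concrete, here is a small worked example in Python. It is only a sketch under the assumptions stated in this post (the voter names, rshares values, and pool size are invented, and this is not hived's actual code): the curation pool is fixed by total rshares, and the window weights merely divide it among the voters.

```python
# Hypothetical worked example: window weights split a fixed curation pool.
# Not the actual hived implementation; voters, rshares, and the pool size
# are invented for illustration.
from fractions import Fraction

def weight_at(hours: float) -> Fraction:
    """Window weights listed earlier in this post."""
    if hours < 24:
        return Fraction(1)
    if hours < 72:
        return Fraction(1, 2)
    return Fraction(1, 8)

# Scenario from the edit: one small early voter, two strong late voters.
votes = [("early_minnow", 10, 100),    # (voter, vote hour, rshares)
         ("late_whale_1", 30, 5000),
         ("late_whale_2", 40, 5000)]

curation_pool = 1000  # fixed by total rshares, NOT by the weights
total_weighted = sum(r * weight_at(h) for _, h, r in votes)
for voter, h, r in votes:
    share = r * weight_at(h) / total_weighted
    print(f"{voter}: {float(share * curation_pool):.1f}")
```

With these numbers, each late whale receives about 49.0% of the pool, versus about 49.5% had they voted in the first window, which illustrates the "roughly the same rewards" claim in the edit above.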

See: 8th update of 2021 on BlockTrades work on Hive software by @blocktrades