![blocktrades update.png](https://images.hive.blog/DQmSihw8Kz4U7TuCQa98DDdCzqbqPFRumuVWAbareiYZW1Z/blocktrades%20update.png)

Below is a list of some of the Hive-related programming issues worked on by the BlockTrades team during the past work period:

# Hived work (blockchain node software)

### Updates to resource credit (RC) plugin

We’ve implemented changes to include the cost of verifying cryptographic signatures in resource credit (RC) cost calculations for transactions. Now we’ve begun analyzing the reasonableness of the costs computed by the resource pools.

We also created more tests for RC delegations and reported some errors to @howo based on the test results. We’ll resume creating more tests after those issues are fixed.

### Miscellaneous work on hived

We fixed an erroneous error message that was sometimes displayed in connection with the `--exit-before-sync` command-line option.

Also, yesterday I started analyzing how transactions are treated when there isn’t enough RC to get them included into the blockchain. While analyzing a high-traffic period on the p2p network, I saw evidence that such transactions linger until the account either regenerates enough RC or the transaction expires, which isn’t desirable from a performance standpoint, in my opinion. I’m currently testing an experimental version of hived that rejects such transactions and avoids propagating them to the rest of the p2p network.

# Hive Application Framework: framework for building robust and scalable Hive apps

Most of our work lately continues to be HAF-related:

### Experimenting with sql_serializer implementation and operation filtering

We’re modifying the sql_serializer to directly write additional tables to the database indicating which accounts are affected by which blockchain operations. The hope is that this will result in a speedup versus the current method, where each HAF app has to re-compute this data on the fly.
We have a first pass at this done now, but the sql_serializer phase takes twice as long as it did before, so we’re going to see whether anything can be done to optimize it, since it doesn’t seem like it should take twice as long.

We’re also planning to add an option to sql_serializer to allow filtering of which operations get saved to the database. This should be useful for lowering the storage requirements and improving the performance of HAF servers that are dedicated to supporting a specific HAF app.

### New example HAF app: balance_tracker

We created a new HAF application called balance_tracker that maintains a history of how an account’s coin balances change over time (e.g. you can plot a graph of how your HIVE and HBD balances change over time). It isn’t completely finished yet, but it already works well as a prototype, and it has excellent performance (it processed 58M blocks in 2.5 hours), so it’s a great example of how fast HAF apps can be. It’s also not fully optimized for efficiency yet (account names are stored as strings in all the records), but even so its disk usage isn’t too bad (~22GB to store asset histories for every account on the blockchain).

The primary reason for creating this app was to serve as an example for new programmers starting out with HAF, but I think it is useful enough that we will further improve it and add a web UI for it (we still need to add an API interface to the data before that). Each time we create one of these small apps, we learn a little more about the most efficient way to build them, so we hope these examples will serve as guidelines for future app development.

### Optimizing HAF-based account history app (Hafah)

We completed benchmarking for Hafah and achieved a new performance record for syncing the data to headblock with the latest version (4.9 hours to sync 58M blocks, about a 40% speedup over the pre-optimized version).
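The core idea behind a balance-tracking app like balance_tracker described above can be sketched in plain Python. This is only an illustration of the technique (a running balance materialized per account as operations stream in), not the actual HAF schema or code; all names here are hypothetical.

```python
from collections import defaultdict

def build_balance_history(ops):
    """Build a per-account balance history from a stream of balance-changing
    operations. Each op is (block_num, account, asset, delta); the result maps
    (account, asset) -> list of (block_num, running_balance)."""
    history = defaultdict(list)
    balances = defaultdict(int)
    for block_num, account, asset, delta in ops:
        key = (account, asset)
        balances[key] += delta
        history[key].append((block_num, balances[key]))
    return history

# Hypothetical example: two transfers affecting alice's HIVE balance
ops = [
    (100, "alice", "HIVE", 500),   # alice receives 500 units
    (105, "alice", "HIVE", -200),  # alice sends 200 units
    (105, "bob",   "HIVE", 200),
]
hist = build_balance_history(ops)
print(hist[("alice", "HIVE")])  # [(100, 500), (105, 300)]
```

In a real HAF app this fold would be expressed in SQL over the operation tables the framework maintains, but the shape of the data (one running-balance row per balance-changing operation) is the same, which is why plotting a balance graph over time becomes a simple range query.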
### Some progress on optimized multi-threaded json-rpc server for HAF apps

We finished an initial implementation of the multi-threaded json-rpc server for HAF apps, but unfortunately, benchmarking it as an API server for Hafah showed its performance was worse than that of the previous async-io-based json-rpc server. We’ve identified the likely problems (one of which was the continual creation of new connections to the SQL server), so we’ll be updating the server this week and re-running the benchmark after we’ve improved the implementation.

I hope we can complete this task in the coming week, but if the upcoming optimizations aren’t sufficient, we may need to look at another json-rpc server implementation such as Sanic (Sanic is used by the Jussi proxy server, which is also implemented in Python). We are also creating a CI test for Hafah, and it should be done in the next week.

# Hivemind (social media middleware app)

We’ve tagged the final version of hivemind that supports Postgres 10 for production deployment. We’re currently creating a database dump file to ease upgrading by API server nodes.

All new versions of hivemind after the one just released will require Postgres 12, but we’re planning to convert hivemind to be HAF-based before we tag another production version, and that new version will need to undergo rigorous testing and benchmarking before we tag it for production use, because of the magnitude of the change in the hivemind sync algorithm.

# Condenser (code for hive.blog)

We deployed a new version of hive.blog with @quochuy’s change to display the amount of resource credits (RC) available as a circle around each account’s profile icon (it is also displayed in some other new places, such as "Account stats" below a post being created).
# Work in progress and upcoming work

* Continue experimenting with generating the “impacted accounts” table directly from sql_serializer to see if it is faster than our current method, where Hafah generates this data on demand as it needs it.
* The above task will also require creating a simplified form of Hafah.
* Fix the algorithm used by sql_serializer to determine when it should drop out of massive sync mode into normal live sync mode (currently it drops out of massive sync too early).
* Finish up work on the multi-threaded json-rpc server.
* Run tests to compare results between the account history plugin and HAF-based account history apps.
* Finish conversion of hivemind to a HAF-based app. Once we’re further along with HAF-based hivemind, we’ll test it using the fork-inducing tool.
* Continue RC-related work.

See: 29th update of 2021 on BlockTrades work on Hive software by @blocktrades