Hive HardFork 25 Jump Starter Kit
Intended for Hive API node operators, witnesses, and developers.
At the time of the Eclipse release I made a similar post that saved many (hours of) lives, so I’m creating an updated one for the upcoming Hard Fork 25.
https://www.youtube.com/watch?v=mrwgrOhl7Yw Yes, new Hive Hard Fork, new fancy logo reveal.
# Code
## GitLab
https://gitlab.syncad.com/hive/hive
Our core development effort takes place in a community-hosted GitLab repository (thanks @blocktrades). There's Hive core itself, but also many other Hive-related software repositories.
## GitHub
https://github.com/openhive-network/hive
We use it as a push mirror of the GitLab repository, mostly for visibility and decentralization. If you have an account on GitHub, please fork at least [hive](https://github.com/openhive-network/hive) and [hivemind](https://github.com/openhive-network/hivemind) and star them if you haven’t done so yet. We haven't paid much attention to it, but apparently it's important for some outside metrics.
![star_fork.png](https://images.hive.blog/DQmQbRRrtoTFPZA9QmDRjvswPtaGuXrWXje1WHp9CeeGbV2/star_fork.png)
Please click both buttons
# Services
## API node
https://api.openhive.network
Soon to be switched to `v1.25.0`, but because it’s heavily used in Hive-related R&D it might not be your best choice if you are looking for a fast API node without any rate limiting. While in maintenance mode, it will fall back to https://api.hive.blog
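A quick way to check which version a given endpoint runs (handy while nodes are switching over) is the `database_api.get_version` call; with `curl` it looks roughly like this:
```
curl -s --data '{"jsonrpc":"2.0","method":"database_api.get_version","params":{},"id":1}' https://api.openhive.network
```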
## Seed node
`hived` `v1.25.0` listens on `gtg.openhive.network:2001`
To use it, just add this line to your `config.ini` file:
```
p2p-seed-node = gtg.openhive.network:2001
```
If you don't have any `p2p-seed-node =` entries in your config file, built-in defaults will be used (which include my node too).
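If you want to verify the seed node is reachable from your machine before starting `hived`, a plain TCP check is enough (shown with `nc`, any similar tool will do):
```
nc -vz gtg.openhive.network 2001
```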
# Stuff for download
TL;DR https://gtg.openhive.network/get
## Binaries
`./get/bin` contains `hived` and `cli_wallet` binaries built on `Ubuntu 18.04 LTS`, which should also run fine on `Ubuntu 20.04 LTS`.
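A minimal way to grab them, assuming the files are served under those exact names and you keep binaries in `~/bin` (adjust to taste):
```
mkdir -p ~/bin && cd ~/bin
wget https://gtg.openhive.network/get/bin/hived https://gtg.openhive.network/get/bin/cli_wallet
chmod +x hived cli_wallet
./hived --version   # quick sanity check of the downloaded binary
```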
## Blocks
`./get/blockchain`
As usual, the `block_log` file, roughly 350GB and counting.
For testing needs there's also `block_log.5M`, which is limited to the first 5 million blocks.
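Given the size, use something resumable for the transfer. A sketch with `wget -c`, assuming a hypothetical data directory `~/hive-data` (adjust to wherever your node keeps its `blockchain/` folder):
```
mkdir -p ~/hive-data/blockchain
cd ~/hive-data/blockchain
wget -c https://gtg.openhive.network/get/blockchain/block_log
```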
## Snapshots
### API
`./get/snapshot/api/` contains a relatively recent snapshot of the API node with all the fancy plugins.
There’s a snapshot for the upcoming version `v1.25.0`, but also one for the old `v1.24.8` in case you need to switch back.
The uncompressed snapshot takes roughly 480GB.
There’s also an `example-api-config.ini` file there that contains settings compatible with the snapshot.
To decompress, you can simply run it through something like: `lbzip2 -dc | tar xv`
(Using parallel bzip2 on multi-threaded systems might save you a lot of time)
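For example, to stream a snapshot archive straight into the data directory without keeping the compressed file around (the archive name below is just a placeholder, check the directory listing for the actual one, and the data directory is the same hypothetical `~/hive-data` as above):
```
cd ~/hive-data
wget -qO- https://gtg.openhive.network/get/snapshot/api/SNAPSHOT_NAME.tar.bz2 | lbzip2 -dc | tar xv
```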
To use a snapshot you need:
- A `block_log` file, not smaller than the one used when the snapshot was made.
- A `config.ini` file compatible with the snapshot (see above), adjusted to your needs, but without changes that would affect the state.
- A `hived` binary compatible with the snapshot.
You can find all of that above.
Run `hived` with `--load-snapshot name`, assuming the snapshot is stored in `snapshot/name`.
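Assuming the decompressed snapshot ended up in `snapshot/name` under your data directory and your `config.ini` is the compatible one from above, the start could look roughly like this (the data directory path is illustrative):
```
./hived -d ~/hive-data --load-snapshot name
```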
An API node's `hived` runtime data currently takes 823GB (incl. 19GB of shm, excl. the snapshot).
### Exchanges
There’s also a snapshot meant for exchanges in `./get/snapshot/exchange/` that lets them get up and running quickly. It requires a compatible configuration and that the exchange account is one of those tracked by my node. If you run an exchange and want to be on that list to use the snapshot, just let me know.
## Hivemind database dump
`./get/hivemind/` contains a relatively recent dump of the Hivemind database.
I use self-describing file names such as:
`hivemind-20210616-47a41c96.dump`
That is, the date the dump was taken and the revision of `hivemind` that was running it.
You need at least that version; also remember about the `intarray` extension.
Consider running `pg_restore` with at least `-j 6` to run long-running tasks in parallel.
After restoring the database, make sure to run the `db_upgrade` script.
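A hedged sketch of the whole sequence, assuming a target database named `hive` and the dump file from above (adjust user, database name, and job count to your hardware):
```
createdb hive
psql -d hive -c 'CREATE EXTENSION intarray;'
pg_restore -j 6 -d hive hivemind-20210616-47a41c96.dump
# then run the db_upgrade script mentioned above
```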
Even though the database size easily peaks over 750GB during a full sync, it takes roughly 500GB when restored from the dump. The dump file itself is just 53GB.
### All resources are offered AS IS.