Intended for Hive API node operators, witnesses, and developers. Everything you need to bring a new Hive instance up within a couple of hours, or to upgrade an existing one within minutes.
https://www.youtube.com/watch?v=xNbpwk58gjQ

Yes, a new Hive Hard Fork and a fancy new logo reveal.
Our core development efforts take place in a community-hosted GitLab repository (thanks @blocktrades). It hosts Hive core itself, but also many other Hive-related software repositories.
We use GitHub as a push mirror of the GitLab repository, mostly for visibility and decentralization. If you have a GitHub account, please [fork](https://github.com/openhive-network/hive/fork) at least [hive](https://github.com/openhive-network/hive/fork) and star it if you haven’t done so yet. We haven’t paid much attention to it, but apparently it’s important for some outside metrics.
## API node
## Seed node
`hived` `v1.26.0` listens on `gtg.openhive.network:2001`
To use it, just add this line to your `config.ini` file:
p2p-seed-node = gtg.openhive.network:2001
If you don't have any `p2p-seed-node =` entries in your config file, built-in defaults will be used (which contain my node too).
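Before pinning a seed in your config, you can sanity-check that it's reachable. A small sketch using bash's `/dev/tcp` redirection (any TCP client such as `nc` works just as well; the 5-second timeout is my own choice):

```shell
# probe the seed node's p2p port; prints "reachable" or "unreachable"
STATUS=$(timeout 5 bash -c 'exec 3<>/dev/tcp/gtg.openhive.network/2001' 2>/dev/null \
  && echo reachable || echo unreachable)
echo "gtg.openhive.network:2001 is $STATUS"
```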
# Stuff for download
`./get/bin` contains `hived` and `cli_wallet` binaries built on `Ubuntu 20.04 LTS`, which should also run fine on `Ubuntu 22.04 LTS`.
### Ubuntu 18.04 LTS
For those who are late to the party and having trouble upgrading `Ubuntu 18.04 LTS` to either `20.04` or `22.04`, there’s also a `./get/bin/ubuntu-18.04-lts/` dir with binaries built for `Ubuntu 18.04 LTS`. It’s not officially supported and building it required some extra steps, but if you have no other choice, be my guest.
Of course, you should upgrade your system soon anyway, because it reaches the end of its standard maintenance support on April 30, 2023.
The compressed `block_log` file (roughly 338GB) and the `block_log.artifacts` file are updated once in a while and supersede the old uncompressed `block_log` (roughly 678GB) and `block_log.index` files.
Unfortunately, updating your local `block_log` by resuming a download is no longer supported, because of offset differences between individual nodes.
`./get/snapshot/api/` contains a relatively recent snapshot of the API node with all the fancy plugins.
There’s a snapshot for the upcoming version `v1.26.0`, but also one for the old `v1.25.0` in case you need to switch back.
The uncompressed snapshot takes roughly 813GB.
There’s also an `example-api-config.ini` file there that contains settings compatible with the snapshot.
To decompress, you can simply run it through something like: `lbzip2 -dc | tar xv`
(Using parallel bzip2 on multi-threaded systems can save you a lot of time.)
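To make the pipeline concrete, here’s a self-contained sketch that builds a tiny stand-in archive and unpacks it the same way. Plain `bzip2` is used so it runs anywhere; `lbzip2` is a drop-in parallel replacement, and every file name here is made up for the demo:

```shell
# build a tiny stand-in for the snapshot archive
mkdir -p demo/snapshot/name
echo "state" > demo/snapshot/name/part.dat
tar cf - -C demo snapshot | bzip2 > snapshot.tar.bz2
rm -rf demo

# decompress and unpack, same shape as the real snapshot pipeline
bzip2 -dc snapshot.tar.bz2 | tar xv
```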
To use snapshot you need:
- A `block_log` file not smaller than the one used when the snapshot was made.
- A `block_log.artifacts` file that matches your `block_log` file, to save the time needed to generate it (otherwise it will be regenerated).
- A `config.ini` file compatible with the snapshot (see above), adjusted to your needs, without changes that could affect it in a way that changes the state.
- A `hived` binary compatible with the snapshot
You can find all of that above.
Run `hived` with `--load-snapshot name`, assuming the snapshot is stored in `snapshot/name`
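Putting the checklist together, here’s a sketch of the expected data-dir layout and the start command. The directory names and the snapshot name `name` are illustrative, and the final command is echoed rather than executed:

```shell
# illustrative layout; adjust paths to your own setup
mkdir -p datadir/blockchain datadir/snapshot/name
# datadir/config.ini             <- the snapshot-compatible config (see above)
# datadir/blockchain/block_log   <- plus block_log.artifacts next to it
# datadir/snapshot/name/         <- the unpacked snapshot files

# then start hived pointing at that data dir:
echo './hived --data-dir=datadir --load-snapshot name'
```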
There’s also a snapshot meant for exchanges in `./get/snapshot/exchange/` that allows them to get up and running quickly. It requires a compatible configuration, and the exchange account has to be one of those tracked by my node. If you run an exchange and want to be on that list so you can use the snapshot, just let me know.
## Hivemind database dump
`./get/hivemind/` contains a relatively recent dump of the Hivemind database.
I use self-describing file names that include the date the dump was taken and the revision of `hivemind` that was running it.
You need at least that version, and remember about the `intarray` extension.
Consider running `pg_restore` with at least `-j 6` to execute long-running tasks in parallel.
After restoring the database, make sure to run the `db_upgrade` script.
When restored from the dump, it takes roughly 675GB. The dump file itself is just 60GB.
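For reference, the restore step might look like the sketch below. The database name and dump file name are placeholders, and the command is echoed rather than executed, since it only makes sense against a live PostgreSQL instance that already has the `intarray` extension available:

```shell
DUMP=hivemind.dump   # placeholder: the downloaded dump file
DB=hivemind          # placeholder: the target database name

# -j 6 runs the long-running restore tasks in parallel, as suggested above
CMD="pg_restore -j 6 -d $DB $DUMP"
echo "$CMD"
# afterwards, run the db_upgrade script as described above
```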
## Some more useful tips:
- Remember that you need to add `plugin = wallet_bridge_api` in your `config.ini` file if you are going to use `cli_wallet`.
- Upgrading from `v1.25.0` to `v1.26.0` requires a replay. If there's an existing state file, remove it or use `--force-replay`.
- If you can, reuse your current `block_log` file; that will save you a lot of time and bandwidth.
- If your `block_log` file is uncompressed, you can (optionally) compress it with a tool like `~/build/programs/util/compress_block_log -j 32 -i blocks/uncompressed/ -o blocks/compressed/`, download an already compressed one from https://gtg.openhive.network/get/blockchain/compressed/, or just continue using it (new blocks will be added in compressed form).
- The `block_log.index` file is replaced by `block_log.artifacts`.
- If you are in a hurry, you can avoid replay by using one of the suitable snapshots.
- The snapshot for exchanges can also be used for seed and witness nodes (it just has more data than required).
- Make sure to back up your instance periodically (using `--dump-snapshot`, etc.) to save the time otherwise needed for reindexing.
- If you need help, ask community members on https://openhive.chat/channel/dev (log in with your Hive account via Hivesigner).
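The periodic backup tip above could be scripted along these lines. The date-stamped name pattern and the data-dir path are my own; the flag is the `--dump-snapshot` mentioned in the tips, and the command is echoed rather than executed:

```shell
# date-stamped snapshot name, e.g. backup-20221010 (pattern is illustrative)
SNAP="backup-$(date +%Y%m%d)"
echo "./hived --data-dir=datadir --dump-snapshot $SNAP"
# the resulting snapshot lands in datadir/snapshot/$SNAP/
```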
### All resources are offered AS IS.