Witness Update: Release Candidate for Eclipse is out.
Optimists expected it sooner.
Pessimists expected it later.
Realists are smarter than that.
https://www.youtube.com/watch?v=pOisXHxmmIY
Yet another promotional video rendered for the brand new Hive fork 24. Bee Impressed ;-)
> _"and everything under the Sun is in tune but the Sun is eclipsed by the moon"_
- _Eclipse_, Pink Floyd
Nearly two months ago, the first version of the Eclipse code was released. It was tagged as `v1.0.0`. You can read more about it in my [previous witness update](/@gtg/witness-update-brand-new-hive-eclipse-is-coming).
Last week, an official release candidate was announced.
https://gitlab.syncad.com/hive/hive/-/releases/v1.0.11
It’s meant for witnesses and developers to perform more extensive tests before things are set in stone. HF24 will occur on September 8, 2020, provided no major bugs are discovered. You can find more information here: [Hive HF24 Information Mega Post](/@hiveio/tentative-hardfork-date-hive-hf24-information).
I didn’t have enough time to prepare a more detailed Hive Pressure episode; there are still a lot of things to do before the Hard Fork, so there’s no time left for "blogging".
![eclipse_paradox.png](https://images.hive.blog/DQmemZfXRHBLiyCM4vR4Ge4hV7wgSEGyxUqiGHm1KVd4xdm/eclipse_paradox.png)
_"The Eclipse Paradox"_ courtesy of @therealwolf
So let’s keep things short and useful.
# Public node
There’s a `v1.0.11` instance configured for a "complete hived API" available at https://beta.openhive.network
`curl -s --data '{"jsonrpc":"2.0", "method":"condenser_api.get_version", "params":[], "id":1}' https://beta.openhive.network | jq .`
```
{
  "jsonrpc": "2.0",
  "result": {
    "blockchain_version": "1.24.0",
    "hive_revision": "1aa1b7a44b7402b882326e13ba59f6923336486a",
    "fc_revision": "1aa1b7a44b7402b882326e13ba59f6923336486a",
    "chain_id": "0000000000000000000000000000000000000000000000000000000000000000"
  },
  "id": 1
}
```
You can use that endpoint to test your current apps and libs and their ability to communicate with the new version. Please keep in mind that this is only a `hived` endpoint - no jussi, no hivemind.
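For example, a quick liveness check (assuming you have `curl` and `jq` installed, as in the call above; `database_api.get_dynamic_global_properties` is a standard hived API method) could look like this:
```
curl -s --data '{"jsonrpc":"2.0", "method":"database_api.get_dynamic_global_properties", "params":{}, "id":1}' https://beta.openhive.network | jq .result.head_block_number
```
If the node is in sync, the number it returns should be close to the current head block on other public nodes.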
If you want a hands-on experience, here are some useful tips on how to tame the Eclipse.
# Build Eclipse
#### Make sure you have all the prerequisites
```
apt-get install -y \
    autoconf \
    automake \
    autotools-dev \
    build-essential \
    cmake \
    doxygen \
    git \
    libboost-all-dev \
    libyajl-dev \
    libreadline-dev \
    libssl-dev \
    libtool \
    liblz4-tool \
    ncurses-dev \
    python3 \
    python3-dev \
    python3-jinja2 \
    python3-pip \
    libgflags-dev \
    libsnappy-dev \
    zlib1g-dev \
    libbz2-dev \
    liblz4-dev \
    libzstd-dev
```
You can get that list from the `Builder.DockerFile` file.
#### Clone sources
`git clone https://gitlab.syncad.com/hive/hive`
#### Checkout the release candidate
`cd hive && git checkout v1.0.11`
`git submodule update --init --recursive`
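If you want to make sure the checkout landed on the right tag, an optional sanity check:
```
git describe --tags    # expected output: v1.0.11
```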
#### Build it
`mkdir -p ~/build-v1.0.11 && cd ~/build-v1.0.11`
```
cmake -DCMAKE_BUILD_TYPE=Release \
      -DLOW_MEMORY_NODE=ON \
      -DCLEAR_VOTES=ON \
      -DSKIP_BY_TX_ID=OFF \
      -DBUILD_HIVE_TESTNET=OFF \
      -DENABLE_MIRA=OFF \
      -DHIVE_STATIC_BUILD=ON \
      ../hive
```
`make -j4`
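The `-j4` flag runs four parallel compile jobs. If RAM allows, you may want to match the number of available cores instead (assuming `nproc` from GNU coreutils is available):
```
make -j$(nproc)
```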
#### Get the resulting hived and cli_wallet
They are at:
`~/build-v1.0.11/programs/hived/hived`
and
`~/build-v1.0.11/programs/cli_wallet/cli_wallet`
respectively.
You can check that you have the proper version by running:
`~/build-v1.0.11/programs/hived/hived --version`
The result should be exactly:
`"version" : { "hive_blockchain_hard_fork" : "1.24.0", "hive_git_revision" : "1aa1b7a44b7402b882326e13ba59f6923336486a" }`
# Run it
Now you are ready to run it.
Not much has changed… well, OK, even the name of the binary has changed.
It’s now `hived`, not ~~`steemd`~~, for obvious reasons.
All ~~steem~~-related configuration options have been renamed to their hive equivalents.
Most of the basics you know from previous versions are here.
I’ll try to present some sample configurations for most common use cases.
Of course, they need to be adjusted to suit your specific needs.
## Fat Node
Let’s start with the configuration of the Fat Node, because it’s the slowest one: it replays for many days and is a real pain when it comes to maintenance.
Run it with:
`true fact is that the fat node is gone`
Yes, that’s a real command.
No, it doesn’t make sense to run it.
Yes, you can do that anyway, it won’t harm you.
Improvements made:
It’s fast, lightweight, and gone.
Hooray!
## Simple Hive Node
Next, the most common node in our Hive universe.
With some small changes to configuration, it can act as a seed node, a witness node or a personal node for your private wallet communication. If we wanted to use the Bitcoin naming convention, we’d call it a full node, because it has everything you need to keep Hive running. (It just doesn’t talk much).
### Example config for Seed Node / Witness Node
```
log-appender = {"appender":"stderr","stream":"std_error"}
log-logger = {"name":"default","level":"info","appender":"stderr"}
backtrace = yes
plugin = witness
shared-file-size = 20G
shared-file-full-threshold = 9500
shared-file-scale-rate = 1000
p2p-seed-node = api.openhive.network:2001
webserver-thread-pool-size = 32
flush-state-interval = 0
```
Whether this node is just a seed node or a witness node, the `config.ini` file for it is pretty much the same.
One difference is that the witness should add their `witness` and `private-key` entries such as:
```
witness = "gtg"
private-key = 5Jw5msvr1JyKjpjGvQrYTXmAEGxPB6obZsY3uZ8WLyd6oD56CDt
```
(No, this is not my key. The purpose is to show you that the value for `witness` is in quotes and the value for `private-key` is without them. Why? Because of reasons.)
Another difference is that while the witness node is usually kept secret, a public seed node should be available to the outside world, so you might want to explicitly choose which interface / port it should bind to for p2p communication:
```
p2p-endpoint = 0.0.0.0:2001
```
You need 280GB for blocks and 20GB for the state file. 16GB RAM should be enough. All that, of course, increases with time.
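If you want to see what you are actually using, a rough check (assuming the default data dir layout described later in this post, with the state file in `~/.hived/blockchain`):
```
du -sh ~/.hived/blockchain/block_log            # blocks, roughly 280G
du -sh ~/.hived/blockchain/shared_memory.bin    # state file, up to shared-file-size
```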
## AH API node
We used to have a "full API node" (with every plugin possible). Since the Hivemind era, we’ve had an "AH API node" and a "Fat API node" to feed the Hivemind.
Now, we’ve managed to get rid of the Fat node, and feed the Hivemind from a single type of instance.
It’s usually called an AH Node, where AH stands for Account History. While it has many more plugins, `account_history_rocksdb` is the biggest, heaviest, and "most powerful" one, hence the name.
Its build configuration is the same as for the simplest seed node.
Its runtime configuration makes it what it is.
### Example config for AH Node
```
log-appender = {"appender":"stderr","stream":"std_error"}
log-logger = {"name":"default","level":"info","appender":"stderr"}
backtrace = yes
plugin = webserver p2p json_rpc
plugin = database_api condenser_api
plugin = witness
plugin = rc
plugin = market_history
plugin = market_history_api
plugin = account_history_rocksdb
plugin = account_history_api
plugin = transaction_status
plugin = transaction_status_api
plugin = account_by_key
plugin = account_by_key_api
plugin = reputation
plugin = reputation_api
plugin = block_api network_broadcast_api rc_api
account-history-rocksdb-path = "blockchain/account-history-rocksdb-storage"
shared-file-size = 20G
shared-file-full-threshold = 9500
shared-file-scale-rate = 1000
flush-state-interval = 0
market-history-bucket-size = [15,60,300,3600,86400]
market-history-buckets-per-size = 5760
p2p-endpoint = 0.0.0.0:2001
p2p-seed-node = api.openhive.network:2001
transaction-status-block-depth = 64000
transaction-status-track-after-block = 46000000
webserver-http-endpoint = 0.0.0.0:8091
webserver-ws-endpoint = 0.0.0.0:8090
webserver-thread-pool-size = 256
```
Aside from account history, which is implemented by the `account_history_rocksdb` plugin (don’t use the old non-RocksDB one), the configuration includes other plugins and their corresponding APIs to serve information about resource credits, internal market history, transaction statuses, reputation, etc.
Yes, at the current block height `shared-file-size` really can be that small.
The same goes for the memory requirements.
Account History RocksDB storage currently takes about 400GB.
The blockchain itself takes 280GB.
I'd suggest at least 32GB or 64GB RAM depending on the workload, so the buffers/cache could keep the stress away from storage.
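Once such a node is replayed and running, you can check that account history is being served (a sketch, assuming the node listens on the `webserver-http-endpoint` from the config above and you query it from the same machine):
```
curl -s --data '{"jsonrpc":"2.0", "method":"account_history_api.get_account_history", "params":{"account":"gtg","start":-1,"limit":1}, "id":1}' http://127.0.0.1:8091 | jq .
```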
## Exchange Node
A lot depends on internal procedures and specific needs of a given exchange.
(Internal market support? One tracked account or more?)
If you ran it before, you probably know exactly what you need.
One thing to pay attention to: account history has to use the RocksDB flavor, i.e. the `account_history_rocksdb` plugin, and all related settings have to use their rocksdb names.
### Example config for the Exchange Node
```
log-appender = {"appender":"stderr","stream":"std_error"}
log-logger = {"name":"default","level":"info","appender":"stderr"}
backtrace = yes
plugin = webserver p2p json_rpc
plugin = database_api condenser_api
plugin = witness
plugin = rc
plugin = account_history_rocksdb
plugin = account_history_api
plugin = transaction_status
plugin = transaction_status_api
plugin = block_api network_broadcast_api rc_api
account-history-rocksdb-path = "blockchain/account-history-rocksdb-storage"
shared-file-size = 20G
shared-file-full-threshold = 9500
shared-file-scale-rate = 1000
flush-state-interval = 0
account-history-rocksdb-track-account-range = ["binance-hot","binance-hot"]
account-history-rocksdb-track-account-range = ["bittrex","bittrex"]
account-history-rocksdb-track-account-range = ["blocktrades","blocktrades"]
account-history-rocksdb-track-account-range = ["deepcrypto8","deepcrypto8"]
account-history-rocksdb-track-account-range = ["huobi-pro","huobi-pro"]
p2p-endpoint = 0.0.0.0:2001
p2p-seed-node = api.openhive.network:2001
transaction-status-block-depth = 64000
transaction-status-track-after-block = 46000000
webserver-http-endpoint = 0.0.0.0:8091
webserver-ws-endpoint = 0.0.0.0:8090
webserver-thread-pool-size = 32
```
It’s very similar to the AH API node setup, but instead of tracking 1.4 million accounts, we are using `account-history-rocksdb-track-account-range` to specify account(s) used by the exchange.
Pay attention to "rocksdb" in the variable name and make sure you track only the account(s) you need. Usually it’s just one, such as `bittrex`.
Please note that each time you change the list of tracked accounts, you will have to start over with the replay.
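After the replay, a quick way to confirm that your tracked account actually has history (a sketch; replace `bittrex` with the account you track, and adjust the endpoint if needed):
```
curl -s --data '{"jsonrpc":"2.0", "method":"condenser_api.get_account_history", "params":["bittrex",-1,1], "id":1}' http://127.0.0.1:8091 | jq .
```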
# Getting blocks
Nothing has changed here. Either use `block_log` from your own source or get one from a public source such as:
https://gtg.openhive.network/get/blockchain/block_log
(It’s always up to date and takes roughly 280GB.)
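For example, to fetch it into the default data dir (assuming `wget` is installed; `-c` lets you resume an interrupted download):
```
mkdir -p ~/.hived/blockchain
cd ~/.hived/blockchain
wget -c https://gtg.openhive.network/get/blockchain/block_log
```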
# Putting things together
By default, your tree should look like this:
```
.hived/
├── blockchain
│   └── block_log
└── config.ini
```
As you can see, we’ve moved from `~/.steemd` to `~/.hived`, so `config.ini` should be placed there.
`~/.hived/blockchain` should contain a `block_log` file.
Once you start a replay, a `block_log.index` file will be generated.
If you’ve enabled the `account_history_rocksdb` plugin, then you will also have a `~/.hived/blockchain/account-history-rocksdb-storage` directory with RocksDB data storage.
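After a replay, the tree grows a bit; roughly like this (a sketch, as exact RocksDB file names vary):
```
.hived/
├── blockchain
│   ├── account-history-rocksdb-storage
│   ├── block_log
│   ├── block_log.index
│   └── shared_memory.bin
└── config.ini
```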
# Replay
I’d recommend starting with a clean data dir as shown above, but you can use `--force-replay`.
Why force it?
Because of another cool feature that tries to resume an interrupted replay; in this case we want to avoid that and start from scratch.
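Putting it together, a replay invocation could look like this (a sketch; paths assume the build location used earlier, and `--data-dir` just makes the default `~/.hived` location explicit):
```
~/build-v1.0.11/programs/hived/hived --data-dir ~/.hived --force-replay
```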
## Replay times
Of course, this depends on your configuration, node type, hardware, and admin skills, but in a well-tuned environment a replay shouldn’t take more than:
- 8-12 hours for witnesses
- 12-36 hours for exchanges
- 18-72 hours for public API node operators.
# No Hivemind yet
It still has some rough edges that have to be smoothed out before the release. It only needs an AH API node to feed from, and it should replay from scratch within 2-4 days or so.
# Any questions?
Ask them here in the comments or on [OpenHive.Chat](https://openhive.chat/), but please be patient.
I might respond with some delay.