dhive: RPC Failover Support
![dhive.png](https://files.peakd.com/file/peakd-hive/therealwolf/nKbEIVHB-dhive.png)
---
Greetings #Hivers & fellow developers,
I've just pushed a new version, 0.13.3, to [dhive](https://www.npmjs.com/package/@hivechain/dhive) (JS library) that enables native RPC node failover support. The feature is optional and only becomes active if an array of RPC nodes is passed as the first argument to `Client`.
```
import {Client} from '@hivechain/dhive'
// Failover
const client = new Client(['https://api.hive.blog', 'https://api.hivekings.com', 'https://anyx.io'], { failoverThreshold: 3 /* default */})
// No failover
const simpleClient = new Client('https://api.hive.blog')
```
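Once the client is constructed with a node list, calls look exactly the same as with a single node; the failover happens behind the scenes. A quick sketch (the node list and the call are just examples):
```
import {Client} from '@hivechain/dhive'

const client = new Client(['https://api.hive.blog', 'https://anyx.io'])

// A regular call - if the current node times out, dhive moves on
// to the next node in the array before giving up.
client.database.getDynamicGlobalProperties().then((props) => {
  console.log('head block:', props.head_block_number)
})
```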
#### Relevant Options Parameters
`new Client(url, options)`
```
- failoverThreshold (default: 3): Specifies the number of times the URLs (RPC nodes) should be iterated and retried in case of timeout errors. Requires the first URL parameter to be an array! It can be set to 0 to iterate and retry forever.
- timeout (default: 60 * 1000 ms): Send timeout, i.e. how long to wait in milliseconds before giving up on an RPC call. It can be set to 0 to wait forever.
```
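Both options can of course be combined. The snippet below is just a sketch of how I'd tune them for a long-running script; the node list and values are arbitrary:
```
import {Client} from '@hivechain/dhive'

const client = new Client(
  ['https://api.hive.blog', 'https://anyx.io'],
  {
    timeout: 10 * 1000,   // give up on a single call after 10 seconds
    failoverThreshold: 0  // keep cycling through the node list forever
  }
)
```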
### GitLab Pull Request (Code)
https://gitlab.syncad.com/hive/dhive/-/merge_requests/3
### Why is this relevant?
Normally, you have to handle potential failover in your application yourself, which tends to produce the same redundant retry code across projects. Having a native solution is far cleaner.
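For comparison, the kind of boilerplate you'd otherwise repeat looks roughly like this (a hand-rolled sketch, not dhive's actual implementation; `withFailover` is a made-up helper):
```
import {Client} from '@hivechain/dhive'

const nodes = ['https://api.hive.blog', 'https://anyx.io']

// Hypothetical helper: try an RPC call against each node in turn.
async function withFailover<T>(call: (client: Client) => Promise<T>): Promise<T> {
  let lastError: unknown
  for (const node of nodes) {
    try {
      return await call(new Client(node, {timeout: 10 * 1000}))
    } catch (error) {
      lastError = error // try the next node
    }
  }
  throw lastError
}

// Every call site has to go through the wrapper.
withFailover((c) => c.database.getDynamicGlobalProperties())
  .then((props) => console.log(props.head_block_number))
```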
Another solution is a wrapper around dhive that adds failover, such as [dsteem-pool](https://www.npmjs.com/package/dsteem-pool), but that didn't work for me.
### Info
The code has been tested and everything *should* work fine: https://gitlab.syncad.com/hive/dhive/-/blob/master/test/client.ts#L15-20
However, for very sensitive operations, please still be careful and double-check yourself.
---
With this said:
If there are any problems or questions, please create an issue: https://gitlab.syncad.com/hive/dhive/issues
#HiveOn
Wolf
*[therealwolf.me](https://therealwolf.me)*