Hyperledger Fabric is a decentralized, permissioned blockchain framework. To store world state it uses a key-value database such as CouchDB. Retrieving data from CouchDB becomes a bottleneck when complex queries are run against it, so ELK (Elasticsearch-Logstash-Kibana) comes to the rescue to return results faster.
CouchDB enables rich querying against a Fabric ledger’s world state (the cache of current values of entries in the blockchain) when those values are modelled as JSON data.
But as the number of entries in the ledger grows, CouchDB queries take longer to return results. When a query involves searching or sorting across multiple fields, CouchDB cannot return results in optimal time.
To overcome this, CouchDB can be synced to Elasticsearch. Hyperledger Fabric supports either Google's LevelDB or Apache CouchDB® as its state database.
If CouchDB is used, then the blockchain world state data is stored in CouchDB. Below is an example demonstrating how CouchDB data can be synced to Elasticsearch:
1. Hyperledger Fabric setup with CouchDB as state database
2. Create a dockerized setup of the ELK stack. Use the below yaml file to create it.
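The original compose file is not reproduced here; a minimal sketch of a single-node ELK docker-compose.yml might look like the following (the image versions, hostnames, and port mappings are assumptions — adjust them to your environment):

```yaml
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.1
    environment:
      - discovery.type=single-node   # single-node cluster for local testing
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.9.1
    volumes:
      - ./logstash.conf:/usr/share/logstash/config/logstash.conf  # pipeline config (created in a later step)
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.1
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```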
In the below screenshot we have 3 documents in the world state database, which happens to be called “companydb”.
Let’s sync these documents in two different indexes in Elasticsearch. To do this, we need to configure Logstash.
Logstash is a data processing pipeline: it reads data from different data sources and sends it to a destination, which in our case is Elasticsearch. It has three stages: input, filter, and output, and at each stage one can make use of a plugin to process the data.
To do this, open a shell inside the Logstash container (for example, with `docker exec -it <logstash-container> bash`) and follow the steps below to install a plugin for receiving and processing changes to a CouchDB database.
Steps to install couchdb_changes plugin:
- Navigate to /usr/share/logstash/bin
- Run: `logstash-plugin install logstash-input-couchdb_changes`
- Under the directory /usr/share/logstash/config, open the file pipelines.yml and set the path.config field to the location of your logstash.conf file. In my case I created logstash.conf under the same folder.
- Update the input, filter, and output plugin configuration in logstash.conf according to your needs.
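The exact configuration is not shown above; a sketch of what a logstash.conf for this setup might look like follows. The CouchDB host, credentials, and the `department` field used to route documents between the two indexes are assumptions for illustration:

```conf
input {
  couchdb_changes {
    db       => "companydb"      # CouchDB database holding the world state
    host     => "couchdb"        # assumed CouchDB hostname
    port     => 5984
    username => "admin"          # assumed credentials
    password => "adminpw"
  }
}

filter {
  # Route documents to different indexes based on an assumed "department" field
  if [doc][department] == "hr" {
    mutate { add_field => { "[@metadata][target_index]" => "hrindex" } }
  } else {
    mutate { add_field => { "[@metadata][target_index]" => "softwareenggindex" } }
  }
}

output {
  elasticsearch {
    hosts       => ["http://elasticsearch:9200"]
    index       => "%{[@metadata][target_index]}"
    document_id => "%{[@metadata][_id]}"  # reuse the CouchDB _id so updates overwrite
  }
}
```

Reusing the CouchDB document `_id` as the Elasticsearch document ID keeps the index in step with the world state: an update to a ledger entry overwrites the corresponding Elasticsearch document instead of creating a duplicate.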
The example setup above will replicate the data in “companydb” in Elasticsearch under the two indexes shown (“hrindex” and “softwareenggindex”).
Elasticsearch can store, search, and analyze large volumes of data quickly. Its main advantages are speed, scalability, fine-grained query tuning, rich data types, and plugin support. This setup is made even more useful to human users by the third technology in the E-L-K stack: Kibana.
Kibana is a visualization and analytics platform which lets us visualize data from Elasticsearch. Using this, you can create visualizations in charts, tables and maps.
- Log in to the Kibana console.
- Query for indexes. In Kibana's Dev Tools console, run `GET /_cat/indices` to list the synced indices:
This screenshot below shows the output of this query, as well as one I’ll describe in a moment:
In the above screenshot, the query returns the list of indexes in Elasticsearch. On the right side, hrindex has a document count of 1 and softwareenggindex has a document count of 2.
The second query shows the documents under hrindex.
- `GET /_cat/count/<index>` returns the document count for a specific index.
- `GET /_cat/indices` lists all indices.
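Put together, the Dev Tools queries used above look like this (the index names are the ones the Logstash configuration writes to; `?v` adds column headers to the output):

```
# List all indices with headers
GET /_cat/indices?v

# Count documents in a specific index
GET /_cat/count/hrindex

# View the documents under hrindex
GET /hrindex/_search
```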
Hyperledger Fabric with the ELK stack is a good combination for getting query results faster.
I hope my insights were helpful to you!
Elasticsearch, Logstash, and Kibana are trademarks of Elasticsearch BV, registered in the U.S. and in other countries. Hyperledger Fabric is a project hosted by The Linux Foundation® and Hyperledger is its registered trademark. Apache CouchDB and CouchDB are trademarks of the Apache Software Foundation.
Author —Sai Raja, DLT Labs™