Tutorial: An End-to-end Streaming Analytics Stack for Juniper Streaming Telemetry
May 23, 2019
Eric Graham
This is Part 3 of our ongoing series on using Imply for network telemetry data. Follow these links for Part 1 and Part 2.
In this tutorial, we will step through how to set up Imply, Kafka, and Open-NTI to build an end-to-end streaming analytics stack that can handle Juniper Native streaming telemetry data. The setup described uses a single AWS instance for simplicity, but it can serve as a reference architecture for a fully distributed production deployment.
Modify conf-quickstart/druid/_common/common.runtime.properties with the right directories for segments and logs. If you have plenty of local disk you can keep the default configuration. A good reference is the Imply quickstart documentation: https://docs.imply.io/on-prem/quickstart
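If you do relocate segments and indexing logs, the relevant properties in common.runtime.properties look roughly like the following. The paths shown are illustrative; point them at a volume with enough space.

# Deep storage for segments (local mode) -- path is illustrative
druid.storage.type=local
druid.storage.storageDirectory=/data/druid/segments

# Indexing task logs -- path is illustrative
druid.indexer.logs.type=file
druid.indexer.logs.directory=/data/druid/indexing-logs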
Start Imply from the Imply directory with the quickstart configuration by typing the following:
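Assuming the standard Imply on-prem quickstart layout, the command is typically along these lines (the exact path of the supervise config can vary by version, so check the quickstart documentation):

# Run from the root of the Imply install directory
bin/supervise -c conf/supervise/quickstart.conf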
In the open-nti directory execute the following commands:
sudo make stop
sudo make start
Configure your Juniper router to send GPB-encoded (Google Protocol Buffers) native streaming telemetry over UDP to the address of the server you just set up, using destination port 50000.
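As a rough sketch of the router-side configuration (the server name, export-profile name, addresses, and reporting rate are placeholders, and you should verify the sensor resource path against your Junos version):

set services analytics streaming-server imply-server remote-address <collector_ip>
set services analytics streaming-server imply-server remote-port 50000
set services analytics export-profile jti-profile local-address <router_ip>
set services analytics export-profile jti-profile reporting-rate 30
set services analytics export-profile jti-profile format gpb
set services analytics export-profile jti-profile transport udp
set services analytics sensor interface-stats server-name imply-server
set services analytics sensor interface-stats export-name jti-profile
set services analytics sensor interface-stats resource /junos/system/linecard/interface/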
Verify that you are receiving streaming telemetry at the host, in the Open-NTI container, and in Kafka.
On your open-nti host, run tcpdump -i <interface> udp port 50000 (for example, tcpdump -i eth0 udp port 50000). You should see UDP packets carrying your streaming telemetry.
To see if data is received at the open-nti container use the following command to check the opennti_input_jti log.
docker logs opennti_input_jti
You should see scrolling messages showing the JSON from the streaming telemetry data.
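To confirm the data is also reaching Kafka, consume a few messages from the topic that Open-NTI publishes to. The command below is a sketch: it assumes Kafka is installed under /opt/kafka and that the topic is named juniper; check your Open-NTI and Kafka configuration for the actual paths and topic name.

# Read a handful of messages from the telemetry topic (paths and topic name are assumptions)
/opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic juniper \
  --max-messages 5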
Open the Imply UI in a browser by going to localhost:9095 (if the browser is running on the same host) or <public_ip>:9095. Remember to modify your security rules to allow destination port 9095 from your source IP. Select Data, then + Load Data (upper right), and the available ingestion options will be displayed.
Select “Other (supervised)”.
Use this specification, replacing the Kafka IP with your IP. It flattens the Juniper JSON so it can easily be ingested into Druid. Note that this covers interface streaming telemetry only; additional statistics are possible, but they will need to be added to the specification file. Please contact us for help.
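A minimal sketch of the shape such a Kafka supervisor spec takes is shown below. The data source name, topic, timestamp column, flattened field paths, and metrics are illustrative assumptions and must be adapted to the actual JSON that Open-NTI writes to Kafka; the empty dimensions list tells Druid to discover dimensions automatically.

{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "juniper-telemetry",
    "parser": {
      "type": "string",
      "parseSpec": {
        "format": "json",
        "flattenSpec": {
          "useFieldDiscovery": true,
          "fields": [
            { "type": "path", "name": "interface_name", "expr": "$.interface_stats.if_name" },
            { "type": "path", "name": "in_octets", "expr": "$.interface_stats.ingress_stats.if_octets" }
          ]
        },
        "timestampSpec": { "column": "timestamp", "format": "auto" },
        "dimensionsSpec": { "dimensions": [] }
      }
    },
    "metricsSpec": [
      { "type": "count", "name": "count" },
      { "type": "longSum", "name": "in_octets_sum", "fieldName": "in_octets" }
    ],
    "granularitySpec": {
      "type": "uniform",
      "segmentGranularity": "HOUR",
      "queryGranularity": "NONE"
    }
  },
  "tuningConfig": { "type": "kafka" },
  "ioConfig": {
    "topic": "juniper",
    "consumerProperties": { "bootstrap.servers": "<KAFKA_IP>:9092" },
    "taskCount": 1,
    "replicas": 1
  }
}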
Select “Send”.
Once your data is loaded, you can slice and dice your streaming telemetry data at amazing speeds.