Tutorial: An End-to-end Streaming Analytics Stack for Juniper Streaming Telemetry

May 23, 2019
Eric Graham

This is Part 3 of our ongoing series on using Imply for network telemetry data. Follow these links for Part 1 and Part 2.

In this tutorial, we will step through how to set up Imply, Kafka, and Open-NTI to build an end-to-end streaming analytics stack that can handle Juniper native streaming telemetry data. The setup described uses a single AWS instance for simplicity, but it can serve as a reference architecture for a fully distributed production deployment.


Prerequisites

  • A bare metal server or cloud instance (such as an AWS m5d.xlarge) with 16GB RAM, 100GB of disk, and an Ethernet interface, running Linux. You should have sudo or root access on the server.
  • A Juniper router that can send Juniper native streaming telemetry data (gpb over UDP) for ingress interface statistics, as described here: https://www.juniper.net/documentation/en_US/junos/information-products/pathway-pages/junos-telemetry-interface/junos-telemetry-interface.pdf

The architecture we will be setting up is straightforward: the Juniper router streams gpb-over-UDP telemetry to Open-NTI, Open-NTI decodes it and publishes JSON messages to a Kafka topic, and Imply (Druid) ingests that topic for analysis.

Install Docker, docker-ce, and Open-NTI

  • Follow the installation steps for docker, docker-ce, and open-NTI from the following URLs, but ignore the configuration steps for now: https://open-nti.readthedocs.io/en/latest/install.html, https://github.com/juniper/open-nti

  • Replace the docker-compose.yml located in the open-nti installation directory with the following (filling in your server IP for KAFKA_ADDR):

    input-jti:
      image: $INPUT_JTI_IMAGE_NAME:$IMAGE_TAG
      container_name: $INPUT_JTI_CONTAINER_NAME
      environment:
        - "OUTPUT_INFLUXDB=false"
        - "OUTPUT_STDOUT=true"
        - "OUTPUT_KAFKA=true"
        - "KAFKA_ADDR=<ip_of_your_server>"
        - "KAFKA_PORT=9092"
        - "KAFKA_TOPIC=jnpr.jvision"
      ports:
        - "$LOCAL_PORT_JTI:50000/udp"
      volumes:
        - /etc/localtime:/etc/localtime
  • Configure the following parameters in open-nti.params (filling in your server IP for KAFKA_ADDR):

    cat /home/ubuntu/open-nti.params
    export LOCAL_PORT_JTI=50000
    export KAFKA_PORT=9092
    export KAFKA_ADDR=<ip_of_your_server>
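The step above can be scripted. Here is a minimal sketch that regenerates open-nti.params with this host's first local IP; the file path and the `hostname -I` detection are assumptions (the tutorial keeps the file at /home/ubuntu/open-nti.params, and you may prefer to hard-code the IP):

```shell
# Regenerate open-nti.params (run from your open-nti directory; the tutorial
# keeps the file at /home/ubuntu/open-nti.params).
PARAMS_FILE="${PARAMS_FILE:-open-nti.params}"
# First local IP as reported by the kernel; override SERVER_IP if this guesses wrong.
SERVER_IP="${SERVER_IP:-$(hostname -I 2>/dev/null | awk '{print $1}')}"
cat > "$PARAMS_FILE" <<EOF
export LOCAL_PORT_JTI=50000
export KAFKA_PORT=9092
export KAFKA_ADDR=${SERVER_IP}
EOF
```

Re-run this (and restart open-nti, as described below) whenever the server's IP changes.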

Install Imply

  • Download the most recent Imply distribution by going to the following URL: https://imply.io/get-started
  • Refer to the following quickstart for installation help and system requirements: https://docs.imply.io/on-prem/quickstart
  • Modify conf-quickstart/druid/_common/common.runtime.properties with the right directories for segments and logs. If you have plenty of local disk you can keep the default configuration. A good reference is the Imply quickstart documentation: https://docs.imply.io/on-prem/quickstart
  • Start Imply from the Imply directory with the quickstart configuration by typing the following:
    sudo bin/supervise -c conf/supervise/quickstart.conf &

Install Kafka

  • Download the most recent Kafka distribution from the following URL: https://kafka.apache.org/downloads

    Note: The Imply distribution already includes Apache Zookeeper, which Kafka will use when you start it.

  • Start Kafka with the following command from within the Kafka directory:

    sudo ./bin/kafka-server-start.sh config/server.properties &
  • Create a Kafka topic using the following command.
    sudo ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic jnpr.jvision

Stop and start open-nti

In the open-nti directory execute the following commands:

sudo make stop
sudo make start

Configure your Juniper router to send gpb over UDP to the address of the server you just set up, using destination port 50000.
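For orientation, a native-sensor configuration on the router looks roughly like the following. This is an illustrative sketch only: the server name (imply-collector), export-profile name (telemetry-export), addresses, and reporting rate are placeholders, and the exact syntax varies by Junos release, so consult the Junos telemetry interface guide linked earlier.

```
set services analytics streaming-server imply-collector remote-address <ip_of_your_server> remote-port 50000
set services analytics export-profile telemetry-export local-address <router_source_ip> local-port 21111 reporting-rate 30 format gpb transport udp
set services analytics sensor interface-stats server-name imply-collector export-name telemetry-export resource /junos/system/linecard/interface/
```

Commit the configuration, then move on to verifying that telemetry is arriving at the server.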

Verify that you are receiving streaming telemetry at the host, in open-nti, and in Kafka.

  • On your open-nti host run sudo tcpdump -i <interface> port 50000 (substituting your receiving interface). You should see UDP packets carrying your streaming telemetry.
  • To see if data is received at the open-nti container use the following command to check the opennti_input_jti log.
    docker logs opennti_input_jti
    You should see scrolling messages showing the JSON from the streaming telemetry data, for example:
    2019-05-14 07:43:45 +0000 juniperNetworks.jnpr_interface_ext: {"key_fields":{"interface_stats.if_name":"xe-2/2/0"},"interface_stats.init_time":1511574753,"interface_stats.snmp_if_index":822,"interface_stats.parent_ae_name":"ae3","interface_stats.if_operational_status":"UP","interface_stats.if_transitions":1009,"interface_stats.ifLastChange":292239641,"interface_stats.ifHighSpeed":10000,"interface_stats.ingress_errors.if_errors":0,"interface_stats.ingress_errors.if_in_qdrops":0,"interface_stats.ingress_errors.if_in_frame_errors":0,"interface_stats.ingress_errors.if_discards":0,"interface_stats.ingress_errors.if_in_runts":0,"interface_stats.ingress_errors.if_in_l3_incompletes":0,"interface_stats.ingress_errors.if_in_l2chan_errors":0,"interface_stats.ingress_errors.if_in_l2_mismatch_timeouts":0,"interface_stats.ingress_errors.if_in_fifo_errors":0,"interface_stats.ingress_errors.if_in_resource_errors":0,"device":"cr01.jfk07.host","host":"92d97785837","sensor_name":"jnpr_interface_ext","time":1557819825015}
  • To verify that the open-NTI messages are reaching Kafka check your Kafka topic for streaming telemetry data:
    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic jnpr.jvision --from-beginning
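If you would rather check a captured log line programmatically than eyeball the scrolling output, a quick sed one-liner can pull out a field of interest (the sample line below is abbreviated from the message shown above):

```shell
# Extract the interface name from an opennti_input_jti log line
# (abbreviated sample taken from the output above).
LINE='{"key_fields":{"interface_stats.if_name":"xe-2/2/0"},"device":"cr01.jfk07.host","sensor_name":"jnpr_interface_ext"}'
IF_NAME=$(printf '%s' "$LINE" | sed -n 's/.*"interface_stats\.if_name":"\([^"]*\)".*/\1/p')
echo "$IF_NAME"   # xe-2/2/0
```

The same pattern works against live output, e.g. piping docker logs opennti_input_jti through the sed expression.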

Connect Kafka and Imply

Open the Imply UI in a browser at localhost:9095 (if the browser is running on the server itself) or <public_ip>:9095. Remember to modify your security rules to allow destination port 9095 from your source IP. Select Data, then + Load data (upper right), and the following options will be displayed.

  • Select “Other (supervised)”.
  • Use this specification, replacing the Kafka IP with your server's IP. It flattens the nested Juniper JSON so it can easily be ingested into Druid. Also, this specification covers interface streaming telemetry only; additional statistics are possible, but they will need to be added to the specification file. Please contact us for help.
  • Select “Send”.
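For orientation, a minimal Kafka supervisor spec for this data could look roughly like the following. This is a hedged sketch, not the full specification: the datasource name, dimension list, and flattening path are assumptions based on the sample message earlier, and recent Druid versions use an inputFormat section where 2019-era releases used a parser/parseSpec section instead.

```json
{
  "type": "kafka",
  "spec": {
    "dataSchema": {
      "dataSource": "jnpr-jvision",
      "timestampSpec": { "column": "time", "format": "millis" },
      "dimensionsSpec": { "dimensions": ["device", "sensor_name", "if_name"] },
      "granularitySpec": { "segmentGranularity": "hour", "queryGranularity": "none" }
    },
    "ioConfig": {
      "topic": "jnpr.jvision",
      "consumerProperties": { "bootstrap.servers": "<ip_of_your_server>:9092" },
      "inputFormat": {
        "type": "json",
        "flattenSpec": {
          "useFieldDiscovery": true,
          "fields": [
            { "type": "path", "name": "if_name", "expr": "$.key_fields['interface_stats.if_name']" }
          ]
        }
      }
    },
    "tuningConfig": { "type": "kafka" }
  }
}
```

The flattenSpec is what lifts nested keys such as key_fields."interface_stats.if_name" to top-level columns that Druid can index directly.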

When your data is loaded you can now slice and dice your streaming telemetry data at amazing speeds.

A great way to get hands-on with Druid is through a Free Imply Download or Imply Cloud Trial.
