Tutorial: An End-to-end Streaming Analytics Stack for Juniper Streaming Telemetry

by Eric Graham · May 23, 2019

This is Part 3 of our ongoing series on using Imply for network telemetry data. Follow these links for Part 1 and Part 2.

In this tutorial, we will step through how to set up Imply, Kafka, and Open-NTI to build an end-to-end streaming analytics stack that can handle Juniper Native streaming telemetry data. The setup described will use a single AWS instance for simplicity, but can be used as reference architecture for a fully distributed production deployment.

Prerequisites

  • A bare metal server or cloud instance (such as an AWS m5d.xlarge) with 16GB RAM, 100GB of disk, and an Ethernet interface, running Linux. You should have sudo or root access on the server.
  • A Juniper router that can send Juniper native streaming telemetry data (gpb over UDP) for the ingress interface statistics described here: https://www.juniper.net/documentation/en_US/junos/information-products/pathway-pages/junos-telemetry-interface/junos-telemetry-interface.pdf

The architecture we will be setting up looks like the following:

Architecture diagram

Install docker, docker-ce and open-NTI

  • Follow the installation steps for docker, docker-ce, and open-NTI from the following URLs, but ignore the configuration steps: https://open-nti.readthedocs.io/en/latest/install.html, https://github.com/juniper/open-nti
  • Replace the docker-compose.yml located in the open-nti installation directory with the following (filling in your server's IP for Kafka).

    input-jti:
      image: ${INPUT_JTI_IMAGE_NAME}:${IMAGE_TAG}
      container_name: ${INPUT_JTI_CONTAINER_NAME}
      environment:
      - "OUTPUT_INFLUXDB=false"
      - "OUTPUT_STDOUT=true"
      - "OUTPUT_KAFKA=true"
      - "KAFKA_ADDR=<ip_of_your_server>"
      - "KAFKA_PORT=9092"
      - "KAFKA_TOPIC=jnpr.jvision"
      ports:
      - "${LOCAL_PORT_JTI}:50000/udp"
      volumes:
      - /etc/localtime:/etc/localtime
    
  • Configure the following parameters in the open-nti.params file (filling in your server's IP for Kafka).

    cat /home/ubuntu/open-nti.params
    
    export LOCAL_PORT_JTI=50000
    export KAFKA_PORT=9092
    export KAFKA_ADDR=<ip_of_your_server>
    

Install Imply

  • Download the most recent Imply distribution by going to the following URL: https://imply.io/get-started
  • Refer to the following quickstart for installation help and system requirements: https://docs.imply.io/on-prem/quickstart
  • Modify conf-quickstart/druid/_common/common.runtime.properties with the right directories for segments and logs. If you have plenty of local disk, you can keep the default configuration; the Imply quickstart documentation above is a good reference.
  • Start Imply from the Imply directory with the quickstart configuration by typing the following:
    sudo bin/supervise -c conf/supervise/quickstart.conf &
    
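For the common.runtime.properties step above, the two settings involved are shown below with the values that ship in recent quickstart configurations (treat these paths as defaults to adjust, not requirements):

```
# Local deep storage location for Druid segments
druid.storage.storageDirectory=var/druid/segments

# Directory where indexing task logs are written
druid.indexer.logs.directory=var/druid/indexing-logs
```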

Install Kafka

  • Download the Kafka distribution used in this tutorial from the following URL: http://www-us.apache.org/dist/kafka/0.11.0.3/kafka_2.11-0.11.0.3.tgz Note: The Imply distribution already includes Apache Zookeeper, which Kafka will use when you start it.
  • Start Kafka with the following command from within the Kafka directory:

    sudo ./bin/kafka-server-start.sh config/server.properties &
    
  • Create a Kafka topic using the following command.

    sudo ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic jnpr.jvision
    

Stop and start open-nti

In the open-nti directory execute the following commands:

sudo make stop
sudo make start

Configure your Juniper router to send gpb over UDP to a destination address of the server you just set up. Send the streaming telemetry to destination port 50000.
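As a sketch only, the router-side configuration for a native UDP sensor looks roughly like the following. The server and profile names here are illustrative, and the configuration hierarchy varies by Junos release, so confirm the exact statements against the Juniper documentation linked in the prerequisites:

```
set services analytics streaming-server imply-server remote-address <ip_of_your_server> remote-port 50000
set services analytics export-profile jti-profile local-address <router_source_ip> reporting-rate 30 format gpb transport udp
set services analytics sensor interface-sensor server-name imply-server export-name jti-profile resource /junos/system/linecard/interface/
```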

Verify that you are receiving streaming telemetry at the host, in open-nti, and in Kafka.

  • On your open-nti host, run sudo tcpdump -i <interface> port 50000 (substituting the interface receiving the telemetry). You should see UDP packets for your streaming telemetry.

  • To see whether data is being received at the open-nti container, use the following command to check the opennti_input_jti log.

    docker logs opennti_input_jti
    

    You should see scrolling messages showing the JSON from the streaming telemetry data.

    2019-05-14 07:43:45 +0000 juniperNetworks.jnpr_interface_ext: {"key_fields":{"interface_stats.if_name":"xe-2/2/0"},"interface_stats.init_time":1511574753,"interface_stats.snmp_if_index":822,"interface_stats.parent_ae_name":"ae3","interface_stats.if_operational_status":"UP","interface_stats.if_transitions":1009,"interface_stats.ifLastChange":292239641,"interface_stats.ifHighSpeed":10000,"interface_stats.ingress_errors.if_errors":0,"interface_stats.ingress_errors.if_in_qdrops":0,"interface_stats.ingress_errors.if_in_frame_errors":0,"interface_stats.ingress_errors.if_discards":0,"interface_stats.ingress_errors.if_in_runts":0,"interface_stats.ingress_errors.if_in_l3_incompletes":0,"interface_stats.ingress_errors.if_in_l2chan_errors":0,"interface_stats.ingress_errors.if_in_l2_mismatch_timeouts":0,"interface_stats.ingress_errors.if_in_fifo_errors":0,"interface_stats.ingress_errors.if_in_resource_errors":0,"device":"cr01.jfk07.host","host":"92d97785837","sensor_name":"jnpr_interface_ext","time":1557819825015}
    
  • To verify that the open-NTI messages are reaching Kafka check your Kafka topic for streaming telemetry data:

    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic jnpr.jvision --from-beginning
    
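If you want to check the payload shape programmatically, the message format shown above can be parsed with a few lines of Python. This is a standalone sketch using an abbreviated copy of the sample record; no Kafka connection is involved:

```python
import json

# One message as emitted by open-nti (abbreviated from the sample above).
msg = '''{"key_fields":{"interface_stats.if_name":"xe-2/2/0"},
"interface_stats.if_operational_status":"UP",
"interface_stats.ingress_errors.if_errors":0,
"interface_stats.ingress_errors.if_discards":0,
"device":"cr01.jfk07.host","sensor_name":"jnpr_interface_ext",
"time":1557819825015}'''

record = json.loads(msg)

# The interface name lives under the nested "key_fields" object,
# while the counters sit at the top level with dotted names.
if_name = record["key_fields"]["interface_stats.if_name"]
errors = {k: v for k, v in record.items()
          if k.startswith("interface_stats.ingress_errors.")}

print(if_name)               # xe-2/2/0
print(sum(errors.values()))  # total ingress errors in this sample
```

Note the shape: the interface name is nested under key_fields, while the counters are flat, dotted top-level keys. This is exactly what the flattening specification has to deal with when importing into Druid.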

Connect Kafka and Imply

Open the Imply UI in a browser at localhost:9095 (if the browser is running on the server itself) or <public_ip>:9095. Remember to modify your security rules to allow destination port 9095 from your source IP. Select Data, then + Load data (upper right), and the following options will be displayed.

Connecting Kafka and Imply

  • Select “Other (supervised)”.
  • Use this specification, replacing the Kafka IP with your own. The specification flattens the Juniper JSON so it can easily be imported into Druid. Note that it covers interface streaming telemetry only; additional statistics are possible, but they will need to be added to the specification file. Please contact us for help.
  • Select “Send”.
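At the core of such a specification is a Druid flattenSpec, which maps the nested key_fields object and the dotted Juniper field names onto flat column names using JSONPath expressions. The snippet below is a minimal, hypothetical sketch covering just two fields from the sample message shown earlier; the actual specification you load will cover many more:

```python
import json

# Hypothetical subset of the flattenSpec inside a Kafka ingestion spec.
# Bracketed JSONPath is used because the Juniper field names contain dots.
flatten_spec = {
    "useFieldDiscovery": True,
    "fields": [
        {"type": "path", "name": "if_name",
         "expr": "$.key_fields['interface_stats.if_name']"},
        {"type": "path", "name": "ingress_if_errors",
         "expr": "$['interface_stats.ingress_errors.if_errors']"},
    ],
}

print(json.dumps(flatten_spec, indent=2))
```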

When your data is loaded, you can slice and dice your streaming telemetry data at amazing speeds.
