VPP ICMP Responder

The simplest possible case for Network Service Mesh is connecting a Client via a vWire to another Pod that provides a Network Service. Network Service Mesh allows flexibility in the choice of mechanism used to provide that vWire to a workload. The icmp-responder example does this with kernel interfaces. The vpp-icmp-responder example provides and consumes the same ‘icmp-responder’ Network Service, but its Clients and Endpoints use memif high-speed memory interfaces to achieve performance unavailable via kernel interfaces.



Use the Run instructions to install the NSM infrastructure, and then run:

helm install nsm/vpp-icmp-responder

What it does

This will install two Deployments:

vpp-icmp-responder-nsc: the Clients, four replicas
vpp-icmp-responder-nse: the Endpoints, two replicas

Each Client then gets a vWire connecting it to one of the Endpoints. Network Service Mesh handles the Network Service Discovery and Routing, as well as the vWire ‘Connection Handling’, to set all of this up.


To make this case more interesting, Endpoint1 and Endpoint2 are deployed on two separate Nodes using PodAntiAffinity, so that Network Service Mesh must demonstrate the ability to string vWires between Clients and Endpoints both on the same Node and on different Nodes.
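The anti-affinity described above can be sketched as a Deployment fragment like the following. This is illustrative only: the label key/value and topologyKey are assumptions, not copied from the vpp-icmp-responder chart.

```yaml
# Illustrative PodAntiAffinity fragment for the Endpoint Deployment.
# With this rule, no two Pods carrying the same label may land on the
# same Node, which forces the two Endpoint replicas onto separate Nodes.
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: vpp-icmp-responder-nse
              topologyKey: kubernetes.io/hostname
```

Using `requiredDuringSchedulingIgnoredDuringExecution` makes the spread a hard constraint: if only one schedulable Node exists, the second replica stays Pending rather than co-locating.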


First verify that the vpp-icmp-responder example Pods are all up and running:

kubectl get pods | grep vpp-icmp-responder

To see the vpp-icmp-responder example in action, you can run:

curl -s https://raw.githubusercontent.com/networkservicemesh/networkservicemesh/master/scripts/nsc_ping_all.sh | bash
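A rough sketch of the kind of loop such a script runs is shown below. The pod names, label conventions, and target address here are illustrative assumptions, not taken from the actual script. Since the vpp-icmp-responder Clients use memif rather than kernel interfaces, a ping would be issued through VPP's CLI (`vppctl ping`) inside the pod rather than the kernel's `ping`.

```shell
#!/usr/bin/env bash
# Sketch: for each client pod, exec a VPP-level ping toward its Endpoint.
# Pod names and the target address (172.16.1.2) are illustrative.

# Build the command that would be exec'd in a given client pod.
ping_cmd() {
  local pod="$1" target="$2"
  echo "kubectl exec $pod -- vppctl ping $target"
}

# Illustrative invocation over the client replicas.
for pod in vpp-icmp-responder-nsc-1 vpp-icmp-responder-nsc-2; do
  ping_cmd "$pod" 172.16.1.2
done
```

In a live cluster you would run the emitted commands (or drop the `echo` and exec directly); a successful run shows ICMP replies arriving over the memif vWire.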
