2. How to evaluate NPAP using the ERD

This chapter provides a hands-on guide for connecting and testing an NPAP Evaluation Reference Design (ERD). It covers the essential steps, from understanding the IP and port numbering scheme to configuring your host PC’s network interface. It further describes how to use the npap-admin command-line tool to dynamically reconfigure your ERD. This tool is part of the NPAP HAL Python package and provides a simple interface for managing your NPAP system directly from a host computer or integrated processing system. For further information, refer to the NPAP HAL User Guide (available upon request).

By reading this chapter, you will gain an understanding of how to use standard command-line tools, such as ping, netperf, and telnet, to verify connectivity, measure performance, and test core functionalities of your NPAP ERD.

2.1. IP and Port Numbers

The IPv4 addresses of the ERDs are specific to the ERD number and NPAP instance. The port number is specific to the TCP/UDP functionality. TCP port numbers start at 50000; UDP port numbers start at 55000.

Important

The default NPAP ERD IPv4 addressing scheme is:
192.168.<ERD No>.<NPAP Instance>:<TCP/UDP Port>

Example:

Connecting to the first TCP port of the 10G synchronous NPAP instance of ERD 0. The ERD overview table provides the following information for ERD 0:

  • ERD No: 0

  • NPAP instance of the 10G synchronous session: 1

  • TCP Port: 1

This leads to the following IPv4 address and port number: 192.168.0.1:50000
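The scheme above can be sketched as a small shell computation (the ERD number and instance are the example values from the table):

```shell
# Derive the IPv4 address and first ports from the default NPAP scheme
erd=0        # ERD No from the overview table
instance=1   # NPAP instance of the 10G synchronous session
ip="192.168.${erd}.${instance}"
tcp_port=50000   # first TCP port; subsequent sessions use 50001, 50002, ...
udp_port=55000   # first UDP port
echo "${ip}:${tcp_port}"
```

Running this prints 192.168.0.1:50000, matching the example above.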

2.1.1. Change IPv4 address with npap-admin

While the ERDs come pre-configured with default IPv4 addresses based on the numbering scheme above, you can easily customize the network configuration at runtime using the npap-admin command-line tool.

The npap-admin tool uses intuitive YAML-formatted configuration files to define hardware parameters and network sessions. To change the IPv4 address, you simply need to define the local_ip and mac parameters at the instance level within your configuration file.

Example Configuration (YAML):

boards:
  - name: "example-board-name"
    port: "/dev/ttyUSB0"
    baudrate: 115200
    instances:
      - name: "Instance 1"
        local_ip: "192.168.2.12"
        mac: "02:0A:35:08:00:01"
        sessions: []

Once your YAML configuration file is ready, you can apply the new IPv4 address to the hardware by executing the following command in your terminal:

$ npap-admin <path_to_YAML_configuration_file>

Note that npap-admin currently does not support changing the subnet mask. To do that, you have to interact directly with the NPAP HAL; for further information, please check the NPAP HAL User Guide (available upon request).

2.2. Bring up Ethernet Port on the PC Side

To communicate with the NPAP ERD from the host PC, set the IPv4 address of the connected network interface according to the NPAP ERD IPv4 addressing scheme.

In this example ERD 2 is used. Following the pattern 192.168.<ERD No>.<NPAP Instance>
leads to the following IP: 192.168.2.<NPAP Instance>

As the PC must not have the same IP as any of the NPAP instances, 105 was chosen as the last octet for the host PC, which leads to the final PC IP: 192.168.2.105. There are two options to set the PC IPv4 address:

With net-tools (assuming network interface eth2):

$ sudo ifconfig eth2 192.168.2.105/16
eth2        Link encap:Ethernet  HWaddr 90:e2:ba:4a:d9:ad
inet addr:192.168.2.105  Bcast:192.168.255.255  Mask:255.255.0.0
inet6 addr: fe80::92e2:baff:fe4a:d9ad/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:608 errors:2 dropped:0 overruns:0 frame:2
TX packets:3215 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:135015 (135.0 KB)  TX bytes:352322 (352.3 KB)

Then check whether the corresponding link is up using the following command:

$ sudo ethtool eth2
Settings for eth2:
Supported ports: [ FIBRE ]
Supported link modes:   10000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes:  10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: No
Speed: 10000Mb/s
Duplex: Full
Port: Direct Attach Copper
PHYAD: 0
Transceiver: external
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes

With iproute2 (ip):

$ sudo ip addr add dev <interface> <ipaddr>/16
$ sudo ip link set dev <interface> up
$ sudo ip addr show (to check the link)

Both examples use a subnet mask of 255.255.0.0 since it is the default subnet mask configured in an ERD, see Section 3.

2.3. PING

The simplest connectivity test uses the ICMP echo request/reply mechanism, widely known as ping after the program of the same name. It already gives an impression of the short and deterministic latency offered by NPAP.

This example uses ERD 2 and the first NPAP instance. Following the pattern 192.168.<ERD No>.<NPAP Instance>, the IP is: 192.168.2.1

Example Ping:

$ ping -c 10 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=255 time=0.035 ms
64 bytes from 192.168.2.1: icmp_seq=2 ttl=255 time=0.027 ms
64 bytes from 192.168.2.1: icmp_seq=3 ttl=255 time=0.029 ms
64 bytes from 192.168.2.1: icmp_seq=4 ttl=255 time=0.027 ms
64 bytes from 192.168.2.1: icmp_seq=5 ttl=255 time=0.027 ms
64 bytes from 192.168.2.1: icmp_seq=6 ttl=255 time=0.026 ms
64 bytes from 192.168.2.1: icmp_seq=7 ttl=255 time=0.027 ms
64 bytes from 192.168.2.1: icmp_seq=8 ttl=255 time=0.031 ms
64 bytes from 192.168.2.1: icmp_seq=9 ttl=255 time=0.028 ms
64 bytes from 192.168.2.1: icmp_seq=10 ttl=255 time=0.026 ms
--- 192.168.2.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 8997ms
rtt min/avg/max/mdev = 0.026/0.028/0.035/0.004 ms
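For scripted latency checks, the average round-trip time can be pulled out of ping's summary line; a minimal sketch using the summary from the run above:

```shell
# Extract the avg value from ping's "rtt min/avg/max/mdev" summary line
summary="rtt min/avg/max/mdev = 0.026/0.028/0.035/0.004 ms"
avg=$(echo "$summary" | awk -F'[/ ]' '{ print $(NF-3) }')
echo "average RTT: ${avg} ms"
```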

2.4. Netperf

Netperf tests the throughput of a network path, which in this case is a point-to-point connection between the host with its Network Interface Card (NIC) and the board with the NPAP ERD. Netperf comprises a server (netserver) and a client (netperf). The hardware provided by the NPAP package is a hybrid implementation, which may function as a server or a client, but not both at the same time. For now, the Netperf server (equivalent to the netserver tool) is used, as it is already set up to listen on the default netserver port (12865) for incoming test requests.

Note

The Netperf HDL implementation only works with Netperf version 2.6.0.

The TCP_STREAM test implements a bandwidth performance test for a stream from the client to the server.

This example uses ERD 2 and the first NPAP instance. Following the pattern 192.168.<ERD No>.<NPAP Instance>, the IP is: 192.168.2.1

Example Netperf:

$ netperf -t TCP_STREAM -H 192.168.2.1 -l 5
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.2.1 () port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec
87380  16384  16384    5.00     9416.79
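To put the result in context, the throughput (reported in 10^6 bits/sec) can be expressed as a fraction of the nominal 10G line rate; a quick sketch using the figure above:

```shell
# Relate the netperf throughput to the nominal 10G line rate
tput=9416.79   # 10^6 bits/sec, from the TCP_STREAM run above
util=$(awk -v t="$tput" 'BEGIN { printf "%.1f", t / 10000 * 100 }')
echo "link utilization: ${util}%"
```

This yields roughly 94% of line rate; the exact figure depends on the host tuning discussed in the note below.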

Note

The performance results of the Netperf tests highly depend on the configuration of the host PC. For examples of how to configure a performant Linux host system, see, e.g., the Red Hat Network Performance Tuning Guide: https://access.redhat.com/sites/default/files/attachments/20150325_network_performance_tuning.pdf

Supported tests in Netperf HDL Implementation:

The Netperf HDL implementation is based on Netperf 2.6.0, so the counterpart needs to be the same version. The following features are supported in the HDL implementation:

  • Configurable control and test ports

  • TCP_STREAM

  • TCP_MAERTS

  • TCP_RR

  • UDP_STREAM

  • UDP_RR

The following test options are available:

  • Test length

  • Packet sizes:

    • TCP, no jumbo frame setting: 1-1460 bytes

    • TCP, jumbo frame setting: 1-8192 bytes

    • UDP, no jumbo frame setting: 1-1471 bytes
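The TCP limit for the non-jumbo setting follows directly from the standard 1500-byte Ethernet MTU minus the IPv4 and TCP headers; a quick check:

```shell
# Maximum TCP payload per frame at the standard 1500-byte MTU
mtu=1500
ip_hdr=20    # IPv4 header without options
tcp_hdr=20   # TCP header without options
max_payload=$((mtu - ip_hdr - tcp_hdr))
echo "max TCP payload: ${max_payload} bytes"
```

This prints 1460 bytes, matching the non-jumbo TCP setting above.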

2.5. TCP Loopback

By default, every TCP user session implements a TCP echo server, which mirrors any incoming data back to the sender. The first TCP user session listens for incoming connections on port 50000, with every subsequent TCP user session listening on port 50000 + the number of the TCP user session. Therefore, an ERD implementing 3 TCP user sessions listens for connections on ports 50000, 50001, and 50002.
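The listening-port scheme can be sketched as a short shell loop (3 sessions, as in the example above):

```shell
# List the loopback listen ports for an ERD with 3 TCP user sessions
base=50000
sessions=3
ports=""
i=0
while [ "$i" -lt "$sessions" ]; do
  ports="${ports}$((base + i)) "
  i=$((i + 1))
done
echo "$ports"
```

This prints 50000 50001 50002.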

To interactively test the TCP loopback implementation, telnet allows connecting to the server as well as interactively sending and receiving data. Telnet sends out data when the return key is pressed:

Example Loopback:

$ telnet 192.168.2.1 50001
Trying 192.168.2.1...
Connected to 192.168.2.1.
Escape character is '^]'.
Hi MLE TCP Loopback on ZCU102
Hi MLE TCP Loopback on ZCU102

2.5.1. Configuring TCP Loopback with npap-admin

While the TCP user sessions on the ERD default to looping back incoming data, you can exercise full control over your session parameters using the npap-admin command-line tool. This allows you to quickly reconfigure a user session to act as a passive loopback server on a custom port without needing to recompile the hardware.

Within your YAML configuration file, session blocks are nested directly inside the sessions list of their respective hardware instance. To define a TCP user session in loopback mode, set the type to "tcp", set active to false (making it a passive session), and set the processing_mode to "LOOPBACK".

Example Configuration (YAML):

sessions:
  - name: "tcp_0_passive_loopback"
    type: "tcp"
    session_id: 0
    params:
      active: false
      local_port: 50000
      processing_mode: "LOOPBACK"

Apply this configuration by executing:

$ npap-admin <path/to/YAML_configuration_file>

2.6. UDP Loopback

The UDP loopback functionality mirrors any incoming UDP data back to the original sender, allowing for a full-duplex throughput test. This guide uses the command-line tools socat to establish the connection, dd to generate a high-volume data stream, and pv to monitor the transfer rate in real time.

Note

This test requires the tools socat and pv to be installed on your host system. You can typically install them using your system’s package manager (e.g., apt install socat pv).

Because the test involves sending and receiving data simultaneously, you will need to open two separate terminal windows on your host PC.

First, ensure your network interface is configured with an appropriate IPv4 address and a large MTU for best performance (jumbo frames):

$ sudo ifconfig <interface> 192.168.2.105 netmask 255.255.255.0 mtu 9000

This example again uses the PC IPv4 address 192.168.2.105, which corresponds to ERD 2.

Terminal 1: Start the Receiver

In the first terminal, start the receiving process. This command configures socat to listen on a specific port for the looped-back data from the ERD. The received data is then piped to pv to display the throughput before being discarded.

This example uses ERD 2, which has an IPv4 address of 192.168.2.1. The PC IPv4 address is 192.168.2.105 and will listen on port 60000.

$ socat -u udp-recvfrom:60000,bind=192.168.2.105,fork - | pv > /dev/null

Terminal 2: Start the Sender

In the second terminal, start the sending process. This command uses dd to generate a data stream, which is piped through pv (to monitor the sending rate) and then sent by socat to the ERD’s UDP port (55000). The sourceport=60000 option ensures the ERD loops the data back to the port our receiver is listening on.

$ dd bs=10M if=/dev/zero | pv | socat -u -b 8192 - udp-sendto:192.168.2.1:55000,sourceport=60000

Once both commands are running, you will see the live throughput statistics in both terminals. To stop the test, press CTRL+C in either window. The final output will show the total data transferred and the average rate.

Example output from the receiver after stopping:

15.4GiB 0:00:32 [ 493MiB/s] [ <=>                                     ]

Example output from the sender after stopping:

15.4GiB 0:00:32 [ 493MiB/s] [ <=>                                     ]
3166+1 records in
3166+1 records out
16188948480 bytes (16 GB, 15 GiB) copied, 32.8912 s, 492 MB/s
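Since pv reports binary MiB/s while link rates are quoted in Mbit/s, it can help to convert; a quick sketch using the 493 MiB/s reading above:

```shell
# Convert pv's MiB/s reading into Mbit/s for comparison with the line rate
mibs=493   # MiB/s, as reported by pv above
mbits=$(awk -v m="$mibs" 'BEGIN { printf "%.0f", m * 1048576 * 8 / 1e6 }')
echo "${mbits} Mbit/s"
```

At roughly 4136 Mbit/s in each direction, the full-duplex test moves over 8 Gbit/s through the host in total.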

Please note that the final throughput figures are highly dependent on the performance and network stack configuration of the host PC. You may observe that the sending bandwidth is slightly higher than the receiving bandwidth, which indicates packet loss. As UDP does not guarantee delivery, this is expected behavior when pushing the system to its performance limits.

A reliable method to determine the source of packet loss is to use a managed network switch. By connecting both the host PC and the NPAP ERD to the switch, you can use its statistics as a neutral point of observation.

After running the UDP loopback test, check the traffic counters on the switch ports connected to the ERD and the host PC. You should look for the following:

  • On the ERD’s switch port, the number of incoming (ingress) packets should be nearly identical to the number of outgoing (egress) packets. This confirms that the NPAP board is successfully looping back all the data it receives.

  • If the switch confirms the ERD is not dropping packets, but your receiving application on the host still reports loss, the bottleneck is located on the host PC side.

This analysis typically demonstrates that packet loss at high data rates occurs within the host’s network stack or receiving application, not on the NPAP ERD itself.
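Given such counters, the host-side loss can be quantified; a sketch with hypothetical values (the counter numbers below are invented for illustration):

```shell
# Estimate host-side packet loss from hypothetical switch port counters
erd_egress=12000000   # packets the ERD sent back, per the switch
host_rx=11850000      # packets the host application actually received
loss=$(awk -v e="$erd_egress" -v r="$host_rx" 'BEGIN { printf "%.2f", (e - r) / e * 100 }')
echo "host-side loss: ${loss}%"
```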

2.6.1. Configuring UDP Loopback with npap-admin

As with TCP, you can dynamically configure your hardware to reflect incoming UDP traffic back to the sender using npap-admin.

To configure a UDP user session to loop back data, add a session block to your YAML configuration file. Define the type as "udp", specify your desired local_port, and ensure the processing_mode is set to "LOOPBACK".

Example Configuration (YAML):

sessions:
  - name: "udp_0_loopback"
    type: "udp"
    session_id: 0
    params:
      local_port: 55000
      processing_mode: "LOOPBACK"

Apply your changes by running:

$ npap-admin <path/to/YAML_configuration_file>