2. How to evaluate NPAP using the Evaluation Reference Design

This chapter provides a hands-on guide, describing how to connect and test an NPAP Evaluation Reference Design (ERD). It covers the essential steps, from understanding the IP and port numbering scheme to configuring your host PC’s network interface. You will gain an understanding of how to use standard command-line tools like ping, netperf, and telnet to verify connectivity, measure performance, and test core functionalities.

2.1. IP and Port Numbers

To keep the IPv4 addresses and TCP/UDP port numbers of the different ERD designs organized, MLE uses a scheme based on the ERD number and the NPAP instance: the IP address of an ERD is specific to the ERD number and NPAP instance, while the port number is specific to the TCP/UDP functionality. The first TCP port number starts at 50000. The first UDP port number starts at 55000.

Important

The default NPAP ERD IP addressing scheme is:
192.168.<ERD No>.<NPAP Instance>:<TCP/UDP Port>

Example:

Connection to the first TCP port of the 10G synchronous NPAP instance of ERD 0. The ERD overview table provides the following information for ERD 0:

  • ERD No: 0

  • NPAP instance of the 10G synchronous session: 1

  • TCP Port: 1

This leads to the following IP address and port number: 192.168.0.1:50000
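
As a quick sanity check, the scheme can be reproduced on the command line. The following shell sketch simply assembles the address from the ERD number, NPAP instance and TCP port index; the variable names are chosen for illustration only and are not part of the NPAP tooling:

$ ERD_NO=0; NPAP_INSTANCE=1; TCP_PORT_INDEX=1
$ echo "192.168.${ERD_NO}.${NPAP_INSTANCE}:$((50000 + TCP_PORT_INDEX - 1))"
192.168.0.1:50000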

2.2. Bring up Ethernet Port on the PC Side

To communicate with the NPAP Evaluation Reference Design from the host PC, set the IP address of the connected network interface according to the NPAP ERD IP addressing scheme.

In this example ERD 2 is used. Following the pattern 192.168.<ERD No>.<NPAP Instance>, the ERD's addresses take the form 192.168.2.<NPAP Instance>.

As the PC should not have the same IP as any of the NPAP instances, 105 was chosen as the last octet for the host PC, which leads to the final PC IP: 192.168.2.105. There are two options to set the PC IP address:

With net-tools (assuming network interface eth2):

$ sudo ifconfig eth2 192.168.2.105
$ ifconfig eth2
eth2        Link encap:Ethernet  HWaddr 90:e2:ba:4a:d9:ad
inet addr:192.168.2.105  Bcast:192.168.2.255  Mask:255.255.255.0
inet6 addr: fe80::92e2:baff:fe4a:d9ad/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:608 errors:2 dropped:0 overruns:0 frame:2
TX packets:3215 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:135015 (135.0 KB)  TX bytes:352322 (352.3 KB)

Then check whether the corresponding link is up using the following command:

$ sudo ethtool eth2
Settings for eth2:
Supported ports: [ FIBRE ]
Supported link modes:   10000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes:  10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: No
Speed: 10000Mb/s
Duplex: Full
Port: Direct Attach Copper
PHYAD: 0
Transceiver: external
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes

With iproute2 (the ip command):

sudo ip addr add dev <interface> <ipaddr>/24
sudo ip link set dev <interface> up
sudo ip addr show (to check the link)
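
For the example above (interface eth2, host IP 192.168.2.105), the corresponding invocations would look as follows:

$ sudo ip addr add dev eth2 192.168.2.105/24
$ sudo ip link set dev eth2 up
$ ip addr show dev eth2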

2.3. PING

The simplest connectivity test uses the ICMP echo request/reply mechanism, widely known as ping and implemented by the ping program. It already gives an impression of the short and deterministic latency offered by NPAP.

This example uses ERD 2 and the first NPAP instance.
Following the pattern 192.168.<ERD No>.<NPAP Instance>, the IP is 192.168.2.1.

Example Ping:

$ ping -c 10 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=255 time=0.035 ms
64 bytes from 192.168.2.1: icmp_seq=2 ttl=255 time=0.027 ms
64 bytes from 192.168.2.1: icmp_seq=3 ttl=255 time=0.029 ms
64 bytes from 192.168.2.1: icmp_seq=4 ttl=255 time=0.027 ms
64 bytes from 192.168.2.1: icmp_seq=5 ttl=255 time=0.027 ms
64 bytes from 192.168.2.1: icmp_seq=6 ttl=255 time=0.026 ms
64 bytes from 192.168.2.1: icmp_seq=7 ttl=255 time=0.027 ms
64 bytes from 192.168.2.1: icmp_seq=8 ttl=255 time=0.031 ms
64 bytes from 192.168.2.1: icmp_seq=9 ttl=255 time=0.028 ms
64 bytes from 192.168.2.1: icmp_seq=10 ttl=255 time=0.026 ms
--- 192.168.2.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 8997ms
rtt min/avg/max/mdev = 0.026/0.028/0.035/0.004 ms
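
If MTU-sized frames are of interest, ping can also be run with a larger payload. This is a generic iputils ping invocation, not an NPAP-specific command: a 1472-byte ICMP payload fills a standard 1500-byte MTU frame, and -M do prohibits fragmentation so oversized packets are reported instead of being split:

$ ping -c 3 -s 1472 -M do 192.168.2.1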

2.4. Netperf

Netperf tests the throughput of a network path, which in this case is a point-to-point connection between the host with its Network Interface Card (NIC) and the board running the NPAP Evaluation Reference Design (ERD). Netperf comprises a server (netserver) and a client (netperf). The hardware implementation provided by the NPAP package is a hybrid that can function as either server or client, but not both at the same time. In the following, the Netperf server (the equivalent of the netserver tool) is used, as it is already set up to listen for incoming test requests on the default netserver control port (12865).

Note

The Netperf HDL implementation only works with Netperf version 2.6.0.

The TCP_STREAM test implements a bandwidth performance test for a stream from the client to the server.

This example uses ERD 2 and the first NPAP instance.
Following the pattern 192.168.<ERD No>.<NPAP Instance>, the IP is 192.168.2.1.

Example Netperf:

$ netperf -t TCP_STREAM -H 192.168.2.1 -l 5
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.2.1 () port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec
87380  16384  16384    5.00     9416.79

Note

The performance results of the Netperf tests highly depend on the configuration of the host PC. For guidance on configuring a performant Linux host, see, for example, the Red Hat Network Performance Tuning Guide: https://access.redhat.com/sites/default/files/attachments/20150325_network_performance_tuning.pdf
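
As an illustration of the kind of host-side tuning the guide covers, socket buffer limits are a common starting point. The values below are examples only, not MLE recommendations, and should be adapted to your system:

$ sudo sysctl -w net.core.rmem_max=16777216
$ sudo sysctl -w net.core.wmem_max=16777216
$ sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
$ sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"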

Supported tests in Netperf HDL Implementation:

The Netperf HDL implementation is based on Netperf 2.6.0, so the counterpart needs to be the same version. The following features are supported in the HDL implementation:

  • Configurable control and test ports

  • TCP_STREAM

  • TCP_MAERTS

  • TCP_RR

  • UDP_STREAM

  • UDP_RR

The following test options are available (example invocations are shown after the list below):

  • Test length

  • Packet sizes:

    • TCP without jumbo frames: 1-1460 bytes

    • TCP with jumbo frames: 1-8192 bytes

    • UDP without jumbo frames: 1-1471 bytes
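
The sketch below shows how the other supported tests could be invoked using the standard netperf 2.6.0 command-line options; the IP address again refers to ERD 2, first NPAP instance, and the message and request/response sizes are illustrative values within the limits listed above:

$ netperf -t TCP_MAERTS -H 192.168.2.1 -l 5                # stream from the ERD (server) to the host (client)
$ netperf -t TCP_RR     -H 192.168.2.1 -l 5 -- -r 64,64    # TCP request/response with 64-byte messages
$ netperf -t UDP_STREAM -H 192.168.2.1 -l 5 -- -m 1471     # UDP stream with a non-jumbo message size
$ netperf -t UDP_RR     -H 192.168.2.1 -l 5 -- -r 64,64    # UDP request/response with 64-byte messages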

2.5. TCP Loopback

By default, every TCP user session implements a TCP echo server, which mirrors any incoming data back to the sender. The first TCP user session listens for incoming connections on port 50000, and each subsequent TCP user session listens on the next higher port number. Therefore, an Evaluation Reference Design (ERD) implementing 3 TCP user sessions listens for connections on ports 50000, 50001 and 50002.

To test the TCP loopback implementation interactively, telnet can be used to connect to the server and to send and receive data. Telnet transmits the entered data when the return key is pressed:

Example Loopback:

$ telnet 192.168.2.1 50001
Trying 192.168.2.1...
Connected to 192.168.2.1.
Escape character is '^]'.
Hi MLE TCP Loopback on ZCU102
Hi MLE TCP Loopback on ZCU102
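
For a non-interactive check, the echo behavior can also be exercised with netcat. This is a sketch only, assuming a netcat variant that supports the -q option (e.g. the traditional/Debian netcat); it sends one line to the first TCP user session on port 50000, and the same line should be printed back before nc exits:

$ echo "Hello NPAP" | nc -q 1 192.168.2.1 50000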

2.6. UDP Loopback

The UDP loopback functionality mirrors any incoming UDP data back to the original sender, allowing for a full-duplex throughput test. This guide uses the command-line tools socat to establish the connection, dd to generate a high-volume data stream, and pv to monitor the transfer rate in real-time.

Note

This test requires the tools socat and pv to be installed on your host system. You can typically install them using your system’s package manager (e.g., apt install socat pv).

Because the test involves sending and receiving data simultaneously, you will need to open two separate terminal windows on your host PC.

First, ensure your network interface is configured with an appropriate IP address and a large MTU for best performance (jumbo frames):

$ sudo ifconfig <interface> 192.168.2.105 netmask 255.255.255.0 mtu 9000

This example again uses the PC IP 192.168.2.105, which corresponds to ERD 2.
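
Equivalently, with iproute2 the address and jumbo-frame MTU can be set as follows (the interface name is left as a placeholder):

$ sudo ip addr add dev <interface> 192.168.2.105/24
$ sudo ip link set dev <interface> mtu 9000 up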

Terminal 1: Start the Receiver

In the first terminal, start the receiving process. This command configures socat to listen on a specific port for the looped-back data from the ERD. The received data is then piped to pv to display the throughput before being discarded.

This example uses ERD 2, which has an IP of 192.168.2.1.
The PC IP is 192.168.2.105 and will listen on port 60000.
$ socat -u udp-recvfrom:60000,bind=192.168.2.105,fork - | pv > /dev/null

Terminal 2: Start the Sender

In the second terminal, start the sending process. This command uses dd to generate a data stream, which is piped through pv (to monitor the sending rate) and then sent by socat to the ERD’s UDP port (55000). The sourceport=60000 option ensures the ERD loops the data back to the port our receiver is listening on.

$ dd bs=10M if=/dev/zero | pv | socat -u -b 8192 - udp-sendto:192.168.2.1:55000,sourceport=60000

Once both commands are running, you will see the live throughput statistics in both terminals. To stop the test, press CTRL+C in either window. The final output will show the total data transferred and the average rate.

Example output from the receiver after stopping:

15.4GiB 0:00:32 [ 493MiB/s] [ <=>                                     ]

Example output from the sender after stopping:

15.4GiB 0:00:32 [ 493MiB/s] [ <=>                                     ]
3166+1 records in
3166+1 records out
16188948480 bytes (16 GB, 15 GiB) copied, 32.8912 s, 492 MB/s

Please note that the final throughput figures are highly dependent on the performance and network stack configuration of the host PC. You may observe that the sending bandwidth is slightly higher than the receiving bandwidth, which indicates packet loss. As UDP does not guarantee delivery, this is an expected behavior when pushing the system to its performance limits.

A reliable method to determine the source of packet loss is to use a managed network switch. By connecting both the host PC and the NPAP ERD to the switch, you can use its statistics as a neutral point of observation.

After running the UDP loopback test, check the traffic counters on the switch ports connected to the ERD and the host PC. You should look for the following:

  • On the ERD’s switch port, the number of incoming (ingress) packets should be nearly identical to the number of outgoing (egress) packets. This confirms that the NPAP board is successfully looping back all the data it receives.

  • If the switch confirms the ERD is not dropping packets, but your receiving application on the host still reports loss, the bottleneck is located on the host PC side.

This analysis typically demonstrates that packet loss at high data rates occurs within the host’s network stack or receiving application, not on the NPAP ERD itself.
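
If the loss is suspected to be on the host side, the standard Linux counters can help narrow it down further. The commands below are generic diagnostics, not part of the NPAP tooling, and the exact counter names vary by NIC driver:

$ netstat -su                              # check "packet receive errors" / "receive buffer errors" under Udp:
$ ethtool -S <interface> | grep -i drop    # NIC/driver-level drop counters
$ ip -s link show dev <interface>          # interface RX/TX statistics including dropped packets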