1. NPAP Evaluation Reference Design Overview

MLE usually provides its IP cores as an Evaluation Reference Design (ERD), which not only allows users to see how the implementation is done but also functions as a test platform to evaluate the performance and functions of the IP. As NPAP is a highly configurable IP core, MLE uses multiple designs with different configurations on multiple boards to cover all the functions NPAP offers. To keep the designs organized with respect to IPv4 addresses and TCP/UDP port numbers, MLE uses a scheme based on the ERD number and the NPAP instance.

1.1. IP and Port Numbers

The IP address of the ERD is specific to the ERD number and the NPAP instance. The port number is specific to the TCP/UDP functionality. The first TCP port number is 50001; the first UDP port number is 60001.

The scheme is as follows: 192.168.<ERD No>.<NPAP Instance>:<TCP/UDP Port>

Example: connection to the first TCP port of the 1G synchronous NPAP instance of ERD 0. The table in chapter 2 provides the following information:

  • ERD No: 0

  • NPAP instance of the 1G synchronous session: 1

  • TCP Port: 1

This leads to the following IP address and port number: 192.168.0.1:50001
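
Expressed as a small shell helper, the scheme looks like this (the function npap_addr is purely illustrative and not part of the ERD deliverables):

# Illustrative helper for the addressing scheme; not part of the ERD.
# Usage: npap_addr <ERD No> <NPAP Instance> <TCP port number>
npap_addr () {
    local erd=$1 instance=$2 tcp_port=$3
    echo "192.168.${erd}.${instance}:$((50000 + tcp_port))"
}

npap_addr 0 1 1    # prints 192.168.0.1:50001, matching the example above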

1.2. Bring up Ethernet Port on the PC Side

To set up the host PC so that it can communicate with the NPAP ERD, configure the IP address of the connected network interface to match the settings of the evaluation design.

In this example ERD 2 is used, following the pattern 192.168.<ERD No>.<NPAP Instance>.
This means the NPAP instances occupy the IPs 192.168.2.<NPAP Instance>.

As the PC must not have the same IP address as any of the NPAP instances, 105 was picked as the host part, which leads to the final PC IP 192.168.2.105. There are two options to configure this:

With net-tools:

$ sudo ifconfig eth2 192.168.2.105
$ ifconfig eth2
eth2      Link encap:Ethernet  HWaddr 90:e2:ba:4a:d9:ad
          inet addr:192.168.2.105  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::92e2:baff:fe4a:d9ad/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:608 errors:2 dropped:0 overruns:0 frame:2
          TX packets:3215 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:135015 (135.0 KB)  TX bytes:352322 (352.3 KB)

Then check whether the corresponding link is up, using the following command:

$ sudo ethtool eth2
Settings for eth2:
        Supported ports: [ FIBRE ]
        Supported link modes:   10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes:  10000baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: No
        Speed: 10000Mb/s
        Duplex: Full
        Port: Direct Attach Copper
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: off
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

With iproute2:

$ sudo ip addr add <ipaddr>/24 dev <interface>
$ sudo ip link set dev <interface> up
$ ip addr show dev <interface>    # check the address and link state
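
For example, with the values from above and assuming the interface is again named eth2 (substitute the actual interface name on your system):

$ sudo ip addr add 192.168.2.105/24 dev eth2
$ sudo ip link set dev eth2 up
$ ip addr show dev eth2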

1.3. PING

The simplest connectivity test uses the ICMP echo request/reply mechanism, widely known as ping and implemented by the program of the same name. It already gives an impression of the short and deterministic latency offered by NPAP.

This example uses ERD 2 and the first NPAP instance.
Following the pattern 192.168.<ERD No>.<NPAP Instance>, the IP is 192.168.2.1.

Example Ping:

$ ping -c 10 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=255 time=0.035 ms
64 bytes from 192.168.2.1: icmp_seq=2 ttl=255 time=0.027 ms
64 bytes from 192.168.2.1: icmp_seq=3 ttl=255 time=0.029 ms
64 bytes from 192.168.2.1: icmp_seq=4 ttl=255 time=0.027 ms
64 bytes from 192.168.2.1: icmp_seq=5 ttl=255 time=0.027 ms
64 bytes from 192.168.2.1: icmp_seq=6 ttl=255 time=0.026 ms
64 bytes from 192.168.2.1: icmp_seq=7 ttl=255 time=0.027 ms
64 bytes from 192.168.2.1: icmp_seq=8 ttl=255 time=0.031 ms
64 bytes from 192.168.2.1: icmp_seq=9 ttl=255 time=0.028 ms
64 bytes from 192.168.2.1: icmp_seq=10 ttl=255 time=0.026 ms
--- 192.168.2.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 8997ms
rtt min/avg/max/mdev = 0.026/0.028/0.035/0.004 ms
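
For a larger latency sample than ten packets, a flood ping can be used. This is only a sketch, not part of the ERD bring-up; it requires root and puts significant load on the link while running:

$ sudo ping -f -c 10000 192.168.2.1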

1.4. Netperf

Netperf tests the throughput of a network path, which in this case is a point-to-point connection between the host with its NIC and the board running the NPAP ERD. Netperf comprises a server (netserver) and a client (netperf). The hardware provided by the NPAP package is a hybrid implementation which may function as a server or a client, but not both at the same time. For now the netperf server (equivalent to the netserver tool) is used, as it already listens on the default netserver control port (12865) for incoming test requests.
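
Before starting a test, it can be helpful to confirm that the control port is reachable from the host. A minimal sketch, assuming netcat (nc) is installed:

$ nc -zv 192.168.2.1 12865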

Note

The Netperf HDL implementation only works with Netperf version 2.6.0.
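
The client version installed on the host can be checked with netperf's version flag:

$ netperf -V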

The TCP_STREAM test implements a bandwidth performance test for a stream from the client to the server.

This example uses ERD 2 and the first NPAP instance.
Following the pattern 192.168.<ERD No>.<NPAP Instance>, the IP is 192.168.2.1.

Example Netperf:

$ netperf -t TCP_STREAM -H 192.168.2.1 -l 5
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.2.1 () port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec
87380  16384  16384    5.00     9416.79

Note

The performance results of the Netperf tests depend heavily on the configuration of the host PC. For examples of how to configure a performant Linux host, see e.g. the Red Hat Network Performance Tuning Guide: https://access.redhat.com/sites/default/files/attachments/20150325_network_performance_tuning.pdf
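
The settings involved are typically kernel socket buffer limits and related sysctls. The following is only an illustrative sketch of the kind of tuning such guides describe; the values are examples, not a recommendation:

$ sudo sysctl -w net.core.rmem_max=16777216
$ sudo sysctl -w net.core.wmem_max=16777216
$ sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
$ sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"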

Supported tests in Netperf HDL Implementation:

The Netperf HDL implementation is based on Netperf 2.6.0, so the counterpart needs to be the same version. The following features are supported in the HDL implementation:

  • Configurable control and test ports

  • TCP_STREAM

  • TCP_MAERTS

  • TCP_RR

  • UDP_STREAM

  • UDP_RR

The following test options are available; example invocations are shown after the list:

  • Test length

  • Packet sizes:

    • TCP without jumbo frames: 1-1460 bytes

    • TCP with jumbo frames: 1-8192 bytes

    • UDP without jumbo frames: 1-1471 bytes
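
As a sketch of how the supported tests and options map onto the Netperf 2.6.0 command line (durations and message sizes below are illustrative; test-specific options follow the -- separator):

$ netperf -t TCP_MAERTS -H 192.168.2.1 -l 10               # stream from server to client
$ netperf -t TCP_RR -H 192.168.2.1 -l 10 -- -r 64,64       # request/response, 64-byte messages
$ netperf -t UDP_STREAM -H 192.168.2.1 -l 10 -- -m 1024    # UDP stream, 1024-byte messages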

1.5. TCP Loopback

By default every TCP user session implements a TCP echo server, which mirrors any incoming data back to the sender. The first TCP user session listens for incoming connections on port 50001, with every subsequent TCP user session listening on port 50000 plus the number of the TCP user session. Therefore, an ERD implementing three TCP user sessions listens for connections on ports 50001, 50002 and 50003.

To interactively test the TCP loopback implementation, telnet can be used to connect to the server and to interactively send and receive data. Telnet sends the entered data when the return key is pressed:

Example Loopback:

$ telnet 192.168.2.1 50001
Trying 192.168.2.1...
Connected to 192.168.2.1.
Escape character is '^]'.
Hi MLE TCP Loopback on ZCU102
Hi MLE TCP Loopback on ZCU102
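
As a scripted alternative to telnet, any TCP client can exercise the loopback. A minimal sketch using netcat, assuming an nc variant that supports the -w timeout option:

$ printf 'Hi MLE TCP Loopback\n' | nc -w 2 192.168.2.1 50001

The echo server should send the same line back before the connection times out.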

1.6. UDP Loopback

Roadmap Item

2. ERD Overview

Below is a table of the available, in-development, and planned MLE NPAP Evaluation Reference Designs. The table reflects ERD version 3.4.0.

ERD Number | Board / Device Family | Status         | NPAP Instance | NPAP Speed | MAC Clock | Netperf | TCP Sessions | UDP Sessions | IP Address    | SFP Port No
-----------|-----------------------|----------------|---------------|------------|-----------|---------|--------------|--------------|---------------|------------
ERD 0      | AMD ZCU102            | ready          | 1             | 1G         | Sync      | yes     | 4            | 4            | 192.168.0.1   | 0
           |                       |                | 2             | 1G         | Async     | yes     | 4            | 4            | 192.168.0.2   | 1
ERD 1      | AMD ZCU102            | ready          | 1             | 10G        | Sync      | yes     | 8            | 1            | 192.168.1.1   | 0
ERD 2      | MLE NPAC-KETCH        | early access   | 1             | 10G        | Async     | yes     | 1            | 1            | 192.168.2.1   | 0
           |                       |                | 2             | 10G        | Async     | yes     | 1            | 1            | 192.168.2.2   | 1
ERD 3      | AMD ZCU111            | early access   | 1             | 25G        | Async     | yes     | 5            | 1            | 192.168.3.1   | 0
ERD 4      | MLE NPAC-KETCH        | road map       | 1             | 40G        | Async     | yes     | 1            | 1            | 192.168.4.1   | 0,1,2,3
ERD 5      | AMD ZCU111            | early access   | 1             | 100G       | Async     | yes     | 3            | 1            | 192.168.5.1   | 0,1,2,3
ERD 6      | reserved              |                |               |            |           |         |              |              |               |
ERD 7      | Microchip MPF300      | early access   | 1             | 10G        | Sync      | yes     | 3            | 2            | 192.168.7.1   | 0
ERD 8      | Trenz TE0950          | early access   | 1             | 25G        | Async     | yes     | 2            | 1            | 192.168.8.1   | 0
ERD 9      | N6001 Intel AGF014    | road map       | 1             | 25/100G    | Async     | yes     | 1            | 1            | 192.168.9.1   | 0
ERD 10     | AMD Alveo U200        | early access   | 1             | 25G        | Async     | yes     | 5            | 2            | 192.168.10.1  | 0
           |                       |                | 2             | 25G        | Async     | yes     | 5            | 2            | 192.168.10.2  | 1
ERD 11     | AMD Alveo U55C        | early access   | 1             | 25G        | Async     | yes     | 1            | 1            | 192.168.11.1  | 0
           |                       |                | 2             | 25G        | Async     | yes     | 1            | 1            | 192.168.11.2  | 1
           |                       |                | 3             | 25G        | Async     | yes     | 1            | 1            | 192.168.11.3  | 2
           |                       |                | 4             | 25G        | Async     | yes     | 1            | 1            | 192.168.11.4  | 3
ERD 12     | Versa-G (ES)          | in development | 1             | 25G        | Async     | yes     | 1            | 1            | 192.168.12.1  | 0
ERD 13     | Arrow Agilex 5E       | in development | 1             | 25G        | Async     | yes     | 5            | 3            | 192.168.13.1  | 0

2.1. Synchronous / Asynchronous

The image below shows the difference between a synchronous and an asynchronous design.

../_images/sync_async_design.png

The asynchronous design uses a different clock for NPAP than for the MAC. A synchronous design uses the same clock for NPAP and the MAC.

2.2. SFP Port Location

Overview NPAC-KETCH:

../_images/NPAC-Ketch.jpg ../_images/MLE_NPAC_Ketch_SFP.png

Overview AMD ZCU111:

../_images/zcu111.jpeg ../_images/AMD_ZCU111_SFP.png

Overview AMD ZCU102

../_images/zcu102.png ../_images/AMD_ZCU102_SFP.png