Evaluating MLE NPAP in the MLE FPGA Cloud
MLE’s Network Protocol Accelerator Platform (NPAP) for 1/2.5/5/10/25/40/50/100 Gigabit Ethernet is a TCP/UDP/IP network protocol Full-Accelerator subsystem which instantiates the standalone 128-bit TCP/IP Stack technology from the German Fraunhofer Heinrich-Hertz-Institute (HHI). This Fraunhofer HHI 10 GbE TCP/IP Stack was designed for embeddable FPGA and ASIC system solutions and offers the following features:
- Interface to 1 / 2.5 / 5 / 10 / 25 / 40 / 50 / 100 Gigabit Ethernet
- Full-duplex with 128-bit wide bidirectional datapath
- Full line rate up to 70 Gbps per NPAP instance
- Full line rate >100 Gbps per individual TCP session in ASIC
- Low round-trip time: 700 nanoseconds NPAP-to-NPAP RTT for 100-byte payloads
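As a quick sanity check on these figures, the raw throughput of a datapath is simply its width times its clock frequency. The sketch below uses the 128-bit width from the feature list; the clock frequency is an illustrative assumption, not a product specification:

```python
# Raw throughput of a W-bit-wide datapath clocked at f Hz (per direction):
# rate = W * f. The 128-bit width is from the NPAP feature list above;
# the ~550 MHz clock below is an assumed example value.

def line_rate_gbps(width_bits: int, clock_hz: float) -> float:
    """Return the raw datapath throughput in Gbit/s."""
    return width_bits * clock_hz / 1e9

# A 128-bit datapath at an assumed ~550 MHz sustains about 70 Gbps:
print(line_rate_gbps(128, 550e6))  # 70.4
```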
Implementing distributed systems using TCP/IP and/or UDP/IP is a complex task, especially when it comes to performance (throughput and/or latency). MLE NPAP addresses this with a highly parameterizable TCP/UDP/IPv4 stack for FPGAs.
At MLE we integrate 3rd-party IP cores ourselves, so we fully understand the need to try, evaluate and benchmark NPAP to determine whether it is suitable for your project.
To make things easy, Team MLE came up with a fine-tuned approach for evaluating MLE NPAP quickly and efficiently. Here are the options Team MLE offers:
- Discuss your system requirements and constraints with Team MLE. This may help you understand TCP/IP and/or UDP/IP related aspects such as performance or the implications of the bandwidth-delay product.
- Quickly evaluate and benchmark MLE NPAP using the remote evaluation. No need to obtain FPGA hardware or run FPGA synthesis!
- Obtain one of the many Evaluation Reference Designs (ERD) for MLE NPAP. Most ERDs are provided free-of-charge so you can try MLE NPAP in your lab. However, you do need physical access to one of the FPGA Development Kits supported.
- If you wish to try MLE NPAP on your custom FPGA hardware, you can define an engineering project with MLE. You will then receive a binary or netlist implementation that runs on your custom FPGA hardware.
- Sometimes, we suggest that you purchase a so-called Developers License. This is an extended evaluation license which gives you full access to the HDL source code (and the many helper files) to integrate MLE NPAP into your FPGA system.
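The bandwidth-delay product mentioned in the first option above determines how many bytes must be in flight to keep a link fully utilized. A minimal sketch, using the 700 ns NPAP-to-NPAP round-trip time from the feature list (the 100 µs figure is an illustrative software-stack RTT, not a measured value):

```python
def bandwidth_delay_product_bytes(rate_bps: float, rtt_ns: float) -> float:
    """Bytes that must be in flight to fill the pipe: rate * RTT / 8."""
    return rate_bps * rtt_ns / 8e9

# 10 GbE with the 700 ns NPAP-to-NPAP round-trip time:
print(bandwidth_delay_product_bytes(10e9, 700))     # 875.0

# The same link with an assumed 100 us software-stack RTT:
print(bandwidth_delay_product_bytes(10e9, 100000))  # 125000.0
```

The contrast illustrates why a low RTT matters: with a hardware stack, well under one MTU of buffering fills a 10 GbE pipe, while a software-stack RTT needs orders of magnitude more data in flight.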
This article explains what the MLE NPAP Remote Evaluation allows you to do and how to do it.
Benefits of Remote Evaluation for MLE NPAP
Immediately, and without the need for any FPGA design work, you can learn about TCP/UDP offloading and acceleration, and experience the many benefits of MLE NPAP.
The Remote Evaluation runs Netperf on both sides of the TCP/IP or UDP/IP connection: A Linux server and NPAP on the FPGA card. This allows you to test and experience MLE NPAP, for TCP and/or for UDP, in different combinations:
- MLE NPAP vs the open source Linux TCP/UDP/IP stack – a common use case where the FPGA transports and/or receives data from a Linux server
- MLE NPAP vs MLE NPAP – a use case for high performance as FPGA Full Accelerators on both sides deliver highest throughput and lowest Round-Trip Times
This point-to-point Ethernet network is dedicated to our NPAP remote evaluation.
For administrative purposes, and fully transparent to you, we run a separate network. We also use open source Labgrid as administrative infrastructure.
You then log in to one of the servers in the FPGA cloud and perform your tests from the Linux command line. Technically, this all runs on a virtual machine which will be erased once you are finished.
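A typical session might look like the following sketch. The host name and user are placeholders; the SSH port (5910) and the 192.168.10.0/24 addresses are taken from the setup described below, and netperf's standard TCP_STREAM and TCP_RR test types are used for throughput and latency, respectively:

```shell
# Log in to the Customer Machine (host name and user are placeholders;
# TCP port 5910 is the dedicated incoming SSH port of the setup):
ssh -p 5910 customer@npap-eval.example.com

# Throughput test from the Linux stack to the NPAP instance
# on the first FPGA board:
netperf -H 192.168.10.1 -t TCP_STREAM

# Request/response (latency) test against the same NPAP instance:
netperf -H 192.168.10.1 -t TCP_RR
```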
System Setup for Remote Evaluation
MLE NPAP runs “bare metal” on FPGA cards in the VMAccel cloud. The FPGA card of current choice is the AMD Alveo U200. The Ethernet interface of the FPGA cards is connected via a dedicated 10/25/40/100G Ethernet switch to another FPGA card (for so-called NPAP-to-NPAP tests) as well as to a Mellanox ConnectX-3 Pro NIC (for NPAP-to-Linux tests).
The MLE NPAP Remote Evaluation environment looks like this:
Once you have been granted access to the MLE NPAP Remote Evaluation environment, you will be in charge of what is called “Customer Machine” in the diagram above. This machine is a fully virtualized machine with access to the following components:
- one Mellanox ConnectX-3 Pro NIC,
- two AMD Alveo U200 FPGA boards,
- and one Cisco Nexus 9000 switch, forming a small 25G Ethernet LAN with the previously mentioned three components.
The two AMD Alveo U200 FPGA boards are pre-configured with the MLE NPAP ERD bitstream before access to the MLE NPAP Remote Evaluation environment is granted, so you do not need to load any bitstream yourself.
The MLE NPAP ERD on an AMD Alveo U200 board is controlled via a UART connection through Labgrid. Labgrid also allows access to the JTAG interface of both boards.
In terms of management network connectivity, the Customer Machine cannot access any Internet resource, except for the standard Ubuntu software package repositories. Other than that, only incoming SSH connections on a dedicated TCP port (5910) are allowed.
In terms of Ubuntu software packages, the Customer Machine is a rather minimal Ubuntu Server installation with a few additional tools such as netperf pre-installed.
Next to standard Ubuntu software packages, you’ll find our MLE NPAP Test Environment checked out and ready for use. It is based on pytest. Various sets of tests exist to set up and to exercise board-to-board (“b2b”, as in NPAP-to-NPAP) and board-to-software (“b2s”, as in NPAP-to-Linux) performance regarding throughput, latency and data integrity. “Board-to-software” refers to running traffic between a board and the Customer Machine.
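With pytest as the runner, test selection typically works via keyword filters. The invocation below is a sketch under the assumption that the b2b/b2s naming mentioned above appears in the test names; the exact paths and identifiers may differ in the actual Test Environment:

```shell
# Activate the Test Environment's Python virtual environment
# (path assumed; the prompt excerpt below shows a "(venv)" prefix):
source venv/bin/activate

# Run only tests whose names match "b2b" (board-to-board, NPAP-to-NPAP):
pytest -k b2b

# Run only the board-to-software (NPAP-to-Linux) tests, verbosely:
pytest -v -k b2s
```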
The Mellanox ConnectX-3 Pro NIC is connected to the Nexus switch with its first port (Linux network interface enp5s0). This first port is automatically configured with the IPv4 address 192.168.10.100/24:
(venv) customer@vma-customer1:~$ ip addr show dev enp5s0
3: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether e4:1d:2d:71:81:a0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.100/24 brd 192.168.10.255 scope global enp5s0
       valid_lft forever preferred_lft forever
When later setting up and running tests, the MLE NPAP ERD bitstream of each FPGA board is also configured with an IPv4 address within that subnet (192.168.10.1/24 and 192.168.10.2/24, respectively).
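All three endpoints thus share a single 192.168.10.0/24 subnet, which is easy to verify with Python’s standard ipaddress module (the labels are just descriptive; the addresses are the ones stated above):

```python
import ipaddress

# Addresses from the evaluation setup: the NIC and the two FPGA boards.
lan = ipaddress.ip_network("192.168.10.0/24")
endpoints = {
    "ConnectX-3 Pro NIC":  ipaddress.ip_address("192.168.10.100"),
    "FPGA board 1 (NPAP)": ipaddress.ip_address("192.168.10.1"),
    "FPGA board 2 (NPAP)": ipaddress.ip_address("192.168.10.2"),
}
for name, addr in endpoints.items():
    # Membership test confirms each endpoint is reachable on the same LAN.
    assert addr in lan, f"{name} is outside {lan}"
    print(f"{name}: {addr} is in {lan}")
```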
➡ Download the MLE NPAP Remote Evaluation Guide for a step-by-step tutorial and evaluation options.
Legal Notice
MLE’s Remote Evaluation System adheres to German / European Data Privacy laws and your evaluation will be covered by an Evaluation License under MLE’s Product License Agreement.