A Deep Dive into AMD/Xilinx AXI Bridge for PCI Express (AMD/Xilinx PG194) and Why We Tweaked C_M_AXI_NUM_READQ

Table of Contents
- Executive Summary
- AMD/Xilinx AXI Bridge for PCI Express Overview
- Performance Limitations in Certain Scenarios

Executive Summary

AMD/Xilinx's AXI Bridge for PCI Express (PG194) implements a bi-directional communication channel between FPGA-internal memory-mapped AXI4 masters and slaves and external PCIe-connected memory-mapped devices, with the FPGA operating as a PCIe endpoint or root port. In many scenarios the performance of forwarding traffic between the two protocols, AXI4 and PCIe, is sufficient and the AMD/Xilinx IP core can be used as is. In certain cases, however, tweaking is necessary to achieve the expected throughput. Depending on the amount of extra performance required, the modification ranges from simply tuning a hidden parameter to patching the IP's HDL sources. In the example project used for this description, the PCIe peer-to-peer (P2P) write performance from an FPGA to a RAID0 of 12 NVMe SSDs increased from 2,700 MiB/s to 4,900 MiB/s and finally to 8,600 MiB/s.

AMD/Xilinx AXI Bridge for PCI Express Overview

The AMD/Xilinx AXI Bridge for PCI Express is implemented differently for different AMD/Xilinx FPGA families. This description ...
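As a rough, hypothetical illustration of why a deeper read-request queue can raise throughput (assuming C_M_AXI_NUM_READQ effectively bounds how many read requests the bridge keeps in flight, and using made-up numbers for burst size and round-trip latency), sustainable throughput is limited by the bandwidth-delay product, i.e. the amount of data that can be outstanding across the PCIe/AXI round trip. The Python sketch below only demonstrates this relation; it is not part of the article's tuning flow.

    import math

    def min_outstanding_reads(target_mib_s: float,
                              round_trip_latency_us: float,
                              request_bytes: int) -> int:
        """Read requests that must be in flight so the link never idles
        while waiting for completions: throughput * latency / request size."""
        bytes_per_s = target_mib_s * 1024 * 1024
        bytes_in_flight = bytes_per_s * round_trip_latency_us * 1e-6
        return math.ceil(bytes_in_flight / request_bytes)

    # Hypothetical numbers: 512-byte read bursts, ~1.2 us round-trip latency.
    for target in (2_700, 4_900, 8_600):  # MiB/s figures quoted above
        n = min_outstanding_reads(target, round_trip_latency_us=1.2,
                                  request_bytes=512)
        print(f"{target} MiB/s needs at least {n} reads in flight")

If the queue is shallower than such an estimate, requests stall and the measured throughput saturates below the link limit, which is the kind of behavior that motivates tweaking the parameter in the first place.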

Picking The Right Granularity When Buffering PCIe/NVMe Data

Table of Contents
- BlockRAM (BRAM)
- UltraRAM (URAM)
- DDRx DRAM
- High Bandwidth Memory (HBM2)
- Conclusion

Non-Volatile Memory Express (NVMe) is an interface specification commonly used with PCIe. Its goal is to leverage the parallelism and low latency of modern SSDs. A typical PCIe payload transfer moves data in chunks of either 128 bytes or 256 bytes. SSDs deploy several tricks (wear leveling, SLC-to-TLC conversion) to improve their read and write speeds as well as their lifespan. One downside is that their read and write speed is not constant over long write/read periods, which can result in backpressure. Some applications do not tolerate backpressure and can end up in an erroneous state when paired with a standard SSD setup.

One possible mitigation strategy is to place an elastic buffer between the SSD and the data source. On an FPGA there are several ways to implement such an elastic buffer. At MLE, we investigated BlockRAM (BRAM), UltraRAM (URAM), Dynamic RAM (DRAM) and the second generation of High Bandwidth Memory (HBM2). Each memory technology has its advantages and disadvantages regarding its ability to handle different data chunk sizes. We present our findings below.

BlockRAM (BRAM)

BRAM is a RAM module which can ...
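To make the sizing question concrete, here is a minimal Python sketch with assumed, illustrative numbers (the function name and all rates are hypothetical, not from the article): if the data source delivers at a constant rate while the SSD's write speed periodically dips, the elastic buffer must hold everything that arrives during the dip.

    def required_buffer_mib(source_mib_s: float,
                            ssd_slow_mib_s: float,
                            slow_phase_s: float) -> float:
        """MiB the elastic buffer must absorb during an SSD slowdown:
        it fills at (source rate - slow SSD rate) for the length of the dip."""
        return max(source_mib_s - ssd_slow_mib_s, 0.0) * slow_phase_s

    # Hypothetical numbers: a 3,000 MiB/s source and an SSD that drops to
    # 600 MiB/s for 2 seconds (e.g. when an SLC write cache runs full).
    need = required_buffer_mib(source_mib_s=3000, ssd_slow_mib_s=600,
                               slow_phase_s=2.0)
    print(f"elastic buffer needs to hold at least {need:.0f} MiB")  # 4800 MiB

A dip of this size calls for gigabytes of buffering, which on-chip BRAM or URAM cannot provide but off-chip DRAM or HBM2 can; the data chunk size then determines how efficiently each technology can be accessed, which is what the comparison of the four memory types is about.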