Flex System IB6132D 2-port FDR InfiniBand Adapter
Features
The Flex System IB6132D 2-port FDR InfiniBand Adapter has the following features.
Performance
Based on Mellanox ConnectX-3 technology, the IB6132D 2-port FDR InfiniBand Adapter provides a high level of throughput performance for all network environments by removing the I/O bottlenecks in mainstream servers that limit application performance. Servers can achieve up to 56 Gbps transmit and receive bandwidth. Hardware-based InfiniBand transport and IP over InfiniBand (IPoIB) stateless offload engines handle the segmentation, reassembly, and checksum calculations that otherwise burden the host processor.
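As a back-of-the-envelope sketch of where the 56 Gbps figure comes from, the calculation below uses the standard FDR link parameters (4X link width, 14.0625 Gb/s per-lane signaling, 64b/66b encoding); these values come from the InfiniBand FDR specification, not from this document:

```python
# Approximate FDR InfiniBand link throughput (assumed standard parameters,
# not stated in this datasheet):
#   - 4 lanes (4X link width)
#   - 14.0625 Gb/s signaling rate per lane
#   - 64b/66b encoding (64 data bits carried in every 66 line bits)

LANES = 4
SIGNALING_GBPS_PER_LANE = 14.0625
ENCODING_EFFICIENCY = 64 / 66

raw_gbps = LANES * SIGNALING_GBPS_PER_LANE   # raw signaling rate per direction
data_gbps = raw_gbps * ENCODING_EFFICIENCY   # usable data rate after encoding

print(f"raw link rate: {raw_gbps:.2f} Gb/s")   # ~56.25 Gb/s, marketed as 56 Gbps
print(f"data rate:     {data_gbps:.2f} Gb/s")  # ~54.55 Gb/s usable
```

Because the link is full duplex, each port can sustain this rate in both the transmit and receive directions simultaneously.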
RDMA over the InfiniBand fabric further accelerates application run time while reducing CPU utilization. RDMA benefits the very high-volume, transaction-intensive applications typical of HPC and financial market firms, as well as other industries where speed of data delivery is paramount. With the ConnectX-3-based adapter, highly compute-intensive tasks running on hundreds or thousands of multiprocessor nodes, such as climate research, molecular modeling, and physical simulations, can share data and synchronize faster, resulting in shorter run times. High-frequency trading applications can access market information more quickly, ensuring that trading servers respond first to new market data and market inefficiencies, while the higher throughput enables higher-volume trading, maximizing liquidity and profitability.
In data mining or web crawl applications, RDMA provides the needed boost in performance to search faster by solving the network latency bottleneck associated with I/O cards and the corresponding transport technology in the cloud. Other applications that benefit from RDMA with ConnectX-3 include Web 2.0 (Content Delivery Network), business intelligence, database transactions, and various cloud-computing applications. The low power consumption of Mellanox ConnectX-3 provides clients with high bandwidth and low latency at a low total cost of ownership.
I/O virtualization
Mellanox adapters that use Virtual Intelligent Queuing (Virtual-IQ) technology with SR-IOV provide dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization on InfiniBand gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cable complexity.
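With SR-IOV, the physical adapter exposes multiple virtual functions (VFs), each of which can be passed directly to a VM. On Linux, ConnectX-3 adapters are driven by the mlx4 driver stack, and the VF count is typically set through a module option. The fragment below is illustrative only; the file path and VF count are assumptions, and enabling SR-IOV also requires firmware and platform (UEFI) support:

```
# /etc/modprobe.d/mlx4.conf  (illustrative example; path and values are
# assumptions, not taken from this datasheet)
# Ask the mlx4_core driver to expose 8 virtual functions per adapter,
# each assignable to a virtual machine via PCI passthrough.
options mlx4_core num_vfs=8
```

After reloading the driver, the VFs appear as additional PCI devices that the hypervisor can assign to guests, giving each VM its own isolated slice of the adapter.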
Quality of service
Resource allocation per application or per VM is provided by the advanced quality of service (QoS) supported by ConnectX-3. Service levels for multiple traffic types can be assigned on a per-flow basis, allowing system administrators to prioritize traffic by application, virtual machine, or protocol. This combination of QoS and prioritization provides fine-grained control of traffic, ensuring that applications run smoothly in today's complex environments.
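In InfiniBand, these service levels (SLs) are mapped to the adapter's virtual lanes by the subnet manager. As a sketch of what such a policy looks like with OpenSM, the excerpt below enables QoS and maps the 16 SLs onto 8 virtual lanes; the option names come from OpenSM's configuration, and the specific values are illustrative assumptions, not settings from this datasheet:

```
# opensm.conf excerpt (illustrative values; option names are OpenSM's,
# values are assumptions for this example)
qos TRUE
qos_max_vls 8
# Map InfiniBand service levels (SL 0-15) onto virtual lanes 0-7:
qos_sl2vl 0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7
```

Traffic tagged with a higher-priority SL is then carried on a virtual lane whose arbitration weight the administrator can tune, which is how per-application or per-VM prioritization is realized on the fabric.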