The small form factor HPE Edgeline EL8000 is designed for AI tasks such as computer vision and natural-language processing.

Later this month, Hewlett Packard Enterprise will ship what looks to be the first server aimed specifically at AI inferencing for machine learning. Machine learning is a two-part process: training and inferencing. Training uses powerful GPUs from Nvidia and AMD, or other high-performance chips, to “teach” the AI system what to look for, such as in image recognition. Inference determines whether new input matches the trained model. A GPU is overkill for that task, and a much lower-power processor can be used.

Enter Qualcomm’s Cloud AI 100 chip, which is designed for artificial intelligence on the edge. It has up to 16 “AI cores” and supports the FP16, INT8, INT16, and FP32 data formats, all of which are used in inferencing. These are not custom Arm processors; they are entirely new SoCs designed for inferencing.

The Cloud AI 100 is part of the HPE Edgeline EL8000 edge gateway system, which integrates compute, storage, and management in a single edge device. Inference workloads are often large in scale and often require low latency and high throughput to enable real-time results.

The HPE Edgeline EL8000 is a 5U system that supports up to four independent server blades, clustered using dual-redundant chassis-integrated switches. Its little brother, the HPE Edgeline EL8000t, is a 2U design that supports two independent server blades.

In addition to performance, the Cloud AI 100 has a low power draw. It comes in two form factors: a PCI Express card and dual M.2 chips mounted on the motherboard. The PCIe card has a 75-watt power envelope, while the two M.2 form-factor units draw either 15 watts or 25 watts. A typical CPU draws more than 200 watts, and a GPU over 400 watts.
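The precision formats listed above hint at why inference gets by with lighter silicon: once a model is trained in FP32, its weights can be mapped down to INT8 with little loss of fidelity. The following is a minimal, illustrative sketch of symmetric INT8 quantization in NumPy — an assumption about the general technique, not Qualcomm's actual toolchain:

```python
import numpy as np

def quantize_int8(x):
    """Map FP32 values to INT8 using a symmetric per-tensor scale."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 values from INT8 codes."""
    return q.astype(np.float32) * scale

# Pretend these are trained FP32 weights
rng = np.random.default_rng(0)
weights = rng.standard_normal(1000).astype(np.float32)

q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)

# Rounding loses at most half a quantization step per weight
max_err = np.abs(weights - recovered).max()
assert max_err <= scale / 2 + 1e-6
```

The INT8 codes are a quarter the size of the FP32 originals and can be multiplied with cheap integer hardware, which is a large part of why inference accelerators draw so much less power than training GPUs.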
Qualcomm says the Cloud AI 100 supports all key industry-standard model formats, including ONNX, TensorFlow, PyTorch, and Caffe; pre-trained models can be imported, then compiled and optimized for deployment. Qualcomm has a set of tools for model porting and preparation, including support for custom operations.

Qualcomm says the Cloud AI 100 is targeting manufacturing and industrial customers, as well as those with edge AI requirements. Use cases for AI inference computing at the edge include computer vision and natural-language processing (NLP) workloads. For computer vision, this could include quality control and quality assurance in manufacturing, object detection and video surveillance, and loss prevention and detection. For NLP, it includes programming-code generation, smart-assistant operations, and language translation.

Edgeline servers will be available for purchase or lease through HPE GreenLake later this month.