Deep Learning Inference Platform
NVIDIA provides a range of inference software and accelerators for the data center, the edge, the cloud, and autonomous machines.
Demand for sophisticated AI services such as image and speech recognition, natural language processing, visual search, and personalized recommendations is exploding. At the same time, growing datasets, increasingly complex networks, and stringent low-latency requirements make those user expectations harder to meet.
NVIDIA® TensorRT™ is a programmable inference accelerator. TensorRT delivers the performance that is critical to powering the next generation of artificial intelligence products and services, wherever they run: in the data center, on the edge, in vehicles, embedded in machines, or in the cloud.