Test Machine Learning and Deep Learning Models Under Different Hardware Configurations
The one-stop open platform designed to spur innovation by enabling machine learning developers, users, and system optimizers to quickly find, test, deploy, and benchmark combinations of models, frameworks, and hardware configurations.
HOW TO CHOOSE THE RIGHT
MODELS?
HOW TO CHOOSE THE RIGHT
FRAMEWORK?
HOW TO CHOOSE THE RIGHT
MACHINES?
Find the latest models as published in the literature for your task (be it classification, object detection, tracking, machine translation, and more) and directly run those models on either a standard dataset or your own dataset, without the hassle of installing any software. See how those models perform, compare them with each other, and draw your own conclusions.
Run and compare the performance and accuracy of the same models across a wide range of deep learning frameworks, such as TensorFlow, MXNet, PyTorch, Caffe, Caffe2, CNTK, TensorRT, and more. Side-by-side comparison results clearly reveal the pros and cons of each framework.
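The side-by-side idea can be sketched in a few lines. This is a hypothetical, minimal harness (not the platform's actual API): each "backend" is a callable standing in for the same model running under a different framework, and the harness records latency and output so speed and accuracy can be compared.

```python
# Hypothetical side-by-side benchmark sketch; the backend names and
# the `benchmark` helper are illustrative, not a real platform API.
import time

def benchmark(backends, inputs, runs=5):
    """Run every backend on the same inputs and collect latency stats."""
    results = {}
    for name, fn in backends.items():
        fn(inputs)  # warm-up run so one-time setup cost doesn't skew timings
        latencies = []
        output = None
        for _ in range(runs):
            start = time.perf_counter()
            output = fn(inputs)
            latencies.append(time.perf_counter() - start)
        results[name] = {
            "output": output,
            "mean_latency_s": sum(latencies) / runs,
        }
    return results

# Two toy "backends" computing the same dot product, standing in for
# one model executed by two different frameworks.
def backend_loop(xs):
    total = 0.0
    for a, b in xs:
        total += a * b
    return total

def backend_builtin(xs):
    return sum(a * b for a, b in xs)

data = [(float(i), float(i + 1)) for i in range(10_000)]
report = benchmark({"loop": backend_loop, "builtin": backend_builtin}, data)
```

Because every backend sees identical inputs and its raw output is kept, the same report supports both the accuracy comparison (do outputs agree?) and the performance comparison (mean latency).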
Gain insight into system performance bottlenecks across the hierarchical stack, from the application pipeline to the model pipeline, the framework runtime pipeline, the kernel-launching pipeline, and down to libraries and hardware instruction sets, using a rich set of traces collected while running the most relevant machine learning models and datasets. Supported hardware systems include x86, POWER, and ARM, with accelerators including GPUs and FPGAs.
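The hierarchical tracing described above can be illustrated with a small sketch. This is a hypothetical nested-span tracer (not the platform's actual instrumentation): each level of the stack opens a timed span inside its parent, so the collected records show how much time each layer contributes.

```python
# Hypothetical nested trace spans, one per stack level, e.g.
# application -> model -> framework -> kernel. Not a real platform API.
import time
from contextlib import contextmanager

spans = []   # (depth, name, duration_s) records, appended as spans close
_depth = 0

@contextmanager
def span(name):
    """Time a region of code as a child of the currently open span."""
    global _depth
    _depth += 1
    start = time.perf_counter()
    try:
        yield
    finally:
        duration = time.perf_counter() - start
        _depth -= 1
        spans.append((_depth, name, duration))

# Nested spans mirror the stack: each inner span's time is contained
# in (and therefore no larger than) its parent's.
with span("application"):
    with span("model"):
        with span("framework"):
            with span("kernel"):
                sum(range(100_000))  # stand-in for the actual compute
```

After the run, `spans` holds one timed record per layer; innermost spans close first, and a parent's duration always covers its children's, which is what lets a trace viewer attribute time across the stack.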