Will Google score more wins than any other submitter in the next round of the MLPerf training benchmarking suite?
MLCommons hosts MLPerf, a set of twice-yearly benchmarking competitions that assess how fast different machine learning systems complete various tasks, including image classification, object detection, speech recognition, and natural language processing (MLCommons, EnterpriseAI). Google has been using MLPerf to test the speed of its Tensor Processing Unit (TPU), an application-specific integrated circuit (ASIC) designed to accelerate AI applications (Google Cloud). In the June 2022 (v2.0) round, Google scored 5 wins; NVIDIA scored the second most with 3.
- All results to date are available here. Results from previous rounds can be viewed by selecting them from the “Other Rounds” dropdown box.
- To find the winner in each category, scroll across to the relevant column under "Benchmark results" (e.g., image classification). The winner will have the fastest time (i.e., the lowest number) in that column. Scroll left to see the submitter for that time.
- If multiple submitters share the lowest time on a benchmark test, they will all be considered winners of that benchmark.
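The resolution procedure above can be sketched in code: for each benchmark, the lowest training time wins, and ties are shared. This is a minimal illustration only; the submitter names and times below are made up, not real MLPerf results.

```python
# Hypothetical results: benchmark -> {submitter: training time in minutes}.
# These numbers are illustrative, not actual MLPerf v2.0 scores.
results = {
    "image_classification": {"Google": 11.5, "NVIDIA": 12.1, "Graphcore": 28.3},
    "object_detection": {"Google": 13.0, "NVIDIA": 13.0},  # tie: both win
    "speech_recognition": {"NVIDIA": 16.2, "Google": 17.4},
}

def benchmark_winners(times):
    """Return every submitter sharing the fastest (lowest) time."""
    best = min(times.values())
    return sorted(s for s, t in times.items() if t == best)

def win_counts(results):
    """Tally wins per submitter across all benchmarks, counting ties for each."""
    counts = {}
    for times in results.values():
        for winner in benchmark_winners(times):
            counts[winner] = counts.get(winner, 0) + 1
    return counts

print(win_counts(results))  # → {'Google': 2, 'NVIDIA': 2}
```

Under this counting rule, the question would resolve Yes only if Google's tally strictly exceeds every other submitter's.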