Started Aug 04, 2022 07:00PM UTC   •   Closing Nov 16, 2022 04:59AM UTC

Will Google score more wins than any other submitter in the next round of the MLPerf training benchmarking suite?

MLCommons hosts MLPerf, a set of twice-yearly benchmarking competitions that measure how fast different machine learning systems complete various tasks, including image classification, object detection, speech recognition, and natural language processing (MLCommons, EnterpriseAI). Google has used MLPerf to test the speed of its Tensor Processing Unit (TPU), an application-specific integrated circuit (ASIC) designed to accelerate AI workloads (Google Cloud). In the June 2022 (v2.0) round, Google scored 5 wins; NVIDIA scored the second-most wins with 3.


Resolution criteria: This question will resolve using results reported on the MLCommons website for the "Closed" division of the December 2022 round. We expect results for the next submission round to be available in early December 2022.

Historical data:
  • All results to date are available here. Results from previous rounds can be viewed by selecting them from the “Other Rounds” dropdown box.
  • To find the winner in each category, scroll across to the relevant column under "Benchmark results" (e.g., image classification). The winner will have the fastest time (i.e., the lowest number) in that column. Scroll left to see the submitter for that time.
  • If multiple submitters share the lowest time on a benchmark test, all of them will be considered winners of that benchmark.
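
The winner-selection rule described above (lowest time wins; ties share the win) can be sketched as follows. The submitter names and times below are hypothetical, for illustration only:

```python
def benchmark_winners(results):
    """Return the winner(s) of one benchmark.

    results: dict mapping submitter name -> reported time (lower is faster).
    If multiple submitters share the lowest time, all are returned.
    """
    best = min(results.values())
    return sorted(name for name, time in results.items() if time == best)

# Hypothetical example: two submitters tie for the fastest time,
# so both count as winners of this benchmark.
times = {"SubmitterA": 12.5, "SubmitterB": 12.5, "SubmitterC": 14.0}
print(benchmark_winners(times))  # → ['SubmitterA', 'SubmitterB']
```

Counting each submitter's wins across all benchmark columns in the "Closed" division then gives the totals compared in this question.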
