Will Google score more wins than any other submitter in the next round of the MLPerf training benchmarking suite?

Started Aug 04, 2022 07:00PM UTC
Closed Nov 14, 2022 02:43PM UTC

MLCommons hosts MLPerf, a set of twice-yearly benchmarking competitions that assess how fast different machine learning systems perform various tasks, including image classification, object detection, speech recognition, and natural language processing (MLCommons, EnterpriseAI). Google has been using MLPerf to test the speed of its Tensor Processing Unit (TPU), an application-specific integrated circuit (ASIC) designed to accelerate AI applications (Google Cloud). In the June 2022 (v2.0) round, Google scored 5 wins; NVIDIA scored the second-most with 3.


Resolution criteria: This question will resolve using results reported on the MLCommons website for the "Closed" division of the December 2022 round. We expect results for the next submission round to be available in early December 2022.

Historical data:
  • All results to date are available on the MLCommons website. Results from previous rounds can be viewed by selecting them from the “Other Rounds” dropdown box.
  • To find the winner in each category, scroll across to the relevant column under "Benchmark results" (e.g., image classification). The winner will have the fastest time (i.e., the lowest number) in that column. Scroll left to see the submitter for that time.
  • If multiple submitters share the lowest time on a benchmark test, each is considered a winner of that benchmark (this tie rule is illustrated in the sketch below).
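
The tie rule above is mechanical enough to sketch in code. The following minimal Python sketch uses hypothetical result data (the times shown are illustrative, not actual MLPerf results) to show how wins would be tallied and how the question's headline condition, Google scoring strictly more wins than any other submitter, would be checked.

    from collections import defaultdict

    def count_wins(results):
        """Tally benchmark wins per submitter.

        `results` maps benchmark name -> {submitter: time in minutes}.
        The lowest time wins; on a tie, every tied submitter is
        credited with a win, per the resolution criteria above.
        """
        wins = defaultdict(int)
        for benchmark, times in results.items():
            best = min(times.values())
            for submitter, t in times.items():
                if t == best:
                    wins[submitter] += 1
        return dict(wins)

    # Hypothetical data for illustration only -- not actual MLPerf results.
    results = {
        "image_classification": {"Google": 11.5, "NVIDIA": 12.1},
        "object_detection": {"Google": 9.8, "NVIDIA": 9.8},  # tie: both win
        "speech_recognition": {"Google": 17.0, "NVIDIA": 16.2},
    }

    wins = count_wins(results)
    google = wins.get("Google", 0)
    best_other = max((w for s, w in wins.items() if s != "Google"), default=0)
    print(wins)                  # {'Google': 2, 'NVIDIA': 2}
    print(google > best_other)   # False -> would resolve "No"

Note that under this reading, a tie for the most wins would resolve the question "No", since Google would not have scored more wins than every other submitter.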


Question clarification
Issued on 10/04/22 04:23pm
If Google doesn't participate in the next round of the MLPerf benchmarking suite, Google will be considered to have scored zero wins, and this question will resolve as "No".
Resolution Notes

Google did not make any submissions to the v2.1 round of the MLPerf benchmark suite; per the clarification above, the question therefore resolved as "No".

Possible Answer    Correct?    Final Crowd Forecast
Yes                            23.23%
No                 ✓           76.77%

Crowd Forecast Profile

Participation Level
  • Number of Forecasters: 27 (average for questions older than 6 months: 58)
  • Number of Forecasts: 80 (average for questions older than 6 months: 205)

Accuracy
  • [Chart comparing participants in this question vs. all forecasters ("better than average") not reproduced]

Most Accurate

Relative Brier Score (lower is better; see the note below this list; forecaster names not captured in this record):
  1. -0.207959
  2. -0.207959
  3. -0.143227
  4. -0.091537
  5. -0.072539
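
For context on the leaderboard metric: a Brier score measures the squared error between probabilistic forecasts and the realized outcome, and a relative Brier score is typically the forecaster's score minus the crowd's, so negative numbers mean the forecaster beat the crowd. The exact formula this platform uses (including any time-averaging over the question's life) is an assumption here; the sketch below shows only the basic two-outcome calculation, using this question's final crowd forecast and a hypothetical forecaster.

    def brier(p_yes, outcome_yes):
        """Two-outcome Brier score: squared error summed over the
        'Yes' and 'No' probabilities. Ranges from 0 (perfect) to 2."""
        o = 1.0 if outcome_yes else 0.0
        return (p_yes - o) ** 2 + ((1.0 - p_yes) - (1.0 - o)) ** 2

    # The question resolved "No". Compare a hypothetical forecaster
    # at 10% "Yes" with the final crowd forecast of 23.23% "Yes".
    forecaster = brier(0.10, outcome_yes=False)   # 0.02
    crowd = brier(0.2323, outcome_yes=False)      # ~0.1079
    print(forecaster - crowd)                     # ~ -0.088: beat the crowd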

Consensus Trend: [chart not reproduced]
