How many total unique AI systems will Baidu and Alibaba submit for the next round of the MLPerf benchmarking suite?

Started Jul 28, 2022 07:00PM UTC
Closed Nov 09, 2022 02:00PM UTC

The MLPerf benchmarking suite measures how fast systems can train models to a target quality metric. MLPerf has emerged as an industry standard used by companies to publicly show how fast their hardware can solve machine learning problems. A short summary of the current benchmarks and metrics, along with a detailed description of the motivation and guiding principles behind the suite, is available here.
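To make the headline metric concrete, here is a minimal sketch in Python of what "time to train to a target quality" means. This is purely illustrative, not MLCommons code: the toy model is a stand-in for a real training pipeline, and the 0.759 target mirrors MLPerf's published 75.9% top-1 accuracy target for ResNet-50.

```python
import time

def time_to_train(train_one_epoch, evaluate, target):
    """Wall-clock seconds of training until evaluate() reaches the target.

    This is the shape of the MLPerf Training metric: not throughput alone,
    but elapsed time until a fixed quality bar is cleared.
    """
    start = time.monotonic()
    while evaluate() < target:
        train_one_epoch()
    return time.monotonic() - start

class ToyModel:
    """Hypothetical stand-in whose accuracy improves a fixed step per epoch."""
    def __init__(self):
        self.accuracy = 0.0
    def train_one_epoch(self):
        self.accuracy += 0.1
    def evaluate(self):
        return self.accuracy

m = ToyModel()
secs = time_to_train(m.train_one_epoch, m.evaluate, target=0.759)
print(f"reached target in {secs:.6f} s")
```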

This question will resolve using results reported on the MLCommons website for the December 2022 round. We expect results for the next submission round to be available in early December 2022. All results to date are available here. Results from previous rounds can be viewed by selecting them from the “Other Rounds” dropdown. To count the total number of unique AI systems submitted by Baidu and Alibaba:
  • First, under the “Closed” tab of the spreadsheet, look for the rows where Alibaba or Baidu is listed under the “Submitter” column. 
  • Each row represents an AI system with a unique number in the “ID” column.
  • The total number of unique AI systems is the total number of rows where either Baidu or Alibaba is listed under the “Submitter” column; a code sketch of this counting rule follows this list.
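For anyone resolving this programmatically, here is a minimal sketch of that counting rule, assuming the “Closed” tab of the results spreadsheet has been exported to CSV. The “Submitter” and “ID” column names come from the steps above; the file name is hypothetical.

```python
import pandas as pd

# Hypothetical CSV export of the "Closed" tab of the MLPerf results
# spreadsheet published on the MLCommons website.
df = pd.read_csv("mlperf_training_v2.1_closed.csv")

# Keep only the rows submitted by Baidu or Alibaba.
rows = df[df["Submitter"].isin(["Baidu", "Alibaba"])]

# Each row carries a unique number in the "ID" column, so the row count
# is the number of systems; de-duplicating on ID guards against repeats.
total_systems = len(rows)
total_unique = rows["ID"].nunique()

print(total_systems, total_unique)
```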

Baidu made 4 submissions in v2.0 (Jun 2022) and 2 in v1.1 (Dec 2021). Alibaba made 4 submissions in v0.7 (Jul 2020) and 1 in v0.6 (Jun 2019).

Resolution Notes

https://mlcommons.org/en/training-normal-21/

Baidu had 1 submission and Alibaba had 0 in the v2.1 round of MLPerf results, so the total was 1 unique system and the question resolved as “Less than or equal to 4.”

Possible Answer              Correct?   Final Crowd Forecast
Less than or equal to 4      Yes        33%
Between 5 and 9, inclusive   No         57%
More than or equal to 10     No         10%

Crowd Forecast Profile

Participation Level
  Number of forecasters: 24 (average for questions older than 6 months: 59)
  Number of forecasts: 72 (average for questions older than 6 months: 208)

Accuracy
  Chart comparing participants in this question with the all-forecaster average (not reproduced).

Most Accurate (Relative Brier Score)
  1. -0.36201
  2. -0.260551
  3. -0.247653
  4. -0.152343
  5. -0.07945
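For context on these numbers: a relative Brier score compares a forecaster against the crowd, so negative values indicate beating the crowd. Below is a minimal sketch assuming the standard multi-category Brier score; the crowd baseline and the 80%-confident forecaster are illustrative assumptions, and the platform's exact definition (e.g., time-averaging over the question's life, which this omits) may differ.

```python
def brier(forecast, outcome):
    """Multi-category Brier score: sum of squared errors across answer bins."""
    return sum((f - o) ** 2 for f, o in zip(forecast, outcome))

# Final crowd forecast over the three bins, from the table above.
crowd = [0.33, 0.57, 0.10]
# Resolved outcome: "Less than or equal to 4" (Baidu 1 + Alibaba 0 = 1 system).
outcome = [1, 0, 0]

crowd_score = brier(crowd, outcome)  # (0.33-1)^2 + 0.57^2 + 0.10^2 = 0.7838

# A hypothetical forecaster who put 80% on the correct bin:
forecaster_score = brier([0.80, 0.15, 0.05], outcome)  # = 0.0650

# Relative Brier score: forecaster minus crowd; negative beats the crowd.
print(forecaster_score - crowd_score)  # approximately -0.719
```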

Consensus Trend: chart of the crowd forecast over time (not reproduced).
