I am staying at 5% for now - I still see this as not that unlikely, given:
- 8 months until the question deadline,
- the presidential campaign, where both Biden and Trump may want to play the China card by acting against any possible threats to national security to score some points (the TikTok bill is one of the signals here),
- China may do something seriously threatening toward Taiwan, which could provoke more pressure on tech companies from both the US government and the general public to limit their cooperation with China-based scientists and their presence in China,
- China may use generative AI to affect the US elections, as Microsoft itself warned. @ctsats wrote a great article with good arguments for why generative AI is unlikely to affect elections, but I think the lessons from the Cambridge Analytica case show that the media narrative and public perception matter a lot more than the effect such actions may actually have.
Interesting relevant recent developments:
Microsoft’s Beijing-based research group published a new open source AI model on Tuesday, only to remove it from the internet hours later after the company realized that the model hadn’t gone through adequate safety testing. The team that published the model, which is comprised of China-based researchers in Microsoft Research Asia, said in a tweet on Tuesday that they “accidentally missed” the safety testing step that Microsoft requires before models can be published. Microsoft’s AI policies require that before any AI models can be published, they must be approved by the company’s Deployment Safety Board, which tests whether the models can carry out harmful tasks such as creating violent or disturbing content, according to an employee familiar with the process.
In a now-deleted blog post, the researchers behind the model, dubbed WizardLM-2, said that it could carry out tasks like generating text, suggesting code, translating between different languages, or solving some math problems. The model is an improved version of the first generation of WizardLM, an open source model published by the same group last year. While WizardLM-2 has now been deleted from open source repositories like GitHub and Hugging Face, it was online for several hours, meaning it’s possible some users were able to download the model before it was taken down.
The episode comes as Microsoft aims to navigate growing political scrutiny over the development of AI in China. The Biden Administration has imposed limits on which AI models and hardware made by US companies can be deployed in China, and has reportedly questioned Microsoft about how the company will ensure that its Beijing-based lab develops cutting-edge technology safely.
I don't think that this safety failure will have any significant consequences by itself, but it could be used as an argument against the lab in China if the pressure on Microsoft to close it rises.
The article "U.S., Microsoft elbow China's AI in Gulf" shows that the US government cooperated with Microsoft to sideline China in providing AI infrastructure to the U.A.E.
Tagging also @cmeinel @NukePirate @Perspectus @Tolga @NoUsernameSelected @PeterStamp
I think that with the current tensions and several groups of hackers with a history of successful attacks against Iran (not only Predatory Sparrow/Gonjeshke-Darande, but also "Uprising till Overthrow" and Edalat-e Ali, "Ali's Justice"), this is quite likely. Given the clarification I received - "This question is concerned with a cyberattack that damages or interrupts the normal functioning of critical infrastructure as mentioned in the clarification" - one could argue that even something not as spectacular as taking down the website of a bank (Financial services) or an Iranian government media website (Information technology) should still count. The imprecise definition of what counts as "critical infrastructure" in the resolution criteria seems to be a potential weak point here. I will ask the team for another clarification.
@cmeinel - as an expert on these matters, do you think that the cases I mentioned above should count toward a "yes" resolution?
Normally, freelance hackers don't manage to do much harm to critical infrastructure. Part of the reason is that they usually aren't that good, and they like to do something they can brag about - hence the defacing or shutting down of websites while leaving their calling cards. Even the worst instance of shutting down websites, the Code Red attacks, only caused minor inconveniences, for example to barge traffic on the Mississippi River when the locations of the constantly moving sandbars were unavailable. The sandbars move slowly enough, except in floods, that missing a day or two wasn't serious. I might be wrong on how our scorers might view this, but since Iran has no navigable rivers, this is just a hypothetical.
The serious hackers are with the U.S., Israel, and other governments. Another hacker axis is Russia, China, and North Korea, although I haven't heard of those nations cooperating. Also, many (most?) of Russia's top hackers fled rather than participate in the war on Ukraine. As with Stuxnet, allied nations can collaborate. The truly great attacks require teams of hackers and just plain luck, so merely having the intention of a cyberattack doesn't mean they can pull it off before a deadline. The wait for a breakthrough is long, and the damage-repair timeline is short.
Note also that research on the personalities of hackers by Bernadette Schell, using participants at the Defcon hacker convention, found that most hackers are fairly ordinary in terms of good attitudes and ethical practices. Research by Bugcrowd, which offers hacker challenges paid for by the US government and major corporations, found significant tendencies toward autism spectrum and ADHD, but no out-of-the-ordinary tendency toward sociopathy or psychopathy.