michal_dubrawski

Michał Dubrawski

Relative Brier Score: -0.002631
Forecasts: 27
Upvotes: 91
Forecasting Activity
                                   Past Week  Past Month  Past Year  This Season  All Time
Forecasts                                  1           5         62           27       159
Comments                                   2          17        131           55       394
Questions Forecasted                       1           4          9            5        16
Upvotes on Comments By This User           2          15        256           91       699
New Badge
michal_dubrawski
earned a new badge:

Active Forecaster

New Prediction

I am reducing strongly based on my new (not published yet) analysis and my current understanding: https://www.infer-pub.com/comments/135233

I still need to check my understanding and my calculations, but unless the exoplanet discovery process has some annual variation in intensity, it looks to me more likely than not that no exoplanet discovered this year fits all the criteria required to be added to the HWC. In that case the problem is not their irregular updates; they simply may not have anything to report. Still, they should probably publish an update already with the results of the new model they mentioned to @ctsats.

I still need to check the calculations and may go back up in a day or two, but this probability reflects my current state of knowledge and understanding.

New Badge
michal_dubrawski
earned a new badge:

Star Commenter - Jun 2024

Earned for making 5+ comments in a month (rationales not included).
New Prediction

Reading "SITUATIONAL AWARENESS The Decade Ahead" report by Leopold Aschenbrenner, a former member of Open AI's Superalignment Team had an impact on me - in this context especially his warnings about how we need to make AI development a national security priority (I wonder if this signal will reach decision-makers responsible for National Security). You can find it here: https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf and I think this is a good video summary: https://youtu.be/om5KAKSSpNg?si=gWoPuA7TUtWzF9t9 

Here are some long quotes from the relevant part of the report starting from p. 89:

IIIb. Lock Down the Labs: Security for AGI

The nation’s leading AI labs treat security as an afterthought. Currently, they’re basically handing the key secrets for AGI to the CCP on a silver platter. Securing the AGI secrets and weights against the state-actor threat will be an immense effort, and we’re not on track.

On the current course, the leading Chinese AGI labs won’t be in Beijing or Shanghai—they’ll be in San Francisco and London. In a few years, it will be clear that the AGI secrets are the United States’ most important national defense secrets—deserving treatment on par with B-21 bomber or Columbia-class submarine blueprints, let alone the proverbial “nuclear secrets”—but today, we are treating them the way we would random SaaS software. At this rate, we’re basically just handing superintelligence to the CCP.

All the trillions we will invest, the mobilization of American industrial might, the efforts of our brightest minds—none of that matters if China or others can simply steal the model weights (all a finished AI model is, all AGI will be, is a large file on a computer) or key algorithmic secrets (the key technical breakthroughs necessary to build AGI).

America’s leading AI labs self-proclaim to be building AGI: they believe that the technology they are building will, before the decade is out, be the most powerful weapon America has ever built. But they do not treat it as such. They measure their security efforts against “random tech startups,” not “key national defense projects.” As the AGI race intensifies—as it becomes clear that superintelligence will be utterly decisive in international military competition—we will have to face the full force of foreign espionage. Currently, labs are barely able to defend against script kiddies, let alone have “North Korea-proof security,” let alone be ready to face the Chinese Ministry of State Security bringing its full force to bear.

And this won’t just matter years in the future. Sure, who cares if GPT-4 weights are stolen—what really matters in terms of weight security is that we can secure the AGI weights down the line, so we have a few years, you might say. (Though if we’re building AGI in 2027, we really have to get moving!) But the AI labs are developing the algorithmic secrets—the key technical breakthroughs, the blueprints so to speak—for the AGI right now (in particular, the RL/self-play/synthetic data/etc “next paradigm” after LLMs to get past the data wall). AGI-level security for algorithmic secrets is necessary years before AGI-level security for weights. These algorithmic breakthroughs will matter more than a 10x or 100x larger cluster in a few years—this is a much bigger deal than export controls on compute, which the USG has been (presciently!) intensely pursuing. Right now, you needn’t even mount a dramatic espionage operation to steal these secrets: just go to any SF party or look through the office windows.

Our failure today will be irreversible soon: in the next 12-24 months, we will leak key AGI breakthroughs to the CCP. It will be the national security establishment’s single greatest regret before the decade is out.

The preservation of the free world against the authoritarian states is on the line—and a healthy lead will be the necessary buffer that gives us margin to get AI safety right, too. The United States has an advantage in the AGI race. But we will give up this lead if we don’t get serious about security very soon. Getting on this, now, is maybe even the single most important thing we need to do today to ensure AGI goes well.
(...) 
The threat model

There are two key assets we must protect: model weights (especially as we get close to AGI, but which takes years of preparation and practice to get right) and algorithmic secrets (starting yesterday).

Model weights

An AI model is just a large file of numbers on a server. This can be stolen. All it takes an adversary to match your trillions of dollars and your smartest minds and your decades of work is to steal this file. (Imagine if the Nazis had gotten an exact duplicate of every atomic bomb made in Los Alamos.)

If we can’t keep model weights secure, we’re just building AGI for the CCP (and, given the current trajectory of AI lab security, even North Korea).

Even besides national competition, securing model weights is critical for preventing AI catastrophes as well. All of our hand-wringing and protective measures are for naught if a bad actor (say, a terrorist or rogue state) can just steal the model and do whatever they want with it, circumventing any safety layers. Whatever novel WMDs superintelligence could invent would rapidly proliferate to dozens of rogue states. Moreover, security is the first line of defense against uncontrolled or misaligned AI systems, too (how stupid would we feel if we failed to contain the rogue superintelligence because we didn’t build and test it in an air-gapped cluster first?).

Securing model weights doesn’t matter that much right now: stealing GPT-4 without the underlying recipe doesn’t do that much for the CCP. But it will really matter in a few years, once we have AGI, systems that are genuinely incredibly powerful. Perhaps the single scenario that most keeps me up at night is if China or another adversary is able to steal the automated-AI-researcher model weights on the cusp of an intelligence explosion. China could immediately use these to automate AI research themselves (even if they had previously been way behind)—and launch their own intelligence explosion. That’d be all they need to automate AI research, and build superintelligence. Any lead the US had would vanish.

Moreover, this would immediately put us in an existential race; any margin for ensuring superintelligence is safe would disappear. The CCP may well try to race through an intelligence explosion as fast as possible—even months of lead on superintelligence could mean a decisive military advantage—in the process skipping all the safety precautions any responsible US AGI effort would hope to take. We would also have to race through the intelligence explosion to avoid complete CCP dominance. Even if the US still manages to barely pull out ahead in the end, the loss of margin would mean having to run enormous risks on AI safety.

We’re miles away from sufficient security to protect weights today. Google DeepMind (perhaps the AI lab that has the best security of any of them, given Google infrastructure) at least straight-up admits this. Their Frontier Safety Framework outlines security levels 0, 1, 2, 3, and 4 (~1.5 being what you’d need to defend against well-resourced terrorist groups or cybercriminals, 3 being what you’d need to defend against the North Koreas of the world, and 4 being what you’d need to have even a shot of defending against priority efforts by the most capable state actors).[72] They admit to being at level 0 (only the most banal and basic measures). If we got AGI and superintelligence soon, we’d literally deliver it to terrorist groups and every crazy dictator out there!

[72] Based off of their claimed correspondence of their security levels to RAND’s weight security report’s L1-L5.

Critically, developing the infrastructure for weight security probably takes many years of lead times—if we think AGI in ~3-4 years is a real possibility and we need state-proof weight security then, we need to be launching the crash effort now. Securing weights will require innovations in hardware and radically different cluster design; and security at this level can’t be reached overnight, but requires cycles of iteration.

If we fail to prepare in time, our situation will be dire. We will be on the cusp of superintelligence, but years away from the security necessary. Our choice will be to press ahead, but directly deliver superintelligence to the CCP—with the existential race through the intelligence explosion that implies—or wait until the security crash program is complete, risking losing our lead.
Algorithmic secrets

While people are starting to appreciate (though not necessarily implement) the need for weight security, arguably even more important right now—and vastly underrated—is securing algorithmic secrets.

One way to think about this is that stealing the algorithmic secrets will be worth having a 10x or more larger cluster to the PRC:

• As discussed in Counting the OOMs, algorithmic progress is probably similarly as important as scaling up compute to AI progress. Given the baseline trend of ~0.5 OOMs of compute efficiency a year (+ additional algorithmic “unhobbling” gains on top), we should expect multiple OOMs-worth of algorithmic secrets between now and AGI. By default, I expect American labs to be years ahead; if they can defend their secrets, this could easily be worth 10x-100x compute.

  – (Note that we’re willing to incur American investors 100s of billions of dollars of costs by export controlling Nvidia chips—perhaps a 3x increase in compute cost for Chinese labs—but we’re leaking 3x algorithmic secrets all over the place!)

• Maybe even more importantly, we may be developing the key paradigm breakthroughs for AGI right now. As discussed previously, simply scaling up current models will hit a wall: the data wall. Even with way more compute, it won’t be possible to make a better model. The frontier AI labs are furiously at work at what comes next, from RL to synthetic data. They will probably figure out some crazy stuff—essentially, the “AlphaGo self-play”-equivalent for general intelligence. Their inventions will be as key as the invention of the LLM paradigm originally was a number of years ago, and they will be the key to building systems that go far beyond human-level. We still have an opportunity to deny China these key algorithmic breakthroughs, without which they’d be stuck at the data wall. But without better security in the next 12-24 months, we may well irreversibly supply China with these key AGI breakthroughs.

• It’s easy to underrate how important an edge algorithmic secrets will be—because up until ~a couple years ago, everything was published. The basic idea was out there: scale up Transformers on internet text. Many algorithmic details and efficiencies were out there: Chinchilla scaling laws, MoE, etc. Thus, open source models today are pretty good, and a bunch of companies have pretty good models (mostly depending on how much $$$ they raised and how big their clusters are). But this will likely change fairly dramatically in the next couple years. Basically all of frontier algorithmic progress happens at labs these days (academia is surprisingly irrelevant), and the leading labs have stopped publishing their advances. We should expect far more divergence ahead: between labs, between countries, and between the proprietary frontier and open source models. A few American labs will be way ahead—a moat worth 10x, 100x, or more, way more than, say, 7nm vs. 3nm chips—unless they instantly leak the algorithmic secrets.[73]


(...)
There’s a real mental dissonance on security at the leading AI labs. They full-throatedly claim to be building AGI this decade. They emphasize that American leadership on AGI will be decisive for US national security. They are reportedly planning $7T chip buildouts that only make sense if you really believe in AGI. And indeed, when you bring up security, they nod and acknowledge “of course, we’ll all be in a bunker” and smirk.

And yet the reality on security could not be more divorced from that. Whenever it comes time to make hard choices to prioritize security, startup attitudes and commercial interests prevail over the national interest. The national security advisor would have a mental breakdown if he understood the level of security at the nation’s leading AI labs.

There are secrets being developed right now, that can be used for every training run in the future and will be the key unlocks to AGI, that are protected by the security of a startup and will be worth hundreds of billions of dollars to the CCP.[81] The reality is that, a) in the next 12-24 months, we will develop the key algorithmic breakthroughs for AGI, and promptly leak them to the CCP, and b) we are not even on track for our weights to be secure against rogue actors like North Korea, let alone an all-out effort by China, by the time we build AGI. “Good security for a startup” simply is not even close to good enough, and we have very little time before the egregious damage to the national security of the United States becomes irreversible.

We’re developing the most powerful weapon mankind has ever created. The algorithmic secrets we are developing, right now, are literally the nation’s most important national defense secrets—the secrets that will be at the foundation of the US and her allies’ economic and military predominance by the end of the decade, the secrets that will determine whether we have the requisite lead to get AI safety right, the secrets that will determine the outcome of WWIII, the secrets that will determine the future of the free world. And yet AI lab security is probably worse than a random defense contractor making bolts.

It’s madness.

Basically nothing else we do—on national competition, and on AI safety—will matter if we don’t fix this, soon.

DimaKlenchin
made a comment:
Of course I might be wrong too. My thinking is that Microsoft will close/relocate only under unbearable political pressure, because if it does, it will lose the majority of its highly valued workers (note that most highly competitive Chinese workers are no longer interested in moving to the West). The move will eventually come when the dullards in Congress realize that US supremacy over China is basically over on every front. But I think it's too early for them to wake up and strong-arm Microsoft into closing its offices in China.
New Prediction

I am reducing; with my previous forecast I may have overreacted to the "SITUATIONAL AWARENESS: The Decade Ahead" report by Leopold Aschenbrenner. It looks like OpenAI made a move to make its operations more secure, with former U.S. Army general and National Security Agency director Paul M. Nakasone joining the OpenAI board's Safety and Security Committee.

“OpenAI occupies a unique role, facing cyber threats while pioneering transformative technology that could revolutionize how institutions combat them," Nakasone told the Post in a statement. "I am looking forward to supporting the company in safeguarding its innovations while leveraging them to benefit society at large.”

source: https://www.washingtonpost.com/technology/2024/06/13/openai-board-paul-nakasone-nsa/

I thought this might also lead to some changes in Microsoft's approach to security and to countering espionage threats. Recently, Microsoft's President Brad Smith testified before a House committee, a year after Chinese hackers infiltrated Microsoft's technology and penetrated government networks. He was questioned about Microsoft's presence in China and doing business there, and he also said:

that Microsoft had been shrinking its engineering presence in China and last month offered to relocate 700 or 800 employees who “were going to need to move out of China in order to keep their job.”

 Source: https://www.nytimes.com/2024/06/13/technology/microsoft-hearing-security.html

In that context, it is worth following this legislative initiative (currently, if I understand it well, it will not force Microsoft to close the lab, right?):

The United States on Friday (Jun 21) issued what it described as targeted draft rules for banning or requiring notification of certain US investments in artificial intelligence (AI) and other technology sectors in China that could threaten US national security.

https://www.businesstimes.com.sg/international/global/us-proposes-targeted-restrictions-ai-and-tech-investment-china

This may also be interesting for us:
https://breakingdefense.com/2024/06/us-falls-further-behind-in-ai-race-could-make-conflict-with-china-unwinnable-report/

As for other important factors, I think @efosong summarized them well here.

New Prediction
michal_dubrawski
made their 2nd forecast:

Probability  Answer
0% (0%)      Estonia
0% (0%)      Latvia
1% (0%)      Lithuania

Recent article about the defense of Estonia: https://www.bbc.com/news/articles/c722zxj0kyro

"The key part of this strategy of denial," says Brigadier Giles Harris, who commands UK forces here, "is to make sure we have enough forces built up in time to create more of a deterrence".
I point out that 1,200 troops does not sound like a lot when the one big lesson from the current conflict in Ukraine is that mass matters. Russia may have poor tactics and equipment but it can field such vastly superior numbers of men and ammunition that it is often able to overwhelm Ukraine’s defences.
"Your observations that one battle group isn’t enough would be a fair one a few years ago," he replies. "But our new plans see us reinforcing at brigade scale [3,000-5,000 troops] in advance of even a short, small-scale incursion [by Russia]."
"We have a formation at high readiness in the UK... to get heavier forces here in time... and it’s a complete step change from where we were before."

Recent article about Latvia's preparations:
https://www.lrt.lt/en/news-in-english/19/2286624/preparing-for-a-bad-scenario-on-latvia-s-russian-border-as-fortification-work-begins

In places where once not even border posts disturbed the landscape, earthmoving equipment is currently carving a trench down the boundary between Baltic NATO and Russian territory.
Riga will allocate some 303 million euros over the next five years to the major fortification project. “We plan to dig anti-tank ditches along the border, as you see here now, then strengthen them with 'dragon’s teeth,'" Kaspar Lazdins, the senior engineer officer of Latvia’s armed forces told Latvian media outlet LSM in early May. "But each region will have its own solutions. We plan to use not only state resources, but also attract civilian companies."

New Prediction
Why do you think you're right?

I am reducing slightly because I don't think that just hacking any Iranian government website will count toward a positive resolution. If it is connected to some important services for the population, then maybe. At the same time, there are quite a few possible targets which, when successfully attacked, would most likely trigger a "Yes" resolution. I did the following exercise: I provided Claude 3.5 Sonnet with this question's full description, including the clarifications, and asked it what events might trigger such a resolution. Here are its answers:

***GENERATED WITH CLAUDE 3.5 SONNET***
"Based on the given resolution criteria and question clarification, here are some examples of events that would likely fit the definitions and lead to a "Yes" resolution:

1. A cyberattack disrupts Iran's power grid, causing widespread blackouts across multiple cities for several hours or days.

2. Iranian oil refineries experience a sudden shutdown due to a malware attack, significantly impacting the country's oil production and distribution.

3. A cyberattack on Iran's water treatment facilities leads to temporary water shortages or contamination in major urban areas.

4. Iran's major airports face disruptions due to a cyberattack on air traffic control systems, causing flight cancellations and delays.

5. An attack on Iranian banking systems causes ATMs to malfunction and online banking services to become inaccessible for a significant period.

6. Iran's emergency services communication networks are compromised, hindering the ability of first responders to coordinate effectively.

7. A cyberattack on Iran's nuclear facilities causes temporary shutdowns or safety concerns, similar to the Stuxnet attack of 2010.

8. Major government websites and online services are taken offline by a distributed denial-of-service (DDoS) attack, disrupting public services.

9. Iran's telecommunications infrastructure is targeted, causing widespread internet and phone service outages.

10. A cyberattack on Iran's railway system leads to train delays, cancellations, or safety issues.

11. Iranian hospitals face disruptions in their IT systems, affecting patient care and medical record access.

12. An attack on Iran's defense systems temporarily compromises their ability to monitor or respond to potential threats.

These examples would likely meet the criteria of impacting critical infrastructure sectors and causing harm or negative outcomes beyond just data theft or intelligence gathering. Remember that for the question to be resolved as "Yes," such an event would need to be reported by the Iranian government, state media, or other reputable sources, or claimed by the group responsible for the attack." 

I expect that no. 8 above would likely not count, but the others show a range of possibilities - some of these attacks have already happened in the past.
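For anyone who wants to repeat this kind of exercise programmatically rather than in the chat interface, here is a minimal sketch using the Anthropic Python SDK. The model name, prompt wording, and the question_text placeholder are my own assumptions for illustration, not what I actually ran:

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

question_text = "..."  # paste the full question description and clarifications here

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": (
                "Here is a forecasting question with its resolution criteria "
                "and clarifications:\n\n" + question_text +
                "\n\nList concrete events that would likely resolve it as Yes."
            ),
        }
    ],
)

print(message.content[0].text)  # the model's list of candidate triggering events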

Iran is very active with cyberattacks on Israel and other countries in the region, as well as on the US and others: just a few days ago the head of Israel's National Cyber Directorate declared that Iran is a cyber threat to the entire world and continues to attack nations across the globe, including Israel, at a rate three times higher than before Hamas's Oct. 7 massacre. He also called for an international front against Iranian hackers - sources: https://www.jns.org/israeli-cyber-chief-iranian-attacks-have-tripled-since-oct-7/ and https://www.computerweekly.com/news/366589964/Israels-cyber-chief-calls-for-international-front-against-Iranian-hackers

Earlier this year the US put sanctions on Iranian individuals and organizations responsible for cyberattacks:
https://theconversation.com/us-sanctions-on-iranian-hackers-highlight-growing-concern-about-the-islamic-republics-cyberwarriors-228718

I wonder if Israel and the US will also use cyberattacks to repay Iran. And this makes me wonder: if Iran has so many hackers on its payroll, does it also use them for red-teaming its own networks? Does it care enough to spend those resources at a scale that would make its systems not easy prey for an average, non-state-backed hacker? If so (and I do not know whether that is the case), it could mean that Israel and the US would need to use something advanced, something not yet publicly known - a new kind of software or new exploits. This old article about the US cyberattacks on Iran in the Obama years https://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html makes me think that attacking Iran with new, advanced software or previously unknown exploits could lead to Iran and other hackers gaining new tools to attack people, companies and governments around the world.


Why might you be wrong?

I think I should take another look at the historical base rate for this kind of attack.

New Prediction

I am updating based on a better understanding of the criteria for including an exoplanet in the catalog, which makes me think that Gliese 12 b does not have as big a chance of being included in the catalog as I previously thought (I got the scientists' statement about its habitability wrong, based on popular press headlines and a not-so-careful reading of the article - not being a native English speaker did not help either).
Its radius (0.958 Earth radii) and mass (3.87 Earth masses) meet the criteria stated in the catalog (radius up to 3x Earth, mass up to 10x Earth). But the third criterion is being within the habitable zone.

My calculations here suggest that it may be too close to the star (the stellar flux I got is 1.63, which is higher than the Recent Venus (1 Me) value I got: 1.477955537271435).
In the catalog there is one exoplanet with a slightly higher stellar flux value of 1.64 (Kepler-1606 b). But I ran the calculation for Kepler-1606 b, and it gave the result that it is within the habitable zone based on the same formula:

Please enter the star's luminosity (Lsun)
0.527
Please enter the object's semi-major axis (AU)
0.6446
Please enter the star's temperature (K)
5462
This object's Seff: 1.2683234227616067
*** This systems HZ stats: ***
 DISTANCES IN AU
Recent Venus (1 Me): 0.5566072962886887
Runaway Greenhouse (1 Me): 0.7029548104271083
Maximum Greenhouse (1 Me): 1.2511799238840076
Early Mars (1 Me): 1.3196820267352714
 STELLAR FLUX (EFFECTIVE)
Recent Venus (1 Me): 1.7010333377959643
Runaway Greenhouse (1 Me): 1.0664875766166768
Maximum Greenhouse (1 Me): 0.33664415687320387
Early Mars (1 Me): 0.3026021514032103

This object is in the Optimistic Habitable Zone (Between Recent Venus and Runaway Greenhouse) 

So the fact that there is one catalog entry with a higher stellar flux value doesn't mean much, as it is the combination of parameters that matters for being in the habitable zone, apparently.
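To make the logic above concrete, here is a minimal sketch of the flux side of the calculation, assuming only the inverse-square relation; the habitable-zone boundary fluxes themselves come from the temperature-dependent fits of Kopparapu et al. (2013), which I take as given inputs here rather than recomputing:

import math

def stellar_flux(luminosity_lsun, semi_major_axis_au):
    # Effective stellar flux at the planet, in Earth units: Seff = L / a^2
    return luminosity_lsun / semi_major_axis_au ** 2

def boundary_distance_au(luminosity_lsun, boundary_flux):
    # Distance (AU) at which the star delivers the given boundary flux: d = sqrt(L / Seff)
    return math.sqrt(luminosity_lsun / boundary_flux)

def classify(seff_planet, recent_venus_flux, early_mars_flux):
    # Optimistic habitable zone = between the Recent Venus (inner) and Early Mars (outer) flux limits
    if early_mars_flux <= seff_planet <= recent_venus_flux:
        return "inside the optimistic habitable zone"
    return "too close to the star" if seff_planet > recent_venus_flux else "too far from the star"

# Kepler-1606 b, using the same inputs as in the run above
L, a = 0.527, 0.6446
seff = stellar_flux(L, a)  # ~1.268, matching the output above
rv, em = 1.7010333377959643, 0.3026021514032103  # boundary fluxes taken from the run above
print(f"Seff = {seff:.4f} -> {classify(seff, rv, em)}")
print(f"Recent Venus distance = {boundary_distance_au(L, rv):.4f} AU")  # ~0.5566 AU

On the same logic, Gliese 12 b, with Seff of about 1.63 against a Recent Venus limit of about 1.48, comes out as too close to the star, which is what my calculation above suggests.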

However, it is stated in the article announcing the Gliese 12 b discovery that:
"Gliese 12 b occurs just inward of the habitable zone as defined by Kopparapu et al. (2013) due to the predicted efficiency of water-loss around M-dwarfs. However, Gliese 12 b may well be within the recent Venus limit of its host star (Fig. 7) with an insolation flux of F = 1.6 ± 0.2 S⊕, less than 1σ away from the 1.5 S⊕ limit calculated from Kopparapu et al. (2013). This limit is regarded in that analysis as an optimistic habitable zone due to Venus’s potential for habitability in the past history of the Solar system. Thus the available evidence does not rule out that Gliese 12 b is potentially habitable."



News articles more often interpreted that as "potentially habitable", for example: https://www.advancedsciencenews.com/gliese-12-b-an-exo-venus-with-earth-like-temperatures/ or here: https://www.astronomy.com/science/astronomers-discover-gliese-12-b-a-potentially-habitable-exoplanet/

Maybe it will still be included in the optimistic sample, but currently I think it is maybe 38% likely. So maybe the reason behind the lack of an update to the catalog is that there was nothing to report?

The lack of an update after this much time also makes it slightly less likely that Gliese 12 b fits the optimistic and generous interpretation of the current data which the catalog's authors use. But the lack of an update between its discovery and now is not as strong a signal as it may seem if we look at last year's updates.

I looked again at the database of updates I created based on archived versions of the catalog website: last year there was no update at least between 2023-01-05 and 2023-11-22, and the catalog was likely only updated again on 2024-01-08, as this archived page from 2023-12-31 states: "The PHL will update its Habitable Exoplanets Catalog on January 8, 2024". So it looks like one update last year. The reason might be that they were working on changing their website and project name from Habitable Exoplanets Catalog to Habitable Worlds Catalog, but I am not sure. Irregular updates certainly introduce the risk of an update happening too late for the resolution, as discussed by @PeterStamp and @ctsats. This year they were quite active until March, with four updates, of which only one counts towards our resolution:
2024-01-08
2024-01-25
2024-02-01
2024-03-21

I also wonder whether this change of project name means that they plan to go back to reporting potentially habitable exomoons as well. But the question is about exoplanets, so this should not affect the count for the resolution.


Also, it is likely not that important for us (I plan to check the time between discovery as a candidate and confirmation), but there is a new candidate exoplanet with some potential to be included in the catalog if confirmed: HD 48948 d https://academic.oup.com/mnras/article/531/4/4464/7696217?login=false
I don't know how long the confirmation will take, but since the system is relatively close to Earth it might be a valuable target for further observations - as the authors of the paper state: "Given the close proximity of this planetary system, situated at a distance of 16.8 parsecs, HD 48498 emerges as a promising target (closest super-Earth around FGK stars) for future high-contrast direct imaging and high-resolution spectroscopic studies."

Unfortunately, of the three candidates in this system only the planet HD 48498 d is stated as being in the habitable zone ("The outermost planet resides within the (temperate) habitable zone"), and its mass may or may not be too big - 10.59 ± 1.00 M⊕ (with 10 Earth masses being the limit stated in the catalog). Currently, the highest mass values among included planets are ≥ 9.44 for HD 216520 c, < 25.3 for LP 890-9 c, < 35.0 for Kepler-62 f and < 36.0 for Kepler-62 e. So maybe the optimistic sample is more generous than I thought, with the mass data of the last three exoplanets interpreted as potentially lower than 10 Earth masses. I think it is worth running habitability calculations on more exoplanets from the catalog to see if there are other cases with generous interpretations of this criterion.
I will also try to visualize these update dates on a timeline and maybe a heatmap (by month, to see whether there is some end-of-year or beginning-of-year update pattern) - a rough sketch of the idea is below.
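A minimal sketch of what I have in mind, using only the update dates already mentioned in this thread as sample data (the full list would come from my database of archived catalog snapshots), assuming matplotlib is available:

from collections import Counter
from datetime import date
import matplotlib.pyplot as plt

# Catalog updates mentioned in this thread: 2023-01-05 plus the four 2024 updates
updates = [
    date(2023, 1, 5),
    date(2024, 1, 8),
    date(2024, 1, 25),
    date(2024, 2, 1),
    date(2024, 3, 21),
]

# Simple timeline: one dot per update
fig, ax = plt.subplots(figsize=(8, 2))
ax.plot(updates, [1] * len(updates), "o")
ax.set_yticks([])  # the y-axis carries no information
ax.set_title("Habitable Worlds Catalog updates (from archived snapshots)")
fig.autofmt_xdate()
plt.show()

# Counting updates per calendar month would be the first step towards a monthly heatmap
per_month = Counter(d.month for d in updates)
print(per_month)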

Tagging those who might be interested in reading this despite my style of writing 😉
@cmeinel @ctsats  @PeterStamp @Perspectus @404_NOT_FOUND @guyrecord @sai_39 @ansantillan @Jim @NoUsernameSelected @JonathanMann @DimaKlenchin 

ctsats
made a comment:

Thanks Michał.

I won't (can't...) comment on the calculations. But your Internet Archive links have demonstrated unambiguously that practically the total number of worlds remained stable at 63 for almost all of 2023 (from Jan 5 to Dec 31), and that there was no update in 2023 after Jan 5. Thus, in the 3 months that followed (Jan-Mar 2024), we have gone from 63 to 70...

New Badge
michal_dubrawski
earned a new badge:

Active Forecaster

New Badge
michal_dubrawski
earned a new badge:

Star Commenter - May 2024

Earned for making 5+ comments in a month (rationales not included).