It turned out that, once again, I had the same idea as @ctsats (great work, Christos!), and I started by taking a look at the data available on their website https://phl.upr.edu/hwc/data before reading any comments.

I agree with Christos, though 2016 looks like an outlier to me, and 2014 somewhat as well; 2016 in particular seems like a clear outlier, if we can even say that with such a small sample. If we look only at the 7 years for which we have data after 2016, we get 4/7 years with at least 5 potentially habitable planets discovered, which would give us 57% (closer to what @DimaKlenchin wrote). Of course, having two years like that in the last 10 is great for exoplanet discovery, and maybe it can be repeated, but I would first want to understand more about the specific tools and discovery methods used at that time - these might have been low-hanging fruit tied to the introduction of a new or better method, or to a specific mission.
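To make the base-rate arithmetic explicit, here is a minimal sketch; the two numbers simply encode the 4-out-of-7 tally described above, not per-year figures pulled from the PHL catalogue:

```python
# Minimal base-rate sketch: fraction of recent years with at least
# 5 potentially habitable planet discoveries.
# The counts encode the 4-of-7 tally discussed above; they are
# illustrative, not verified per-year figures from the PHL catalogue.
years_considered = 7          # years with data after 2016
years_meeting_threshold = 4   # years with >= 5 potentially habitable discoveries

base_rate = years_meeting_threshold / years_considered
print(f"Base rate: {base_rate:.0%}")  # roughly 57%
```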


TOLIMAN, an Australian space telescope, is expected to be launched this year. Its mission is to detect potentially habitable worlds in our nearest stellar neighbourhood, the Alpha Centauri system. I could not find any precise information about the launch date beyond that it is planned for 2024, or late 2024, while other sources mention "no earlier than 2024". It might be worth following any announcements about this mission.

Other sources: https://www.spiralblue.space/post/bringing-ai-to-the-search-for-extraterrestrial-life

https://en.wikipedia.org/wiki/TOLIMAN

I will start with 60% and will try to get more insights.

ctsats made a comment:

Thanks Michał, also for taking up my request at the end of my own rationale to cross-check the data - we are in perfect agreement!

I don't think there is any disagreement that 2016 and 2014 are outliers in the total number of discovered exoplanets; that they are so in the number of potentially habitable ones, too (i.e. the actual quantity of interest for us here), is far less clear. And as there does not seem to be any meaningful correlation between the two numbers (see also @Tolga's comment above), I am just not sure that we can use arguments applicable to one time series to make decisions about the other (here the choice of the base rate).
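For what it's worth, the correlation check itself is straightforward to reproduce once the two yearly series are extracted from the PHL data; a rough sketch, with placeholder numbers (not the actual catalogue figures), could look like this:

```python
# Rough sketch of checking for a relationship between the two time series:
# total exoplanet discoveries per year vs. potentially habitable ones.
# The values below are placeholders; substitute the real yearly counts
# extracted from https://phl.upr.edu/hwc/data.
import numpy as np

total_per_year = np.array([150, 900, 1500, 500, 350, 300, 250])   # placeholder
habitable_per_year = np.array([3, 6, 2, 5, 1, 4, 5])              # placeholder

r = np.corrcoef(total_per_year, habitable_per_year)[0, 1]
print(f"Pearson r: {r:.2f}")  # values near 0 would support "no meaningful correlation"
```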

So, allow me to just comment on a very general and abstract level regarding base rate choice, by copy-pasting something I wrote recently in another forum and on a completely different occasion:

Let’s try to talk a little about the base rate here in some more depth, shall we?

Aspiring and beginner forecasters may read about base rates in books like “Superforecasting” and similar, and walk away imagining that, at least among all the other elements of a forecasting pipeline, the base rate is the easiest and most straightforward to grapple with and apply – something akin to plug-and-play.

The reality, of course, could hardly be more different, as everyone who has tried to apply meaningful base rates in more than just a couple of forecasts quickly discovers.

Often (but of course not always), the choice of which base rate to adopt (and which to ignore) is closely related to existing biases, preconceptions, and gut feelings. And while it is true that in the logical process displayed in our finalized and published rationales we always begin with “the” base rate, the reality of the actual historical process by which we arrive at our forecasts and rationales is often quite different (and seldom shared); in it, the base rate is oftentimes not the actual starting point, but is itself a choice made during the process (and obviously the fact that I am writing this here by no means implies that I consider myself invulnerable to such tendencies).

Just food for thought (and debate) 😉
