That assumes you are sampling uniformly, and not in already saturated clusters. The doubling time is also not a known constant: it depends on the actions taken, among other things, and uncertainty in it will broaden the distribution.
Obviously, all this is basic statistics and should be known to epidemiologists, who hopefully have some input into policy.
For large N_obs this will probably matter less, as you're going to find the severe cases anyway, but the uncertainty is significant in the beginning stages, and it is unfortunately these early stages where actions have the most impact.
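To make the broadening concrete, here is a quick Monte Carlo sketch (all the numbers are hypothetical, chosen only for illustration) of how uncertainty in the doubling time alone spreads out a 30-day projection:

```python
import random

random.seed(42)

# Hypothetical scenario: 100 confirmed cases today, and a doubling time
# we only know to be somewhere between 3 and 7 days.
initial_cases = 100
horizon_days = 30

projections = []
for _ in range(10_000):
    doubling_time = random.uniform(3.0, 7.0)  # the uncertain parameter
    projections.append(initial_cases * 2 ** (horizon_days / doubling_time))

projections.sort()
low, high = projections[500], projections[9500]  # central 90% interval
print(f"90% interval after {horizon_days} days: {low:,.0f} .. {high:,.0f} cases")
```

Even with everything else held fixed, the interval spans more than an order of magnitude, which is the point: early on, point projections are close to meaningless without error bars.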
As long as growth remains linear on an exponential (log-scale) plot, you are nowhere near the saturation point. Once about half the population has been infected, daily new cases peak and the cumulative count grows roughly linearly. In Italy, growth still seems to be exponential.
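A toy logistic model (with arbitrary illustrative parameters, not fitted to any real outbreak) shows the behavior: growth looks exponential early on, and daily new cases peak around the halfway mark:

```python
# Discrete logistic growth: exponential at first, with daily new cases
# peaking when roughly half the population has been infected.
population = 1_000_000
infected = 100.0
growth_rate = 0.3  # per day, an arbitrary illustrative value

daily_new = []
history = []
for day in range(120):
    new = growth_rate * infected * (1 - infected / population)
    infected += new
    daily_new.append(new)
    history.append(infected)

peak_day = daily_new.index(max(daily_new))
share_at_peak = history[peak_day] / population
print(f"New cases peak on day {peak_day}, "
      f"with {share_at_peak:.0%} of the population infected")
```

While the infected share is small, the `(1 - infected / population)` factor is close to 1 and the curve is indistinguishable from pure exponential growth, which is why "still exponential" implies "nowhere near saturation".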
Italy has already passed the number of simultaneous cases that its healthcare system can handle. I don't believe they really want to find out where the saturation point lies. That could mean hundreds of thousands, potentially millions, dead.
What is important is that Dyson is quoted, but I'm not aware of any published work by him on the problem (by "published" I don't mean peer-reviewed; arXiv etc. would be fine too). This suggests he never really got his hands on the problem as a working scientist.
I'd rather remember Dyson for his significant contributions to physics than for something he did not contribute to.
If the wealth created in a transaction is proportional to the size of the transaction, that sounds like it would increase the Gini coefficient. Most new wealth is created by transactions involving those with the most wealth, and redistributed to them. Probably simple to check in the simulation. Whether this is any good as a model for the real world is then a different question...
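A rough sketch of such a check (the model and all parameters here are my own guesses, not the simulation under discussion): random pairs trade, transaction size scales with the poorer party's wealth, and each trade creates new wealth proportional to its size, split between the participants.

```python
import random

random.seed(0)

def gini(values):
    """Gini coefficient via the sorted mean-absolute-difference formula."""
    xs = sorted(values)
    n = len(xs)
    cum = 0.0
    for i, x in enumerate(xs, start=1):
        cum += (2 * i - n - 1) * x
    return cum / (n * sum(xs))

# Toy model: everyone starts equal, so the Gini coefficient starts at 0.
agents = [100.0] * 1000
start = gini(agents)

for _ in range(100_000):
    a, b = random.sample(range(len(agents)), 2)
    size = 0.1 * min(agents[a], agents[b])   # size scales with wealth
    created = 0.05 * size                    # new wealth, proportional to size
    agents[a] += created / 2
    agents[b] += created / 2

print(f"Gini before: {start:.3f}, after: {gini(agents):.3f}")
```

Because wealthier pairs create (and keep) more wealth per trade, the process is multiplicative and inequality drifts upward from zero, consistent with the intuition above, though with these mild parameters the drift is slow.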
Fortran has language-level support for things important in numerical scientific computing (complex numbers, multidimensional arrays, etc.), and it has had them since its beginnings in the 1950s and 1960s.
That convenience is not really matched by C or C++, where similar features were added much later via language extensions or third-party libraries, resulting in more complicated usage, fragmentation, and interoperability problems. Newer languages have similar issues, so for Fortran's user base there's a lack of viable competitors.
Also, numerical code can be trickier to write than you might think. Rounding errors and the other treacheries of the floating-point approximation to the 'real' numbers can eat you alive.
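Two classic examples of those treacheries, in Python for brevity (the same effects occur in any language using IEEE 754 doubles): 0.1 has no exact binary representation, and naive left-to-right summation silently drops low-order bits that a compensated sum like `math.fsum` preserves.

```python
import math

# 0.1 is not exactly representable in binary floating point.
print(0.1 + 0.2 == 0.3)            # False
print(sum([0.1] * 10) == 1.0)      # False

# Naive summation loses the 1.0 entirely: 1e16 + 1.0 rounds back to 1e16.
values = [1e16, 1.0, -1e16]
print(sum(values))                  # 0.0
print(math.fsum(values))            # 1.0 (correctly rounded summation)
```

None of this is a Python quirk; it is exactly the kind of thing battle-tested numerical libraries have already been engineered around.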
If you want to do common numeric operations, there may well be Fortran code that is battle-tested and performance-tuned, and it is usually not hard to call from C, Java, Python, or some other language.
Fortran’s numerics model is actually somewhat weaker than C’s; if precise rounding behavior is your main concern C (while still not ideal) is a better option.
The reported confirmed counts have grown by a factor of ~1.5 every day, fitting reasonably well with exponential growth. If this rate continued, in about 35 days it would reach the world population. Obviously, in the real world the growth flattens out far earlier, from natural causes and especially because of attempts to contain it.
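The back-of-the-envelope arithmetic, with an assumed starting count of 5,000 cases (my placeholder, not a figure from the source):

```python
import math

# Days until `cases` reaches `world` at a constant daily growth factor:
# solve cases * factor**days == world  =>  days = log(world/cases)/log(factor)
cases = 5_000      # assumed starting count, for illustration only
world = 7.8e9      # approximate world population
factor = 1.5       # daily growth factor

days = math.log(world / cases) / math.log(factor)
print(f"about {days:.0f} days")   # ~35
```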
Comparing the total number of cases or deaths to flu outbreaks from previous years does not make much sense at this point, when the numbers are growing rapidly, but in a month they will probably be easier to compare. Neither does comparing average counts per week, because the growth is not linear. Data on the progression of flu (or any other disease) outbreaks exists, and comparing growth rates would put this one in better perspective. Regardless of armchair analysis, the WHO declaration means this is something requiring unusual action, which flu is not.
Something else to consider is that flu outbreaks aren't typically taken very seriously by most people, while the widespread fear and media coverage of 2019-nCoV all but guarantee more serious responses from both the general public and governments.
Consider the recommendations to thoroughly wash one's hands during flu season as a precaution against getting the flu. How many people take that advice seriously? You can bet there'll be a lot more hand washing all around once this coronavirus hits people's local areas, not to mention mask wearing (though unfortunately most people will be wearing those dinky surgical masks, which are of dubious effectiveness) and people isolating themselves.
On the other hand, we have flu vaccines available well in advance of flu outbreaks (though not nearly enough people take them), while we have no publicly available vaccine for 2019-nCoV. That's another confounding variable that makes the two hard to compare.
The approach of doing the transition slowly over many years was maybe a mistake here, and another thing that made it harder seems to have been the lack of support from the top of the project.
I ported two projects with ~200,000 Python SLOC (about the same size as Mercurial according to sloccount) back in the early 3.x days. Doing this via more or less flag-day conversions within a few months, first converting the codebases to a 2to3-able subset and later dropping 2to3 in favor of a common Python 2/3 dialect using six, was not very painful in the end.
> I ported two projects with ~200,000 Python SLOC (about the same size as Mercurial according to sloccount) back in the early 3.x days. Doing this via more or less flag-day conversions within a few months, first converting the codebases to a 2to3-able subset and later dropping 2to3 in favor of a common Python 2/3 dialect using six, was not very painful in the end.
Sounds like you used the same method, just over a smaller timeframe: convert to a common 2/3 subset, then drop Python 2 at some later point.
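For readers who never did such a port, the "common 2/3 dialect" idea can be sketched without six itself: one shim module defines the names that differ between versions, and the rest of the codebase is written against the shim so it runs unchanged on both interpreters (this is a hand-rolled miniature of what six provides, not its actual API).

```python
import sys

# Minimal 2/3 compatibility shim, in the style of six.
PY2 = sys.version_info[0] == 2

if PY2:
    text_type = unicode            # noqa: F821 -- only defined on Python 2
    string_types = (str, unicode)  # noqa: F821
else:
    text_type = str
    string_types = (str,)

def ensure_text(value, encoding="utf-8"):
    """Return `value` as text on both major versions."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return text_type(value)

print(ensure_text(b"flag-day"))  # -> flag-day
```

Once everything funnels through such a layer, dropping Python 2 later is mostly a matter of deleting the `PY2` branches.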