
Ignoring the presence of "(*, *)" in the Fortran program -- which is the equivalent of all of the format string and type complexities in the C, Python, and Julia programs -- is a serious deficiency of the argument here. As soon as you start writing something more complicated than Hello World: The Sequel, those complexities become something that language learners need to engage with.


Agreed. I'm a (very) long time Fortran user, and a more recent C, Python, and Julia user. I'd agree that scanf is a pain. The chomping/parsing bit can be, in all cases, hidden behind a convenience function, which if you like, you can name "read".

FWIW, I've been using Fortran since the mid 1980s, C from the mid 90s, Python from the mid 00s, and Julia since 2012 or so. All of these languages are fine for the example provided, and all are understandable. Each of these languages requires some minimal boilerplate around the core algorithm to make it work. Specifics of input and output mechanisms would be language/dialect specific.

Really, each of them is fine for this trivial example. It's when you start getting into serious translation of the underlying math into something the language will understand that the real differences stand out.

That's when both Fortran and Julia stand out. Both are designed to make this process as painless as possible. Fortran was designed and built almost 70 years ago. IO was very painful then, and while the language itself is very simple and easy to understand, the IO system is still quite similar to the early days.

The "(*, *)" notation combines units (file descriptors) and formatting. It is no different in intent from scanf and similar functions.

Why Fortran and Julia are so easy to use comes down to the expressiveness of the language: the simple mechanism of writing code that looks close to the formulae in question. There is no fiddling with array indexing that doesn't start where the formula indicates. No allocating arrays of arrays to build matrices.

That is, it is simple to go from conception to working code very quickly. In a manner that really does look like the way you would do it on paper.

That's why Fortran is still in use, and growing. And why Julia is catching on quickly.


I don't think 1-based indexing is that useful. A lot of mathematical formulae start from 0 too, and 1-indexing makes those more awkward.

Although I agree that some languages like Julia and say MATLAB feel more fluent to express mathematics.


Luckily, Fortran allows easily changing from one to the other. You simply define arrays as A(1:10) or A(0:9) and they will index from 1 or 0.

You can even do arbitrary indexing: fixed, like A(-3:3), or dynamic, like B(n1:n2). This may sound useless, but it's actually an excellent feature when writing e.g. convolution kernels.

I don't know of any other languages beyond Fortran and Julia that have arbitrary array indexing as a core feature (you can fake it in C in two lines I guess).
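For what it's worth, the trick is easy enough to sketch in other languages too. Here's a throwaway Python class (`OffsetList` is a made-up name, not a real library) that mimics a Fortran A(-3:3) declaration:

```python
class OffsetList:
    """Minimal sketch of a 1-D array with an arbitrary lower bound,
    roughly like Fortran's A(-3:3) declaration."""

    def __init__(self, lo, hi, fill=0):
        self.lo = lo
        self.data = [fill] * (hi - lo + 1)

    def __getitem__(self, i):
        # Shift the logical index down to the 0-based storage index.
        return self.data[i - self.lo]

    def __setitem__(self, i, value):
        self.data[i - self.lo] = value

# Roughly the equivalent of Fortran's A(-3:3)
a = OffsetList(-3, 3)
a[-3] = 10
a[3] = 20
print(a[-3], a[3])  # -> 10 20
```

(The C "two lines" version the parent alludes to is the same idea: keep a pointer offset from the real storage.)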


Wow that's incredible and really changes my opinion on Julia's indexing.

It is small but really makes things more elegant. I should definitely try Julia. I've spent a lot of time calculating awkward array index conversion formulae in my head, and this could make things a lot simpler, especially for weird cases.

I was under the impression that Fortran's only redeeming benefit was that it compiled well for numeric computing routines.


Again, to be clear, Julia does not have arbitrary indexing as in Fortran. It has an external package that mimics Fortran's arbitrary indexing.


That's not a meaningful distinction. Julia is a powerful enough language that OffsetArrays don't need to be built in to work perfectly. Good languages don't need everything built in.


The comment above states: "...Julia that have arbitrary array indexing as a core feature...". Is an external package written in Julia considered a "core" feature of the Julia language? Why does the Julia community always twist the facts? Why can't they accept the facts as they are? It would not reduce the value of their efforts, I guarantee you—decency and honesty always prevail.


Julia's docs say that "Julia supports arrays with arbitrary indices": https://docs.julialang.org/en/v1/devdocs/offset-arrays/

So let's break that down.

The base standard library and "idiomatic" Julia are both largely written without the assumption of 1-based indexing, instead dispatching to what you can think of as an array-oriented DSL, which is this interface: https://docs.julialang.org/en/v1/manual/interfaces/#man-inte...

It is very normal in Julia to implement such interfaces to do what you want. Julia uses these types of internal interfaces with multimethods in lieu of a lot of compiler intrinsics.

So I would say Julia and Fortran have different implementations of arbitrary indexing.

They are both oriented around core design features of each respective language.


Yes, Julia does support arbitrary indexing because base functions are written in terms of `firstindex`, `eachindex` and `lastindex` etc.

One can see it in the implementation of `filter!` for example https://github.com/JuliaLang/julia/blob/ab4e036b9209967b4273... :


External Julia packages can implement core features. For another example, look at StaticArrays. Fortran is a brittle old language where everything supported must be built into the language. Julia isn't.


Again, twisting facts by some Julia novice about a topic they have no familiarity with. Fortran does not require features to be built in. Look at all the intrinsic Fortran modules. But arbitrary array indexing IS built into the language. Unlike Julia, you do not have to rely on an external package for it. Perhaps the Julia community's relatively young age has made it so arrogant, Machiavellian, and fact-twisting about other languages to prove their presence here and there. Other programming communities are old enough to put decency and honesty in their core values.


I also find it interesting that you assume I'm a Julia novice. I use Julia for my job, and have a few dozen commits to Julia (and several to packages)


Julia's compiler is made to be extendable. GPUCompiler.jl, which adds the .ptx compilation output for example, is a package (https://github.com/JuliaGPU/GPUCompiler.jl). The package manager of Julia itself... is an external package (https://github.com/JuliaLang/Pkg.jl). The built-in SuiteSparse usage? That's a package too (https://github.com/JuliaLang/SuiteSparse.jl). It's fairly arbitrary what is "external" and "internal" in a language that allows that kind of extendability. Literally the only thing that makes these packages a standard library is that they are built into and shipped with the standard system image. Do you want to make your own distribution of Julia that changes what the "internal" packages are? Here's a tutorial that shows how to add plotting to the system image (https://julialang.github.io/PackageCompiler.jl/dev/examples/...). You could set up a binary server for that, and now time to first plot is 0.4 seconds.

Julia's arrays system is built so that most arrays that are used are not the simple Base.Array. Instead Julia has an AbstractArray interface definition (https://docs.julialang.org/en/v1/manual/interfaces/#man-inte...) which the Base.Array conforms to, and many effectively standard library packages like StaticArrays.jl, OffsetArrays.jl, etc. conform to, and thus they can be used in any other Julia package, like the differential equation solvers, solving nonlinear systems, optimization libraries, etc. There is a higher chance that packages depend on these packages than that they do not. They are only not part of the Julia distribution because the core idea is to move everything possible out to packages. There's not only a plan to make SuiteSparse and sparse matrix support be a package in 2.0, but also ideas about making the rest of linear algebra and arrays themselves into packages where Julia just defines a memory buffer intrinsic (with likely the Arrays.jl package still shipped with the default image). At that point, are arrays not built into the language? I can understand using such a narrow definition for systems like Fortran or C where the standard library is essentially a fixed concept, but that just does not make sense with Julia. It's inherently fuzzy.


I wasn't sure. I googled it and it was in the Julia docs but I didn't look at the details to see if it is first-class.

The fortran version is super neat, especially in conjunction with stack allocated multidimensional variable length arrays.


Much of the Julia language is written in Julia. So the 'external package' is just an extension you install/use if you wish. A distinction without a difference.


Ada and many BASICs also permit using arbitrary ranges. Ada even takes it further, any range over a discrete type can work including characters and arbitrary enumerations.


Ada has adopted it from Modula-2/Pascal.


A lot of formulae start from 1 too. And in linear algebra, most of them do.

It's like driving on the left side of the road instead of the right side of the road. Inconsistency is what makes it painful.


I am convinced a good fraction of the off by one errors in C like languages comes from 0 indexing. The ith element is found after traversing i+1 elements and that fact leads to mistakes all the time. With 1 indexing the numbers line up.
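A tiny Python illustration of the mismatch the parent describes (the names here are throwaway):

```python
xs = ["a", "b", "c", "d"]

n = 3
# With 0-based indexing, "the first n elements" and "element number n"
# use n in two different ways: one needs no adjustment, the other needs -1.
first_n = xs[:n]      # the first 3 elements
third = xs[n - 1]     # the 3rd element: note the off-by-one adjustment

assert first_n == ["a", "b", "c"]
assert third == "c"
```

With 1-based indexing both uses of n would line up, which is the point being made.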


I agree. There's far less of a cognitive load for developers associated with representing an expression as written, if you don't need to mess with indices, in order to accommodate the language and its vagaries.

One should not need to adjust the way one expresses an algorithm, because of opinionated language design decisions. Put another way, languages designed for scientific/engineering computing won't have a hard/fixed opinion about things that shouldn't be worried about.


Indeed, polynomials, for instance, start with a power of zero. At the same time, there is a reason why the natural numbers begin with 1, and, in practical terms, there is a certain convenience in the count (size) being the same as the last index.


The natural numbers don't start with 1... The convention is inconsistent, but for many mathematicians, especially including logicians, they start with 0. It is best not to use the term "natural numbers" at all.

By the way, your point about polynomials works in favour of the convention that 0 is a natural number.


Try thinking of being the zeroth person in a line. Or your child being in the zeroth year of her life.


How many lions are in my room at the moment? Hopefully zero.


But zero sheep is not a flock.

(Besides, there's uncountably many things you have zero of in your room.)


> But zero sheep is not a flock.

So one isn't a natural number?

> (Besides, there's uncountably many things you have zero of in your room.)

;) Exactly. Zero is the most natural number.


So, on your scale, which is the first natural number?


> Why Fortran and Julia are so easy to use comes from the expressiveness of the language, the simple mechanism of writing code that looks close to the formulae in question

Why would that not apply in other languages?


Some other languages have attempts to emulate it (like some matrix classes in C++, and numpy), but it is not that great. You can make it look close in some places at the cost of more complexity elsewhere. It also feels bolted on, with more or less opaque data structures manipulated by ad hoc functions. There are always strange edge cases where the library fights against the language. The TKR (type, kind, rank) concepts are at the very core of the variable model in Fortran, and n-dimensional matrices are first-class variables just like scalars, and much more natural to use. Elemental functions are very useful and trivial to implement in Fortran. Coupled with the great array syntax and easy user-defined operators, it makes writing complex numerical expressions very satisfying and elegant. Granted, some other parts of Fortran are far from elegant.
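For comparison, numpy's ufuncs give a rough analogue of the elemental style mentioned above: a scalar formula applied element-wise without changing how it is written. A small sketch, assuming numpy is installed (`logistic` is a made-up example function):

```python
import numpy as np

def logistic(x):
    # Written as the scalar formula; numpy applies it element-wise,
    # in the spirit of a Fortran elemental function.
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-1.0, 0.0, 1.0])
y = logistic(x)               # works for scalars and arrays alike

assert y.shape == (3,)
assert abs(y[1] - 0.5) < 1e-12
```

The difference is that in Fortran this behavior is declared in the language (the `elemental` attribute), while here it falls out of numpy's array type.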

I am not saying it makes other languages unusable (I use numpy arrays quite frequently; Python and Fortran are very complementary), but definitely less natural and elegant.


With Python+Numpy, my issues are or were:

  1. The matrix multiplication syntax used to be bad. But it became possible at some point to write A @ B for the product of A with B. And that seems fine to me.

  2. The code seems more natural when you write `sin(array([1,2,3]))` instead of `np.sin(np.array([1,2,3]))` -- which you can achieve by running `from numpy import *`, but doing this is discouraged by a lot of people.

  3. The convention in Linear Algebra is that indexing starts from 1, not 0. This sometimes makes translating from a language which follows the more usual convention difficult.
Overall though, I think only 3 is unfixable, but this is usually a minor problem. Usually, Python+Numpy doesn't seem any worse than Matlab syntax-wise.
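Points 1 and 2 can be seen side by side in a short snippet (a sketch, assuming numpy is installed; `A` and `B` are throwaway names):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

# Point 1: since Python 3.5 / NumPy 1.10, @ means matrix multiplication,
# so the product reads like the formula instead of np.dot(A, B).
C = A @ B

# Point 2: the np. prefix; `from numpy import *` would shorten this to
# sin(array([1, 2, 3])) but pollutes the namespace, hence the advice against it.
s = np.sin(np.array([1, 2, 3]))

assert C[0, 0] == 1.0 * 5.0 + 2.0 * 7.0
assert s.shape == (3,)
```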

I find the experience of using Pandas to be much worse. Sympy also feels abrasive because of needing to write `m12 = var('v12')`.


Well, ignoring the presence of "(*, *)" in Fortran is exactly what one should do.

Recall that FORTRAN stands for FORmula TRANslator.

FORTRAN is built for math. It makes math expressions — particularly linear algebra — easy to write, and very, very fast to run. There are much better tools for text processing (there's perl, python, pandas, and everything in between). But for raw math, there's Fortran.

That's why everything is still powered by Fortran.

Yes, from Matlab to your fancy neural network, numpy, scipy, scikit-learn, etc., the actual computational core, BLAS/LAPACK, is Fortran-based.

That's why Matlab feels like Fortran. And that's why numpy semantics are different from python's: because it is heavily inspired by Matlab, and, like Matlab, wraps FORTRAN code in a friendly manner.

But in the end, it's FORTRAN all the way down. Even in Julia.

(And that's how I ended up fixing a minor bug in Google's FORTRAN sparse svd package, PROPACK[1], in 2019. Well, PROPACK was written before Google existed, but they have hired the author for a reason. The bug was only in the way an error message displeased the internal memory leak checker; in other words, FORTRAN text I/O was an issue. Stay away from that, and it's the best tool for the job.)

[1] http://sun.stanford.edu/~rmunk/PROPACK/


> But in the end, it's FORTRAN all the way down. Even in Julia.

That's not true. None of the Julia differential equation solver stack is calling into Fortran anymore. We have our own BLAS tools that outperform OpenBLAS and MKL in the instances we use it for (mostly LU-factorization) and those are all written in pure Julia. See https://github.com/YingboMa/RecursiveFactorization.jl, https://github.com/JuliaSIMD/TriangularSolve.jl, and https://github.com/JuliaLinearAlgebra/Octavian.jl. And this is one part of the DiffEq performance story. The performance of this of course is all validated on https://github.com/SciML/SciMLBenchmarks.jl


Is the recursive comparison against OpenBLAS using ReLAPACK or the normal(?) one? I don't remember the story on building with that or know the different algorithms.

Does Julia not use OpenBLAS at all now?


Julia defaults to OpenBLAS but libblastrampoline makes it so that `using MKL` flips it to MKL on the fly. See the JuliaCon video for more details on that (https://www.youtube.com/watch?v=t6hptekOR7s). The recursive comparison is against OpenBLAS/LAPACK and MKL, see this PR for some (older) details: https://github.com/YingboMa/RecursiveFactorization.jl/pull/2... . What it really comes down to in the end is that OpenBLAS is rather bad, and MKL is optimized for Intel CPUs but not for AMD CPUs, so when the best CPUs are now all AMD CPUs, having a new set of BLAS tools and mixing that with recursive LAPACK tools is either as good or better on most modern systems. Then we see this in practice even when we build BLAS into Sundials for 1,000 ODE chemical reaction networks (https://benchmarks.sciml.ai/html/Bio/BCR.html).


The question was whether the comparison was against OpenBLAS built with recursive LAPACK routines (not the default, I've now checked). I don't know whether that would use the same algorithm.

I don't see great attraction in "flipping" to MKL. (In general I switch BLASes by dynamic linking if necessary.) I'm puzzled by the value of "rather bad" for OpenBLAS, at least for BLAS [1], but it seemed OK against MKL for LAPACK-based R benchmarks I measured on sandybridge. That said, I don't understand why people avoid AMD's BLAS/LAPACK (BLIS/libflame with enhancements intended for merging) on AMD systems. In this context: BLIS studiously avoids Fortran for some reason, even for testing the Fortran interface.

I have nothing against Julia, incidentally; I have the original Dylan book.

1. https://git.sr.ht/~fx/blis/tree/performance/docs/Performance...


It doesn't look like you're measuring factorization performance? OpenBLAS matrix-matrix multiplication is fine, it just falls apart when going to things like Cholesky and LU. Also it falls apart on smaller problems, like <100x100 cases.

> not the default, I've now checked

Whatever the Julia default build is doing, so probably not the recursive LAPACK routines then if that's how it's being built. If there's a better default that's worth an issue.

> That said, I don't understand why people avoid AMD's BLAS/LAPACK

There just isn't a BLIS wrapper into Julia right now, and it's much easier to just write new BLAS tools than to build wrappers IMO. It makes it very easy to customize to nonstandard Julia number types too. But I do think that BLIS is a great project and I would like to see it replace OpenBLAS as the default. There's been some discussion to make it as easy as MKL (https://github.com/JuliaLinearAlgebra/BLIS.jl/issues/3).


OK, no, I measured level 3 BLAS, and it sounded like a blanket dismissal of OpenBLAS. I'm surprised at it falling apart at all, but I haven't considered its LAPACK much and I value understanding from measurement. The set of R LA benchmarks I ran on sandybridge in response to theories about the Microsoft R distribution did include Cholesky. (I think the operations are a thin wrapper over LAPACK ones.) In that case OB was actually better than MKL for Cholesky with the versions I had then, so I'm puzzled. Probably I should revisit that sometime.

OB seems to me a better general default than BLIS, though its threading implementation may not be robust. BLIS is even broken on POWER9, and I don't remember whether the dynamic micro-architecture selection was ever added for anything other than x86.

For really small matrices on x86, I'd front BLAS with libxsmm, but I don't remember where it starts winning, or know the threshold for BLIS' non-packed kernels.

Anyway, thanks for the info. I should be embarrassed never to have made time to look seriously at Julia.


Yeah no worries. You might want to join in the community at least because this kind of discussion is par for the course on the Slack. You might find a nice home here!

> though its threading implementation may not be robust

That's one of the main issues I have with it. Its poor threading behavior coupled with a bad default (https://github.com/JuliaLang/julia/issues/33409 one of my biggest pet peeve issues right now) really degrades real user performance.


> Even in Julia.

Unless you're using [Octavian.jl](https://github.com/JuliaLinearAlgebra/Octavian.jl) or such for your linear algebra in place of BLAS. But yes, it is always interesting how many people do not know how much of modern software is built on Fortran!


This is a reasonable argument, but it implies that FORTRAN is in fact not a good language for beginners; almost all beginners care about IO / text processing, and almost none care about linear algebra. (Especially if one is talking about beginners-in-general, as opposed to engineering types who are learning to use programming as a tool.)


Sure, but that still means FORTRAN is an easy language if your intro to programming isn't centered around text processing.

And for many people, it isn't/hasn't been/doesn't have to be.

That's the author's take as well:

>The idea of introductory languages is not necessarily that they are suited to commercial endeavours, but rather that they are (i) easy to implement algorithms in, (ii) readable, and (iii) do not contain things that will confuse and bewilder.


Have you ever thought it was because the tutorials you read were geared towards a type of programmer that had different goals from a computational programmer?


What high-performance BLAS do you have written in Fortran? I don't know of one (though I think I could write one with decent performance on simple targets). It seems to be the rule that this comes up whenever Fortran is pushed, though I don't want to disparage Fortran after so long.


Yeah it's funny because they are all C these days.


Well, C plus assembler, or maybe non-standard machine-specific intrinsics, for the ones I know. They at least seem to need control over prefetching.


As a data processing language, IDL/GDL is even closer to Fortran. It's essentially a vectorized, interpreted, simplified Fortran. In many ways I still prefer it to Python these days.



