This is not true. In most of the top journals you need at least three other practitioners in your field to read it and sign off on it. The editor finds the appropriate reviewers, manages the process, does some basic formatting and other vetting, and will accept or reject the paper based on the reviewers' reports.
The reviewers here are the "peers", and generally are expected to be qualified experts in the area that the paper deals with.
> This is not true. In most of the top journals you need at least three other practitioners in your field to read it and sign off on it.
You're misreading xondono as well as me. I think your idea of what peer review is (in practice) is too idealized.
The problem is the word "expert". We're using it to mean different things, and the difference is important. Despite appearances, "expert" is not a binary condition; it's a spectrum, and where the threshold falls along that spectrum depends on context. Ours (xondono, correct me if I've misinterpreted you) is higher than the one you're using.
Finding appropriate reviewers is a non-trivial task, which is kinda the entire problem. Having a PhD in machine learning does not mean you're qualified to review any given machine learning paper. I know, because I've told ACs I'm not qualified for certain works!
The problem is that what is being published is new knowledge. I'll refer to the (very, very short) "Illustrated Guide to a Ph.D." How many people are qualified to determine if that knowledge is new? Probably far fewer than you think. Let's go back to ML. Say your PhD and all your work is in Vision Transformers. Does that mean you're qualified to evaluate a paper on diffusion models? Truthfully, probably not. Hell, there have been papers I've reviewed where I'm literally one of two people in the world who are the appropriate reviewers (the other being the main author of the paper of ours that was being extended).
Hell, most people working on diffusion aren't even qualified to properly evaluate every diffusion paper! Here's a great example: a work on the mathier side of diffusion models, where you can look at the reviews[1]. The scores are 6 (Weak Accept), 9 (Very Strong Accept), 8 (Strong Accept), 8, 6. Reviewer confidence was even low: 2, 4, 3, 3, 4, respectively (out of 5), and confidence is usually overstated.
Mind you, this is the #1 ML conference and these reviews are post-rebuttal. There were over 13,000 people reviewing that year[2] and they couldn't find anyone with 5/5 confidence. This is even for a paper written by two top researchers at a top institution...
> The reviewers here are the "peers", and generally are expected to be qualified experts in the area that the paper deals with.
So no. They are "experts" when compared to the general public, but not necessarily experts in the context of the paper being reviewed.
I hope the concrete evidence is enough to convince you, because honestly this is quite common and there's a visibility bias: most of the time we don't have this data for works that were rejected. But there are plenty of accepted works where you can see this. Not to mention (as stated in my original comment), multiple extremely influential works (worthy of a Nobel Prize) have been rejected. Here's a pretty famous example, rejected both for being "too trivial" (twice) and for being "obviously incorrect."[3] Yet it resulted in a Nobel and is one of the most cited works in the field. It doesn't sound like those reviews helped the paper become better; it sounds more like they were just wasting time.
I reviewed many papers when I was still in academia; I know how it works, thank you. Yes, I too have declined to review a paper or two because they reached out to the wrong person.
But no, I don't agree with your stringent definition of expert. If someone is in the general area and is aware of the problem you are trying to solve, that is good enough. E.g. someone who is in machine learning and aware of diffusion, who has read papers on it but has not done work on it themselves, is enough of an expert to review a diffusion paper.
These papers are supposed to be written for a general enough academic audience that someone like the above can understand and critique your work.
Also, if you are as experienced as you claim to be, you should know that conferences are notorious for having FAR weaker peer review than actual journals. That's why many works have both a conference version and a longer journal version. For conferences, due to the time limits on review, there often aren't enough qualified reviewers to cover all the papers. There are also no do-overs: if you get an unqualified reviewer, you can't request another person.
There are even papers published about the poor quality of conference reviews!
> Also, if you are as experienced as you claim to be, you should know that conferences are notorious for having FAR weaker peer review than actual journals.
Yes, but conferences are the primary publishing venue in computer science and ML. Publishing in NeurIPS, CVPR, or ICML is more prestigious than publishing in JMLR or TMLR.
While it's worse at conferences, I agree, the fundamental problems are similar. Conferences exacerbate those problems, but they are still the same problems.
And please don't be offended. I'm writing to a general audience, and I don't know whether you have academic experience until you tell me. It seems you agree it would not be appropriate for me to assume otherwise.
> You're misreading xondono as well as me. I think your idea of what peer review is (in practice) is too idealized.
I think it is you who has an ideological axe to grind and is missing the forest for the trees (in this case the practical benefits for the drawbacks). Of course the process isn't perfect. Of course it's a spectrum. That's precisely how journals end up with reputations.
If you don't want to play the reputational game, fine: self-publish on your website. Protocols such as IPFS and centralized archives such as arXiv make that easier than ever. But just because you choose to reject a process doesn't mean that it isn't of benefit to other people. And it should go without saying that just because something is of benefit to me (in this case as a reader) doesn't mean that it isn't also flawed in some way.
You haven't convinced me; you've only made an appeal to authority. It's fine if you don't accept my evidence or reasoning, but simply appealing to authority or tradition is not an argument that the current system is better than an alternative one.
I made a few claims, but I don't believe I made any appeals to authority. That would be of the form "peer review is good because X says so, therefore you are wrong." If you wish to challenge any of the claims I made, I am open to it.