It probably depends a bit on your security posture and how you manage the server. The recommendation to avoid root is central to the principle of least privilege: by default, grant only the privileges necessary. That eliminates a lot of security surface area, and it also acts as a layer of protection against mistakes. It's easy to be careless with a superuser account and have a very bad day.
And this is naturally in conflict with being productive. As you mentioned, it's easier to get things done if you can just do all of the things all of the time. In environments where a mistake, compromise, or vulnerability wouldn't be that devastating, that may be a reasonable tradeoff.
But I've worked in environments where this is too risky. For example:
1. An engineer accidentally pastes the wrong buffer into a terminal; they had copied some other piece of text.
2. The text happens to contain \nhostname set\n.
3. The terminal, while spitting out errors for the rest of the paste, does see one valid command and changes the hostname.
4. That particular system was an HA system, and the process monitor in use grepped the running processes for the command plus its arguments, one of which was the hostname. It decided the process was no longer running.
5. The cluster, seeing the failure, decides to boot another process. But at that point in history, the process could only run as a single instance, so the two instances conflicted with each other.
6. Some part I don't remember about the failover site.
7. A million cell phones can no longer get an IP address.
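The failure mode in step 4 can be sketched in a few lines of shell. This is a hypothetical reconstruction, not the actual monitor from the story: liveness is judged by grepping a process listing for "command + args", and the monitor rebuilds its match pattern using the *current* hostname, while the running process still shows the name it was started with.

```shell
#!/bin/sh
# Hypothetical sketch of the fragile liveness check described above.
# All names ("mydaemon", "nodeA") are invented for illustration.

# What the process table showed when the daemon started:
ps_listing="1234 mydaemon --node nodeA"

is_alive() {
  # $1: the hostname the monitor plugs into its grep pattern right now
  if printf '%s\n' "$ps_listing" | grep -q "mydaemon --node $1"; then
    echo alive
  else
    echo dead
  fi
}

is_alive nodeA   # prints "alive": pattern matches the original command line
# After the accidental `hostname set`, the monitor rebuilds its pattern
# with the new name and no longer matches the still-running process:
is_alive set     # prints "dead", triggering a spurious failover
```

The daemon never stopped; only the string the monitor was grepping for changed. Matching on mutable arguments like a hostname is what made a two-word paste look like a process crash.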
So it's a question of tradeoffs, but the generally recommended practice is not to log in directly as root, to operate with fewer privileges when they aren't required, and then to escalate if granted / required.
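The escalate-if-granted pattern is usually enforced with sudo. A minimal, hypothetical sudoers fragment (the user and service names are invented) that grants one specific command instead of blanket root might look like:

```
# Hypothetical /etc/sudoers.d/ fragment; always edit with visudo.
# The "deploy" user may restart one service as root, and nothing else.
deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp.service
```

A typo'd command outside that exact spec simply fails instead of running as root.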
This is spot on. One interesting approach I've seen is that all commands executed as the superuser must be written to a file, and the only command accessible via sudo is "please_save_to_audit_log_then_run_it_in_sandboxed_env <file>". For particularly high-risk situations, there might be a second person reading your script before running an approval command that actually lets it execute. Things don't move quickly, but the number of mistakes via typo is certainly reduced.
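The audit-then-run wrapper above can be sketched roughly like this. This is an assumed shape, not the real tool: it copies the submitted script into an append-only audit directory, locks the copy down, and then executes the audited copy (the real version would also drop it into a sandbox, and a second approver would gate the final step).

```shell
#!/bin/sh
# Hypothetical sketch of an "audit first, then run" sudo wrapper.
# AUDIT_DIR, the function name, and the naming scheme are all assumptions.
set -eu

audited_run() {
  script="$1"
  AUDIT_DIR="${AUDIT_DIR:-/var/log/cmd-audit}"
  mkdir -p "$AUDIT_DIR"

  # Archive the exact bytes that will run: timestamp + user + pid + name.
  copy="$AUDIT_DIR/$(date +%Y%m%dT%H%M%S)-$(id -un)-$$-$(basename "$script")"
  cp "$script" "$copy"
  chmod 500 "$copy"   # read+execute only; the audited copy can't be edited

  # In the real setup this step would happen inside a sandbox and only
  # after a second person's approval; here we just run the audited copy.
  sh "$copy"
}

# Usage (the wrapper, not the raw script, is what sudoers would expose):
#   audited_run /path/to/change.sh
```

Because the audit copy is taken before execution and is what actually runs, the log can never disagree with what happened on the box.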