Does anyone know what the actual "logic bomb" consisted of?
My money is on a crontab entry that ran a simple set of destructive ssh commands on the specified date.
As per the article, destroying "all data, including financial, securities and mortgage information" would be as simple as an "rm -rf" across multiple servers. Except for one critical item: he would have needed root access on all of those servers.
Either the scope of his potential damage was very small, or Fannie Mae had some terrible security and change management policies in place.
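For what it's worth, the crontab theory only needs one line per target. A hypothetical sketch (the user, host, script path, and date are all made up):

```
# crontab fields: minute hour day-of-month month day-of-week command
# fires at 09:00 on January 31 -- a one-shot date trigger, as long as
# the entry (and the account it lives under) survives until then
0 9 31 1 * ssh admin@db01 '/path/to/script'
```

Which is exactly why revoking credentials the evening of the layoff, rather than before it, matters so much.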
> During this time Makwana had root access to all of the main
> systems, credentials which the company failed to revoke until
> the evening of the day of his layoff.
> His intention was nothing short of replacing the entire financial
> data, including the backups, from all of the company's production
> servers, with zeroes.
> the admin appended malicious code to a legitimate script, leaving
> a page-worth of blank lines between the two in order to avoid
> detection.
> Had this malicious script executed, engineers expect it would
> have caused millions of dollars of damage and reduced if not
> shutdown operations at Fannie Mae for at least one week.
> During this time Makwana had root access to all of the main systems
This quote above, and the entire article boggles the mind. I've worked with big and small organizations that protected root on <10 machines like it was the key to preventing aging.
Still, somebody has to have it. If the number of people is too low, you risk them all being unavailable when you really need them. And there isn't a whole lot you can do to stop a malicious actor who already has root.
Not sure this matters much. Most places I've worked have considered local root exploits not worth patching. So if you have a user account, you have root.
Or they have an automated system that has root access to everything (say, to push config changes), and he had access to that system, or he had a way to crash/corrupt their SAN, or they only do black-box testing and not full code reviews (and the next guy to get assigned a bug in that same program found it), or they have code reviews that can be dodged with faked documentation, or...
If he was part of the SA team for their production systems, it's likely he DID have root access to all of them, and that it was legitimate and had business justification. Even if he didn't have explicit root access, it's even more likely he had direct physical access, which is generally less carefully protected by IT policies and is usually all or nothing.
Granted, this isn't just run-of-the-mill data, and Fannie Mae certainly could have had significantly better security policies in force, but I don't think they were below average for a corporation.
> Either the scope of his potential damage was very small, or Fannie
> Mae had some terrible security and change management policies in place.
I cannot decide which to pick.