
> It really should be illegal to have network connected cars. Any software security engineer knows that. Everybody loves network connected new cars. Nobody thinks about the fact that they are "fly by wire"

I fully agree that automotive and medical equipment/IT could greatly benefit from not having security as an afterthought.

However, I don't know of any vehicle with connectivity (other than the one in that Jeep Cherokee controversy) that does not have its safety-critical CAN/FlexRay buses segregated from user-facing 'infotainment' systems.

What that means is that the network bus on which your 'compromised' infotainment system is able to operate is completely separate from the engine, ABS, AEB, ESP, airbags, etc.

The solutions vary, but there is usually a physical gateway that prevents passthrough MITM attacks, so, for example, you cannot simply send a message frame from your infotainment system pretending to be an emergency braking request to your ABS system.
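A minimal sketch of that gateway filtering idea (the CAN IDs, names, and whitelist here are all hypothetical, not any vendor's actual implementation): the gateway forwards a frame from the infotainment bus to the powertrain bus only if its arbitration ID is on a short whitelist, so a spoofed braking-request ID is simply dropped.

```python
# Hypothetical sketch of a CAN gateway's forwarding rule: frames coming
# from the infotainment bus may only carry a small set of whitelisted IDs
# (e.g. navigation hints, display status). A frame spoofing a
# safety-critical ID, such as an emergency-braking request, is dropped.

from dataclasses import dataclass

@dataclass
class CanFrame:
    arbitration_id: int  # 11-bit CAN identifier
    data: bytes

# IDs the infotainment side is allowed to originate (made-up values)
INFOTAINMENT_WHITELIST = {0x3A0, 0x3A1}

BRAKE_REQUEST_ID = 0x0A2  # made-up safety-critical ID on the powertrain bus

def gateway_forward(frame: CanFrame) -> bool:
    """Return True if the frame may cross to the powertrain bus."""
    return frame.arbitration_id in INFOTAINMENT_WHITELIST

assert gateway_forward(CanFrame(0x3A0, b"\x01"))                 # passed
assert not gateway_forward(CanFrame(BRAKE_REQUEST_ID, b"\xff"))  # dropped
```

The point is that the decision depends only on a fixed allow-list held in the gateway, not on anything the sender claims about itself.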



What justification do you have for asserting that the safety critical systems are actually separated/isolated? The people at Jeep probably thought their systems were secure, yet the researchers on the Jeep Cherokee attack demonstrated they were not and claim that they could have simultaneously affected 471,000(!) vehicles [1]. Do you think anybody at a car company would dare to claim that their systems are safe enough to bet the lives of 471,000 people on them and take responsibility if they were wrong? Do you think any engineer would, in good conscience, support such a statement and share that responsibility if asked directly? I doubt you could find a single engineer that would feel even remotely comfortable that their procedures are good enough for 1/100th that number and you would never find an executive who would dare claim such a thing in a legally binding way.

Absolutely zero benefit of the doubt should be given to systems where a single error could cause grievous harm to tens of thousands of lives. Such systems should require an absolutely ironclad public legally-binding positive assertion of security with strict criminal liability for failure (blame cannot be shifted) that is independently validated before we should even think of accepting their use. We already accept nothing less for systems that are less dangerous, such as nuclear reactors that could melt down, and for systems that are equally dangerous, such as nuclear weapons systems. If systems with failure modes many times worse than the atomic bombings of Japan cannot be made safe, then they must not be made. Anything less would be so criminally irresponsible that we lack a word for the magnitude of irresponsibility.

[1] https://www.wired.com/2015/07/hackers-remotely-kill-jeep-hig...


> What justification do you have for asserting that the safety critical systems are actually separated/isolated?

I don't need to justify anything; I am stating they are isolated in all the vehicles I am familiar with. [0] The example from the Jeep Cherokee is a pretty crude architecture defect; the only way that could happen is in vehicles with a single CAN bus, which really is something out of the 1990s.

As constructive criticism you really ought to read the "In Comments" section of the guidelines at the bottom of the frontpage.

[0] https://www.st.com/en/applications/body-and-convenience/auto...


This is wrong. The Cherokee hack existed because both CAN buses in question travel to the infotainment system. The vulnerability came into being by finding a path between them. It became remotely exploitable when they found a way to do the same thing over the onboard cellular modem. So no, your definitive crude architecture defect and reference to the 1990s is a misunderstanding of the actual problem.

This is a common FCA design. CAN HI and CAN LO often go to the same device on different pins. My Jeep has an instrumentation bus as well and all three arrive in one box in multiple places within the vehicle. Friend with a Rubicon Wrangler discovered both engine buses connect to the electric motor to disconnect the sway bar. He discovered this when water got in it, shorted it out, caught the electric motor on fire while he was watching, and subsequently totaled the ECU and TIPM by transmitting electrical noise on both CAN ECU buses.

You’ve assumed one bus goes to one thing to reach your conclusion. That is faulty logic, which is more than a little ironic given the parts of your comment I’m mostly ignoring in this reply.


Something doesn't parse for me in your second paragraph - CAN HI and CAN LO are 2 parts of the same bus (each bus has 2 wires per the differential spec) and so of course they go to the same device on different pins. Am I missing something about the Jeep architecture?


They’re two separate buses, high rate and low rate, for different purposes throughout the engine and components. Not the literal high and low pins of each bus. High rate, low rate. Think cylinder timing communication speed versus, say, fuel usage. CAN operates at a fixed speed, and I understand high- and low-rate buses to be a common design (they’re two CAN standards; high-speed CAN runs at up to 1 Mbit/s).

My code reader has fetched “CAN HI” before and the code was contextually referring to the high-rate bus, not the physical wire. I’m not a deep Jeep tech, just dangerous electrically and with a spanner, so I may be wrong in how I’m spelling those and I’m following the lead of a trouble code. If you’re nice to a service dealer with a laptop you can get a quite lovely wiring diagram that explains it better.


You state in a different comment chain that bus segregation does not mean no communication, so I do not understand how that is consistent with your claim that they are isolated.

The Jeep Cherokee also did not have only a single CAN bus as evidenced by the diagram on page 8 [1]. The problem was that the radio unit was on both CAN buses.

I ask for justification of your statement because you counter-argued the previous comment that stated: “It really should be illegal to have network connected cars. Any software security engineer knows that.” by stating that the safety critical systems are segregated and thus “the network bus in which your 'compromised' infotainment system is able to operate is completely separate from Engine, ABS, AEB, ESP, Airbags etc.”, which implies that solution achieves the desired security properties. I claim the desired security properties are: “safe enough to bet the lives of 471,000 people on them and take responsibility if they were wrong”. I think, and I think most people would agree, that is an accurate assessment of the potential consequences of catastrophic failure. That is a consequence comparable to the detonation of a nuclear warhead in a city, so we should apply similar standards to the required level of security. Hardly any software engineer would claim that their software can be trusted with a number of lives within 3 orders of magnitude of that number (note that 3 orders of magnitude less is 471, which is comparable to an airplane crash, i.e. critical avionics level). Therefore, making a claim that implies that level of security has already been achieved is an extraordinary claim and should thus require equally extraordinary evidence to be believed.

Even with such justification I would still caution anybody else reading it since it would not be a legally binding claim by a liable entity, so even assuming you are commenting honestly and to the best of your knowledge (which I assume you are), and even correctly describing the truth as it is today, there are no consequences for any liable entity if they betray your trust. It should always be up to the liable entity to justify themselves to the people to our satisfaction before deploying systems with such societal-level consequences.

[1] http://illmatics.com/Remote%20Car%20Hacking.pdf


I'd guess the fact that GM hired those guys for Cruise [1] probably suggests they took it seriously and made sure their own cars were isolated.

[1] https://www.detroitnews.com/story/business/autos/mobility/20...


This is exactly the sort of justification that I am cautioning against. Would it be great if they did take the problem seriously and solve it? Of course. But it is not our duty to justify their decisions when the consequences of their decisions may cause grievous harm to thousands. It is their job, the company's, to justify to the satisfaction of engineers and the people that we should trust the lives of thousands to millions to their systems. If anything, it is our job to be skeptical of their claims and grill them ruthlessly before they should even be allowed to consider deploying such systems. I mean, think of any other software system: would you trust anybody's lives to any of them? Would the engineers on those projects agree? Most software engineers I know would be horrified if their systems were used in something responsible for even a single life, let alone thousands. The claim that we should trust these companies with hundreds of thousands of lives is an extraordinary claim that they should be required to provide extraordinary evidence for. And, even if we are convinced, they should still be ultimately liable for their decisions, to make sure they are actually putting their money where their mouth is. It is what we demand from bridge builders, it is what we demand from nuclear reactor designs, and it should be what we demand from mass-produced internet-connected safety-critical devices.


Oh boy, are you gonna be mad when you find out that Tesla are allowing the general public to beta test their self driving software on public roads!


I can think of at least one manufacturer that pushes motor performance and battery charging updates over the net.


Teslas are still not drive by wire though.


At least the Model S is. It has remote summon and Autopilot which necessarily means that it has software controlled accelerator, steering, and brakes.


That's not what "by wire" means though. My understanding is the steering wheel and brakes are mechanically attached to the steering and braking mechanisms.


"by wire" means non-physical ways in automotive. So accelerator and possibly brake pedals have only cable connections which carry data.

Even if the brake pedal is connected via an actual wire, emergency brake actuators can work by themselves on the system.

Electric assist motors on the steering column can also overpower a human with ease.

So, the actual connections don't matter.


The article we're discussing would suggest that the attack surface for any kind of complex machinery is very wide. A car doesn't need to be drive-by-wire to be potentially endangered by malicious code affecting the engine or battery.


The problem is that there is an increasing overlap between physical vehicle control systems, cabin controls and displays, and remote communications. For example, how do you design a system that can automatically call for help in the event of an accident without giving that system access to the sensors that also trigger airbag deployment and similar safety features? How do you build "self-driving" automation (or even less ambitious driver aids that are already in widespread use) without relying on sensors observing the environment around the vehicle, which may be subject to interference or deliberate deception?


All the use cases you described are already implemented in today's cars, bus segregation doesn't mean zero communication.

Gateways can allow certain messages to pass through; the point is that they preserve the guarantee that certain safety-critical messages can only originate from within a specific network. It's basically a crude MITM mitigation.


> All the use cases you described are already implemented in today's cars, bus segregation doesn't mean zero communication.

That's the problem, though. Once bidirectional communication exists, you can pretty much guarantee there's an exploitable hole in it that can be used to break the firewall.


Data diodes exist and have been used in high security or other risky cases when one circuit or bus absolutely has to be isolated except for one way communication.

In practice it's not typically necessary to go quite that far if a trusted communication processor is able to pass messages or set registers but only ever takes logic control flow input from the "safe" side. Plenty of equipment that wants to broadcast state over radio but never accepts any sort of input back beyond acks or whatever has done this for a long time.
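That "broadcast out, accept nothing but acks" pattern can be sketched roughly like this (the function names and transport are placeholders, not any real product's interface): the forwarder pushes state outward, and the only thing it ever reads back is compared against a constant, so no parsed input from the unsafe side can drive its control flow.

```python
# Hypothetical sketch of a one-way communication processor: it reads
# state from the "safe" side and writes it outward. The only inbound
# data it accepts is a fixed-format ack, which is never parsed or used
# for anything beyond deciding whether to retransmit.

import json

def forward_telemetry(state: dict, send, recv_ack, retries: int = 3) -> bool:
    """Push `state` outward; accept only a literal b"ACK" in return.

    `send` and `recv_ack` stand in for whatever transport the
    communication processor uses (radio, serial, etc.).
    """
    payload = json.dumps(state).encode()
    for _ in range(retries):
        send(payload)
        # Compared against a constant; the ack's content never reaches
        # any other logic on the safe side.
        if recv_ack() == b"ACK":
            return True
    return False

# Tiny in-memory stand-in for the transport:
outbox = []
assert forward_telemetry({"speed_kph": 88}, outbox.append, lambda: b"ACK")
```

The design choice being illustrated is that the trust boundary is enforced by what the code is structurally able to do with inbound bytes, not by validating them.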


I was talking about two-way communications. Maybe you can securely isolate a bus that has the insecure->secure side reduced to super trivial communication. But I won't trust anything that runs multi-layer software stack and a complex communications protocol; from what I've learned about software security, there's always a bug or a backdoor that can be exploited to elevate access or just break the secure-side components.

(This is, of course, entirely my opinion. I haven't seen the code of these systems, but looking from the outside, nothing I've learned makes me feel like I can trust their security.)


When I saw arbitrary code execution on the NES in Super Mario 3 done with only controller inputs I decided any notion of software security in a system more complex than, say, a microwave oven, wasn't achievable.

I recently heard an "On Star" ad that proclaimed that they can "slow down stolen cars". Consumers are wowed by features that, by definition, are enabled by unwise security decisions. Manufacturers will cash in. IT security in cars will get worse.


> Data diodes exist and have been used in high security or other risky cases when one circuit or bus absolutely has to be isolated except for one way communication.

But how can you include adequate firewalls in a car that, for example, receives OTA updates for autonomous driving software? It is inherent in any such system that incoming signals are received and that the software that may be affected by those signals has access to pretty much everything.


These systems are crude but they implement a physical airgap.

You would have to hack a gateway (on-site) or physically access a segregated bus, at which point you could argue that you could also physically tamper with brakes or engine without any software being involved.


Unfortunately, a physical air gap won't prevent a malicious actor from, for example, projecting a misleading image designed to confuse your automated driving systems when they process that image and so prompt an adverse and potentially dangerous reaction. This is not only about the communications channels in the internal architecture, it's a much broader problem than that.


Please read the whole thread; this is going completely off-topic from the original discussion at the start of these comments.

The original claim was that vehicles should never be connected to any network because of unspecified online attacks that could actuate brakes or steering. I explained why this was unfounded.

Adversarial patterns against computer-vision-based ADAS are only a real issue for systems which are not sufficiently redundant. Autonomous systems in particular should also apply a degree of sensor fusion between multiple sources of data, such as optical computer vision, radar, long-range ultrasound and LIDAR (once it becomes cost effective). If any of those systems provides erroneous data, the remaining ones can negate it and allow for fail-safe behaviour.
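The redundancy argument above amounts to a voting scheme. A minimal sketch, with made-up sensor names and a 2-of-3 threshold chosen purely for illustration:

```python
# Illustrative 2-of-3 vote across independent sensor channels (names and
# threshold are made up). A single spoofed or faulty channel cannot by
# itself trigger, or suppress, an emergency-braking decision.

def obstacle_confirmed(camera: bool, radar: bool, lidar: bool) -> bool:
    """Require agreement from at least two of the three channels."""
    return sum([camera, radar, lidar]) >= 2

# A phantom image fools only the camera: no braking is triggered.
assert not obstacle_confirmed(camera=True, radar=False, lidar=False)
# Camera and radar agree: the decision stands even if lidar drops out.
assert obstacle_confirmed(camera=True, radar=True, lidar=False)
```

Real fusion stacks weight continuous confidence values rather than booleans, but the failure mode is the same: with only one trusted modality, there is nothing to outvote a deceived sensor.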

If you want my opinion, as someone who moved away from automotive R&D a few years ago, Tesla's decision to depend too heavily on computer vision systems without additional sensor redundancy seems like an architectural defect that has already cost lives. Either they are not integrating other sensor data sources or their voting weight appears to be underestimated.


> Please read the whole thread

I really wish people on HN would stop with the "I disagree, so you must have not read everything" comments. I keep seeing these recently, but they are unconstructive, and they are insulting to other participants in the discussion. You might like to consider the alternative possibilities that you weren't clear in your earlier comments and people are responding to what you actually wrote and not what you thought you wrote, or that you might not be fully informed and someone who disagrees with you might simply know something you don't.

> The original claim was that vehicles should never be connected to any network because of unspecified online attacks that could actuate brakes or steering. I explained why this was unfounded.

Actually, what you have repeatedly said is that you aren't aware of any vehicles (except the infamous Jeep case) that don't fully isolate infotainment from safety critical systems, which is not the same thing at all.

You also said that the Jeep case was due to a poor architecture that came from the 90s. Even if this is true, the exploit using it to trigger multiple dangerous behaviours remotely was demonstrated in 2015, so apparently the manufacturer was a bit slow on the uptake of what you think they should have been doing for the last 20 years.

Moreover, at least one well-known auto manufacturer is (in)famous for performing OTA updates that can change fundamental car behaviour, so evidently there are still networked vehicles where safety-critical functions and remote communications can not be fully isolated. It doesn't take a genius to extrapolate from this to the not-so-distant future when auto manufacturers are pushing ever more autonomous functionality combined with OTA updates, either.

Then we have all the cars that now have remote controls that do more than just unlock the vehicle, affecting things like environmental controls, or even summoning a driverless vehicle over a short distance (in theory, at least) with some of the newer developments.

Next we have the security systems providing remote access, such as OnStar's Stolen Vehicle Assistance system that someone else already mentioned, which can in some cases interfere with speed or ignition systems remotely.

I think by this point it's safe to say that whatever explanation you think you gave, the evidence doesn't support a conclusion that modern network-connected vehicles are safe because their critical safety-related systems are fully isolated from external influence, which is what really matters here.

> Adversarial patterns against computer vision based ADAS are only a real issue for system which are not sufficiently redundant.

And yet not so long ago I read this, which if you're interested in the field you've surely seen as well:

https://www.nassiben.com/phantoms

So again, evidently there are systems out there in production that are "not sufficiently redundant". You might have some personal opinions on what should be happening, but that doesn't mean what actually is happening respects your view.


Tesla has over-the-air updates for its Autopilot software; you can even edit destinations from the infotainment system. So it's clearly both on the same network and could be hacked remotely.

The car can even be turned on and summoned remotely. Which means, if nothing else, it could be remotely driven onto a freeway to cause accidents even if its 'autopilot' had some kind of independent backup safety system to avoid collisions.



