Tor Browser 6.0 is released (torproject.org)
329 points by ashitlerferad on May 31, 2016 | 116 comments


I'm excited that they're now based on a Firefox branch that supports Subresource Integrity. This allows websites to add hashes of externally loaded resources, like JS or CSS files, so the browser can determine whether they've been modified.


I'm very happy to hear this practice has become reality. It just seems so much more convenient to refer to all these JS/CSS blobs with a universally unique ID (a hash), rather than whichever CDN the developer has chosen to use. As an ID, a 32 byte cryptographically secure hash really is practically the same as http://somecdn.com/somejslib/v0.1.js.min, except that only a single ID exists for each "shared object", plus the ID can be derived deterministically from the object it refers to.

Perhaps each CDN could have a symlink in its root directory with the hash of each file it serves, redirecting to that file. Then you'd be able to find any file by its hash as long as you have the CDN host name.

EDIT: I see now this is a security measure (protection against a hostile CDN), rather than a way to avoid dependence on a single CDN. I hope what I suggest is still possible though.
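For what it's worth, the integrity value really is derived deterministically from the resource contents, as the parent describes; a minimal sketch (the file contents and URLs here are just illustrative):

```python
import base64
import hashlib

def sri_value(content: bytes, algo: str = "sha384") -> str:
    """Derive a Subresource Integrity value, '<algo>-<base64 digest>',
    deterministically from the resource contents."""
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode('ascii')}"

# The value goes into the tag that references the CDN copy, e.g.:
#   <script src="https://somecdn.com/somejslib/v0.1.js.min"
#           integrity="sha384-..." crossorigin="anonymous"></script>
print(sri_value(b"console.log('hi');"))
```

Two CDNs serving byte-identical copies of a library produce the same integrity value, which is what makes the hash usable as a universal ID.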


Are there any privacy concerns if ublock origin extension is used with the Tor browser?


Along with a number of other people I've been advocating bundling uBlock Origin by default in Tor Browser[1]. There were some fingerprinting and upstream support issues but most of these have been resolved, and some were philosophically opposed to including an ad blocker or breaking sites.

The main concern with adding uBlock Origin would be flipping bits that make your browser unique and stand out in the crowd. You can evaluate that risk based on your own operational profile.

[1] https://mailman.boum.org/pipermail/tails-dev/2016-January/01...


If fingerprinting is the primary concern then bundling should solve that, right?


Yep - get more people using the same setup and have a larger crowd to hide in.

Although I think the new security slider in Tor Browser effectively breaks that crowd into smaller groups, plus whatever individual whitelisting the user allows.

I think it should just default the settings to the highest security level, but I understand the design requirement that many users use Tor as their primary browser, so they need video, JARs, etc.


I'm not an expert in the field, but my understanding is that you can be (partially?) fingerprinted based on the ads you (fail to) download. I also believe such partially discriminating information can be combined with other patterns (time at which you connect, sites visited, etc) to compromise your privacy.

I have no idea if this is problematic in practice, though.


Considering that Tor Browser users would on the whole probably also want to filter ads to improve their already high latency, I'm hoping the Tor and µBlock devs can get together and standardise µBlock on all Tor Browsers with a preselected list of filters, which should reduce the ability to fingerprint by analysing ad downloads, at least within the set of all Tor Browser users.


There's actually a ticket up currently to do exactly that, integrate uBlock on the default Tor install.

https://trac.torproject.org/projects/tor/ticket/17569


> µBlock

Just 'uBlock Origin'. You are probably mixing up the names of µTorrent and uBlock Origin.


Nope. But I did mean uBlock Origin and hastily left out the "Origin" part of its name, which can lead to confusion with the uBlock extension maintained by another developer. Most people seem to prefer uBlock Origin, and that's what I use too. The spellings uBlock and µBlock are used interchangeably on different pages and sites, although uBlock is by far the more common variant.


I, like Santosh83, remember seeing uBlock or uBlock Origin written with the µ. Can't find it on either of their Github sites now, though.


The u in uBlock stands for micro (µ).


That makes logical sense, thanks.


How can anybody track which ads you failed to download? The Tor Browser has NoScript on by default, as far as I remember.


It's very easy to correlate a UUID-stamped page load with no matching ad load on the backend; no script/Flash support needed. If you have even the most basic cookie support, it's even easier to correlate multiple ad blocks across numerous page loads, across sites.
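That backend correlation needs nothing from the client at all; a toy sketch (the log format and event names are invented for illustration):

```python
from collections import defaultdict

def likely_ad_blockers(events):
    """Flag sessions that recorded page loads but never fetched the
    matching ad resources, given (session_id, event_type) log entries."""
    pages = defaultdict(int)
    ads = defaultdict(int)
    for session, kind in events:
        if kind == "page":
            pages[session] += 1
        elif kind == "ad":
            ads[session] += 1
    # Sessions with page views but zero ad requests stand out.
    return {s for s in pages if ads[s] == 0}

log = [("a", "page"), ("a", "ad"), ("b", "page"), ("b", "page")]
print(likely_ad_blockers(log))  # only session "b" never fetched an ad
```

Within a pool of Tor Browser users, the ad-blocking minority becomes a distinguishable subgroup, which is the fingerprinting concern raised above.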


Your traffic to websites goes through a fixed "circuit" (that changes every few minutes). Within a circuit, the exit node sees all outbound connections you make, or don't make.


It's explained in "design philosophy" under point #5: https://www.torproject.org/projects/torbrowser/design/#philo...


Not necessary if you disable JS, which you absolutely should do if you care about privacy while using Tor.


Anything you do like this to "customize" the default Tor Browser install can reduce your privacy instead, as it provides a factor by which you can be fingerprinted and differentiated from the larger base of users using the Tor Browser as-is.

Source: see the Tor Browser docs on why they enable JS by default (though NoScript is installed and can be configured to block all JS, if you know the risks) https://www.torproject.org/docs/faq.html.en#TBBJavaScriptEna...


There are any number of ways to deanonymize users once javascript is running, not to mention the greater possibility of escaping the sandbox. It's a damned if you do, damned if you don't scenario.

I know they're trying to serve users that can't be expected to play the whitelisting game, but they really should be stricter here. It's already trivial to differentiate a Tor user from a regular one, they might as well set the most secure defaults possible.


Just clicked on this link from Heathrow Airport public wifi and got a certificate warning:

https://twitter.com/roryireland/status/737626851749679105


Airports usually do SSL MITM so they can inspect the traffic. Or they redirect all addresses to a hotspot sign-in page until the user signs in.


It looks like this particular MITM is being done by OpenDNS.

https://support.opendns.com/entries/46060260-FamilyShield-Ro...

They are trying to show a block page because this site is classified as Proxy/Anonymizer by OpenDNS, but they have to MITM the HTTPS connection in order to (try to) do so.

You might be able to avoid this censorship by using a different DNS resolver, depending how the airport wifi is implementing it.


Really wish this was illegal or SSL didn't have such a gaping vulnerability.


There is no gaping vulnerability.

The user will be presented with a certificate error (or denied outright, depending on the browser). If the user decides to add an exception, only then will a handshake succeed and the MITM be possible (in this particular scenario).


If one doesn't understand exactly how TLS works, this is the usual question regarding MITM. If there is "end-to-end encryption", just how exactly is it even possible to do MITM? In an ideal world, it just wouldn't be possible.

With the existing system, certificate "errors" that are really just "warnings" should not be allowed to be bypassed. The fact that certificate errors can be ignored should never have been allowed in the first place. Unfortunately, the historical fact that legitimate certificates were inexcusably expensive to obtain for internal/test projects meant that self-signed certificates became commonplace.

We're waiting for a genius to come up with a new strategy for encryption that doesn't rely on trust being determined by a third-party entity (i.e. certificate authorities). Let's Encrypt is nice and all, but it's still just a free workaround for a system nobody really wants. "It's not possible to do it any other way" is just pseudo-speak for "nobody has invented a better way yet".

Why the strategy for encryption has relied on public/private keys for so long with no real alternative strikes me as odd. After 30+ years, nobody has thought of something else?


If sites use HSTS, browsers won't allow users to bypass the errors. See section 12.1 of RFC 6797:

https://tools.ietf.org/html/rfc6797#section-12.1

So you can encourage people to use HSTS and then get part of that behavior at least for individual sites.
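For reference, HSTS is just a response header a site sends; a small sketch of parsing its directives (simplified, not a full RFC 6797 implementation):

```python
def parse_hsts(header: str):
    """Parse a Strict-Transport-Security header into its directives."""
    directives = {}
    for part in header.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        # Valueless directives like includeSubDomains become True.
        directives[name.lower()] = value or True
    return directives

# A site opting in to HSTS sends something like:
hsts = parse_hsts("max-age=31536000; includeSubDomains")
print(hsts)  # {'max-age': '31536000', 'includesubdomains': True}
```

Once a browser has seen such a header over a valid HTTPS connection, it refuses plain HTTP to that host (and refuses to let certificate errors be bypassed) until max-age expires.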


The HSTS preload list (referring primarily to Chrome's, which is also included by other vendors) in particular is something I find really strange. The current domain owner adds the domain to the HSTS preload list. That domain then expires or is released by that owner, without them requesting its removal from the preload list. Then someone else buys the domain without having planned to use SSL/TLS.

The result? Weeks, even months, of not being able to use the domain without encryption, all because someone else previously had the domain added to the HSTS preload list. Removal from the preload list is by request only; there is no automation in place to detect that a missing HSTS header means the domain should no longer be considered a participant.

Even worse, a removal request can take an indeterminate amount of time to reach end users of the browser. The preload list is not pushed to clients via something like a daily digest; it is hardcoded into releases of the browser. This means it can take an absurd length of time to see a domain removed from the list: it depends on every individual user updating to the latest version, and only after the vendor gets around to shipping a release whose hardcoded list includes your domain's removal.

How such a mechanism was ever acceptable is beyond me. Domain ownership is technically fluid, and yet the implementation was designed in such a way as to assume that domain ownership never changes.


> How such a mechanism was ever acceptable is beyond me.

They probably wanted to encourage the new owner to use TLS as well.


That makes little sense. Far more likely they just didn't take into account the fact that not every domain is a long-term "google.com" owned by a single entity for its lifetime.


> "It's not possible to do it any other way" is just pseudo-speak for "nobody has invented a better way yet". ... After 30+ years, nobody has thought of something else?

Not really, I think you've hit the nail on the head. There are plenty of pie-in-the-sky proposals that start from an axiom like "we should replace the centralized CAs with a distributed, decentralized system", but so far I haven't heard of anything that is robust enough to credibly improve on the current architecture.

Off the top of my head, the closest things I can think of to what you're asking for are:

- Namecoin (depends on the Bitcoin blockchain, with everything that implies e.g. huge computational overhead, tight coupling to the volatile Bitcoin economy, and potential attacks from mining cartels etc.)

- The PGP web-of-trust (places additional burden on users who have to decide whether many different intermediaries are "trustworthy"; totally impractical for John Q. Public, in my opinion)

- CACert.org (still relies on a centrally trusted certificate issuer, but crowdsources the identity validation stuff)

If you have better ideas, let's hear them!


Unless of course the WiFi operator is using a MITM certificate issued by a 'legitimate' CA.

When you use XPKI, you are trusting every single CA in the world to certify every single site in the world.

A sane PKI would limit what CAs are permitted to certify (e.g. TÜRKTRUST might be permitted to certify *.tr, but nothing else — not foo.br, bar.gov or baz.co.uk). But XPKI is neither a sane nor a secure PKI.


If a publicly-trusted CA issued MitM certificates for WiFi operators (specifically for domains that the WiFi operator doesn't own), they'd be out of business in a heartbeat. That would be a severe violation of the Baseline Requirements and various root program policies, and no browser vendor would continue trusting such a CA. Both HPKP and other pinning technologies included in e.g. Chrome would make this easily detectable.

This scenario might be plausible for targeted attacks by nation states, but not for something as simple as public WiFi.


> If a publicly-trusted CA issued MitM certificates for WiFi operators (specifically for domains that the WiFi operator doesn't own), they'd be out of business in a heartbeat.

The fact that Trustwave still has their CA certificate proves you wrong. They provided a sub-CA to one of their customers for the purpose of MITM. They got a slap on the wrist, promised they'd never do it again, and that was it. Your browser still trusts this company, which has demonstrated that it is perfectly willing to compromise the safety of the whole CA system for a sale.

It wasn't WiFi, but that's not really a relevant detail here. Selling certs for the purpose of MITM is.


This happened in 2012 and caused a number of changes to various root programs[1]. Mozilla, for example, is now forcing CAs to disclose any subCAs they sign. This is also from a time when the Baseline Requirements in their current form were not yet part of e.g. Mozilla's root policy (that only happened later in 2012). It also predates HPKP (and possibly the pinning mechanism Chrome used before that for their own domains; I'm not sure of the timeline). A lot has changed since that incident, and I have no doubt that the reaction to something like this would be different today, especially if a CA did it knowingly.

[1]: https://wiki.mozilla.org/CA:Communications#February_17.2C_20...


Ok, so you think BlueCoat will be out of business because of this? Surveillance and censorship is BlueCoat's business, and now they're a CA.

https://news.ycombinator.com/item?id=11781915


The question here isn't whether BlueCoat would go out of business, but rather Symantec. I don't like Symantec, nor do I like BlueCoat. That being said, we have no evidence that this certificate was used for misissuance. Symantec claims that BlueCoat never had access to the private key. The certificate was disclosed to Mozilla's root program. I have no idea why Symantec would agree to sign a certificate for an organization like BlueCoat - the PR damage has to be devastating. It would make even less sense for them to sign such a certificate to allow BlueCoat to use it with their MitM devices, because something like that is more or less guaranteed to be detected sooner or later, with HPKP and other pinning technologies. They would essentially kill their CA business, and I doubt any amount of money BlueCoat would be willing to pay would cover their entire CA business profits.


Not what you were replying to, but SSL does provide for MITM outside of the options offered in your response. I.e. https://mitmproxy.org


As a user of mitmproxy, I assure you it is not achieving MITM by exploiting any weakness of SSL. mitmproxy requires its certificate to be trusted by the target (i.e. installed in the root store) to silently intercept SSL'd traffic. Invalid certificate warnings will display otherwise. This is fully conformant to SSL's threat model.


No?

The link you posted is indeed a MITM proxy for SSL, but it will generate certificate errors, as my grandparent said. Users will know the MITM attack is going on (unless the website doesn't use HSTS and the attacker has stolen/bought a signing key from a CA registered in your device's trust store).


The correct thing to do in this case (if you are worried about security) is to allow the MITM SSL handshake and use a secure VPN for your Internet access. Third parties with the certificate keys would only be able to see encrypted traffic between you and the VPN provider.


I've seen captive portal redirection until login, but I've never seen an SSL MITM like this deployed officially at any airport. I would bet this guy logged in to somebody's pineapple: https://www.wifipineapple.com/


Captive portal redirection frequently causes SSL errors in my experience.


Southwest's in-air wifi does this. Pretty scary-looking warning message in Chrome; it won't even let you go to Google because of HSTS.


In all likelihood it's just trying to show you the Arqiva page.


It's most likely a captive portal. Try entering a non-secure site like captive.apple.com and see if it redirects to a wifi portal.


Could anyone elaborate why tracking protection was removed?

https://trac.torproject.org/projects/tor/ticket/17167


As mentioned in the ticket, Tor is opposed to filter lists: https://www.torproject.org/projects/torbrowser/design/#philo...

Firefox's “Tracking Protection” is one of these.


That's stupid and inconsistent; I read the ticket and the philosophy section. At the same time they bundle the third-party NoScript extension that blocks JS, suggest turning it up to a 'safe' level that breaks 80% of websites and a lot of functionality on others, and yet they remove core functionality that blocks tracking, improves security and privacy, and breaks fewer sites...


Presumably because it's normally off by default?


This support ticket, which is linked to from the ticket you linked to, appears to say that the reasoning is related to the semantics of the security slider:

https://trac.torproject.org/projects/tor/ticket/17898


Might be the only one here, but I'm looking forward to a day when Tor Browser is Servo based.


What would be the advantages for Tor Browser?


too many to list.


> But for a while now Disconnect has no access to Google search results anymore which we used in Tor Browser. Disconnect being more a meta search engine which allows users to choose between different search providers fell back to delivering Bing search results which were basically unacceptable quality-wise. While Disconnect is still trying to fix the situation we asked them to change the fallback to DuckDuckGo as their search results are strictly better than the ones Bing delivers.

Ouch


Tor/Mozilla still haven't gotten the FBI to reveal how they pulled off their most recent Tor attack, have they (not the CMU one, but the kiddy-fiddlers one)?

Many have put forth that those users were using Flash, plugins, or were convinced to download and execute something, but we don't know, do we? And until developers do, the Tor Browser bundle could have a vulnerability that compromises its main purpose.


Firefox exploit similar to the 2013 attack[0]

The Zerodium price list has a Firefox 0day at $30k[1] a pop, compared to $100k+ (today ~$1M) for Chrome.

The long term solution for Tor Browser is to build on Chromium + Containers/VM + Isolating proxy

[0] https://www.wired.com/2013/09/freedom-hosting-fbi/

[1] https://www.wired.com/2015/11/heres-a-spy-firms-price-list-f...


It wouldn't be particularly hard to sandbox Tor Browser on Linux and use namespaces such that the browser itself has limited ability to fingerprint its host or learn its host's IP.


> The long term solution for Tor Browser is to build on Chromium + Containers/VM + Isolating proxy

Surely text-mode gopher would also be more secure? (Only half-joking)


> Surely text-mode gopher would also be more secure?

I guess early-nineties, late-eighties network C code is wonderful for security! On the plus side, it leaves less to audit. And you could rewrite it in a secure language.


I often wonder why Tor is bundled within Firefox; wouldn't it be easier to release an app that changes the proxy settings / network routing at a system level?

That would let you use your preferred browser, rather than being forced to use the browser Tor chose to bundle.


Tor is not enough to ensure even mediocre privacy. Just proxying everything through Tor would give people a very false sense of security (and break UDP, among other things).

There are VMs that do that, but I still wouldn't recommend them to someone who doesn't understand Tor and networking. Even inside such a VM you'd still use Tor Browser.

It combines Tor and app-level security/privacy measures in an accessible way.


> Tor is not enough to ensure even mediocre privacy.

I'm curious why you believe this, outside a few watering hole attacks, and the (now-patched) CMU attack. Given a known-good entry guard, where is Tor broken?


The parent commenter is referring to application-layer attacks, which is why the Tor Project deprecated things like TorButton in favor of a dedicated Tor Browser, and why they discourage things like having a router that sends all traffic over Tor by default (because random applications will reveal trackable identifiers!).

https://www.usenix.org/legacy/event/leet11/tech/full_papers/...

https://www.torproject.org/projects/torbrowser/design/

There's probably a more specific statement from the Tor Project that I'm forgetting at the moment that sets forth the idea that you should only use Tor with Tor-aware client software (that controls what privacy leakages may occur at the application layer).


Not broken. Exit nodes. One has to assume all unencrypted TCP traffic will be recorded and potentially modified (MITMed).

Consider we have that system-wide Tor proxy instead of the Tor Browser bundle. Now exit node operators get all your TCP traffic. It's fine if one knows what they're doing[0], but if that were the default way for the average user to get on Tor? A privacy and security disaster, IMO. Not only would we fail to provide even mediocre privacy, we'd actually endanger people.

[0] and you've got proper stream isolation, which I'm not sure how possible it is system-wide with unmodified software
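On the stream isolation in [0]: Tor does expose this per SocksPort; a torrc sketch (the option names are from Tor's manual, though exact defaults vary by version):

```
# Give each application its own SOCKS port so their streams never
# share a circuit, and further isolate streams within a port.
SOCKSPort 9050 IsolateSOCKSAuth
SOCKSPort 9150 IsolateDestAddr IsolateDestPort
```

Applications that present different SOCKS credentials, or that talk to different ports, then get separate circuits; the catch is that unmodified software generally doesn't know to do this, which is the "not sure how possible it is system-wide" caveat above.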


Tor itself is not broken, but all it does is anonymize your traffic, and it therefore does not protect against exposure that comes from other sources. It does not prevent things like browser fingerprinting (and other fingerprinting methods; see other comments in this thread), vulnerabilities caused by various content types (Flash/JS/Java), and a host of other issues an inexperienced user would not be aware of.


I suggest you look into the fact that browsers now cater to web app capabilities before they cater to user privacy.


It is likely that some other application on your system will phone home and in essence make you a named user... think email, iCloud synchronization, anything you do on Windows 10, etc.

In theory, by bundling Tor with a browser that has sane defaults and then sandboxing it from the rest of your applications, one can isolate specific communications to Tor with lower potential exposure.


There is so much more to staying anonymous in the web than just the transport layer. With all the tracking cookies you have in your browser, it doesn't matter if you use Tor or not. There's an array of privacy-enhancing extensions shipping with the browser bundle and you generally use a clean browser profile with it as well.


Tor is not bundled with a browser. Tor Browser is, but Tor Browser is not Tor, it's a browser with Tor built-in and extra privacy features, such as NoScript.


The problem is that many user settings and add-ons would defeat the privacy protections in Tor. For example, Javascript and Flash, as well as many other browser features.

By bundling a separate browser, Tor can provide sensible defaults to protect users.


The Tor Browser Bundle doesn't ship the standard version of Firefox! Several privacy-enhancing patches are applied to the ESR release of Firefox for TBB releases. This is because the Mozilla Foundation refused to accept the Tor Project's commits to enhance privacy in the browser.

There are also tickets in Tor's Trac issue tracker for Chrome. Once again, using Chrome securely would require several patches to its source.


> This is because the Mozilla Foundation refused to accept the Tor Project's commits to enhance privacy in the browser.

Actually, I'm pretty sure this is untrue. I'm reasonably certain we're actively working with the Tor Browser developers to get their patches merged into core (but preffed off) so that they don't have to maintain a stack of patches on top of Firefox.

(Disclaimer: Mozilla employee)


My guess is that it makes it harder to fingerprint people. If I remember correctly, the advice is to not resize the browser window, etc.


This.

The default Tor Browser ensures most Tor users look identical, so malicious services cannot fingerprint individual users. It disables a small group of Firefox features which make fingerprinting extremely trivial (RPC chat, GPU access).

In most cases of people being de-anonymized on Tor, they're normally running an alternative browser or an out-of-date Tor Browser.


> out-of-date Tor Browser.

Isn't everyone using Tor today using tomorrow's out-of-date Tor Browser? Meaning that traffic today can be recorded and analysed for vulnerabilities tomorrow.


No.

It's really hard to open an RPC chat session on packet logs, or to request GPU diagnostic information after the connection is terminated.

Most fingerprinting isn't just write/response times; latency is a bad indicator of individuality. It's a lot more in-depth and requires actively speaking to that browser and noting what features it does and doesn't present, how those features are unique, and how long certain tasks take to process.

Each individual piece of data is small and common (though some browser features make identification trivial), but building up several can give you some confidence in an identity.
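That build-up of small signals is often quantified as bits of identifying information; a back-of-the-envelope sketch (the attribute frequencies are made up for illustration):

```python
import math

def identifying_bits(probability: float) -> float:
    """Bits of identifying information carried by an attribute value
    observed with the given frequency (self-information, -log2 p)."""
    return -math.log2(probability)

# Hypothetical attribute frequencies within the anonymity set:
attributes = {
    "screen size": 0.25,   # shared by 1 in 4 users -> 2 bits
    "font list": 0.05,
    "timezone": 0.5,
}
total = sum(identifying_bits(p) for p in attributes.values())
print(round(total, 2))  # combined bits, assuming independent attributes
```

Roughly 33 bits suffice to single out one person among the world's population, which is why even a handful of "small and common" attributes can add up to a unique fingerprint.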



Maybe check out Whonix


There's a Tor gateway and a version with two VMs.


Also, you won't be able to access .onion sites.


I don't think this is true; any SOCKS4a-capable browser should work. See https://www.torproject.org/docs/faq.html.en#AccessHiddenServ...


You don't need Tor Browser to resolve onion sites. Onion sites work fine through the SOCKS proxy; this is often done with IRC clients.


How do you deal with corp computers that quarantine tor.exe with something like McAfee? I can copy in a renamed version of tor.exe but I'm not sure what file(s) to change to use the new filename.


Boot to a live CD/USB based OS.

Keep in mind that even outside of a corporate environment, Windows is not private.


In an alternate world where Tor is the dominant browser, how would a hypothetical Google work? (i.e. autonomous content discovery?)


There's nothing hypothetical about Tor search engines. A small list can be found here: http://www.thehiddenwiki.net/deep-web-directories-search-eng...


Tor Browser... You mean Loki?


It's still super slow, as in the UI (especially the scrolling), on OS X on a Retina MacBook (Early 2015). I wonder if it's my DownThemAll add-on.


You definitely shouldn't install any add-ons.


> OS X on a Retina MacBook (Early 2015).

Why do Apple users do this? Do you really not know what's under the hood of your MacBook? Honest question.


Because that's how Apple indicates the 'version' or 'revision' of the model: instead of just numbering, they give the 'produced from' date. Kinda like how car manufacturers distinguish different versions of a given model (e.g. the 2012 Tesla Model S and the 2016 Tesla Model S).

Also, since you can't tell from looking at the laptop (they are all identical since the first retina macbook from 2012), you must go to the 'about this mac' menu, which gives you the model name and revision date. In my case this is "Macbook Pro (Retina, Mid 2012)".

When I used to build my own PCs, I could tell you the brand, model, revision, etc. of every component inside, but since just about every internal part of an Apple laptop (motherboard, SSD, memory, etc.) is custom fabricated, there is no point in trying to remember what's inside. I know my laptop has the 'second-fastest' i7 I could choose at the time, and the 16GB memory option, but don't ask me which specific i7 or which DDR type I have; I don't know, and honestly I no longer care.


Are you referring to the omission of the exact specs? I don't think there's much variation if any among the first-gen 12" Retina MacBooks, and with Macs usually saying the size+year+class is enough to get an idea of the specs or look them up.

Or are you saying it's because of weak hardware? Other, arguably more taxing software runs fine, but TorBrowser suffers from an uncanny slowdown.


Isn't a lot of this longer routing of packets?


They're referring to application performance, specifically UI, not network latency.


Correct. I gave up trying to figure it out, wiped my settings, redownloaded TorBrowser and it is fine now, even after reinstalling my add-ons.


Too late to edit, but I agree with the other poster about not installing add-ons if you're serious about privacy. However, I just use Tor Browser as a quick proxy to bypass my network's blacklists.


I have absolutely no idea.

I can tell you my last computer was a 4 x 1GB Kingston HyperX on a Q9450 Core 2 Quad with a Gigabyte GA-X48-DQ2 motherboard, and it's been a few years.

I only know my Mac has 16GB of RAM, and that's part of the beauty of it.


Sure I do. So, likely, do you and most other people reading it, or at least have a general idea. So what's the point of typing out my specs? Should I also type out the specs of my Nexus 6P or just tell you I have a Nexus 6P?


I don't memorize the specs in Apple products, so I actually have no idea what's under your laptop's hood.

I can tell you that my thinkpad is an i7 with 16GB of memory, so if it ran slow on my machine I'd blame the browser.

My point was that you spend just as many keystrokes to tell the reader it's a late 2015 model macbook pro as you would to type out "i5/8GB" or whatever, but the phrase "late 2015 macbook" means literally nothing to some readers.


For worse (or better, but not really), phones are a lot more customized than laptops. A Nexus 6P that uses a Snapdragon 810 might have it set up differently than an HTC One M9. I remember some reviews talking about how the 6P wasn't throttling as hard as other 810-powered phones.

An i7-6700K will be and perform the same as any other 6700K, save for poor cooling and aftermarket overclocking.

Unless the model does something special with the components (it shouldn't), just typing out the components is what matters, not whether (or when) Acer, Apple, or ASUS designed the chassis.


That's an official model name.


Because it makes more sense to specify a single model/year than to exhaustively list all the under-the-hood configurations, which others can quite easily look up?


That makes no sense. He's only got one laptop; why would he exhaustively list all the configurations?


It might be the name of his model, mine is: MacBook Pro (Retina, Mid 2012). It might tell you more than "2.6 GHz Intel Core i7"


It's one of the most common laptop brands. Specs are easily looked up, and even without looking it up this tells people it's a relatively new laptop, so specs are probably at least half decent.


It has a Core M though.


I don't see a problem here. The exact model of CPU, memory and other chipsets are irrelevant to me. As long as I can work comfortably on it. I also don't care what's under the hood of my car. I'm technical enough to find out and study it and discuss it, but I'd like to focus on other things. The Macbook is just a tool to me. And one I'm very satisfied with. So I'm very happy there are only a few models which can accurately be referenced by using the model and year.


> I also don't care what's under the hood of my car.

Well, if you're a Mac user then presumably the hood of your car is welded shut.


This might surprise you, but very few people even look under the hoods of their cars. This even includes non-Mac users.


There used to be a free software advocacy slogan explicitly asking "would you buy a car with the hood welded shut?".

An intuition behind that analogy, I think, is that people will appreciate the importance of being allowed to do car maintenance even if they don't do it: that non-car-tinkerers will still see the value in having a hood that they can open. For example, they'll want to be able to choose others to perform the maintenance or not be dependent on the manufacturer in an emergency. Maybe sufficiently Apple-like car companies will succeed in changing that intuition or have already changed it quite a bit?


When you're discussing performance, then the hardware that's ostensibly underperforming is relevant.

To use your analogy: If our cars are the same model year but mine is faster and you're wondering why or just complaining about it, maybe you'd care what's under the hood then.


Why would it matter what's under the hood? I care much more about what I can do than what's inside.


Because we're specifically talking about performance issues, and the issue could very well be what's inside. In this case, the model of computer is given so that we know what's inside.


It's a shame Firefox still lacks sandboxing even in version 45, even though they've been promising it for version 42 or 43. I hope Mozilla does end up making a new browser in Rust, so Tor could use a safer platform.



