I promise there are people who can't figure out how to do it.
And again, the point of the lock on the door where you keep the porn is not to be robustly impenetrable to entry by a motivated 16 year old with a sledgehammer, it's only to make it obvious that they're not intended to go in there.
Depends on how much people want the hidden content. In Eastern Europe, regular people, not tech whiz kids, know how to use torrents and know about seed ratios, etc. At least that was the case circa 5 years ago. People can learn when the thing matters to them.
Regular people want to get things done, the tinkering is not a goal for them in itself and they gravitate to simple and convenient ways of achieving things, and don't care about abstract principles like open source or tech advantages or what they see as tinfoil hat stuff. But if they want to see their favorite TV series or movie, they will jump through hoops. Similarly for this case.
I disagree. Giving fake info adds noise to the mechanism and makes it useless. Ultimately I'm inclined to believe that privacy through noise generation is a viable solution.
If I ever find some idle time, I'd like to make an agent that surfs the web under my identity and several fake ones, but randomly according to several fake personality traits I program. Then, after some testing and analysis of the generated patterns of crawl, release it as freeware to allow anyone to participate in the obfuscation of individuals' behaviors.
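A minimal sketch of the scheduling half of that idea, with hypothetical persona names and interest weights (the actual crawling, timing jitter, and browser-fingerprint handling are the hard parts and are omitted here):

```python
import random

# Hypothetical personas: each identity gets weighted topic interests,
# and the agent samples a browsing schedule from them so the real
# profile is buried in plausible-looking noise.
PERSONAS = {
    "real": {"tech": 5, "news": 3, "cooking": 1},
    "fake_gardener": {"gardening": 6, "weather": 2, "news": 2},
    "fake_day_trader": {"finance": 7, "news": 3},
}

def crawl_schedule(personas, n_visits, seed=None):
    """Return a shuffled list of (persona, topic) pairs, one per simulated
    page visit, sampled from each persona's interest weights."""
    rng = random.Random(seed)
    visits = []
    per_persona = n_visits // len(personas)
    for name, interests in personas.items():
        topics = list(interests)
        weights = [interests[t] for t in topics]
        for topic in rng.choices(topics, weights=weights, k=per_persona):
            visits.append((name, topic))
    rng.shuffle(visits)  # interleave so ordering doesn't give personas away
    return visits

schedule = crawl_schedule(PERSONAS, 30, seed=42)
```

Shuffling matters as much as sampling: if the fake traffic came in neat per-persona bursts, separating the identities back out would be trivial.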
> You might want to take a look at differential privacy
Differential privacy is just a bait to make surveillance more socially acceptable and to have arguments to silence critics ("no need to worry about the dangers - we have differential privacy"). :-(
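For readers who haven't met the term: the core primitive of differential privacy is the Laplace mechanism, which adds calibrated noise to a query answer. A minimal illustrative sketch, stdlib only, not taken from any real DP library:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, seed=None):
    """Return true_value plus Laplace(scale = sensitivity / epsilon) noise,
    the textbook mechanism for epsilon-differentially-private numeric queries."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    # Inverse-transform sampling of the Laplace distribution
    # from a uniform draw u in (-0.5, 0.5).
    u = rng.random() - 0.5
    return true_value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# E.g. a count query (sensitivity 1) answered under epsilon = 1:
noisy_count = laplace_mechanism(100, sensitivity=1.0, epsilon=1.0, seed=7)
```

Smaller epsilon means more noise and stronger privacy; the criticism above is about how the term gets used in marketing, not about this math.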
It may oftentimes be trickier than that; the content is often mixed, of course. My 10 y/o hit me with a request yesterday to play Among Us, where the age verification system wanted my full name, address, email, AND the last 4 digits of my SSN. I refused.
The bad actor still gets ROI, eg 'paid', for another bit of user data.
Making the overall system less useful is good. But not allowing a company to profit is paramount, and giving fake info still allows them to profit. E.g., even with fake info, many metrics on a phone are still gamed and profitable.
That's why they're collected, after all. For profit.
> I disagree. Giving fake info adds noise to the mechanism, makes it useless.
There's no such thing as useless info. Companies will sell it, buy it, and act on it regardless of how true it is. Nobody cares if the data is accurate. Nobody is checking to see if it is. Filling your dossier with false information about yourself won't stop companies from using that data. It can still cost you a job. It can still be used as justification to increase what companies charge you. It can still influence which policies they apply to you or what services they offer/deny you. It can still get you arrested or investigated by police. It can still get you targeted by scammers or extremists.
Any and all of the data you give them will eventually be used against you somehow, no matter how false or misleading it is. Stuffing your dossier with more data does nothing but hand them more ammo to hit you with.
FedEx, I believe, has stated it will refund all consumers who paid them the tariffs, which it then paid to the government. Nothing yet about the fees consumers also incurred to pay the tariffs, but there are at least two class actions already filed on this subject, IIRC.
Grocery stores track their customers very extensively and cash purchases are fairly rare. I'm very confident that Costco, for example, knows everything that every member has bought from them since the tariffs started.
Indeed. And the concept of passing any refund on is just untenable. My example is to highlight how unreasonable such an expectation is.
And while this specific tariff situation is silly and annoying, it's been going on forever. There were cases of tariffs on lumber from Canada under presidents of all stripes. Some were fought and won in court, and nary a person asked "where is the refund for the consumer?"
It could be, but are vendors actually upgrading kernels along with firmware updates? In my experience it's more like, ship 5+ year old kernel and then forget it forever.
> It could be, but are vendors actually upgrading kernels along with firmware updates?
Certainly the big guys like IBM/RedHat are putting effort into maintaining their legacy trees.
> In my experience it's more like, ship 5+ year old kernel and then forget it forever.
I think that's the case with smaller vendors, like the teams that produce a custom kernel for the newest ARM single board kit. Once most of their inventory is sold they have little incentive to dedicate engineering bandwidth to updates. (And there's always the community effort to pick up the slack.)
So long as they keep up with patches that can be fine, but newer kernels also have useful feature improvements. If nothing else, performance tends to improve over time.
In practice, upgrading the kernel can easily cause performance regressions and multiple other issues (reduced battery life), so for an OEM there's a lot of risk for zero reward in doing that.
After all, they're on the hook for not breaking users' already-working devices, and they gain nothing by risking lawsuits and recalls.
I'll grant that changes leave the possibility of regressions, but that's true for minor patches too, so you already need a lab set up to catch those regressions. And if you've got a lab set up to catch regressions, and engineers who can fix them, then you might as well take the bigger upgrades too.
A couple times a year I get the joy of reading Kernel Newbies release notes for new kernels. And just being so delighted at all the amazing improvements happening. So many won't affect me or won't be big changes. But often there are amazing new capabilities and options too that do entice. Performance wins keep landing. Compatibility with other devices expands. Improvement is ongoing & continual. https://kernelnewbies.org/Linux_6.17
It just takes my breath away, is existentially scary, to hear folks be OK with being totally stuck in place. With devices that use open source but which are so fundamentally dead, tuned out from that amazing growing goodness; shovelware devices thrown over the wall that never improve while time marches forward.
This would be totally unacceptable for anyone on a computer. But somehow on consumer devices, it's just expected and accepted that everything is just stuck where it is, that it's ok to be so so so much worse than what everyone else is doing. It sounds like such a miserable existence, and I think it's just gobsmacking that such apathy & pass giving gets a break.
Will some things break sometimes? Honestly I think that concern is way overblown, but yes, a tiny number of things will break, especially at first. I tend to think the risk is generally quite small. And often in engineering, the way to deal with your hard parts is to keep doing them.

If anything, I expect the risk of staying where we are is huge: we have thousands or perhaps millions of different kernel trees out there, bespoke special magical trees for various devices, forked from magical point-in-time vendor BSPs, with special magical changes that we have to keep perpetuating while integrating important fixes. This all sounds ridiculously unstable and risky, and inordinately costly to maintain. It seems reckless and dangerous to stick to this absurd course, to do everything so badly, at such human cost.

Getting the fricking drivers upstreamed, maybe getting rid of this ridiculous anti-support GKI layer that apparently does no good and only makes it easier to be negligent and bad at updating: that would reduce risk. It would spare society from having to test these thousands or millions of kernels, and let us create a known, predictable focus for our energies and tests, rather than this madcap batshit infinite vendor diversity that GKI has only sort of tamed.
This is such a shit situation to be in, and Android brings shame to computing, and if it's going to be so bad at Linux, it ethically doesn't deserve to have Linux. It is breaking the pact of what Linux can and should mean, and betraying consumers, by letting itself be a product that rots into obsolescence like this. This is a techno-spiritual sin, and it is a mortal sin.
Google found, on their 100,000+ machine Linux desktop fleet, that sticking with "stable" releases and doing major upgrades periodically was far more work than rolling releases.
Here: 20 years of desktops and laptops, basically installing the latest kernel ASAP, on Gentoo then Arch. I did break stuff, especially on Gentoo, a lot... but out of all that, I was maybe hit once by a kernel regression?
I don't think people realize how long it takes for the kernel to eventually catch up with your hardware.
My one-year-old Framework laptop motherboard STILL doesn't have properly implemented USB-C PD APIs in the kernel today. Imagine if I took a 5-year-old kernel?
That is news to the millions of linux users who upgrade their kernels regularly, and suffer zero consequences.
It's cowardice and FUD, in my view, to cling to such old versions. It's just bad practice and bad engineering, and a crock of scary tales to make other people (and the people doing this) think their bad engineering and absurd, self-injuring, time-wasting practices are good, actually.
A lot of device makers need to change their expectations around what qualification means; for a lot of systems and devices, requalification is such a pain that a kernel upgrade becomes a daunting task.
If you look at what generates cash, it's corp to corp. That's across most industries. While there are markets that are consumer mostly, LLMs have immense and enormous business facing revenue potential. The consumer market is a gnat in comparison.
I imagine MFM drives from 1985 might be a bit different from drives that are billions of times more data dense today. Back then, the drive didn't even control track width, the controller card did. And it was exposed to the OS.
I remember turning my "20MB", yes MB, drive into a 30MB drive by messing with the track width. Of course, this was the time when people had Commodore 64 300 baud modems and would overclock them to get 450 baud out of them.
In my computer club, we wrote a little piece of software to see which of us could get the highest bandwidth on a modem, one was even capable of just over 500baud!
After ranking, we all agreed to "trade down", so the guy with the fastest modem swapped his with the owner of the local Punter BBS. Everyone else traded so we still had the same ranking. That way, the BBS would always be able to support everyone at max speed, and everyone would still be "lucky" in terms of "next fastest modem".
I did read it, but calling the "controller" MFM is so ingrained that I literally thought it was referencing the standard, which I think was ST-506 (this was in 1983, so the timing seems right?).
E.g., I literally thought of the controller and the encoding as different things, both separately called MFM. Ah well, it only took 40 years to discover otherwise.
I think this is great, if there are no ramifications when skilled people unlock it.
There's just too much hacking and malicious behaviour going on to allow the uneducated masses to have root on a phone. I've seen so many people just not understand the outcome of their actions. You'd get people rooting because some shady app lied about why and just wanted control.
And we don't need more botnets. And it's why banks sometimes throw a fit.
So if a recompile does the trick, with no downside, then it'd be fine.
Lots of freedoms have downsides that are outweighed by the upsides, I'm absolutely unconvinced that the line lands on the far side of allowing you to control your phone.
Profit does not drive all. There are other valuable things besides money. A healthy society must regulate shortsighted profit-seeking and power-seeking. That's what these conversations are for.
Claude, maybe, is a junior dev.
Not a release engineer.