Seems to me we're going to have to let the anti-encryption mob have their way until things go wrong—bigtime. No amount of expert advice will convince them until they witness firsthand the negative consequences of weakening encryption.
It's only after some highly newsworthy disaster occurs, such as a child abduction, or a sex scandal involving a high-profile politician comes to light, that the lay public will get the message that weak encryption is effectively no encryption.
In the meantime criminals will be early adopters of more sophisticated messaging such as steganography.
Would be nice, but you know they'll carve out exceptions for themselves or use "unauthorized" messaging channels regardless with no consequences. It is _always_ "rules for thee, not for me" with politicians.
I think there's no turning back with laws like this. What has been lost is lost. In France a lot of public databases were leaked recently. That cannot be undone.
> until they witness firsthand the negative consequences of weakening encryption.
They won't be affected.
The hitherto invisible but very real wall between social classes is just going to become more visible for "First World" civilians the way it's been in "lesser" countries for decades already.
Actual "criminals" have always been able to get around every restriction put in place since the dawn of civilization; it's just the common folk who get trodden on and kept in their place.
In most cases I think the revelation of a scandal involving a high-profile politician would be a good thing. (That is, better than it remaining secret.)
To be fair, the EU governments led the way to an unencrypted future with TETRA and the broken TEA1 encryption scheme. They're just giving back freedom and openness to the people now. /s
"Security researcher Ross Anderson reported in 1994 that 'there was a terrific row between the NATO signal intelligence agencies in the mid-1980s over whether GSM encryption should be strong or not. The Germans said it should be, as they shared a long border with the Warsaw Pact; but the other countries didn't feel this way, and the algorithm as now fielded is a French design.'"
The murder of French lawmakers is something we frequently celebrate here in France.
Your profile suggests that you’re in Israel, where groups like the Irgun are celebrated as national heroes. Violent struggle against perceived oppressors shouldn’t be an unfamiliar concept.
You are correct, the Irgun are credited along with two other organisations as being the physical protectors of our people during a time when we were more typically known for being slaughtered. However, very few people here are extremists that celebrate the Irgun. Quite the opposite, the Irgun is famous internationally because they were the violent exception to the norm.
Israel was founded by leftists, only in the late 1970s did Israel turn to the right. The Irgun was certainly not representative of those values which are typically associated with our people.
Fascinating development. That means much, much greater control over ink deposition.
No doubt a potential worry for currency producers. Inkjets that have control over the physical build up of ink structure would pose an even greater threat of counterfeiting.
No doubt mints can introduce countermeasures to detect such threats but I'd suggest this tech (if perfected) will likely be too good for humans to detect a forgery at a glance. Reckon machine readers will become the order of the day, that's if physical paper/plastic currency continues to exist.
Stats—the number killed per annum through failure—will determine the true effectiveness of this change.
In the meantime, I'll continue to rely on the physical movement of full atoms for my braking. Electrons without an accompanying nucleus tend to be more fickle.
I'd agree with this assessment. Moreover, if developers were to stick with the eminently satisfactory CUA (IBM's Common User Access) interface standard and further regularize that then things would be much easier. https://en.wikipedia.org/wiki/IBM_Common_User_Access
If developers want to experiment with various UI configs then let them but keep a CUA in the background that can be called upon by machines and humans alike. (Unfortunately, ergonomics has never been a strong point for developers.)
Ah how things have changed. When I was learning electronics we mainly dealt with radio and TV circuits and just about the first lesson one learned was to keep leads short (reduce unwanted inductance) and use decoupling capacitors everywhere.
I recall, some years later, a young graduate engineer coming into my office with a rather involved circuit of 30-40 TTL ICs, complaining that he'd double-checked the circuit and it still didn't work. I took one look at his device, went to the drawers of capacitors, and handed him a handful of 0.1uF ceramic caps, telling him to put them between each IC's power-supply pin and ground. He did, and to his amazement the circuit worked immediately.
He stood in amazement that I should have such insight so as to fix the problem at first glance.
How such critical knowledge can get lost in university training these days just amazes me.
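A rough charge-balance estimate shows why those local 0.1uF caps rescue so many "broken" TTL boards: the cap supplies the switching transient's charge locally instead of forcing it down centimetres of inductive supply trace. The numbers below are illustrative assumptions, not measurements from the story:

```python
# Back-of-the-envelope: supply droop a local 0.1 uF decoupling cap
# permits during a fast switching transient (illustrative values).

C = 0.1e-6      # decoupling capacitance, farads (0.1 uF ceramic)
I = 0.05        # transient current demand, amps (assumed 50 mA spike)
dt = 10e-9      # duration of the current spike, seconds (assumed 10 ns)

# The cap supplies the charge locally, so the rail only droops by:
dV = I * dt / C
print(f"rail droop with local cap: {dV * 1e3:.1f} mV")  # 5.0 mV
```

A few millivolts of droop is harmless; without the local cap, the same spike across even a few tens of nanohenries of supply wiring produces rail bounce of hundreds of millivolts.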
My university made us use really crappy power supplies and dev boards. Nothing worked unless you first put a large bulk capacitor on the power supply's output, and small capacitors close to the components.
Also, I got bitten by parasitics in capacitors very early in my career: capacitors of different nominal values will resonate with each other and effectively kill the decoupling network at a specific frequency (resulting, for me, in an amplifier with a nice hole in its frequency response).
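That anti-resonance falls out of a quick calculation. A real capacitor has equivalent series inductance (ESL), so it self-resonates at f = 1/(2*pi*sqrt(L*C)); the ESL figure below is an assumed, typical value for a small SMD ceramic, just to sketch the mechanism:

```python
import math

# Self-resonant frequency of a real capacitor: above it, the part
# behaves as an inductor, which is how mixed-value decoupling
# networks develop anti-resonant "holes".

def srf(C, L):
    """Self-resonant frequency of a cap C with series inductance L."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

ESL = 1e-9                      # ~1 nH ESL (assumed, typical SMD)
for C in (100e-9, 10e-9):
    print(f"{C*1e9:5.0f} nF -> SRF {srf(C, ESL)/1e6:6.1f} MHz")

# Between the two SRFs (~16 MHz and ~50 MHz here), the 100 nF part
# (now inductive) and the 10 nF part (still capacitive) form a
# parallel LC tank: an impedance peak, i.e. the anti-resonance
# that punches the hole in the decoupling network.
```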
Incidentally, in my post below on the MIT RadLab series I mention Vol 23. On p183 parasitic oscillation is mentioned. Also, I recall when working in the now defunct RCA prototype lab, one of the main cure-alls for parasitic oscillations was to place a ferrite bead on a transistor lead (between it and the PWA). It often worked wonders.
You learned when analogue circuitry was the norm. I learned when digital circuitry was simple enough that you could readily take something apart and understand it.
Now, EE courses often start with CAD, simulations, and digital electronics, and you end up with people building ziggurats atop an ocean of incomprehension.
It’s exactly the same thing with software.
I don’t scorn people for this, rather I see myself as fortunate for having learned in a time when the more fundamental knowledge was still worth learning - and that’s the rub - for a vast majority, it simply isn’t worth the time or energy to explore the full stack, when there’s so much to learn atop it.
"You learned when analogue circuitry was the norm. I learned when digital circuitry..."
What's not taught properly these days is that ALL electronics is analog at the physical/circuit level.
For you digital types that's OSI Model Layer 1 — Physical layer (look it up on Wiki). Nothing in electronics works unless that's working properly—ICs, tunnel diodes, transistors, inductors, resistors, capacitors, cables and antennas are all analog devices at that level. That includes the heart of the most advanced digital ICs. For example, the upper clock speeds in processors are limited by transit times/electron mobility, inter-electrode and stray capacitances, unwanted inductance, etc.—all of which are analog effects and they must be accounted for.
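Those analog limits are easy to put numbers on. Treating an interconnect as a first-order RC (with assumed, illustrative values of driver impedance and stray capacitance) shows how parasitics alone cap the achievable edge rates:

```python
import math

# Even "purely digital" signals are limited by analog parasitics.
# Model an interconnect as a first-order RC low-pass.

R = 50.0       # ohms, driver output impedance (assumed)
C = 5e-12      # farads, load plus stray capacitance (assumed)

tau = R * C                       # RC time constant
t_rise = 2.2 * tau                # 10%-90% rise time of an RC step
f_3db = 1 / (2 * math.pi * tau)   # -3 dB bandwidth of the same RC

print(f"rise time ~{t_rise*1e9:.2f} ns, bandwidth ~{f_3db/1e6:.0f} MHz")
```

Half a nanosecond of rise time from just 5 pF of stray capacitance; double the capacitance and the edge slows accordingly, which is exactly the Layer 1 physics the digital abstraction hides.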
Like it or not, the physical analog world is alive and well! The Noughts & Ones Brigade unfortunately seems to have forgotten that fact.
> you end up with people building ziggurats atop an ocean of incomprehension.
Everyone does. There's probably a layer below for everyone but the most theoretical physicists. I don't know where the leaks in electronics engineering's abstractions are, but I'm pretty sure they exist.
"…I never was someone who could stop asking “why?”"
When a kid of about four I found a pair of WWII headphones and took them to my father who pulled out the iron diaphragm and showed me magnetism at work—somehow some magical force was pulling the diaphragm back into the headphone with seemingly nothing in between. Absolutely fascinated, I wanted to know what this invisible 'magic' was. Many decades later every time I look at my fridge magnets I still ask the same question and I don't believe I'm much closer to the truth!
Sure, there are the simple answers everyone's taught, then there's QFT but even that doesn't tell me exactly what's going on. And why does alpha have the value it does, and why exactly does c = 1/(μ0ε0)^1/2? Not knowing and not being able to figure these questions out is, at times, infuriating.
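That last relation is at least easy to verify numerically, even if arithmetic doesn't answer the "why":

```python
import math

# Check c = 1/sqrt(mu0 * eps0) with the conventional values of the
# vacuum permeability and permittivity.

mu0 = 4 * math.pi * 1e-7       # H/m (the classical defined value)
eps0 = 8.8541878128e-12        # F/m (CODATA value)

c = 1 / math.sqrt(mu0 * eps0)
print(f"c = {c:.0f} m/s")      # ~299792458 m/s
```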
For me, solace of sorts can be found in engineering—I can build an electronic circuit and end up with a tangible working device. On the way I'll curse my electrons for making so much noise that they sound like ball bearings rattling around in an empty oil drum but I'll eventually calm down and apply Johnson–Nyquist to shut them up (well, a little bit anyway).
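Applying Johnson-Nyquist is mostly arithmetic; a minimal sketch, with assumed room-temperature values for the resistance and bandwidth:

```python
import math

# Johnson-Nyquist thermal noise: v_rms = sqrt(4*k*T*R*B).
# R and B below are illustrative assumptions (1 kOhm, audio band).

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # kelvin (room temperature)
R = 1e3            # ohms
B = 20e3           # hertz, measurement bandwidth

v_rms = math.sqrt(4 * k * T * R * B)
print(f"thermal noise: {v_rms*1e6:.2f} uV rms")  # ~0.58 uV
```

Narrowing the bandwidth or cooling the resistor is how you "shut the electrons up": the noise voltage scales with the square root of both.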
Yes, exactly those questions, and others of the family. Things we can describe yet cannot yet explain - and “yet” is doing some heavy lifting there. Perhaps some things cannot be explained from our reference frame.
I got sucked into the infinite perspective vortex from the cosmology angle - a grandmother with a vast collection of pulp sci fi and clear skies over her canal boat. And yes, magnets, what is this devilry and why will nobody tell me how it works?
Upon discovering said imponderables I moved to the woods instead - building infrastructure of all varieties from the ground up, playing with hydropower, that sort of thing - and of course EE gets jammed in all over the place, and when I can get away with something simple and analogue, I do.
I build things because the alternative is spending tracts of time staring into the abyssal fact that explanation is always ultimately internal to the system being explained.
> How such critical knowledge can get lost in university training these days just amazes me.
It will probably have been taught... but very briefly, before going back to analysing circuit schematics, where the connections between components show no resistance or inductance and the capacitances of two parallel capacitors simply sum.
This is why lab exercises are important. I remember first building some actual TTL circuits on bread board, I learned very quickly that this whole digital stuff is a lot uglier and messier than on paper or in the simulator.
With sharp rise times synced to a common clock, even after soldering in a whole bunch of capacitors you can still stick a probe pretty much anywhere and see switching spikes all over the place, from power rails to completely unrelated signals that are supposed to be stable. Using actual TTL, there was another funny lesson in what that weird "fanout" value in the datasheet meant.
A similar lesson I learned that way (and a very memorable one :-)) was about flyback diodes.
Ah, but that may well be because of your scope probe's leads! The sharper the edge, the more likely that will happen. That's what those shitty little springs that come with your scope probe are for: you disconnect the ground wire and fit the spring onto the naked probe pin around the ground collar, then use the pin to touch the signal and the spring to reach the nearest ground. Presto: a clean signal (or at least a much cleaner one). Also, make sure to tune your probe (that's what the little plastic screwdriver with the metal tip is for; there's a small trimmer in the probe you can reach through a hole, and it's critical at high frequencies), and avoid probes with a switchable 1x/10x setting like the plague: over time the switches go lame and then you'll be chasing all kinds of weird gremlins.
This is just reminding me of the time I played with an oscilloscope, touched the probe against my finger, and found my body was an antenna picking up the mains frequency.
(I imagine analogue RF board-level simulation is a lot more expensive than digital-logic board-level simulation. Might have been impractical way back when, such that we only used to have the digital-logic kind. But we certainly have both kinds today.)
I've struggled to find a proper introductory guide to stuff like this. Moving from pre-made Adafruit boards to my own PCBs was very tough to navigate; every guide I came across assumed you knew all sorts of stuff that the EEs writing them probably committed to deep memory decades earlier.
I found Phil's Lab content [1] [2] indispensable for just this. Phil is a great communicator and gives in-depth explanations, so I didn't just watch most of his YouTube videos, I also bought his mixed-signal course and was very happy with it.
Phil also recommends this lecture in one of his videos [3], which is still one of my all time favourite lectures ever.
I have an MSEE from a top university (from 20 years ago), and this topic unfortunately is not really taught. The theory and analysis are taught, but the practical implications were not. I connected the dots in my first job out of school, where some very talented greybeards taught me how the real world works. Which brings me to my point: EE really is a trade. It takes schooling at the beginning, and in most cases a degree or two, but there is critical knowledge you only learn in the real world after school, and there are levels analogous to apprentice, journeyman, and master.
I can see how that happens when people come at things from a conceptual digital side first.
It probably doesn't help when you have a circuit diagram that, while topologically correct, doesn't show the relative positioning of components. The first time I saw all the decoupling caps rendered in a single chain on the side of the diagram I was mightily confused. It seemed like utter nonsense until I realised where they actually went.
"The first time I saw all the decoupling caps rendered in a single chain on the side of the diagram I was mightily confused…"
If you've read my other comments here you'll realize I'm concerned that these days EE training doesn't place a strong enough emphasis on shielding, ground loops, decoupling and such that it ought to. For any electrical/electronic engineer these are critical concepts.
By way of stressing that I'd like to take a sojourn into history and refer you to probably the greatest set of electronic engineering books ever produced: the MIT Radiation Laboratory Series — a massive 28 volume set written nearly 80 years ago to document electronics and microwave/radar research done during WWII.
Anyone seriously interested in electronics should be aware of this series. Yes, it's dated, heavily weighted towards vacuum tube technology (although klystrons and magnetrons are still current), and it lacks modern semiconductor tech, however this truly remarkable set contains a huge amount of information that's still very relevant today. Moreover, whilst it covers the topics in depth it does so at a level that can be easily understood by undergraduates (explanations are more general than today's very specialized textbooks).
Here you'll find links to the Internet Archive where the volumes can be downloaded. Specifically, I would refer you to Volume 23, Microwave Receivers, Chapter 6, Intermediate Frequency Amplifiers, p155. Now turn to p182 and read 6-10, Practical Considerations.
This section on decoupling, shielding etc. is just as applicable to today's high speed digital circuits as it was back in WWII. Sure it needs updating but the fundamentals of screening and decoupling have not changed. What's important here is that these physical (analog) effects are set by the fundamental laws of physics, and circuits that do not take them into account will fail to work correctly.
This is utter nonsense. Just ask the layout engineer where they will be placed (at the output of the voltage regulator, or wherever empty space can be found on the board, completely missing their function). Where your schematic is bad, the layout will be bad too.
Right, Purdue Pharma (the sleazebag Sacklers) were unethically pushing OxyContin (oxycodone), but the unethical tactics Merck adopted in marketing its NSAID Vioxx seem to have been forgotten. Vioxx was withdrawn from the market and Merck paid out billions in lawsuits.
If you've time watch this YouTube video on Merck and the Vioxx scam (if you weren't aware of the facts you'd think you were in Palermo/Mafia territory): https://m.youtube.com/watch?v=K0GrFnOpJoU
"Why are COX-2 drugs like Celebrex still prescription only?"
Why? Because Celebrex (celecoxib) is a dangerous drug which can cause irreparable harm (heart attacks and related) if taken for long periods. In fact, its sister drug Vioxx (rofecoxib) was banned and Merck had to pay billions in damages. There's more here: https://news.ycombinator.com/item?id=47835635#47862704
Whilst Celebrex is safer than Vioxx, it still has much the same side-effect profile.
I'd also recommend you watch the YouTube video in the link on Vioxx; it demonstrates that the dangers of COX-2 drugs shouldn't be underestimated.