To be fair, I prefer the Chinese models' censorship (yes, seriously) because if you ask about certain topics they simply don't answer, instead of giving skewed answers.
The radios on the supported devices can't access the microphone, GNSS, etc.
GrapheneOS has never supported a device without an isolated cellular radio, since that isolation was in place even on the initial Nexus 5 and Galaxy S4. However, some of the devices prior to Pixels did have Broadcom Wi-Fi/Bluetooth without proper isolation, similar to laptops/desktops. The Nexus 5X was the first device with proper isolation for Wi-Fi/Bluetooth, due to having SoC-provided Wi-Fi from Qualcomm. Pixels have avoided this issue despite integrating Broadcom Wi-Fi/Bluetooth. Nexus devices left this up to companies like LG, Huawei, etc., and anything not done for them by Qualcomm tended to have security neglected. Qualcomm has taken security a lot more seriously than other SoC vendors and typical Android OEMs for a long time, and provides good isolation for most of the SoC components.
Don't believe everything you read about smartphone security, especially regarding cellular radios. Many products have far less secure cellular radios that are far less isolated, connected instead via extremely high-attack-surface approaches (including USB), yet are marketed as better. A lot of the misconceptions about cellular come from how companies market supposedly more secure products which are in reality far worse than an iPhone.
I cannot imagine a way to connect a cellular modem that presents a smaller attack surface than USB ACM. There is no direct memory access, and no way for the modem to directly access other devices.
Could you perhaps elaborate on what the more-secure alternative to USB ACM would be?
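For context on why ACM is a narrow interface: a USB ACM modem appears to the host as a plain serial character device (`/dev/ttyACM0` on Linux) driven by CR-terminated AT-command byte streams; it has no bus-mastering or DMA path into host memory. A minimal sketch, assuming a Linux host (the device path is illustrative; `+CSQ` is the real signal-quality query from 3GPP TS 27.007):

```python
def build_at_command(cmd: str) -> bytes:
    """Frame an AT command for the serial link.

    AT commands are plain ASCII terminated by a carriage return --
    the entire host<->modem interface is just this byte stream.
    """
    return f"AT{cmd}\r".encode("ascii")

# Query signal quality (3GPP TS 27.007 +CSQ)
frame = build_at_command("+CSQ")

# On a real system you would write `frame` to the tty device, e.g.:
#   with open("/dev/ttyACM0", "r+b", buffering=0) as tty:
#       tty.write(frame)
```

Because everything crosses one narrow, host-mediated serial channel, a compromised modem can only emit bytes for the host to parse, not reach into memory or other peripherals.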
You are not looking in the right places. GitHub repo counts have been high since 2020 because there are companies and individuals who run fork scripts, so AI can't match those numbers.
But on Product Hunt, the number of projects launched in the first week of January was 5000+, versus roughly 4000 for all of January 2018.
This is such a stupid argument. A very significant amount of code never makes it into the public sphere. None of the code I've written professionally in the last 26 years is publicly accessible, and if someone uses a product I've written they likely don't care if it was written with the aid of an LLM or not.
Not to mention agent capabilities at the end of last year were vastly different to those at the start of the year.
Even if a portion of software is not released to the general public, you'd still expect an increase in the amount of software released to the general public.
Even if LLMs became better during the year, you'd still expect an increase in releases.
Please don’t get my hopes up. Adaptable people like me will outcompete hard in the post-engineering world. Alas, I don’t believe it’s coming. The tech just doesn’t seem to have what it takes to do the job.
> And the jobs which will remain will be impossible to get.
Exactly my thoughts lately... Even by yesterday's standards it was already very difficult to land a job, and by tomorrow's standards it appears that only the very best of the best, and those in positions of decision-making power, will be able to keep their jobs.
It's brilliant at recapitulating the data it's trained on. It can be extremely useful. But it's still nowhere close to the capability of the human brain, not that I expect it to be.
Don't get me wrong, I think they are remarkable, but I still prefer to call it an LLM rather than AI.
Some of the things we consider prerequisites of general intelligence (what we usually mean when we talk about intelligence in these contexts), like creativity or actual reasoning, are not present at all in LLMs.
An LLM is a very clever implementation of autocomplete. The truly vast amount of information we've fed it provides a wealth of material to search against, the language abstraction allows for autocompletion at a semantic level, and we've added enough randomness to allow some variation in responses, but it is still autocomplete.
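Mechanically, that "added randomness" is temperature sampling over next-token scores. A toy sketch, with an invented three-word vocabulary and made-up logit values purely for illustration:

```python
import math
import random

def softmax(logits: dict[str, float], temperature: float = 1.0) -> dict[str, float]:
    # Scale logits by 1/temperature, then normalize to probabilities.
    # Higher temperature flattens the distribution (more varied picks);
    # lower temperature sharpens it toward the top candidate.
    scaled = [v / temperature for v in logits.values()]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in scaled]
    total = sum(exps)
    return {tok: e / total for tok, e in zip(logits, exps)}

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    # Draw one token from the softmax distribution -- the randomness
    # that makes two runs of the same prompt autocomplete differently.
    probs = softmax(logits, temperature)
    r = random.random()
    acc = 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # guard against floating-point shortfall

# Toy candidate continuations for "The cat sat on the"
logits = {"mat": 5.0, "sofa": 3.0, "moon": 0.5}
```

At temperature 0 (greedy decoding) the model would always pick "mat"; raising the temperature makes "sofa" and even "moon" occasionally appear, which is all the "variation in responses" amounts to.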
Anyone who has used an LLM enough in an uncommon domain they are very familiar with has no doubt seen evidence of the machine behind the curtain, from faulty "reasoning", where it sometimes just plays Mad Libs, to a complete lack of actual creativity.
> I call it a "bullshit generator" because it generates output "with indifference to the truth".
And if we follow the link we find he's referring to LLMs:
> “Bullshit generators” is a suitable term for large language models (“LLMs”) such as ChatGPT, that generate smooth-sounding verbiage that appears to assert things about the world, without understanding that verbiage semantically. This conclusion has received support from the paper titled ChatGPT is bullshit by Hicks et al. (2024).
No one thinks the database, orchestration, tool, etc. portions of ChatGPT are intelligent, and frankly, I don't think anyone is confused by using LLM as shorthand not just for the trained model but also for all the support tools around it.
I wasn't thinking about their data store or other infrastructure. I was thinking about the layers added for reasoning and other functions that modify or guide the output of the LLM.