Or lucky! I had a great time during mine because my advisor was amazing. However, my cohort mates, many of whom I'd say are much smarter than I am, got stuck with terrible mentors.
Ha! You just made me remember how much I used JabRef (an open-source BibTeX reference manager) back in 2004 when I did my PhD.
It was the best/worst 4 years of my life. I studied overseas (UK), met my future wife, and got a PhD that really wasn't useful for much. Fortunately it was under a scholarship.
The other day I (well, the AI) just wrote a Rust app to merge two huge (GBs of data) tables by discovering columns with data in common based on text distance (Levenshtein and Dice). It worked beautifully.
And I have NEVER written one line of Rust.
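The core idea, in a rough Python sketch (not the actual Rust app; the helper names, sampling, and threshold are all made up for illustration): score candidate column pairs by how often their sampled values nearly match under Dice bigram overlap or Levenshtein edit distance.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, O(len(a) * len(b)).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def dice(a: str, b: str) -> float:
    # Sorensen-Dice coefficient over character bigrams; much cheaper than
    # Levenshtein and good enough for a first-pass candidate filter.
    A = {a[i:i + 2] for i in range(len(a) - 1)}
    B = {b[i:i + 2] for i in range(len(b) - 1)}
    if not A or not B:
        return 1.0 if a == b else 0.0
    return 2 * len(A & B) / (len(A) + len(B))

def column_similarity(col1: list[str], col2: list[str], threshold: float = 0.8) -> float:
    # Fraction of (sampled) values in col1 with a near-match in col2;
    # column pairs scoring high become merge-key candidates.
    hits = 0
    for v in col1:
        if any(dice(v, w) >= threshold or levenshtein(v, w) <= 2 for w in col2):
            hits += 1
    return hits / len(col1) if col1 else 0.0
```

In practice you would sample a few hundred values per column rather than comparing everything, since the pairwise comparisons blow up fast on GB-scale tables.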
I don't understand the naysayers. To me, the state of gen AI is like the Simpsons quote, "worst day so far". Look where we are within 5 years of the first real GPT/LLM. The next 5 years are going to be crazy exciting.
The "programmer" position will become a "builder". When we've got LLMs that generate Opus quality text at 100x speed (think, ASIC based models) , things will get crazy.
Human minds are built to find patterns, and you should be careful not to assume the rate of improvement will continue forever based on nothing but a pattern.
Just the fact that even retail-grade hardware is still improving significantly at running local LLMs is a great sign.
If AI quality remained the same, and the cost for local hardware dropped to $1000, it would still be the greatest thing since the internet IMO.
So even if the worst happens and all progress stops, I'm still very happy with what we got.
I'm not all that impressed with "AI". I often "race" the AI by giving it a task to do, and then I start coding my own solution in parallel. I often beat the AI, or deliver a better result.
Artificial Intelligence is like artificial flavoring. It's cheap and tastes passable to most people, but real flavors are far better in every way even if it costs more.
At their current stage, this feels like the wrong way to use them. I use them fully supervised (even though that feels like I'm fighting the tools), which is kind of the best of both worlds. I review every line of code before I allow the edit, and if something is wrong, I tell it to fix it. It learns over time, especially as I set rules in memories, and so the process has sped up, to the point that this goes way faster than if I had done it myself. Not all tasks are appropriate for LLMs at all, but when they are, this supervised mode is quite fast, and I don't believe the output to be slop; anyway, I feel like I still own every line of code.
The happy path for me is with Erlang: due to the concurrency model, the blast radius of an error is exceptionally small, so the programming style is to let things crash if they go wrong. So really, you are writing the happy-path code only (most of the time). Combine this approach with some very robust tests (does this thing pass the tests / behave how we need it to?) and you're close to the point of not really caring about the implementation at all.
Of course, I still do, but I could see not caring becoming possible down the road with such architectures.
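The "let it crash" style translates loosely outside Erlang too. A toy Python stand-in for a supervisor that just restarts a crashed worker instead of defending every code path (real OTP supervisors are far more sophisticated; the names and restart budget here are invented):

```python
import random

def supervise(worker, args, max_restarts: int = 5):
    # Let it crash: run the worker, and on any failure simply restart it,
    # up to a restart budget (a crude stand-in for OTP restart intensity).
    for attempt in range(1, max_restarts + 1):
        try:
            return worker(*args)
        except Exception as e:
            print(f"worker died ({e!r}), restart {attempt}/{max_restarts}")
    raise RuntimeError("restart limit exceeded")

def flaky_parser(payload: str) -> int:
    # Happy-path code only: no defensive checks, crash on bad input or bad luck.
    if random.random() < 0.5:
        raise ValueError("transient failure")
    return int(payload)
```

The point is that `flaky_parser` stays trivially simple; resilience lives entirely in the supervisor.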
Homemade food is better than anything you can buy too. I'm 40, but I still drive 30 minutes to my parents' once a week for dinner, because the food they make feels like the elixir of life compared to the slop I can buy at Trader Joe's, Costco, or most restaurants.
The overall trend in AI performance will still be up and to the right, like everything else in computing over the past 50 years; improvement doesn't have to be linear.
Because if you don't know the language or problem space, there are footguns in there that you can't find; you won't know what to look for. Only when you try to actually use this in a production environment will the issues become evident. At that point, you'll have to either know how to read and diagnose the code, or keep prompting till you fix it, which may introduce another footgun that you didn't know you didn't know.
This is what gets me. The tools can be powerful, but my job has become a thankless effort in pointing out people's ignorance. Time and again, people prompt something in a language or problem space they don't understand, it "works", and then it hits a snag because the AI just muddled over a very important detail, and then we're back to the drawing board because that snag turned out to be an architectural blunder that didn't scale past "it worked in my very controlled, perfect-circumstances test run." It is getting really frustrating seeing this happen on repeat, and instead of people realizing they need to get their hands dirty, they just keep prompting more and more slop, making my job more tedious. I am basically at the point where I'm looking for new avenues for work. I say let the industry run rampant with these tools. I suspect I'll be getting a lot of job offers a few years from now as everything falls apart and their $10k-a-day prompting fixes one bug only to cause multiple regressions elsewhere. I hope you're all keeping your skills sharp for the energy crisis.
Before LLMs, I watched in horror as colleagues immediately copy-paste-ran Stack Overflow solutions in the terminal, without even reading them.
LLM agents are basically the same, except now everyone is doing it. They copy-paste-run lots of code without meaningfully reviewing it.
My fear is that some colleagues are getting more skilled at prompting but less skilled at coding and writing. And the prompting skills may not generalize much outside of certain LLMs.
I don't want exciting. I want a stable, well-paying job that allows me to put food on the table, raise a family with a sense of security and hope, and have free time.
I seem to remember doing it in SQL (EDIT_DISTANCE) 20ish years ago. While I wouldn't say it worked beautifully, I also didn't need to write a single line of Rust :) also, no more than 2 lines of SQL were needed.
EDIT_DISTANCE uses pure Levenshtein, which is quadratic, so for tables of 500k rows and 20+ columns each it will slow to a crawl. Without going into a lot of detail, I needed this to work for datasets of that size, so a lot of "trick" optimization and pre-processing had to be done.
Otherwise, simple merges in pandas or SQL/DuckDB would have sufficed.
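For comparison, the clean-key case really is just a join. A self-contained sketch using Python's built-in sqlite3 (the table and column names are made up; the point is that once the shared column is known, the whole "merge" is two lines of SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE left_t  (customer_id INTEGER, name  TEXT);
    CREATE TABLE right_t (customer_id INTEGER, total REAL);
    INSERT INTO left_t  VALUES (1, 'Ana'), (2, 'Bo'), (3, 'Cy');
    INSERT INTO right_t VALUES (2, 10.0), (3, 20.0), (4, 30.0);
""")

# The merge itself: an inner join on the shared column.
rows = con.execute("""
    SELECT l.customer_id, l.name, r.total
    FROM left_t l JOIN right_t r USING (customer_id)
""").fetchall()
print(rows)
```

The fuzzy-matching machinery only becomes necessary when the key columns are unknown or the values don't match exactly.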
Years of school (reading, calculus, etc.) to get to the point of learning the basics of set theory.
One day to learn basic SQL based on that understanding of set theory.
Maybe a few weeks of using SQL at work for ad hoc queries to be proficient enough (the query itself wasn't really complex).
For the domain itself, I consulted experts to see what matters.
I'm not sure the time it would take to know what to prompt and to verify the results is much different.
Fun fact: management decided that the SQL solution wasn't enterprisey enough, so they hired external consultants to build a system doing essentially that, but in Java, and formed an 8-person internal team to guide them. I heard they finished 2 years later with a lot of manual matching.
Let me explain the naysayers: they know "programmer" has always meant "builder", and just because search is better and you can copy and paste faster doesn't mean you've built anything. First, people need to realize no proprietary code is in those databases, and using AI will ultimately just get you regurgitated things people don't really care about. Use it all you want; you won't be able to do anything interesting. They aren't giving you valuable things for free. Anything of value will still take time and knowledge. The marketing hype is to reduce wages and prevent competition. Go for it.
- Users don't have to pay to post links/stories
- Users have to pay to comment on links/stories
- Users have to pay to "upvote" comments. Downvotes don't exist
- Each link "lives" a certain amount of time before it is locked.
- After the lock time, users who posted the link get "paid" a % of the $ collected from comments/upvotes. Comments that are upvoted also earn $ proportionally to their upvotes.
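A back-of-the-envelope sketch of the settlement rule above; the actual fees and the 50/50 split are invented for illustration, since the comment doesn't specify them:

```python
def settle_thread(comments: dict[str, int], comment_fee: float = 0.25,
                  upvote_fee: float = 0.10, poster_cut: float = 0.5):
    """comments maps commenter -> upvotes their comment received.
    Pot = comment fees + upvote fees collected while the link was live."""
    pot = len(comments) * comment_fee + sum(comments.values()) * upvote_fee
    poster_payout = poster_cut * pot          # the link poster's share
    remainder = pot - poster_payout
    total_votes = sum(comments.values())
    # Commenters split the remainder proportionally to upvotes received.
    payouts = {u: (remainder * v / total_votes if total_votes else 0.0)
               for u, v in comments.items()}
    return round(poster_payout, 2), {u: round(p, 2) for u, p in payouts.items()}
```

With two commenters paying $0.25 each and four $0.10 upvotes between them, the pot is $0.90: the poster takes $0.45 and the commenters split the rest by upvote share.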
Hashcash was conceived to solve automated email spam. Participating in a discussion must cost something; that's the only way bots and spam will get partially stopped. Or, if they start optimizing to get "the most votes", then so be it: their content will increase in quality.
Paying users for their posts is what killed YouTube, Twitter, Facebook, Instagram... You will only get shitty ragebait comments. Not to mention that you have to link some bank account with your full name, etc.
This sounds like a platform that has no appeal to the average person, and an incredible appeal to people wishing to launder money or use money to run an influence campaign. Deliberately determining popularity proportionally to the amount of money spent is little different from advertising, but this would be under the false premise of "someone thought this was important/valuable enough to pay money to suggest I see it".
If this were to exist today, I know I would be incredibly critical of it.
Every election I see internet-connected gym machines have the leaderboards spammed with right wing messages because some people don’t have to work and just spin all day.
It seems like that would lead to a proliferation of ragebait, deliberately controversial posts, and overly simplistic articles to attract the greatest amount of comments. I frequently see deeply technical high-value posts on HN with very few comments but each thread about politics ends up getting hundreds of comments.
No? I have recommended Freestyle sugar-free soda as a way to replace heavy Coca-Cola consumption. Here in Mexico it's a big problem, and it helped me get out of the addiction. (Add allulose to the soda to make it sweet.)
Dr. Shore's device has been decades in development. It's been all the rage in r/tinnitus, r/tinnitusresearch, and tinnitus Facebook groups. Still, according to people who have tried it, it's no silver bullet.
I've had tinnitus for 25+ years and followed a lot of the science. At some point some Brazilian researchers found a drug that reduced tinnitus volume as a secondary effect. They wrote papers about it, but unfortunately, nothing came of it.
I was recently on the lookout for a new laptop. I wanted something BEEFY spec-wise, but 13 inches at most.
I literally couldn't find anything on the PC side. I wanted x86 because I prefer Linux Mint as my OS (didn't care about Windows), but it was impossible to find a good laptop with a good GPU, more than 64GB of RAM, and decent build materials (I've got a ThinkPad and the plastic build is just terrible; the screen bends when pulling on it to open the laptop).
So, I settled for a 128GB RAM M4 Max MacBook Pro. It has been pretty solid so far. I'm a power user, so the RAM gets used quite a lot (one of the reasons I wanted x86/Linux was to avoid virtualization overhead in Docker/Podman).
Macs are way more expensive than other laptops, but their level of tech sophistication is miles ahead of anyone else's.
I am a longtime Windows user and it brings me absolutely no joy to report that the M4 I am forced to use for work runs the Rust compiler a good bit faster than the big fancy gaming PC I just got with a 9800X3D.
Rust literally compiles ~4x faster on WSL than on the Windows command line on the same hardware, so try that and see. Also set up the mold or wild linker, as well as sccache; sccache is OS-agnostic, so you can use it on macOS too. Make sure your code is on the WSL side, not on /mnt/c (the Windows side), though; that will kill compilation speed.
That has not been my experience at all; I get pretty much the same times on the same machine on Linux and Windows. Something weird is happening for that person. Someone mentioned Defender, and that could certainly be it, as I have it totally disabled (forcibly, by deleting the executable).
You shouldn't have to go through all these extra steps just to squeeze out the same performance you would get by just installing [Other OS].
At some point I realized I was spending hours at a time trying to 'fix' Windows and decided to give Mac a try right around the time that Apple Silicon came out. It was a night and day difference.
CrossOver lets me play most single player games just fine on my M4 Pro, and personally I found multiplayer gaming taking too much of my time and emotional energy anyway.
WSL is fantastic, apart from the fact that you need to compact its virtual disk intermittently. I use it for work and it's great, until you hit something incredibly frustrating, like needing pass-through for your hardware.
The one thing I can say about my MacBook, as someone who switched after a decade of Windows, is that stuff tends to just work, minus window switching.
I'd wager that's more likely due to Windows than the hardware. Sure, the hardware does play a part in that, but it's not the whole story or even most of it.
My C++ projects have a Python-heavy build system attached, where the main script that prepares everything and kicks off the build takes significantly longer to run on Windows than on Linux on the same hardware.
Afaik a lot of it is NTFS. It's just so slow with lots of small files. Compare unzipping a moderately large source repo on Windows vs. POSIX; it's night and day.
A big part of it is that NT has to check with the security manager service every time it does a file operation.
The original WSL, for instance, was a very NT answer to the problem of Linux compatibility: NT already had a personality that looked like Windows 95, so just make one that looks like Linux. It worked great, with the exception of the slow file operations, which I think was seen as a crisis over in Redmond, because many software developers couldn't or wouldn't use WSL due to the slow file operations affecting many build systems. So we got the rather ugly WSL2, which uses a real Linux filesystem so files perform like files on Linux.
I don't know about ugly. Virtualization seems like a more elegant solution to the problem, as I see it. Though it also makes WSL pointless; I don't get why people use it instead of just using Hyper-V.
Honestly, just because it's easier if you've never done any kind of container or virtual OS stuff before. It comes out of the box with Windows, it's like a 3-click install, and it usually "just works". Most people just want to run Linux things and don't care too much about the rest of the process.
Try adding your working directory to the exclusions for Windows Defender, or create a Dev Drive in Settings (it will create a separate partition, or a VHD using ReFS, and exclude it from Windows Defender). Should give it a bit of a boost.
Apple buries this info, but the memory bandwidth on the M series is very high; doubly and triply so for the Pro and Max variants, which are insanely high.
Not much in the PC lineup comes close, and certainly not at the same price point. There's some correlation here with PCs still wanting to use user-upgradable memory, which can't work at the higher bandwidths, vs Apple integrating it into the CPU package.
They don't bury it. It's literally on the spec page these days. And LPCAMM2 falls somewhere between the base M and Pro CPUs while still being replaceable.
The new MacBook Neo has less than half the memory bandwidth of the base-model MacBook Air.
This shouldn't be surprising. macOS has a faster filesystem+VFS than Windows, and the single-thread perf of the M4 beats most PC CPUs. I'm not sure which linker Rust uses, but Apple's native ld64/ld-prime is also pretty fast as far as linkers go.
Windows is also slow enough at forking that clang has an "in-process CC1" mode because of it.
This is how opinions differ. IMO plastic is better than aluminium. It is robust (if done right), lighter, and doesn't have good thermal conductivity (which makes lap usage possible; MacBooks can be uncomfortable on the lap if too hot).
I have an Air. Maybe active cooling is what prevents other models from getting too hot; with the Air, the metal body is kind of the heatsink.
I can configure my Snapdragon plastic laptop such that the fan doesn't turn on, so the body being metal isn't a requirement for not turning on the fan...
It's almost as if they weren't lying when they said dropping it in the phone was a waterproofing measure. I guess people aren't dropping their laptops in pools all the time.
I think the 14" and Air might get a little warm, but I can't recall a time I've felt heat from my 16" M4 Pro; fan noise is rare. On my 13" Intel, it was comically easy to cook my balls, and the fans were at max constantly.
> plastic is better than aluminium. It is robust (if done right), lighter and doesn't have good thermal conductivity (which makes lap usage possible
> I've got a ThinkPad and the plastic build is just terrible. The screen bends when pulling on it to open the laptop
Damn. I was at IBM in the early 2000s, and for many decades you used to be able to beat people to death with IBM hardware, including ThinkPad laptops and Model M keyboards.
They built a reputation on that and silently replaced the plastic with crap ABS. ThinkPads have been garbage since 2012: not spec-wise, but build-quality-wise. Spec-wise it's always been a beefy machine.
> I wanted something BEEFY spec-wise, but 13 inches at most.
One thing to bear in mind is bezels are a lot thinner than they were a few years ago.
~7 years ago, my daily driver was a Latitude E7270: a 12.5-inch ultrabook with dimensions of 215.15 mm x 310.5 mm x 18.30 mm, 1.24 kg, 14.8-inch body diagonal.
Today, an XPS 14 has dimensions of 209.71 mm x 309.52 mm x 15.20 mm, 1.36 kg, 14.7-inch body diagonal, and a 14-inch screen.
The 12.5 inch segment hasn't disappeared - it's just turned into the 14-inch segment.
The same is also true within the MacBook line. The 14" Pro is smaller and nearly 2 lbs lighter than the first 13" unibodies. I have my 2009 college laptop on a shelf as a memento, and it feels pretty chunky. This hasn't changed much within the M series, though; the M5 is slightly heavier than the M1.
Something I miss from the Windows side is sub-kg machines, at least since Apple discontinued the 12" MacBook. It makes a surprisingly big difference when traveling, especially with Asian carriers that have hard carry-on limits. The ThinkPad X1 Carbon is a fantastic form factor, though the older Intel chips run incredibly hot; I repurposed mine as a garage/workshop Linux machine. Unfortunately, the price difference between Mac and Windows also disappears when you start looking at those higher-end machines.
My Sony Vaio Z from 2009 or 2010 looks at your Dell in contempt: 13.1" FullHD screen at 314 mm x 210 mm (we'll pretend the thickness doesn't matter ;)) and 1.36 kg. The Vaio TT had an even smaller footprint.
But even in 2018, you could get an X1 Carbon at 1.13kg and 323mm x 217mm x 15.5mm.
One thing Apple seems to do very well compared to other vendors is make all their hardware available in all markets on release. Companies like Dell, Asus, and Lenovo have a confusingly large array of models, and they never release the best ones worldwide, or it takes so long to reach New Zealand that I've already given up and bought an Apple computer instead.
I, too, am a dinosaur, but touchscreens on removable screens/tablets are the way to go!
My friend, just imagine: slide the screen out of the laptop and it's a standalone tablet. Connect some wires to it and you have an oscilloscope. Do some diagnostics. Connect USB buses to it and read some codes. Carry it around in your garage and take photos of your stuff; the images get recognized by AI and you've updated your garage inventory, uploaded to your Homebox instance running on a Mac mini on a shelf somewhere. It has built-in cellular, and you can be out in a park taking a picture of a baby owl, marking it with GPS, uploading.
When you are done roaming the world loading in data and snapping pics, sit back down, connect the tablet to a keyboard, or even a Thunderbolt cable for your external display and peripherals, and write some code or a report. Then in the evening, go play some games, all on the same computer.
You might want to actually click the links and spend a couple of minutes before typing comments. This is not a laptop with a touch screen; it's a tablet with a kickstand and a detachable keyboard.
That's just a broken, compromised Windows laptop. A true "master of nothing" device. Windows is a miserable tablet OS, and a tablet that uses a kickstand is a pain to use in desktop mode.
I accidentally got a pair of ThinkPads that happened to have touch screens, and I absolutely love the touch screen, often it's easier than the touchpad or keyboard nub.
I'm not the person you're replying to, but I do have a 64GB machine that I'd been planning to bump up to 128 right around the time the prices went through the roof. My uses are:
- VMs, I'm leaning on them more and more for sandboxing stuff I'm working on, both because of the rise in software supply chain threats, and to put guardrails around AI agents.
- Local LLM experimentation: even pretty big MoE models (GPT-OSS 120B) run pretty usably (~10 tokens/sec) with the latest tooling on a 16GB GPU and a lot of system memory.
- Even compared to a fast NVMe drive, it's super nice to load a big dataset into memory and just process it right there, compared to working off the disk.
Yeah, I have a 64GB M1 Max and can run local models pretty well. I bought it on release and even now it never feels slow. I may upgrade just because I want to move to the 14” since I travel more now.
You mean all of that running all the time is 70GB?
I tried FreeCAD + Blender with an 8-million-poly sculpt model + PrusaSlicer, but that was only 11GB, so I added PyCharm + Steam and Cyberpunk 2077, and that was 19GB.
The language server for many things I work on sits at 28GB per copy. I work for Twitch; our codebase is not small for the website. We're moving all engineers to a min spec of 48GB.
I'll do stuff all day prototyping data-analysis approaches that will fill RAM with a pandas cross join.
I put my M4 into thermal shutdown twice in the last month, and hard-locked it due to swap use 3 times in the last month. I keep records so I can talk with IT about our dev machine specs. Apparently you can't run 30 concurrent yarn builds on a 3GB codebase... who knew.
This isn't a works-on-my-box competition. I'm glad your workloads are that small; you can be a lot more efficient than me. I'm also lucky I bought all this RAM before it became absurdly expensive.
It doesn't negate that I'm constantly over 64GB, and that I'm super happy I have 128+ on my machines.
Most people who are into graphics processing, e.g. video games, 3D for film/entertainment, etc., or doing fluid mechanics, need these "pro workstation" machines.
If your work is around data/software engineering (web backends, etc.) like mine, a MacBook Air tends to be sufficient.
Yeah, good luck with that at current RAM prices though. DDR5 RDIMMs are going for $20/GB+ right now, which means 1TB is $20k, and that's with fairly conservative pricing too.
I've been looking at building a high-memory workstation recently, but the RAM prices are just prohibitive. The best option atm for 1TB+ seems to be to go back a couple of generations and buy DDR4; you can get 1TB at under $5/GB right now. But obviously you're giving up some performance in the process.
> I literally couldn't find anything on the PC side. I wanted x86 because I prefer Linux Mint as my OS (didn't care about Windows), but it was impossible to find a good laptop with a good GPU, more than 64GB of RAM, and decent build materials
Maybe the ROG Flow 13? It's more of a hybrid laptop, and geared toward gaming (because it's usually the gamer market that demands high performance), but nothing prevents you from using it as a business machine.
It's also a top-of-the-line ASUS laptop, so I expect decent build quality.
> decent build materials (I've got a ThinkPad and the plastic build is just terrible. The screen bends when pulling on it to open the laptop).
Thinkpads don't show off their build materials like Apple does. I've had several over the years, variously made of magnesium alloy and carbon fibre.
Screen bending is not a great metric of 'decent build'. My ThinkPads have survived people stepping on them, being dropped, etc., and I think the lid flexibility is partly why they have lasted all this time; they often use carbon fibre on the back of the screen.
The ASUS ROG G14 is as close as you're going to get on the x86 side of things, or that new chonk of a Surface with the Ryzen AI Max+ 395 and 128GB of RAM. Both are like $2500+.
I have the chonk. 10/10, would chonk again. I miss the 12" MacBook form factor for an email/web/dumb-terminal machine, though. Would love something like that with great Linux support. Bonus points for cellular.
You do have 13" options, though the 14" selection is much wider. If I were going for a 13" workstation, I'd go for the Asus ProArt PX13 with the Ryzen AI Max 395 (if I got that right, there might be a plus somewhere) and 128GiB of RAM. They've also got the ROG Flow X13 with older hardware, or the Z13 with the same hardware as above, but that's a tablet computer instead.
At 14", thin-and-light gaming machines like the Asus G14 or Razer Blade 14 look decent, as do some of the workstation models from Lenovo or HP.
Still, for me, at 13/14", portability and battery life matter most, so I'm going with the ThinkPad X1 Carbon atm (the next gen should again allow 64GiB of RAM).
For someone looking to switch from an M-series MacBook to a ThinkPad, which one would you recommend? Preferably not of diminished quality, so I can daily-drive Ubuntu without missing Apple.
Sweet summer child. I was once as opinionated and driven as you are now. I remember when I got out of college, I also thought like that about the mediocre clock-punchers.
Now, at 45, I couldn't care less about whatever grand objective the current company I work for has. I exchange my knowledge and time for hard cash, and let the owners, CEO, and whatnot run with their grandiose vision.
Funny enough, the more I got into this mindset, the better I slept, the more money I made, and the more autonomy I got.
Once directors and CxOs know that you are completely aligned with the business goals and ignore everything else that “doesn’t make the beer taste better”, they trust your judgement and basically leave you alone.
When I started reading all this news, the thought that came to my mind was: how sweet of these companies to try this, but unfortunately I am sure that other countries advancing AI, like China (DeepSeek, GLM, etc.) or Russia, or whoever, WILL have their companies' AI at their disposal.
Unfortunately, this is the new arms race, the new race to the moon, and all that rolled together.