Hacker News | toddmorey's comments

Look I'm speaking here as a career designer:

I think design as a "signaling function" for determining the quality of a thing was already broken. It was already possible to put up an impressive-looking site for anything; already possible to dupe people with a cheap product wrapped in fancy packaging.

Movies with insane budgets that spend forever in production are often still terrible. One of my favorite songs was written by the artist in a hotel room on a Sunday afternoon.

One thing to consider: if it's cheap and immediate to wrap any content in design, it can now also be cheap and immediate to customize the design of content. Maybe we can finally return to a user-focused internet like the one that was promised to us by browser custom style sheets.

Finally, I can see that democratizing design in this way will make more content more pleasant to look at (which is a win). And we'll also make better decisions with design out of the decision matrices it doesn't belong in (another win).


I know for most people that the big surprise here is sustained search ad revenue in the face of AI. But I'm super curious about margins, because I thought for sure offering so much free AI inference would be so insanely expensive that it would harm margins.

No one is losing money on inference these days. Google's vertical integration means that they have some of the lowest inference costs in the industry in any event.

Microsoft recently announced changes to copilot because, apparently, it was losing money on inference.

They were charging a flat rate per query no matter how many tokens it consumed. People naturally got very good at writing prompts that used as many tokens as possible.
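A toy calculation makes the failure mode concrete. All numbers here are hypothetical, chosen only to illustrate why a flat rate per query breaks down once users maximize tokens per query:

```python
# Toy model of flat-rate-per-query pricing vs. actual token cost.
# All rates are hypothetical, for illustration only.

FLAT_RATE_PER_QUERY = 0.04   # what the provider charges per query ($)
COST_PER_1K_TOKENS = 0.01    # what inference actually costs the provider ($)

def margin_per_query(tokens_used: int) -> float:
    """Provider's profit (or loss) on a single flat-rate query."""
    inference_cost = tokens_used / 1000 * COST_PER_1K_TOKENS
    return FLAT_RATE_PER_QUERY - inference_cost

# A short query is profitable...
print(round(margin_per_query(1_000), 4))    # 0.03

# ...but a prompt engineered to consume a huge context is not.
print(round(margin_per_query(100_000), 4))  # -0.96
```

The provider's loss grows linearly with tokens while revenue stays fixed, so users optimizing for token-heavy prompts guarantee negative margins.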

They were losing money giving absurdly generous agentic usage on expensive models to people with $10 to $40 flat-rate subscriptions.

They weren't selling inference.


Conjecture, but the wording "limited subset" rarely turns out to be good news. Usually a provider will say "less than 1% of our users" or some specific number when they can to ease concerns. My guess is they don't have the visibility or they don't like the number.

I feel for the team; security incidents suck. I know they are working hard, I hope they start to communicate more openly and transparently.


“Less than 1% of our users” means 10k affected users if you have 1 million users. 10k victims is a lot! Imagine “air travel is safe, only a subset of 1% of travellers die”
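The arithmetic in that comparison, as a quick sketch (the 1% figure is the comment's own hypothetical upper bound):

```python
# "Less than 1% of users" is an upper bound that scales with the user base.
total_users = 1_000_000
affected_upper_bound = int(total_users * 0.01)
print(affected_upper_bound)  # 10000
```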


I've been part of a response team on a security incident and I really feel for them. However, this initial communication is terrible.

Something happened, we won't say what, but it was severe enough to notify law enforcement. What floors me is that the only actionable advice is to "review environment variables". What should a customer even do with that advice? Make sure the variables are still there? How would you know if any of them were exposed or leaked?

The advice should be to IMMEDIATELY rotate all passwords, access tokens, and any sensitive information shared with Vercel. And then begin to audit access logs, customer data, etc, for unusual activity.

The only reason to dramatically overpay for the hosting resources they provide is because you expect them to expertly manage security and stability.

I know there is a huge fog of uncertainty in the early stages of an incident, but it spooks me how intentionally vague they seem to be here about what happened and who has been impacted.


Seriously. Why am I reading about this here and not via an email? I've been a paying customer for over a year now. My online news aggregator informs me before the actual company itself does?


Please remember that this is the same company that couldn't figure out how to authorize 3rd party middleware and had a critical vulnerability that should have been company-ending.

Oh and the owner likes to proudly remind people about his work on Google AMP, a product that has done major damage to the open web.

This is who they are: a bunch of incompetent engineers that play with pension funds + gulf money.


This industry's favored idiot children.


I just deleted my account. Their laid-back notice just isn't worth it anymore. I will hold them accountable with my cash. You can get out with me. Let their apologies hit your spam filter. They need to be better prepared to react to the storm of insanity that comes with a breach, or they lose my info (lose it twice, I guess...).


Says they emailed affected customers...


Via the incident page:

> Environment variables marked as "sensitive" in Vercel are stored in a manner that prevents them from being read, and we currently do not have evidence that those values were accessed. However, if any of your environment variables contain secrets (API keys, tokens, database credentials, signing keys) that were not marked as sensitive, those values should be treated as potentially exposed and rotated as a priority.

https://vercel.com/kb/bulletin/vercel-april-2026-security-in... as of 4:22p ET


The “sensitive” toggle is off by default. I’m curious about the rationale, what's the benefit of this default for users and/or Vercel?

https://vercel.com/docs/environment-variables/sensitive-envi...


Simpler for vibe coders.


Ok but it's not the original intent: that default exists since at least 2020: https://web.archive.org/web/20201130022511/https://vercel.co...


Sensitive environment variables are environment variables whose values are non-readable once created.

So they are harder to introspect and review once set.

It’s probably good practice to put non-secret-material in non-sensitive variables.

(Pure speculation, I’ve never used Vercel)


I have used Vercel though prefer other hosts.

There are cases where I want env variables to be considered non-secure and fine to be read later, I have one in a current project that defines the email address used as the From address for automated emails for example.

In my opinion the lack of security should be opt-in rather than opt-out though. Meaning it should be considered secure by default with an option to make it readable.


How does the app read the variable if it can't be read after you input it? Or do they mean you can't view it after providing the variable value to the UI?


They mean the latter. Very unclear how that translates to meaningful security.


You could have a meaningful wall between administrative/deployment interface backends and the customer server backends - only the latter get access to services that have the private keys to decrypt the at-rest storage of secure variables, and this may be fully isolated to different control planes. So it becomes write-but-not-read.

But that's just a bare-minimum defense-in-depth. The fact that an attacker was able to access the insecure variables, and likely the names of secure variables, is still horrifying.
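A minimal sketch of that write-but-not-read split, using a toy RSA keypair (tiny fixed primes, not remotely secure; all class and variable names here are hypothetical). The admin plane holds only the public key, so it can store a value but never read it back; the deploy plane holds the private key and decrypts only at deploy time:

```python
# Toy RSA key material (illustration only; real systems use vetted crypto).
p, q = 61, 53
n = p * q                            # modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (modular inverse)

class AdminPlane:
    """Holds only (n, e): can encrypt/store a secret, write-only."""
    def __init__(self):
        self.store = {}
    def set_env(self, name: str, value: str):
        # Encrypt byte-by-byte with the public key; no way to reverse here.
        self.store[name] = [pow(b, e, n) for b in value.encode()]
    def get_env(self, name: str):
        raise PermissionError("sensitive vars are write-only in this plane")

class DeployPlane:
    """Holds d: decrypts at deploy time to inject into the runtime."""
    def decrypt(self, ciphertext) -> str:
        return bytes(pow(c, d, n) for c in ciphertext).decode()

admin = AdminPlane()
admin.set_env("API_KEY", "s3cret")
deploy = DeployPlane()
print(deploy.decrypt(admin.store["API_KEY"]))  # s3cret
```

The point of the design is that compromising the admin/dashboard backend yields only ciphertext and variable names, never plaintext values, which matches the "write-but-not-read" property described above.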


I agree / hope that’s what they meant. It seems disingenuous, though, to describe it as unreadable, since obviously something has to read it to bake it into the deploy. And given their apparent lack of effective security boundaries in one area, why should we assume that they’ve got the deploy system adequately locked down?

It’s not like I had a ton of trust in them before, but now they’ve lost almost all credibility.


Last year Vercel bungled the security response to a vulnerability in Next's middleware. This is nothing new.

https://news.ycombinator.com/item?id=43448723

https://xcancel.com/javasquip/status/1903480443158298994


Security is hard and there are only three vendors I trust: AWS, Google, and IBM (yes, IBM). Anything else is just asking for trouble.


Having worked both public and private, I can agree with this.

Google in particular has been staggeringly good, and don't sleep on IBM when they Actually Care.


Oracle too


Oracle? Oracle?

The Oracle that published an announcement that said "we didn't get hacked" when the hackers had private customer info?

The Oracle that does not allow you to do any security testing on their software unless you use one of their approved vendors?

The Oracle that one of my customers uses where they have to turn off the HR portal for 2 weeks before annual performance evaluations because there is no way to prevent people from seeing things?

The only reason Oracle isn't having nightmarish security problems published every other week is because they threaten to sue anyone that does find an issue.

Oracle is a joke in every conceivable way and I despise them on a personal level.


I love a good cathartic rant


> The only reason to dramatically overpay for the hosting resources they provide is because you expect them to expertly manage security and stability.

This and because it's so convenient to click some buttons and have your application running. I've stopped being lazy, though. Moved everything from Render to linode. I was paying render $50+/month. Now I'm paying $3-5.

I would never use one of those hosting providers again.


Looking at Linode, those prices get you an instance with 1GB of RAM and a mediocre CPU. So you are running all of your applications on that?


Personal projects/MVPs/small projects? Absolutely. For what I'm running, there's no reason to need anything beyond that.

The point is, I used to just throw everything up on a PaaS. Heroku/Render, etc. and pay way more than I needed to, even if I had 0 users, lol.


> Looking at Linode, those prices get you an instance with 1GB of RAM and a mediocre CPU. So you are running all of your applications on that?

I ran a LoB webapp for multiple companies on a similar setup. Turns out 1GB of RAM is insufficient to run even the most trivial Java webapps, like Jenkins, but is more than sufficient for even non-trivial things using Go + PostgreSQL.

Your stack may be slow, not the machine.


Most of my services run with 1 vCPU and 512MB of RAM. You don't need huge specs for most normal applications.


For $3.50, Hetzner gives 2 vCPUs, 4GB RAM, a 40GB SSD, and 10TB of bandwidth.


Pretty oversold iirc, but then again, that's the same for Linode


Do you mean these are shared instances, and the stated resources are not actually available?


How much work should the GP do to migrate, if Linode is good enough, to potentially save up to $1.50/month (or spend 50 cents more)?


If you're only paying $3-5 on Linode then your level of usage would probably be comfortably at $0 on Vercel.


It could be $0 on Render too, but then there's going to be a 3 minute load time for a landing page to become visible, lol. So if you don't want your server to sleep, you're going to have to pay $20/month.

Does Vercel do the same?


No; I've run several small websites on Vercel for free for years, and they've always served static pages very quickly.


Static pages, sure. But what do you do if you want a contact form or something? Yeah, you can use services like formspree, but then you may end up paying $20/month for that alone. Perhaps I'm just ignorant.


Render offers free static sites that are served via a CDN and load instantly: https://render.com/docs/static-sites


When I said landing page, I had contact forms and more in mind, not documentation sites.

But that is news to me. Interesting. Although for static sites, I always use Netlify or even GitHub pages.


No.


Repeating a prior comment I've made about this[0]: I run a rust webserver on a €4 VPS from hetzner that serves 300M (million) requests a day.

From what I can figure out, Vercel charges "$0.60 per million invocations" [1], which would cost me $180 per day.

[0] https://news.ycombinator.com/item?id=47611454 [1] https://vercel.com/docs/functions/usage-and-pricing#invocati...
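Worked through as a back-of-envelope sketch, using the rates quoted in the comment above ($0.60 per million invocations vs. a ~€4/month VPS):

```python
# Rough cost comparison: per-invocation serverless pricing vs. a flat VPS.
# Rates are taken from the comment above; treat as a back-of-envelope sketch.

requests_per_day = 300_000_000
per_million_invocations = 0.60   # $ (quoted per-million-invocations rate)

serverless_per_day = requests_per_day / 1_000_000 * per_million_invocations
print(serverless_per_day)        # 180.0  ($/day)

serverless_per_month = serverless_per_day * 30
print(serverless_per_month)      # 5400.0 ($/month, vs. ~€4 flat for the VPS)
```

At this traffic level, per-invocation pricing is roughly three orders of magnitude more expensive than the flat VPS, which is the comment's point.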


I run a Rust webserver on a literal Pi 3 in my basement and I think I managed to bench it at >1000 rps for standard loads. And that includes a bunch of Tantivy querying as well.

I suspect I could do 3000+ rps with some tuning and a more modern CPU or hetzner VPS, but there's some fun cachet from running on an old Pi while there's still headroom.


Makes sense considering the quality of Vercel's security response and customer communication.


What if they have an actual back-end with long-running processes and scheduled tasks?


Exactly. People paid the premium so somebody else's OAuth screwup wouldn't become their Sunday. And here we are.


Completely agreed. At minimum they should be advising secret rotation.

The only possibility for that not being a reasonable starting point is if they think the malicious actors still have access and will just exfiltrate rotated secrets as well. Otherwise this is deflection in an attempt to salvage credibility.


Yeah, given their insane pricing, I think the expectations can be higher. I know it's impossible to provide a 100% secure system, but if something like this happens, the communication should at least be better. Don't wait until you have talked to the lawyers... inform your customers first, ideally without this corporate BS speak. Most Vercel customers are probably developers, so they understand that incidents like this can happen; just be transparent about it.


Welcome to the show.

While a different kind of incident (in hindsight), the other week Webflow had a serious operational incident.

Sites across the globe going down (no clue if all or just a part of them). They posted plenty of messages, I think for about 12 hours, but mostly with the same content/message: "working on fixing this with an upstream provider" (paraphrased). No meaningful info about what was the actual problem or impact.

Only the next day did somebody write about what happened: essentially, a database running out of storage space. How that became a single point of failure for at least plenty of customers: no clue. Sounds like bad architecture to me, though. But what personally rubbed me the wrong way most of all was the insistence that their "dashboard" had not indicated anything wrong with their database deployment, as it allegedly misrepresented the used/allocated storage. I don't know who this upstream service provider of Webflow is, but I know plenty about server maintenance.

Either that upstream provider didn't provide a crucial metric (on-disk storage use) on their "dashboard", or Webflow was throwing this provider under the bus for what may have been their own ignorant/incompetent database server management. I guess it all depends on to what extent this database was a managed service or something Webflow had more direct control over. Either way, with any clue about the provider or service missing from their post-mortem, customers can only guess as to who was to blame for the outage.

I have a feeling that we probably aren't the only customer they lost over this. Which in our case would probably not have happened, if they had communicated things in a different way. For context: I personally would never need nor recommend something like Webflow, but I do understand why it might be the right fit for people in a different position. That is, as long as it doesn't break down like it did. I still can't quite wrap my head around that apparent single point of failure for a company the size of Webflow though.

/anecdote


On the subject of metrics, better user-facing metrics to understand and debug usage patterns would be a great addition. I'd love an easier way to understand the average cost incurred by a specific skill, for example. (If I'm missing something obvious, let me know.)

Baking deeper analytics into CC would be helpful... similar to ccusage perhaps: https://github.com/ryoppippi/ccusage


This is useful if you want to keep an eye on what claude's actually doing behind the scenes: https://github.com/simple10/agents-observe


My taxes are rather complex, so I ran the same exercise to see if Claude agreed with my accountant. An automated second opinion, so to speak. Spent about 6 minutes analyzing all the PDFs and basically nailed it perfectly in one shot.

My only point here is it sure seems the same activity / use case can have wildly different results across sessions or users. Customer support and product development in the age of non-deterministic software is a strange, strange beast.


What does nailing mean when you ask whether it agreed with your accountant?


Given the same inputs but not provided the results (output) from our accountant, did it come to the same conclusions or have good analysis as to why it differed?

Obviously, accounting is "spreadsheet math" intensive, so Claude wrote some Python scripts for that, which kept the math very stable. But there were some complex nuances that had taken the accountant and me quite a bit of work to track down and clarify. Claude quickly had a very accurate read on the situation and asked all the right clarifying questions.

I'm not yet ready to ever sign a return that's been entirely AI prepared, but I left the exercise pretty impressed.


Which AI does your accountant use?


> it works by simulating a trackpad swipe with a large amount of velocity

Damn, that's rather clever.


I'm a little afraid of the failure modes, frankly. Clever, but that seems likely to exercise some under-tested timing situations. I'm not familiar with that API, so take the hunch with a grain of salt.


It's also hilarious that it works this way


I'm surprised others didn't pick it up sooner https://news.ycombinator.com/item?id=36938663


Also with this approach, you actually have a real collection and it's fun to collect things.

My son has autism and viewed his Netflix homepage as his personal curated collection. But then, of course, Netflix renegotiates licensing deals and entire seasons or shows just go away. And it really crushes him because it's like they were stolen from his personal collection.

So now when I hear him play, the super villain trying to destroy the world is always named Reed Hastings.


> So now when I hear him play, the super villain trying to destroy the world is always named Reed Hastings.

That is absolutely hilarious and it totally sounds like a villain's name


I understand his frustration: I have a similar issue with video games. Xbox Game Pass games sometimes leave the service. So I built an app that takes all my games across the various gaming services (Steam, etc.), including the Xbox Game Pass ones, and it grabs them from my achievements (games I have played) on top of the catalog (available games), should they have left the catalog.

That way games that are gone remain and I have a Netflix like interface to view all my games past and present


It is interesting that Netflix alone gets blamed, as opposed to the parties they are negotiating with.


Netflix is ultimately responsible for what they put on the platform, for delivering a consistent product to their users, and for setting expectations.

Netflix is exceptionally shitty at letting people know what is leaving their platform and when, and even at letting them know when the shows they saved or were in the middle of watching have been removed. Netflix has been around for ages, but we still have to depend on third-party websites to tell us what's coming/leaving. Some items will have a "leaving soon" banner on the thumbnail, but that's only good for shows Netflix decides to push at you. There's no section or search that will find all that stuff (searching for "leaving soon" will show you some of them).


They can only deliver things that are possible to deliver. There is nothing they can do to negotiate a forever licensing deal with a content provider, other than buying the content provider, which is also not possible unless they jack up prices 100x and somehow still keep all their users.


Netflix chose to negotiate revocable licenses to save money and draw in users, so it does seem valid to assign blame to Netflix for signing such contracts.


> it's like they were stolen from his personal collection

They were.


There are some easy optimization wins for this page, but none of the top ones are framework related. Maybe with the faster build times they can easily optimize images and 3rd party dependencies. As someone else pointed out, nearly half that data is unoptimized images.

For the curious, google's current homepage is a 200kb payload all in, or about 50 times smaller.


Who remembers sprite sheets? Does that give my age away?

I did an optimization pass for a client once where I got rid of a ton of the sprites but didn't have the energy to redo it all, so it just had huge sections that were blank.

Super snappy loading afterwards though.


Yes, good times! With HTTP/2 and HTTP/3 they don't really matter anymore, though; you get similar benefits from request multiplexing.


Spriting is actually harmful for performance except in specific HTTP-1 scenarios.


Doesn't McMaster Carr still use sprites? Is that like the one optimization they managed to get wrong?


Looks like it, but isn't this site famous for being a "classic" storefront?

Some CMSs would auto-generate sprites. If you are showing most of them, it's still a positive, I'd assume. And, if it ain't broke, don't fix it.


I indeed remember.

HTTP 2+ (supported by every web browser) obviates sprite sheets.

They were a useful hack, but still a hack.


Question: since they've rebooted their approach to AI... have they given up on open models? There's no mention of open source or open weights or access to the models beyond their hosted services.


Alexandr Wang on Twitter [0] mentioned open source plans:

"this is step one. bigger models are already in development with infrastructure scaling to match. private api preview open to select partners today, with plans to open-source future versions. incredibly proud of the MSL team. excited for what’s to come!"

https://x.com/alexandr_wang/status/2041909388852748717


So the answer is: no. lol. Remember Llama 4 Behemoth, and how we were supposed to get more great models from it?


This may be too large to run locally anyway. Maybe they will distill down some smaller open versions later.

