I think design as a "signaling function" for determining the quality of a thing was already broken. It was already possible to put up an impressive-looking site for anything; already possible to dupe people with cheap product wrapped in fancy packaging.
Movies with insane budgets that spend forever in production are often still terrible. One of my favorite songs was written by the artist in a hotel room on a Sunday afternoon.
One thing to consider: if it's cheap and immediate to wrap any content in design, it can now also be cheap and immediate to customize the design of content. Maybe we can finally return to a user-focused internet like the one that was promised to us by browser custom style sheets.
Finally, I can see that democratizing design in this way will make more content more pleasant to look at (which is a win). And with design taken out of the decision matrices it doesn't belong in, we'll also make better decisions (another win).
I know for most people that the big surprise here is sustained search ad revenue in the face of AI. But I’m super curious on margins because I thought for sure offering so much free AI inference would be so insanely expensive it harmed margins.
No one is losing money on inference these days. Google's vertical integration means that they have some of the lowest inference costs in the industry in any event.
They were charging a flat rate per query no matter how many tokens it consumed. People naturally got very good at writing prompts that used as many tokens as possible.
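The incentive problem is easy to see with back-of-the-envelope numbers. A minimal sketch, with all prices hypothetical (not any provider's actual rates):

```python
# Hypothetical numbers: a flat price per query vs. a per-token serving cost.
FLAT_PRICE = 0.01           # $ charged per query, regardless of size
COST_PER_1K_TOKENS = 0.002  # $ it costs the provider to serve 1k tokens

def margin(tokens_per_query: int) -> float:
    """Provider margin on one query at a given token count.
    Under flat-rate pricing, margin falls linearly with tokens."""
    cost = tokens_per_query / 1000 * COST_PER_1K_TOKENS
    return FLAT_PRICE - cost

print(round(margin(1_000), 4))   # a modest prompt: profitable
print(round(margin(20_000), 4))  # a token-maximizing prompt: a loss
```

Once users learn to pack queries, every marginal token comes straight out of the provider's margin, which is why flat-rate inference pricing rarely survives contact with power users.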
Conjecture, but the wording "limited subset" rarely turns out to be good news. Usually a provider will say "less than 1% of our users" or some specific number when they can to ease concerns. My guess is they don't have the visibility or they don't like the number.
I feel for the team; security incidents suck. I know they are working hard, I hope they start to communicate more openly and transparently.
“Less than 1% of our users” means 10k affected users if you have 1 million users. 10k victims is a lot! Imagine “air travel is safe, only a subset of 1% of travellers die”
I've been part of a response team on a security incident and I really feel for them. However, this initial communication is terrible.
Something happened, we won't say what, but it was severe enough to notify law enforcement. What floors me is the only actionable advice is to "review environment variables". What should a customer even do with that advice? Make sure the variables are still there? How would you know if any of them were exposed or leaked?
The advice should be to IMMEDIATELY rotate all passwords, access tokens, and any sensitive information shared with Vercel. And then begin to audit access logs, customer data, etc, for unusual activity.
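For concreteness, the first step of rotation is just minting fresh credentials from a CSPRNG and replacing them everywhere they live. A hypothetical sketch (the variable names are made up, and this is not any platform's API):

```python
import secrets

# Hypothetical list of credentials the hosting platform could have seen:
# env vars, deploy tokens, webhook signing keys, database credentials.
EXPOSED = ["DATABASE_PASSWORD", "STRIPE_SECRET_KEY", "JWT_SIGNING_KEY"]

def new_secret(nbytes: int = 32) -> str:
    """Generate a replacement credential with a cryptographic RNG."""
    return secrets.token_urlsafe(nbytes)

rotated = {name: new_secret() for name in EXPOSED}

for name, value in rotated.items():
    # For each secret: apply the new value at the source of truth (your DB,
    # payment provider, KMS), update the copy stored with the host, then
    # revoke the old credential and watch logs for anyone still using it.
    print(f"rotate {name}: new {len(value)}-char value, revoke old one")
```

The order matters: rotate at the source of truth first, revoke last, then audit access logs for use of the old credentials.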
The only reason to dramatically overpay for the hosting resources they provide is because you expect them to expertly manage security and stability.
I know there is a huge fog of uncertainty in the early stages of an incident, but it spooks me how intentionally vague they seem to be here about what happened and who has been impacted.
Seriously. Why am I reading about this here and not via an email? I've been a paying customer for over a year now. My online news aggregator informs me before the actual company itself does?
Please remember that this is the same company that couldn't figure out how to authorize 3rd-party middleware and had what should have been a company-ending critical vulnerability.
Oh and the owner likes to proudly remind people about his work on Google AMP, a product that has done major damage to the open web.
This is who they are: a bunch of incompetent engineers that play with pension funds + gulf money.
I just deleted my account. Their laid-back notice just is not worth it anymore. I will hold them accountable using my cash. You can get out with me. Let their apologies hit your spam filter. They need to be better prepared to react to the storm of insanity that comes with a breach or they lose my info (lose it twice, I guess..)
> Environment variables marked as "sensitive" in Vercel are stored in a manner that prevents them from being read, and we currently do not have evidence that those values were accessed. However, if any of your environment variables contain secrets (API keys, tokens, database credentials, signing keys) that were not marked as sensitive, those values should be treated as potentially exposed and rotated as a priority.
There are cases where I want env variables to be considered non-secure and fine to be read later. For example, I have one in a current project that defines the email address used as the From address for automated emails.
In my opinion the lack of security should be opt-in rather than opt-out though. Meaning it should be considered secure by default with an option to make it readable.
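A sketch of that opt-in pattern in application code, assuming a made-up allow-list convention (`PUBLIC_VARS` and the variable names are illustrative, not any platform's feature):

```python
# Hypothetical convention: anything not explicitly allow-listed as public
# is treated as a secret -- never logged, never echoed back in a dashboard.
PUBLIC_VARS = {"EMAIL_FROM", "APP_BASE_URL"}

def env_for_display(environ: dict[str, str]) -> dict[str, str]:
    """Return env vars safe to show in a UI: allow-listed ones verbatim,
    everything else redacted (secure by default, readable by opt-in)."""
    return {
        k: (v if k in PUBLIC_VARS else "<redacted>")
        for k, v in environ.items()
    }

print(env_for_display({
    "EMAIL_FROM": "noreply@example.com",
    "DATABASE_URL": "postgres://user:hunter2@db/prod",
}))
```

The running app still reads the raw values; only display surfaces get the redacted view, which is the opt-in default the parent comment is asking for.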
How does the app read the variable if it can't be read after you input it? Or do they mean you can't view it after providing the variable value to the UI?
You could have a meaningful wall between administrative/deployment interface backends and the customer server backends - only the latter get access to services that have the private keys to decrypt the at-rest storage of secure variables, and this may be fully isolated to different control planes. So it becomes write-but-not-read.
But that's just a bare-minimum defense-in-depth. The fact that an attacker was able to access the insecure variables, and likely the names of secure variables, is still horrifying.
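A toy sketch of that write-but-not-read split. The crypto here is a deliberately simple HMAC-keystream stand-in for what would really be sealed-box or KMS-backed encryption; the point is only the capability boundary, where the admin plane holds an encrypt function but never the key:

```python
import hashlib
import hmac
import secrets

class RuntimePlane:
    """Holds the decryption key; only customer workloads talk to this."""
    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)

    def _keystream(self, nonce: bytes, n: int) -> bytes:
        # Toy keystream via HMAC-SHA256 -- a stand-in for real AEAD or
        # asymmetric crypto; do not use this construction in production.
        out, counter = b"", 0
        while len(out) < n:
            out += hmac.new(self._key, nonce + counter.to_bytes(4, "big"),
                            hashlib.sha256).digest()
            counter += 1
        return out[:n]

    def encrypt(self, plaintext: bytes) -> bytes:
        nonce = secrets.token_bytes(16)  # fresh nonce per write
        ks = self._keystream(nonce, len(plaintext))
        return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

    def decrypt(self, blob: bytes) -> bytes:
        nonce, ct = blob[:16], blob[16:]
        ks = self._keystream(nonce, len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

class AdminPlane:
    """The dashboard/deploy API: can store ciphertexts, never read them."""
    def __init__(self, encrypt_only) -> None:
        self._encrypt = encrypt_only          # write capability only
        self.store: dict[str, bytes] = {}

    def set_sensitive(self, name: str, value: str) -> None:
        self.store[name] = self._encrypt(value.encode())

runtime = RuntimePlane()
admin = AdminPlane(runtime.encrypt)           # admin never sees the key
admin.set_sensitive("DB_PASSWORD", "hunter2")
# A breach of the admin plane leaks only ciphertext plus variable names:
assert b"hunter2" not in admin.store["DB_PASSWORD"]
assert runtime.decrypt(admin.store["DB_PASSWORD"]) == b"hunter2"
```

Because `encrypt` draws a fresh random nonce on every call, holding the encrypt capability doesn't let the admin plane reverse stored ciphertexts, which is exactly the "write-but-not-read" property described above.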
I agree / hope that’s what they meant. It seems disingenuous, though, to describe it as unreadable, since obviously something has to read it to bake it into the deploy. And given their apparent lack of effective security boundaries in one area, why should we assume that they’ve got the deploy system adequately locked down?
It’s not like I had a ton of trust in them before, but now they’ve lost almost all credibility.
The Oracle that published an announcement that said "we didn't get hacked" when the hackers had private customer info?
The Oracle that does not allow you to do any security testing on their software unless you use one of their approved vendors?
The Oracle that one of my customers uses where they have to turn off the HR portal for 2 weeks before annual performance evaluations because there is no way to prevent people from seeing things?
The only reason Oracle isn't having nightmarish security problems published every other week is because they threaten to sue anyone that does find an issue.
Oracle is a joke in every conceivable way and I despise them on a personal level.
> The only reason to dramatically overpay for the hosting resources they provide is because you expect them to expertly manage security and stability.
This and because it's so convenient to click some buttons and have your application running. I've stopped being lazy, though. Moved everything from Render to Linode. I was paying Render $50+/month. Now I'm paying $3-5.
I would never use one of those hosting providers again.
> Looking at Linode, those prices get you an instance with 1GB of RAM and a mediocre CPU. So you are running all of your applications on that?
I ran a LoB webapp for multiple companies on a similar setup. Turns out 1GB of RAM is insufficient to run even the most trivial Java webapps, like Jenkins, but is more than sufficient for even non-trivial things using Go + PostgreSQL.
It could be $0 on Render too, but then there's going to be a 3 minute load time for a landing page to become visible, lol. So if you don't want your server to sleep, you're going to have to pay $20/month.
Static pages, sure. But what do you do if you want a contact form or something? Yeah, you can use services like formspree, but then you may end up paying $20/month for that alone. Perhaps I'm just ignorant.
I run a Rust webserver on a literal Pi3 in my basement and I think I managed to bench it at >1000 rps for standard loads. And that includes a bunch of Tantivy querying as well.
I suspect I could do 3000+ rps with some tuning and a more modern CPU or hetzner VPS, but there's some fun cachet from running on an old Pi while there's still headroom.
Completely agreed. At minimum they should be advising secret rotation.
The only possibility for that not being a reasonable starting point is if they think the malicious actors still have access and will just exfiltrate rotated secrets as well. Otherwise this is deflection in an attempt to salvage credibility.
Yeah, given their insane pricing I think the expectations can be higher. I know it is impossible to provide a 100% secure system, but if something like this happens, then the communication should at least be better. Don’t wait until you have talked to the lawyers... inform your customers first, ideally without the corporate BS speak. Most Vercel customers are probably developers, so they understand that incidents like this can happen; just be transparent about it.
While a different kind of incident (in hindsight), the other week Webflow had a serious operational incident.
Sites across the globe going down (no clue if all or just a part of them). They posted plenty of messages, I think for about 12 hours, but mostly with the same content/message: "working on fixing this with an upstream provider" (paraphrased). No meaningful info about what was the actual problem or impact.
Only the next day did somebody write about what happened. Essentially a database running out of storage space. How that became a single point of failure for at least a good share of their customers: no clue. Sounds like bad architecture to me though. But what personally rubbed me the wrong way most of all was the insistence that their "dashboard" had not indicated anything wrong with their database deployment, as it had allegedly misrepresented the used/allocated storage. I don't know who this upstream service provider of Webflow is, but I know plenty about server maintenance.
Either that upstream provider didn't provide a crucial metric (on-disk storage use) on their "dashboard", or Webflow was throwing this provider under the bus for what may have been their own ignorant/incompetent database server management. I guess it all depends on to what extent this database was a managed service or something Webflow had more direct control over. Either way, with any clue about the provider or service missing from their post-mortem, customers can only guess as to who was to blame for the outage.
I have a feeling that we probably aren't the only customer they lost over this. Which in our case would probably not have happened, if they had communicated things in a different way. For context: I personally would never need nor recommend something like Webflow, but I do understand why it might be the right fit for people in a different position. That is, as long as it doesn't break down like it did. I still can't quite wrap my head around that apparent single point of failure for a company the size of Webflow though.
On the subject of metrics, better user-facing metrics to understand and debug usage patterns would be a great addition. I'd love an easier way to understand the average cost incurred by a specific skill, for example. (If I'm missing something obvious, let me know.)
My taxes are rather complex, so I ran the same exercise to see if Claude agreed with my accountant. An automated second opinion, so to speak. Spent about 6 minutes analyzing all the PDFs and basically nailed it perfectly in one shot.
My only point here is it sure seems the same activity / use case can have wildly different results across sessions or users. Customer support and product development in the age of non-deterministic software is a strange, strange beast.
Given the same inputs but not provided the results (output) from our accountant, did it come to the same conclusions or have good analysis as to why it differed?
Obviously, accounting is "spreadsheet math" intensive, so Claude wrote some python scripts for that which kept the math very stable. But there were some complex nuances that had taken the accountant and I quite a bit of work to track down and clarify. Claude quickly had a very accurate read on the situation and knew all the right clarifying questions.
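That stability is easy to see: exact `Decimal` bracket math gives the same answer on every run, whereas arithmetic done "in context" by a model can drift. A minimal sketch of the kind of script described; the brackets and rates below are made up for illustration, not real tax law:

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical progressive brackets: (upper bound, marginal rate).
BRACKETS = [
    (Decimal("11000"), Decimal("0.10")),
    (Decimal("44725"), Decimal("0.12")),
    (Decimal("95375"), Decimal("0.22")),
    (None,             Decimal("0.24")),  # top bracket, no upper bound
]

def tax_owed(taxable_income: Decimal) -> Decimal:
    """Progressive bracket math in exact decimal arithmetic:
    the same inputs always produce the same output."""
    owed = Decimal("0")
    lower = Decimal("0")
    for upper, rate in BRACKETS:
        if upper is None or taxable_income < upper:
            owed += (taxable_income - lower) * rate
            break
        owed += (upper - lower) * rate
        lower = upper
    return owed.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(tax_owed(Decimal("60000")))  # deterministic across sessions
```

Having the model emit and run a script like this, rather than compute totals token-by-token, is what keeps the math stable across sessions.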
I'm not yet ready to ever sign a return that's been entirely AI prepared, but I left the exercise pretty impressed.
I'm a little afraid of the failure modes, frankly. Clever, but that seems like it would be likely to exercise some under-tested timing situations. I'm not familiar with that API, so take the hunch with a grain of salt.
Also with this approach, you actually have a real collection and it's fun to collect things.
My son has autism and viewed his Netflix homepage as his personal curated collection. But then, of course, Netflix renegotiates licensing deals and entire seasons or shows just go away. And it really crushes him because it's like they were stolen from his personal collection.
So now when I hear him play, the super villain trying to destroy the world is always named Reed Hastings.
I understand his frustration: I have a similar issue with video games - Xbox Game Pass games sometimes leave the service. So I built an app that takes all my games across the various gaming services (Steam etc.), including the Xbox Game Pass ones, and it grabs them from the achievements (games I have played) on top of the catalog (available games) should they have left the catalog.
That way, games that are gone remain, and I have a Netflix-like interface to view all my games past and present.
Netflix is ultimately responsible for what they put on the platform, for delivering a consistent product to their users, and for setting expectations.
Netflix is exceptionally shitty at letting people know what is leaving their platform and when, and even at letting them know when the shows they saved or were in the middle of watching have been removed. Netflix has been around for ages, but we still have to depend on third-party websites to tell us what's coming/leaving. Some items will have a "leaving soon" banner on the thumbnail, but that's only good for shows Netflix decides to push at you. There's no section or search that will find all that stuff (searching for "leaving soon" will show you some of them).
They can only deliver things that are possible to deliver. There is nothing they can do to negotiate a forever licensing deal with a content provider, other than buying the content provider, which is also not possible unless they jack up prices 100x and somehow still keep all their users.
Netflix chose to negotiate revocable licenses to save money and draw in users, so it does seem valid to assign blame to Netflix for signing such contracts.
There are some easy optimization wins for this page, but none of the top ones are framework related. Maybe with the faster build times they can easily optimize images and 3rd-party dependencies. As someone else pointed out, nearly half that data is unoptimized images.
For the curious, Google's current homepage is a 200 kB payload all in, or about 50 times smaller.
Who remembers sprite sheets? Does that give my age away?
I did an optimization pass for a client once where I got rid of a ton of the sprites but didn't have the energy to redo it all, so it just had huge sections that were blank.
Question: since they've rebooted their approach to AI... have they given up on open models? There's no mention of open source or open weights or access to the models beyond their hosted services.
Alexandr Wang on Twitter [0] mentioned open source plans:
"this is step one. bigger models are already in development with infrastructure scaling to match. private api preview open to select partners today, with plans to open-source future versions. incredibly proud of the MSL team. excited for what’s to come!"