The economist in me immediately asks: Where is the financial incentive to do this? Just as a programmer would ask what the stack is. Some possibilities:
1) Money laundering - a large content farm someone can argue makes xyz in revenue, to hide an alternate source of revenue.
2) Ad fraud - topping podcast charts or SEO results to attract clicks to sell ads. Bot farms could also be making fake clicks to inflate ad sales.
3) Attempt to dominate the niche for the sale of knitting products. Or to pretend to dominate it so they can sell the business later at a larger multiple.
4) Test the waters of a much bigger engine for doing 1-3 above in an innocuous, hidden subject, before they do it with elections or some other more profitable field. Regulatory waters as well - seeing what they can get away with.
Feel free to brainstorm more incentives for making something like this.
I don't understand your question, are you asking what's the financial incentive to AI-generate thousands of podcasts a week? Isn't it obviously the income from streams and/or ads?
Are they? Or do they think they are listening to something real?
I've enjoyed reading alt-history at times. However, I can only enjoy it when it is clear that it isn't real history. Often one of the more enjoyable parts is the author's notes on how real history differs.
I have heard some human-written songs that really sounded real and tugged at the heartstrings - until I found out it was fiction, and then I was offended. The key here is that it portrayed someone good (by modern ideals - they all considered themselves good Christians) existing in a timeline where we know almost nobody was good.
the bitch of it, though, is that it doesn't only work if people listen to it. it also works if a bunch of AI bots can convincingly fake people listening to it. and, of course, those types of bots exist and have financial incentive to continue faking it, too.
at some point, these two competing interests are going to find out that they're paying each other to stare at each other's dwindling profits, but my bet is that it's going to be a while yet before that wake up call. and it will be an even longer churn into something else because no one is going to admit that they were funneling money into nonsense for years. they're going to "adjust strategies" to "modernize against changing markets" for "new potential growth". all shit that takes a long time to do because it's a half measure aimed at saving face to investors. so it'll work for a long time just based on the momentum of bullshit. =/
they said the podcasts had 12 million downloads - 750k weekly at the moment.
They get people listening. And when you download you don't know it will be crap AI slop.
I now get a bunch of this on YouTube - just endless drivel about some theme I am interested in. They create so much crap it's hard to see which ones are real. I started blocking the accounts that are making AI crap, but there are so many now.
Podcast network is an established and proven business model. You spend money to make episodes, you make money from ads. You make a bunch of different podcasts with a bunch of different target demos to reach a wider range of listeners and this grows your revenue and makes it more consistent. It's not complicated.
The specific incentives for starting a slop network are the promises of increased margins via reduced production costs (don't have to pay any pesky creative types) and more rapid growth via reduced production time (you can theoretically produce an episode in about the time it takes to listen to one, perhaps less).
I explored starting an AI slop network a few years ago. The tech wasn't quite ready at the time. My motivation was far more base: watching numbers go up.
Nevada makes it much harder to sue corporate officers when they commit malfeasance. Wyoming has tons of privacy perks for the officers (similar to Cayman Islands accounts). “Perks” though also convert into signaling about the intent of the founders.
Tried DeepSeek V4 Pro and Flash on OpenRouter and they worked fine - Flash might have actually produced a better result, and the same prompt across different inference providers also produced the same result. Then tried DS4 Pro again via tinfoil.sh and got the same design but littered with random Chinese characters in the code. Tinfoil pegs prompt data as private / not trained on. Does anyone know of DS4 providers that are verifiably private and not training on your prompts and outputs?
There is no insinuation. I am looking for both private and quality inference. I am not sure why Tinfoil had the characters. I will keep trying them, but it’s an issue if not resolved.
My push for trying local was the wildly unpredictable but systematic performance of large models like Opus and ChatGPT. It feels like at different times of day or week they are getting degraded beyond belief. I don’t know if it is deliberate, a function of demand, or a function of the models themselves. We are all learning the shape of this space by trying. I need to be able to rely on consistent performance - and maybe that means putting some harness of benchmarks between models and maybe it means between different inference providers, and maybe local.
Children learn by playing because not much is expected of the outcome in play. Improvement happens when you can play: when AI has a play environment to learn with reinforcement, when entrepreneurs are allowed to try, fail, and do better. Doctors learn by practicing under supervision, or on corpses, until they can do it for real. No line goes straight up without a jiggle in the beginning.
Go to OpenRouter, send your own investigative prompt that meets your needs to all the top open models. See how they do. Then notice if you can run any of those locally. Repeat at least once a month.
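The monthly comparison loop above can be sketched with a short script against OpenRouter's OpenAI-compatible chat endpoint. This is a minimal sketch, not a polished harness: the model IDs in MODELS are placeholders for whichever open models top the charts this month, and the prompt is yours to supply.

```python
# Minimal sketch: send one prompt to several models via OpenRouter's
# OpenAI-compatible /chat/completions endpoint and print the replies.
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

# Placeholder model IDs -- check openrouter.ai for the current top open models.
MODELS = ["deepseek/deepseek-chat", "z-ai/glm-4.5", "moonshotai/kimi-k2"]


def build_request(model: str, prompt: str) -> dict:
    """Build the OpenAI-style chat payload that OpenRouter accepts."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(model: str, prompt: str, api_key: str) -> str:
    """Send one prompt to one model and return the reply text."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_request(model, prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__" and "OPENROUTER_API_KEY" in os.environ:
    prompt = "Your own investigative prompt goes here."
    for model in MODELS:
        print(model, "->", ask(model, prompt, os.environ["OPENROUTER_API_KEY"]))
```

Saving each month's answers side by side makes the "repeat at least once a month" part easy to diff over time.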
> From a very early age, perhaps the age of five or six, I knew that when I grew up I should be a writer.
I think "what one wants to be" is a fashion and depends on the era. Today's children want to be YouTubers or content creators. I grew up consuming YouTube and social media, so I consider those mediums more captivating, allowing for vivid storytelling that engages the dominant senses.
Claude has become practically unusable for Pro users in the past few days. Opus 4.7 blew through an entire 5-hour limit on one question and didn't even finish answering it. Zero value delivered.
Opus 4.6 gives 2, maybe 3 questions before blowing through the Pro 5-hour limit as well. We are forced to use Sonnet, which makes the same mistakes over and over, and then to start trying other companies. To make matters worse, as we try to survive between credit expiries it reuses old code, re-introducing issues that we had already fixed on our own and with other models.
Anthropic in just a few days has gotten me to try GLM 5.1, the new Kimi, and back to OpenAI. OpenAI also seems to introduce new bugs without being carefully micromanaged. The advantage Claude has is that the models are more careful and can refactor code instead of leading to bloat as they go. But the throttling happening now is breaking things and making the entire subscription unusable. I really hope they fix it soon.
I'm starting to think I've been A/B tested, because this was my experience for almost a year with Claude ever since I tried it for coding. Meanwhile, my coworkers seemed to be able to use it for long periods of time without getting rate limited.
One interesting variable is that I'm located in Vietnam while my coworkers are located in Norway and Europe.
To work around this issue I used Claude for coding with a Copilot subscription which was much cheaper and had virtually no rate limiting.
Copilot gives you some set amount of credits each month, but you can also pay as you go if you run out of credit which is much better than the 5 hour window crap claude code would give me.
The only opus model available now on copilot for some reason is 4.7 and it costs 7.5x tokens, while everything else is 1x, 0.33x or free.
But I switched to using GPT 5.4 medium for a month or so which I find very reasonable.
My personal LLM coding stack is now OpenCode, Claude Sonnet for ideation on spec with OpenWhispr for voice-to-text, GLM-5.1 for the orchestrating loop, GLM-4.7 for coding, and DeepSeek R1 for review and validation. It works much, much better than the Claude Code setup I have at work for substantially less money to boot.
At this rate I fully anticipate being able to run a comparable stack on a 128GB Mac Studio using quants of newer-generation distilled OSS models in a year or two. Being able to ramble to a computer for an hour about features and technical philosophy then have it build a nearly-working app for $50 is an exciting feeling. There's still a long tail of productionization and fixing what the model didn't adhere to but it's still incredible.
I'm locked in for a year of claude pro. I encountered the same issues as you a couple weeks ago - I'd get like one solid plan done and really, really hope it was a 1-shot, because that was legit all I was gonna get out of it for those 5 hours, and it would be ~10% of weekly usage, enough to make me feel scared to hit send.
I got the $20 gpt tier, and now I just use claude to craft MD plan docs instead, then hand them off to gpt 5.4, and it has been working great. It can do about 4x as much work or so, based on my feelings (not accurate). If I have just small, simple stuff to do I might still fire those off with sonnet, and that seems plenty viable, but as soon as it's an opus-tier task I swap to this workflow.
A little annoying, as now I'm kinda trying to manage both a .claude/ and an .opencode/ folder, but I just have the .opencode/ stuff reference the .claude/ stuff, so it's a little less bleh.
I've been keeping within my usage because I've been in a funk a bit, but when I was slightly more worried I'd sorta juggle whether claude or gpt would handle writing some initial tests, as it did seem kinda imbalanced otherwise. Seems like gpt just resets weekly usage throughout the week anyway, so it's prolly nbd.
I wouldn't be surprised if folks start complaining to California government agencies like the Department of Consumer Affairs, and they take it seriously.
There is a lot of political capital to be earned by appearing to be "tough" on AI companies.
> Claude has become practically unusable for Pro users in the past few days. The Opus 4.7 blew through an entire 5 hour limit in one question and didn’t even finish answering it
Glad I’m not the only one!
I’ve been limited so often this week that I’ve set up half a dozen token compression tools in my workflow and had to do a crash course in token optimization.
Of course, it seems to only slightly delay the inevitable and doesn’t really solve the problem.
I have to guess that they're compute-limited somewhere, or the new models are massively overusing tokens, so I suppose we need to wait for new data centers to come online?
If you grow up in that environment (restricted by government in some areas and liberated in others) you’ll start seeing systems very differently. The game plays differently with different rules.
They had Pravets computers and robotic arms in rural classrooms in places that didn’t have traffic lights or English teachers. Chess and math competitions, as well, were accessible everywhere. Those were all self-feedback mechanisms that are cheap but allow an interested individual to iterate infinitely to reach advanced levels. Even if only a tiny subset of any population has the cognitive surplus to meddle with programming and math, they had easy access to fulfill that and be found. In the US, schools enable that with sports, which monetize as entertainment venues. In the Eastern Bloc they had that with brains. As soon as the stupid restrictions on travel were lifted, the brains knew to leave the other restrictions behind and emigrate to places that reward cognitive surplus.
Intelligence builds with reinforcement learning on context that gives you feedback - which makes it easy to iterate on. If you’re not making those types of games/tools/systems available to kids, you are going to lose that generation to more attention-grabbing stuff like YouTube or sports.
>Even if only a tiny subset of any population has the cognitive surplus to meddle with programming and math, they had easy access to fulfill that and be found.
This is exactly 100% not true. Source: I grew up behind the Iron Curtain. Why some people are so ready to glamorize poverty and restrictions, I don't even understand.
Not every school had computers, and those which did often had the fear of something being broken as the main guiding principle. Sure, some teachers were understanding, and by gaining their trust you could get some time for experiments. But it was rare. In a school "where there were no traffic lights" you would definitely find no "robotic arms" (I can't even guess where this sci-fi bs came from). And you would rather only be allowed to press the spacebar when told to, under close supervision.
Getting a computer at home wasn't easy either. That DIY culture appeared from need more than from fun, but it wasn't available to everyone anyway. Knowing how-to is a barrier in itself for a kid, but try getting all the necessary parts first. Those were societies of constant "defitsit" (shortage), and one needed connections and/or good money to obtain even simple things. On my block there was exactly one kid with a self-built computer, and you would need to fight for his favors. And anyway, those machines were often more like primitive gaming consoles with very limited programming possible.
So in fact the majority of late-socialism programming enthusiasts grew up in families where parents could bring their children to work and let them play with the computers there. Which is a minority of a minority.
I wrote from personal experience. In 1992, in a fishing town, we had a robotic arm and Pravetz 8 and 16 computers with 5-inch floppy disks. We had to use BASIC to program the arm, and it only did basic movements. The teacher had a 16-year-old who was assisting with the lab, and you did have to ask for permission to do stuff.
I'm glad you were that lucky! I was lucky too; my father had a computer at work. Maybe that's why we met here. I guess it would have been better if it were written with 'me' and not 'they'.
The fun fact was that the 16-year-old who passionately administered the lab was also hitting on any female students who went in there, essentially chasing them away. I suspect the number of techies would double if it weren't for all the bad behavior.
I was fortunate in that Internet cafes started appearing, and I could volunteer to administer networks and do troubleshooting for them while getting PC time for free. I also maintained PCs for friends with businesses who could afford one. So the Pravetz sparked my curiosity, but the real growth happened on begged and borrowed time on other people's computers.
>ROBKO 01 is an anthropoid robot manufactured in Bulgaria by BAS (Bulgarian Academy of Sciences) and produced by the Medical Equipment Factories. It is an analog of the US-manufactured Armdroid 1000. The two robot arms are completely the same, except for some minor differences in the mechanics and drive circuitry.
> Why some people are so ready to glamorize poverty and restrictions, I don't even understand.
> Not every school had computers, and those which do, often had the fear of something being broken as the main guiding principle
People glamorize exotic places they don't know, and you're doing exactly that here: I grew up in the 90s in the suburbs of Paris (not in a poor neighborhood) and we didn't have a single computer in school. And even later, in high school in the early 2000s, we had a few computers in dedicated rooms the teacher had to book in advance, and often not all the computers worked.
The West was much better than the Eastern Bloc in many aspects, but it wasn't the land of unlimited abundance some people from the East believed it was.
I didn't mention Paris, or even the West in general. I made zero comparisons. The whole text is about the place where I lived. So I'm not sure how I managed to glamorize anything.
You say "Not every school had computers" as a rebuttal without realizing that pretty much no school in a bunch of other countries elsewhere in Europe had computers at the time.