I also find these features annoying and useless and wish they would go away. But that's not because LLMs are useless, nor because the public isn't using them (as daishi55 pointed out here: https://news.ycombinator.com/item?id=44479578)
It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.
petekoomen made this point recently in a creative way: AI Horseless Carriages - https://news.ycombinator.com/item?id=43773813 - April 2025 (478 comments)
I am a huge AI supporter, and use it extensively for coding, writing and most of my decision making processes, and I agree with you. The AI features in non-AI-first apps tend to be awkward bolt-ons, poorly thought out and using low quality models to save money.
I don't want shitty bolt-ons, I want to be able to give ChatGPT/Claude/Gemini frontier models the ability to access my application data and make API calls for me to remotely drive tools.
You outsource most of your own decision-making process to AI?
Brother your brain is gonna turn into soup.
Meanwhile you aren't even using AI and you hallucinated the word "outsource" in their comment.
Xss3 is paraphrasing. As CuriouslyC wrote:
> "I am a huge AI supporter, and use it extensively for [...] most of my decision making processes"
> It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.
It's just rent-seeking. Nobody wants to actually build products for market anymore; it's a long process with a lot of risk behind it, and there's a chance you won't make shit for actual profit. If however you can create a "do anything" product that can be integrated with huge software suites, you can make a LOT of money and take a lot of mind-share without really lifting a finger. That's been my read on the "AI Industry" for a long time.
And to be clear, the integration part is the only part they give a shit about. Arguably especially for AI, since operating the product is so expensive compared to the vast majority of startups trying to scale. Serving JPEGs was never nearly as expensive for Instagram as responding to ChatGPT inquiries is for OpenAI, so they have every reason to diminish the number coming their way. Being the hip new tech that every CEO needs to ram into their product, irrespective of whether it does... well, anything useful, while also being too frustrating or obtuse for users to actually want to use, is arguably an incredibly good needle to thread, if they can manage it.
And the best part is, if OpenAI's products do actually do what they say on the tin, there's a good chance many lower rungs of employment will be replaced with their stupid chatbots, again irrespective of whether or not they actually do the job. Businesses run on "good enough." So it's great, if OpenAI fails, we get tons of useless tech injected into software products already creaking under the weight of so much bullhockety, and if they succeed, huge swaths of employees will be let go from entry level jobs, flooding the market, cratering the salary of entire categories of professions, and you'll never be able to get a fucking problem resolved with a startup company again. Not that you probably could anyway but it'll be even more frustrating.
And either way, all the people responsible for making all your technology worse every day will continue to get richer.
You may agree or disagree with the OP, but this passage is spot-on:
"I don’t want AI customer service—but I don’t get a choice.
I don’t want AI responses to my Google searches—but I don’t get a choice.
I don’t want AI integrated into my software—but I don’t get a choice.
I don’t want AI sending me emails—but I don’t get a choice.
I don’t want AI music on Spotify—but I don’t get a choice.
I don’t want AI books on Amazon—but I don’t get a choice."
It’s not spot on. Buying and using all of these products is a choice.
The last is especially egregious. I don’t want poorly-written (by my standards) books cluttering up bookstores, but all my life I’ve walked into bookstores and found my favorite genres have lots of books I’m not interested in. Do I have some kind of right to have stores only stock products that I want?
The whole thing is just so damn entitled. If you don’t like something, don’t buy it. If you find the presence of some products offensive in a marketplace, don’t shop there. Spotify is not a human right.
The Onion has a great response to this from 2009: https://m.youtube.com/watch?v=lMChO0qNbkY
Of course you can opt out. People live in the backwoods of Alaska. But if you want to live a semi normal life there is no option. And absolutely people should feel entitled to a normal life.
Normal life means collectivism and conformity behaviour?
ROFL. Thank you for sharing that link!
If these things are genuinely so universally hated won't they just be.. capitalism'd out of existence? People will stop engaging with them and better products will win
What book store will stock AI slop that no-one wants to buy?
No, because “better products” won’t exist. That’s the complaint: every company is rushing to throw AI into their stuff, and/or use it to replace humans.
They’re not trying to satisfy customers: they’re answering shareholders. Our system is no longer about offering the best products, it’s about having the market share to force people to do business with you or maybe two other equally bad companies that constantly look for ways to extract more money from people to make shareholders happy. See: Two choices of smartphone OS, ISP regional monopolies or duopolies, two consumer OSes, a handful of mobile carriers, almost all available TV models being “smart TVs” laden with spyware…
(I’m speaking from the US perspective, this may not be as pronounced elsewhere.)
> it’s about having the market share to force people to do business with you
The answer to this is regulation. See: https://www.msn.com/en-us/news/technology/apple-updates-app-...
Outside of a monopoly the best way to extract more money from people is to offer a better product. If AI is being forced and people do hate it, they'll move towards products that don't do that
What happened to Windows Recall being enabled by default? Surely it was in Microsoft's best interest to force it on people. But no, they reversed it after a huge backlash. You see this again and again
Of your examples, ISPs are the only one I can see that's hated without other options. Most people are quite happy with Windows/Mac/Android/iOS/Mint Mobile/Smart-TV-With-No-Internet-Access
Part of the problem is that some of these services have enormous upfront costs to work at all.
It's fun to say "let's go write a complete replacement for Microsoft Office" or the Adobe suite or what have you, but that has a truly astonishing upfront cost to get to a point where it's even servicing 50% of the use cases, let alone 95 or 99%.
Or there's other examples where it's not obvious there's sufficient interest to finance an alternative - how many people are going to pay for something that replicates solely the old functionality of Microsoft Paint or Notepad, for example.
You might be conflating capitalism (owning things like factories) with consumerism (buying things like widgets).
If all of the factory owners discover a type of widget to sell that can incidentally drive down wages the more units they move, it's unlikely for consumers to be provided much choice in their future widgets.
The lowest cost (either purchase price, or to produce) products don't create a monopoly
$30 blenders that break in 3 months haven't bankrupted Vitamix
I'm a bookseller who often uses Ingram to buy books wholesale when I'm not buying direct from publishers. I've used them for their distribution service since opening 5 years ago because they are the only folks in town who can help bootstrap a very small business with coverage of all the major publishers (in the U.S.). They're great at that, for a small cut in revenue.
Six-plus months ago they put a chatbot in the bottom right corner of their website that literally covers up buttons I use all the time for ordering, so that I have to scroll now in order to access those controls (Chrome, MacOS). After testing it with various queries it only seems to provide answers to questions in their pre-existing support documentation.
This is not about choice (see above, they are the only game in town), and it is not about entitlement (we're a tiny shop trying to serve our customers' often obscure book requests). They seemed to literally place the chatbot buttons onto their website with no polling of their users. This is an anecdotal report about Ingram specifically.
Opting out is easy, we can just stop using products from Microsoft, Apple, Meta and Google. Of course, for many that also means opting out of their job, which is a great way to opt out of a home, a family, healthcare, dental care and luxuries like food.
I don't think it's entitlement to make a well-mannered complaint about how little choice we actually have when it comes to the whims of the tech giants.
> If you don’t like something, don’t buy it.
The OP's point is that increasingly, we don't have that choice, for example, because AI slop masquerades as if it were authored by human beings (that's, in fact, its purpose!), or because the software applications you rely on suddenly start pushing "AI companions" on you, whether you want them or not, or because you have no viable alternatives to the software applications you use, so you must put up with those "AI companions," whether you want them in your life or not.
AI shit is usually not advertised as such. It's made to look like it was made by a human. So I would have to consider this product carefully beforehand, or return it after buying. That's a hassle. I don't want to spend productive time on this nonsense. For all I care, say it hurts the GDP.
How is this hard to understand? You’re completely missing the point. You’re basically saying if you get a spam text, don’t read it. If you get spam email, don’t read it. If you see an ad modal popup on a website, close it. It’s all still super annoying just like these AI features screaming “use me! click me! type to me!” all over the place in the UI.
I actually use the AI books that litter Kindle Unlimited to teach my daughter how to differentiate and be more sophisticated. I think a feature of all this is that it inoculates a lot of people against AI spew. If it were isolated to the elite and the unscrupulous alone, people would be a lot more vulnerable. By saturating the world with it, people get a true choice by being able to recognize it when they see it and avoid the output. It's not like all our surfaces aren't covered in enshittification as it is; another dose of it won't make it meaningfully worse.
And I know a lot of non-English speakers who really appreciate the AI writing assistants built into email and the AI summaries built into search. Assuming no one finds them beneficial because they litter an already littered experience is a bit close-minded. Many people are otherwise challenged in some way: summaries help dyslexics get through otherwise intractable walls of text, multimodal glasses help the vision impaired, writing assistants help bilingual workers level the playing field. Just because these don't apply to you doesn't mean they're nothing but a bother.
(Now, should you be able to disable it? Maybe, but as the author points out, that's a product choice made for financial reasons, and there's a market of products that make a different choice - don't like Google? Don't feel so entitled that every service be free, and pay for Kagi.)
Probably no one enjoys AI books though. I did my best at devil's advocate on that above.
> I don’t want poorly-written (by my standards) books cluttering up bookstores
It's ridiculous to compare bad human books with bad AI books, because there are many human books which are life-changing, but there isn't a single AI book which isn't trash.
For consumer products, sure, don't buy them. For people in office-based careers, they may not get a choice when their company rolls out Copilot, or management decides to buy an AI helpdesk agent, or a vendor pushes AI slop into the next enterprise software version.
How is that different from not liking other technology choices one’s employer makes? I could write a book about how much I hate our expense tool. But it’s never occurred to me that I am entitled to have a different one.
The major AI gatekeepers, with their powerful models, are already experiencing capacity and scale issues. This won't change unless the underlying technology (LLMs) undergoes a fundamental shift. As more and more things become AI-enabled, how dependent will we be on these gatekeepers and their computing capacity? And how much will they charge us for prioritised access to these resources? And we haven't really gotten to the wearable devices stage yet.
Also, everyone who requires these sophisticated models now needs to send everything to the gatekeepers. You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models and who receives the data and what is really happening to the data.
This aggregation of power and centralisation of data worries me as much as the shortcomings of LLMs. The technology is still not accurate enough. But we want it to be accurate because we are lazy. So I fear that we will end up with many things of diminished quality in favour of cheaper operating costs — time will tell.
We've been running our own LLM server at the office for a month now, as an experiment (for privacy/infosec reasons), and a single RTX 5090 is enough to serve 50 people for occasional use. We run Qwen3 32b, which in some benchmarks is equivalent to GPT-4.1 mini or Gemini 2.5 Flash. The GPU allows 2 concurrent requests at a time with 32k context each and 60 tok/s. At first I was skeptical a single GPU would be enough, but it turns out most people don't use LLMs 24/7.
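(If anyone wants to reproduce this: assuming the server exposes an OpenAI-compatible API, which vLLM and llama.cpp's server both do, the client side is trivial. A minimal sketch; the host, port and model name below are placeholders, not our actual setup:)

    # Point the standard OpenAI client at a self-hosted endpoint;
    # only base_url changes versus the hosted APIs.
    from openai import OpenAI

    client = OpenAI(base_url="http://10.0.0.5:8000/v1", api_key="unused")
    resp = client.chat.completions.create(
        model="Qwen/Qwen3-32B",  # whatever model the server has loaded
        messages=[{"role": "user", "content": "Draft a polite follow-up email."}],
        max_tokens=256,
    )
    print(resp.choices[0].message.content)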
If those smaller models are sufficient for your use cases, go for it. But for how much longer will companies release smaller models for free? They invested so much. They have to recoup that money. Much will depend on investor pressure and the financial environment (tax deductions etc).
Open source endeavors will have a hard time mustering the resources to train models that are competitive. Maybe we will see larger cooperatives, like an Apache Software Foundation for ML?
It's not just about smaller models. I recently bought a Macbook M4 Max with 128GB RAM. You can run surprisingly large models locally with unified memory (albeit somewhat slowly). And now AMD has brought that capability to the x86 world with Strix. But I agree that how long Google, Meta, Alibaba, etc. will continue to release open-weight models is a big question. It's obviously just a catch-up strategy aimed at the moats of OpenAI and Anthropic; once they catch up, the incentive disappears.
Even Google and Facebook are releasing distills of their models (Gemma3 is very good, competitive with qwen3 if not better sometimes.)
There are a number of reasons to do this: You want local inference, you want attention from devs and potential users etc.
Also the smaller self hostable models are where most of the improvement happens these days. Eventually they'll catch up with where the big ones are today. At this point I honestly wouldn't worry too much about "gatekeepers."
Pricing for commodities does not allow for “recouping costs”. All it takes is one company seeing models as a complementary good to their core product, worth losing money on, and nobody else can charge more.
I’d support an Apache for ML but I suspect it’s unnecessary. Look at all of the money companies spend developing Linux; it will likely be the same story.
"Maybe we will see larger cooperatives, like a Apache Software Foundation for ML?"
I suspect the Linux Foundation might be a more likely source considering its backers and how much those backers have provided LF by way of resources. Whether that's aligned with LF's goals ...
"Just" is doing a lot of heavy lifting there. It definitely helps with getting data but actually training your model would be very capital intensive, ignoring the cost of paying for those outputs you're training on.
What the parent poster means is that you can use the API to generate many question/answer pairs on which you then train your own model. For a more detailed explanation of this and other related methods, I can recommend this paper: https://arxiv.org/pdf/2402.13116
You don't understand what Gigachad is talking about. You can buy API credits to gain access to a model in the cloud, and then use that to train your own local model through a process called distilling.
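(Roughly, the generation half of that loop looks like the sketch below; the model name and questions are just examples, and note that most providers' terms of service restrict training on their outputs. The second half, fine-tuning a small local model on the saved pairs, is ordinary supervised training.)

    # Distillation, step 1: have a strong API "teacher" model answer
    # seed questions, saving prompt/completion pairs for fine-tuning.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    seed_questions = [
        "Explain RAID 5 in two sentences.",
        "What does LUKS provide that ext4 alone does not?",
    ]
    with open("distill_pairs.jsonl", "w") as f:
        for q in seed_questions:
            a = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": q}],
            ).choices[0].message.content
            f.write(json.dumps({"prompt": q, "completion": a}) + "\n")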
Quants: Unsloth Dynamic 2.0, it's 4-6 bits depending on the layer.
RAM is 96 GB. More RAM makes a difference even if the model fits entirely in the GPU: the filesystem pages containing the model on disk are cached entirely in RAM, so when you switch models (we use other models as well) the unloading/loading overhead is 3-5 seconds.
The Key Value Cache is also quantized to 8 bit (less degrades quality considerably).
This gives you 1 generation with 64k context, or 2 concurrent generations with 32k each. Everything takes 30 GB VRAM, which also leaves some space for a Whisper speech-to-text model (turbo & quantized) running in parallel as well.
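(Sanity-checking that 30 GB figure with back-of-envelope arithmetic; the layer/head counts are Qwen3-32B's published shape as I recall it, and the bits-per-weight is an assumption, so treat this as an estimate, not a measurement:)

    # Rough VRAM budget for the setup described above.
    params = 32.8e9                       # Qwen3-32B parameter count (approx.)
    weights_gb = params * 4.5 / 8 / 1e9   # ~4.5 bits/weight quant ≈ 18.5 GB

    layers, kv_heads, head_dim = 64, 8, 128
    kv_bytes_per_token = 2 * layers * kv_heads * head_dim  # K+V, 1 byte each at 8-bit
    ctx_tokens = 64 * 1024                # 1 x 64k or 2 x 32k, same total
    kv_gb = kv_bytes_per_token * ctx_tokens / 1e9          # ≈ 8.6 GB

    print(f"weights ~ {weights_gb:.1f} GB, KV cache ~ {kv_gb:.1f} GB")
    # ~27 GB before activations and runtime overhead, consistent with ~30 GB.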
> The major AI gatekeepers, with their powerful models, are already experiencing capacity and scale issues. This won't change unless the underlying technology (LLMs) undergoes a fundamental shift. As more and more things become AI-enabled, how dependent will we be on these gatekeepers and their computing capacity? And how much will they charge us for prioritised access to these resources? And we haven't really gotten to the wearable devices stage yet.
The scale issue isn't the LLM providers, it's the power grid. Worldwide, electricity averages about 250 W per capita. Your body is 100 W, and you have a duty cycle of 25% thanks to the 8-hour work day and weekends, so in practice some hypothetical AI trying to replace everyone in their workplaces today would need to be more energy efficient than the human body.
Even with the extraordinarily rapid roll-out of PV, I don't expect a one-for-one replacement of all human workers to be possible before 2032, even if the best SOTA model were good enough to do so (and they're not, they've still got too many weak spots for that).
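To spell out the arithmetic behind the duty-cycle claim (the rounding here is mine):

\[
\text{duty cycle} \approx \frac{8\,\mathrm{h}}{24\,\mathrm{h}} \times \frac{5}{7} \approx 24\%, \qquad \bar{P}_{\mathrm{worker}} \approx 100\,\mathrm{W} \times 0.25 = 25\,\mathrm{W}
\]

So a worker's average on-the-job metabolic draw is about 25 W, against a grid budget of roughly 250 W per capita, and nearly all of that budget is already spoken for by everything else electricity does.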
This also applies to open-weights models, which are already good enough to be useful even when SOTA private models are better.
> You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models and who receives the data and what is really happening to the data.
I dispute that it was not already a problem, due to the GDPR consent popups often asking to share my browsing behaviour with more "trusted partners" than there were pupils in my secondary school.
But I agree that the aggregation of power and centralisation of data is a pertinent risk.
"how much will they charge us for prioritised access to these resources"
For the consumer side, you'll be the product, not the one paying in money, just like before.
For the creator side, it will depend on how competition in the market sustains. Expect major regulatory capture efforts to eliminate all but a very few 'sanctioned' providers in the name of 'safety'. If only 2 or 3 remain, it might get really expensive.
It's like talking into a void. The issue with AI is that it is too subtle: it is too easy to get acceptable junk answers, and too few realize we've made a universal crib sheet. Software developers included, perhaps one of the worst populations due to their extremely weak communication as a community. To be repeatedly successful with AI, one has to exert mental effort to prompt it effectively, but pretty much nobody is willing to even consider that. Attempts to discuss the language aspects of using an LLM get ridiculed as 'prompt engineering is not engineering' and dismissed, while that is exactly what it is: prompt engineering using a new software language, natural language, which the industry refuses to take seriously, but which is in fact an extremely technical programming language, so subtle that few to none realize it, nor the power embodied by it within LLMs. They are incredible, and they are subtle, to the degree that the majority think they are fraud.
Isn't "engineering" based on predictability, on repeatability?
LLMs are not very predictable. And that's not just true for the output. Each change to the model impacts how it parses and computes the input. For someone claiming to be a "Prompt Engineer", this cannot work. There are so many variables that are simply unknown to the casual user: training methods, the training set, biases, ...
If I get the feeling I am creating good prompts for Gemini 2.5 Pro, the next version might render those prompts useless. And that might get even worse with dynamic, "self-improving" models.
So when we talk about "Vibe coding", aren't we just doing "Vibe prompting", too?
> LLMs are not very predictable. And that's not just true for the output.
If you run an open source model from the same seed on the same hardware they are completely deterministic. It will spit out the same answer every time. So it's not an issue with the technology, and there's nothing stopping you from writing repeatable prompts and prompting techniques.
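A concrete sketch of what "same seed, same hardware" means (example model; full determinism also assumes identical library versions and kernels):

    # Fixed-seed sampling with a local open-weights model reproduces
    # the same tokens run after run on the same machine and stack.
    from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

    tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")
    inputs = tok("The capital of France is", return_tensors="pt")

    set_seed(0)  # fixes Python, NumPy and torch RNGs
    out = model.generate(**inputs, do_sample=True, temperature=0.7, max_new_tokens=8)
    print(tok.decode(out[0]))  # identical output every run with seed 0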
> If you run an open source model from the same seed on the same hardware they are completely deterministic.
Are you sure of that? Parallel scatter/gather operations may still be at the mercy of scheduling variances, due to some forms of computer math not being associative.
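The non-associativity is easy to demonstrate:

    # Floating-point addition is not associative, so a parallel reduction
    # that changes summation order can change the result bit-for-bit.
    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c == a + (b + c))  # False
    print((a + b) + c, a + (b + c))    # 0.6000000000000001 0.6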
By "unpredictability", we mean that AIs will return completely different results if a single word is changed to a close synonym, or an adverb or prepositional phrase is moved to a semantically identical location, etc. Very often this simple change will move you from "get the correct answer 90% of the time" (about the best that AIs can do) to "get the correct answer <10% of the time".
Whenever people talk about "prompt engineering", they're referring to randomly changing these kinds of things, in hopes of getting a query pattern where you get meaningful results 90% of the time.
What you're describing is specifically the subtle nature of LLMs I'm pointing at: that changing a single word to a close synonym is meaningful. Why and how they are meaningful gets pushback from the developer community; they somehow do not see this as a topic, a point of engineering proficiency. It is, but it requires an understanding of how LLMs encode and retrieve data.
The reason changing one word in a prompt to a close synonym changes the reply is that information is embedded and recovered in LLMs through the specific words used, in series. The 'in a series' aspect is subtle and important. The same topic is in the LLM multiple times, with different levels of treatment from casual to academic. Each treatment, from casual to formal, uses different words: similar words, but different, and that difference is very meaningful. That difference is how seriously the information is being handled. The use of one term versus another causes a prompt to index into one treatment of the subject versus another. The more formal the terms used, meaning the synonyms used by experts of that area of knowledge, the more accurate the replies. The close synonyms instead generate replies from outsiders of that knowledge: those not using the same phrases as those with the most expertise, the phrases used by those perhaps trying to understand but who do not yet.
It is not randomly changing things in one's prompts at all. It's understanding the knowledge space one is prompting within such that the prompts generate accurate replies. This requires knowing the knowledge space one prompts within, so one knows the correct formal terms that unlock accurate replies. Plus, knowing that area, one is in a better position to identify hallucination.
Who says the model stays the same and the seed isn't random for most of the companies that run AI? There is no drawback to randomness for them.
Predictable does not necessarily follow from deterministic. Hash algorithms, for instance, are valuable specifically because they are both deterministic and unpredictable.
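For example:

    # Deterministic yet unpredictable: a one-character input change
    # produces a completely unrelated hash output.
    import hashlib
    print(hashlib.sha256(b"prompt A").hexdigest()[:16])
    print(hashlib.sha256(b"prompt B").hexdigest()[:16])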
Relying on model, seed, and hardware to get "repeatable" prompts essentially reduces an LLM to a very lossy natural language decompression algorithm. What other reason would someone have for asking the same question over and over and over again with the same input? If that's a problem you need solve then you need a database, not a deterministic LLM.
The issue is that you have to put in more effort to solve a problem using AI than to just solve it yourself.
If I have to do extensive subtle prompt engineering and use a lot of mental effort to solve my problem... I'll just solve the problem instead. Programming is a mental discipline: I don't need help typing, and if using an AI means putting in more brainpower, it's fundamentally failed at improving my ability to engineer software.
> The issue is that you have to put in more effort to solve a problem using AI than to just solve it yourself
conceding that this may be the case, there are entire categories of problems that i am now able to approach that i have felt discouraged from in the past. even if the code is wrong (which, for the most part, it isn't), there is a value for me to have a team of over-eager puppies fearlessly leading me into the most uninviting problems, and somehow the mess they may or may not create makes solving the problem more accessible to me. even if i have to clean up almost every aspect of their work (i usually don't), the "get your feet wet" part is often the hardest part for me, even with a design and some prototyping. i don't have this problem at work really, but for personal projects it's been much more fun to work with the robots than always bouncing around my own head.
I agree with the general gist of this piece, but the awkward flow of the writing style makes me wonder if it itself was written by AI…
There are open source or affordable, paid alternatives for everything the author mentioned. However, there are many places where you must use these things due to social pressure, lock-in with a service provider (health insurance co, perhaps), and yes unfortunately I see some of these things as soon or now unavoidable.
Another commenter mentioned that ChatGPT is one of the most popular websites on the internet and therefore users clearly do want this. I can easily think of two points that refute that:
1. The internet has shown us time and time again that popularity doesn’t indicate willingness to pay (which paid social networks had strong popularity…?)
2. There are many extremely popular websites that users wouldn’t want to be woven throughout the rest of their personal and professional digital lives
I don't think this is true. A lot of people had no interest until smartphones arrived. Doing anything on a smartphone is a miserable experience compared to using a desktop computer, but it's more convenient. "Worse but more convenient" is the same sales pitch as for AI, so I can only assume that AI will be accepted by the masses too.
People didn't even want mobile phones. In The Netherlands, there's a famous video of an interviewer asking people on the street ca. 1997 whether they would want a mobile phone. So not even a smartphone, just a mobile phone. The answer was overwhelmingly negative.
>there's a famous video of an interviewer asking people on the street ca. 1997 whether they would want a mobile phone. So not even a smartphone, just a mobile phone. The answer was overwhelmingly negative.
So people didn't want to be walking around with a tether that allowed the whole world to call them wherever they were? Le Shock!
Now if they'd asked people if they'd like a small portable computer on which they could keep in touch with friends, read books, play games, and play music and movies wherever they went, and which also made phone calls, I suspect the answer might have been different.
Is the problem really the phone, or everything but the actual phoning capability? Mobile phones were a thing twenty years ago and I didn't recall them being pulled out at the slightest gap in the conversation. I feel like the notifications and internet access caused the change, not the phone (or SMS for that matter).
Interesting you should say that. I found a Substack post earlier today along those lines [0].
I almost never take my phone with me, especially when with my wife and son, as they always have theirs with them, although with elderly parents not in the best of health I really should take it more.
But it's something I see a lot these days. In fact, the latest Vodafone ad in the UK has a bunch of lads sitting outside a pub, and one is laughing at something on his phone. There's also a betting ad where the guy is making bets on his phone (presumably) while in a restaurant with others!
I find this normalized behaviour somewhat concerning for the future.
As a kid I had Internet access since the early 90s. Whenever there was some actual technology to see (Internet, mobile gadgets, etc.), people stood there with big eyes and forgot for a moment that this was the nerdiest stuff ever.
Yes, everyone wanted the internet. It was massively hyped and the uptake was widespread and rapid.
Obviously saying “everyone” is hyperbole. There were luddites and skeptics about it just like with electricity and telephones. Nevertheless the dotcom boom is what every new industry hopes to be.
I was there. There was massive skepticism, endless jokes about internet-enabled toasters and the uselessness and undesirability of connecting everything to the internet, people bemoaning the loss of critical skills like using library card catalogs, all the same stuff we see today.
In 20 years AI will be pervasive and nobody will remember being one of the luddites.
I was there too. You’re forgetting internet addiction, pornography, stranger danger, hacking and cybercrime, etc.
Whether the opposition was massive or not, in proportion to the enthusiasm and optimism about the globally connected information superhighway, isn’t something I can quantify, so I’ll bow out of the conversation.
Toasters in fact don't need internet, and jokes about them are entirely valid. Quite a lot of devices that don't need internet have useless internet slapped on them.
I've seen this bad take over and over again in the last few years, as a response to the public reaction to cryptocurrency, NFTs, and now generative AI.
It's bullshit.
I mean, sure: there were people who hated the Internet. There still are! They were very clearly a minority, and almost exclusively older people who didn't like change. Most of them were also unhappy about personal computers in general.
But the Internet caught on very fast, and was very, very popular. It was completely obvious how positive it was, and people were making businesses based on it left and right that didn't rely on grifting, artificial scarcity, or convincing people that replacing their own critical thinking skills with a glorified autocomplete engine was the solution to all their problems. (Yes, there were also plenty of scams and unsuccessful businesses. They did not in any way outweigh the legitimate successes.)
By contrast, generative AI, while it has a contingent of supporters that range from reasonable to rabid, is broadly disliked by the public. And a huge reason for that is how much it is being pushed on them against their will, replacing human interaction with companies and attempting to replace other things like search.
>But the Internet caught on very fast, and was very, very popular. It was completely obvious how positive it was,
>By contrast, generative AI, while it has a contingent of supporters that range from reasonable to rabid, is broadly disliked by the public.
It is absolutely wild how people can just ignore something staring right at them, plain as day.
ChatGPT.com is the 5th most visited site on the planet and growing. It's the fastest growing software product ever, with over 500M weekly active users and over a billion messages per day. Just ChatGPT. This is not information that requires corporate espionage. The barest minimum effort would have shown you how blatantly false you are.
What exactly is the difference between this and an LLM hallucination?
ChatGPT is the 5th most-visited website on the planet and growing quickly. That's one of many popular products. I'd hardly call that unwilling. I bet only something like 8% of Instagram users say they would pay for it. Are we to take this to mean that Instagram is an unpopular product that is being forced on an unwilling public?
My 75 year old father uses Claude instead of google now for basically any search function.
All the anti-AI people I know are in their 30s. I think there are many in this age group that got used to nothing changing and are wishing it would stay that way.
Would you like your Facebook feed or Twitter or even Hacker News feed inserted in between your work emails or while you are shopping for clothes on a completely different website?
If you answer no, does that make you an unwilling user of social media? It’s the most visited sites in the world after all, how could randomly injecting it into your GPS navigation system be a poor fit?
I downloaded a Quordle game on Android yesterday. It pushes you to buy a premium subscription, and you know what that gets you? AI chat inside the game.
I'm not unwilling to use AI in places where I choose. But let's not pretend that just because people do use it in one place, they are willing to have it shoved upon them in every other place.
If I want to use ChatGPT I will go and use ChatGPT myself without a middleman. I don't need every app and website to have its own magical chat interface that is slow, undiscoverable, and makes stuff up half the time.
I actually quite like the AI-for-search use case. I can't load all of a company's support documents and manuals into ChatGPT easily; if they've done that for me, great!
It has some kind of ChatGPT integration, and I tried it and it found the answer I was looking for straight away, after 10 minutes of googling and manual searching had failed.
I looked for the right term, but force-feeding is what it is. Yesterday I also changed my default search engine from DuckDuckGo to Ecosia, as they seem to be the only one left not providing flaky AI summaries.
In fact I also tried the communication part (outside of Outlook), but people don't like superficial AI polish.
I currently run DuckDuckGo, but don't get the summaries, as my phone browser sets individual domain conditions, and DuckDuckGo will return results without JavaScript, cookies, or DOM storage enabled.
But why are the CEOs insisting so much on AI? Because stock investors prefer to invest in anything with "AI inside". So the "AI business model" would not collapse, because it is what investors want. It is a bubble. It will be bubbly for a while, until it isn't.
It is not just that. Companies that already have lots of users interacting with their platform (Microsoft, Google, Meta, Apple ...) want to capture your AI interactions to generate more training data, get insights in what you want and how you go about it, and A/B test on you. Last thing they want is someone else (Anthropic, Deepseek ...) capturing all that data on their users and improve the competition.
Yeah, literally every new tech like this has everyone investing in it and trying lots of silly ideas: the web, mobile apps, cryptocurrencies. That doesn't mean they are fundamentally useless (though cryptocurrencies have yet to produce anything successful beyond Bitcoin).
I bet if you go back to the printing press, telegraph, telephone, etc. you will find people saying "it's only a bubble!".
But that's exactly the problem with proprietary software. It's not force-feeding you anything, it's working exactly as intended.
Software is loyal to the owner. If you don't own your software, software won't be loyal to you. It can be convenient for you, but as time passes and interests change, if you don't own software it can turn against you. And you shouldn't blame Microsoft or its utilities. It doesn't owe you anything just because you put effort into it and invested time in it. It'll work according to who it's loyal to, who owns it.
If it bothers you, choose software you can own. If you can't choose software you own now, change your life so you can in the future. And if you just can't, you have to accept the consequences.
Companies didn't ask your opinion when they offshore manufacturing to Asia. They didn't ask your opinion when they offshore support to call centers in Asia. Companies don't ask your opinion, they do what they think is best for their financial interest, and that is how capitalism works.
Once upon a time, not too long ago, there was someone who would bag your groceries, and someone who would clean your windshield at the gas station. Now you do self-checkout. Has anyone asked for this? Your quality of life is worse; the companies are automating away humanity into something they think is more profitable for them.
In a society where you don't have government protection for such companies, there would be other companies who provide a better service whose competition would win. But when you have a fat corrupt government, lobbying makes sense, and crony-capitalism births monopolies which cannot have any competition. Then they do whatever they want to you and society at large, and they don't owe you, you owe them. Your tax dollars sponsor all of this even more than your direct payments do.
IPv6 adoption is actually limited by network effect and infrastructure transition costs, not lack of end-user benefits - unlike AI, which faces a value perception problem.
That value (of one company) is from speculative investment. I don't think it negates that the field has a perception problem.
After seeing something like blockchain go completely afoul, used for the wrong things and embraced by the public for it, I at least agree that AI has a value perception problem.
Given everyone and their mother is putting AI into their products, it makes me wonder how that revenue breaks down between people incidentally paying for it versus deliberately paying for it versus being subsidized by VC. Obviously ultimately all this revenue is being collected at a massive loss, but I wonder if that carries on down the value chain.
I don’t think I’m trying to make that argument but thanks for putting it in my mouth. I do pay (or via employment get paid access) for a lot of products that have AI features that I don’t care about so from personal experience I know that at least some of the value chain is incidental.
Huh? I’ve been programming for 20 years now and LLMs/GenAI have replaced search and StackOverflow for me - I’d say that means they are pretty good! They are not perfect, not even close, but they are excellent when used as an assistant and when you know the result you’re expecting and can spot its obvious errors.
I mostly agree with TFA, with one glaring exception: The quality of Google search results has regressed so badly in the past years (played by SEO experts), that AI was actually a welcome improvement.
Yes, that's the next logical step. The only silver lining is that Google currently has less of a moat in the technology in question than last time, so some upstart could always be on their heels in a Kagi-esque way.
People don't know how to search, that's it. Even the HN population.
Every time this gets posted, I ask for one example of thing you tried to find and what keywords you used. So I'm giving you the same offer, give me for one thing you couldn't find easily on Google and the keywords you used, and I'll show you Google search is just fine.
Alright, had this recently, since I keep forgetting LUKS commands.
How do you set up an encrypted file on linux that can be mounted and accessed same as a hard drive.
(note: luks, a few commands)
You will see a nonsensical AI summarization, lots of videos, and junk websites being promoted; then you'll likely find a few blogs with the actual commands needed. Nowhere is there a link to a manual for LUKS or similar.
In the past, the same searches returned the no-ad, straightforward blogs as the first links, then some man pages, then other unrelated things; now I get garbage.
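(For reference, the recipe those blogs eventually give is roughly the following; this is from memory, so double-check it against the cryptsetup man page before trusting it. Shown as a Python wrapper; the underlying commands are what matter:)

    # Create a LUKS container file and mount it like a disk (run as root;
    # paths and size are placeholders).
    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    run(["truncate", "-s", "1G", "/root/vault.img"])         # backing file
    run(["cryptsetup", "luksFormat", "/root/vault.img"])     # asks for passphrase
    run(["cryptsetup", "open", "/root/vault.img", "vault"])  # -> /dev/mapper/vault
    run(["mkfs.ext4", "/dev/mapper/vault"])                  # first time only
    run(["mkdir", "-p", "/mnt/vault"])
    run(["mount", "/dev/mapper/vault", "/mnt/vault"])        # use like any drive
    # later: umount /mnt/vault && cryptsetup close vault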
FWIW, when I put <<linux create file image encrypted file system>> into Google (this was the first thing I tried, though without knowledge that it might be a tricky case I might have been less careful picking keywords) I get what look like plausible results.
At the top there's a "featured snippet" from opensource.com, allegedly from 2021, that begins with: create an empty file (this turns out to mean a file of given size with no useful data in it, not a size-0 file), then make a LUKS volume using cryptsetup, etc.
First actual search result is a question on Ask Ubuntu (the Stack Exchange site dedicated to Ubuntu) headed "How do I create an encrypted filesystem inside a file?" which unless I'm confused is at least the correct question. Top answer there (from 2017) looks plausible and seems to be describing the same steps as the "featured snippet". A couple of other links to Ask Ubuntu are given below that one but they seem worse.
Next search result is a Reddit thread that describes how to do something different but possibly still of interest to someone who wants to do the thing you describe.
Next search result is a question on unix.stackexchange.com that turns out to be about something different; under it are other results from the same site, the first of which has a cryptsetup-based recipe that seems similar to the other plausible ones mentioned above.
Further search results continue to have a good density of plausible-looking answers to essentially the intended question.
This all seems fairly satisfactory assuming the specific answers don't turn out to be garbage, which doesn't look very likely; it seems like Google has done a decent job here. It doesn't specifically turn up the LUKS manual, but then that wasn't the question you actually asked.
Having done that search to find that the relevant command seems to be cryptsetup and the underlying facility is called LUKS, searches for <<cryptsetup manual>> and <<luks documentation>> (again, the first search terms that came to mind) look to me like they find the right things.
(Google isn't my first-choice search engine at present; DuckDuckGo provides similar results in all these cases.)
I am not taking any sides on the broader question of whether in general Google can give good search results if one picks the right words for it, but in this particular case it seems OK.
I asked Google that exact question, and I got an AI summary that looks alright? Please verify whether those steps make sense; I pasted them into a text service, as it's too much for an HN comment: https://justpaste.it/63eiz
That wasn't the question. The complaint is the poster can't find anything on Google because the results are now so poor, and your response is "but here's some AI generated slop, which may or may not make any sense."
Is "How do you set up an encrypted file on linux that can be mounted and accessed same as a hard drive." literally what you put into the search bar? if so, that's the problem.
try "mount luks encrypted file" or "luks file mount". too many words and any grammar at all will degrade your results. it's all about keywords
edit: after trying it myself i quickly realized the problem - luks related articles are usually about drives or partitions, not about files. this search got me what i wanted: "luks mount file -partition -filesystem"
i found this article[1], which is in german (my native tongue), but contained the right information.
Google is nearly useless for recipes. Try finding a recipe for beef bourguignon. They exist, but with huge prefaces and elaboration that mean endless scrolling on a phone, all in the name of maximizing time spent on page (which is a search ranking criterion).
I've also heard third-hand claims that the authors of those recipes don't vet what they've written, e.g., what the true prep/cooking times are.
I still find online recipes convenient, but I don't blindly trust details like cooking time and temperature. (I mean, those things are always subject to variability, but now I don't trust the times to even be in the right ballpark.)
Happily, there are some cooks that I think deserve our trust, e.g. Chef John.
I feel an urge to build personal local AI bots that would be personal spam filters. AI filtering AI, fight fire with fire. Mostly because the world OP wants is never coming back. Everything will be AI and it's everywhere.
I also feel an urge to build spaces in the internet just for humans, with some 'turrets' to protect against AI invasion and exploitation. I just don't know what content would be shared in those spaces because AI is already everywhere in content production.
I noticed that some of his choices contributed to his problem. I haven't been forced into accepting AI (so far) while I've been using duckduckgo for search, libreoffice, protonmail, and linux.
Even DDG has integrated AI now, and while it can be disabled, the privacy aspect seems to mean that DDG regularly forgets my settings and re-enables the AI features.
Maybe I'm doing something wrong here, but even DDG is annoying me with this.
> Before proceeding let me ask a simple question: Has there ever been a major innovation that helped society, but only 8% of the public would pay for it?
In my European country you have to pay a toll to use a highway. Most people opt to use them, instead of taking the old 2-lane road that existed before the highway and is still free.
I think there’s a difference between the tool that helps you do work better and the service that generates the end result.
People would be less upset if ai is shown to support the person. This also allows that person to curate the output and ignore it if needed before sharing it, so it’s a win/win.
Just a quick quibble: the subtitle of the article calls this problem tyranny.
Tyranny is a real thing which exists in the world and is not exemplified by “product manager adding text expansion to word processor.”
The natural state of capitalism is trying things which get voted on by money. It’s always subject to boom-bust cycles and we are in a big boom. This will eventually correct itself once the public makes its position clear and the features which truly suck will get fixed or removed.
I agree copilot for answering emails is negative value.
But I find Google AI search results very useful; I can't see how they will monetise this, but I can't complain for now.
There's an excellent Frank Zappa reference in The Famous Article: "I'm the Slime" [1].
The thing that really chafes me about this AI, irrespective of whether it is awesome or not, is transmitting all of the information to some unknown server. To go with another Zappa reference, AI becomes The Central Scrutinizer [2].
I predict an increasing use of Free Software by discerning people who want to maintain more control of their information.
Why do people who attempt to critique AI lean on the "no one wants this, everyone hates this" instead of just making their point. If your arguments are strong you don't need to wrap them in false statistics.
But this is one thing that gen AI is genuinely good at: constructing computer programs under close human supervision. It's also the most profitable use (though not enough to justify the valuations). Also, it may be a big thing here, but it's pretty niche in the larger scheme of things.
The article is about it encroaching in the domain of human communications. Mass adoption is the only way to justify the incredible financial promises.
I use Claude at least weekly to help write documents for me. And I’m a good writer, who spent a lot of time and energy getting that way. I have a friend who is a terrible writer who I do proofreading for. He uses chatgpt and it’s made a world of difference for him in getting things accomplished and communicating what he wants.
I think there are lots of valid arguments against LLM usage, but it's extremely tiring to hear how it's not useful when I get so much use out of it.
This guy calls himself the honest broker, but his articles are just expressions of status anxiety. The kind of media that he loves to write about is becoming less relevant, and so he lashes out at everything new, from AI to TikTok.
I’ve observed the opposite—not enough people are leveraging AI, especially in government institutions. Critical time and taxpayer money are wasted on tasks that could be automated with state-of-the-art models. Instead of embracing efficiency, these organizations perpetuate inefficiency at public expense.
The same issue plagues many private companies. I’ve seen employees spend days drafting documents that a free tool like Mistral could generate in seconds, leaving them 30-60 minutes to review and refine. There's a lot of resistance from the public. They're probably thinking that their job will be saved if they refuse to adopt AI tools.
> I’ve seen employees spend days drafting documents that a free tool like Mistral could generate in seconds, leaving them 30-60 minutes to review and refine.
What I have seen is employees spending days asking the model again and again to actually generate the document they need, and then submit it without reviewing it, only for a problem to explode a month later because no one noticed a glaring absurdity in the middle of the AI-polished garbage.
You're describing incompetence or laziness—I’ve encountered those kinds of people as well. But I’ve also seen others who are 2-3 times more productive thanks to AI. That said, I’m not suggesting AI should be used for every single task, especially if the output is garbage. If someone blindly relies on AI without adding any real value beyond typing prompts, then they’re not contributing anything meaningful.
Yeah, no, you can't see that yet. What you see is a comparison between your own super-optimistic imagined idea of useful AI and either reality, or even the knee-jerk "government is stupid and wasteful because Musk said so".
The thing is, though, that time wasn’t wasted. It was spent fully understanding what they were actually trying to say, the context, the connotations of various different phrasings etc. It was spent mapping the territory. Throwing your initial, unexamined description into a prompt might generate something that looks enough like the email they’d have written, but it’s not been thought through. If the 10 minutes’ thought spent on the prompt was sufficient, the final email wouldn’t be taking days to do by hand.
I also find these features annoying and useless and wish they would go away. But that's not because LLMs are useless, nor because the public isn't using them (as daishi55 pointed out here: https://news.ycombinator.com/item?id=44479578)
It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.
petekoomen made this point recently in a creative way: AI Horseless Carriages - https://news.ycombinator.com/item?id=43773813 - April 2025 (478 comments)
I am a huge AI supporter, and use it extensively for coding, writing and most of my decision making processes, and I agree with you. The AI features in non-AI-first apps tend to be awkward bolt-ons, poorly thought out and using low quality models to save money.
I don't want shitty bolt-ons, I want to be able to give chatgtp/claude/gemini frontier models the ability to access my application data and make api calls for me to remotely drive tools.
You outsource most of your own decision-making process to AI?
Brother your brain is gonna turn into soup.
Meanwhile you aren't even using AI and you hallucinated the word "outsource" in their comment.
Xss3 is paraphrasing. As CuriouslyC wrote:
> "I am a huge AI supporter, and use it extensively for [...] most of my decision making processes"
> It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.
It's just rent-seeking. Nobody wants to actually build products for market anymore; it's a long process with a lot of risk behind it, and there's a chance you won't make shit for actual profit. If however you can create a "do anything" product that can be integrated with huge software suites, you can make a LOT of money and take a lot of mind-share without really lifting a finger. That's been my read on the "AI Industry" for a long time.
And to be clear, the integration part is the only part they give a shit about. Arguably especially for AI, since operating the product is so expensive compared to the vast majority of startups trying to scale. Serving JPEGs was never nearly as expensive for Instagram as responding to ChatGPT inquiries is for OpenAI, so they have every reason to diminish the number coming their way. Being the hip new tech that every CEO needs to ram into their product, irrespective of it does... well, anything useful, while also being so frustrating or obtuse for users to actually want to use, is arguably an incredibly good needle to thread, if they can manage it.
And the best part is, if OpenAI's products do actually do what they say on the tin, there's a good chance many lower rungs of employment will be replaced with their stupid chatbots, again irrespective of whether or not they actually do the job. Businesses run on "good enough." So it's great, if OpenAI fails, we get tons of useless tech injected into software products already creaking under the weight of so much bullhockety, and if they succeed, huge swaths of employees will be let go from entry level jobs, flooding the market, cratering the salary of entire categories of professions, and you'll never be able to get a fucking problem resolved with a startup company again. Not that you probably could anyway but it'll be even more frustrating.
And either way, all the people responsible for making all your technology worse every day will continue to get richer.
Your may agree or disagree with the OP, but this passage is spot-on:
"I don’t want AI customer service—but I don’t get a choice.
I don’t want AI responses to my Google searches—but I don’t get a choice.
I don’t want AI integrated into my software—but I don’t get a choice.
I don’t want AI sending me emails—but I don’t get a choice.
I don’t want AI music on Spotify—but I don’t get a choice.
I don’t want AI books on Amazon—but I don’t get a choice."
It’s not spot on. Buying and using all of these products is a choice.
The last is especially egregious. I don’t want poorly-written (by my standards) books cluttering up bookstores, but all my life I’ve walked into bookstores and found my favorite genres have lots of books I’m not interested in. Do I have some kind of right to have stores only stock products that I want?
The whole thing is just so damn entitled. If you don’t like something, don’t buy it. If you find the presence of some products offensive in a marketplace, don’t shop there. Spotify is not a human right.
The Onion has a great response to this from 2009: https://m.youtube.com/watch?v=lMChO0qNbkY
Of course you can opt out. People live in the backwoods of Alaska. But if you want to live a semi normal life there is no option. And absolutely people should feel entitled to a normal life.
Normal life means collectivism and conformity behaviour?
ROFL. Thank you for sharing that link!
If these things are genuinely so universally hated won't they just be.. capitalism'd out of existence? People will stop engaging with them and better products will win
What book store will stock AI slop that no-one wants to buy?
No, because “better products” won’t exist. That’s the complaint: every company is rushing to throw AI into their stuff, and/or use it to replace humans.
They’re not trying to satisfy customers: they’re answering shareholders. Our system is no longer about offering the best products, it’s about having the market share to force people to do business with you or maybe two other equally bad companies that constantly look for ways to extract more money from people to make shareholders happy. See: Two choices of smartphone OS, ISP regional monopolies or duopolies, two consumer OSes, a handful of mobile carriers, almost all available TVs models being “smart TVs” laden with spyware…
(I’m speaking from the US perspective, this may not be as pronounced elsewhere.)
> it’s about having the market share to force people to do business with you
The answer to this is regulation. See: https://www.msn.com/en-us/news/technology/apple-updates-app-...
Outside of a monopoly the best way to extract more money from people is to offer a better product. If AI is being forced and people do hate it, they'll move towards products that don't do that
What happened to Windows Recall being enabled by default? Surely it was in Microsoft's best interest to force it on people. But no, they reversed it after a huge backlash. You see this again and again
Of your examples, ISPs are the only one I can see that's hated without other options. Most people are quite happy with Windows/Mac/Android/iOS/Mint Mobile/Smart-TV-With-No-Internet-Access
Part of the problem is that some of these services have enormous upfront costs to work at all.
It's fun to say "let's go write a complete replacement for Microsoft Office" or the Adobe suite or what have you, but that has a truly astonishing upfront cost to get to a point where it's even servicing 50% of the use cases, let alone 95 or 99%.
Or there's other examples where it's not obvious there's sufficient interest to finance an alternative - how many people are going to pay for something that replicates solely the old functionality of Microsoft Paint or Notepad, for example.
You might be conflating capitalism (owning things like factories) with consumerism (buying things like widgets).
If all of the factory owners discover a type of widget to sell that can incidentally drive down wages the more units they move, it's unlikely for consumers to be provided much choice in their future widgets.
The lowest cost (either purchase price, or to produce) products don't create a monopoly
$30 blenders that break in 3 months haven't bankrupted Vitamix
I'm a bookseller who often uses Ingram to buy books wholesale when I'm not buying direct from publishers. I've used them for their distribution service since opening 5 years ago because they are the only folks in town who can help bootstrap a very small business with coverage of all the major publishers (in the U.S.). They're great at that, for a small cut in revenue.
Six-plus months ago they put a chatbot in the bottom right corner of their website that literally covers up buttons I use all the time for ordering, so that I have to scroll now in order to access those controls (Chrome, MacOS). After testing it with various queries it only seems to provide answers to questions in their pre-existing support documentation.
This is not about choice (see above, they are the only game in town), and it is not about entitlement (we're a tiny shop trying to serve our customers' often obscure book requests). They seemed to literally place the chatbot buttons onto their website with no polling of their users. This is an anecdotal report about Ingram specifically.
Opting out is easy, we can just stop using products from Microsoft, Apple, Meta and Google. Of course, for many that also means opting out of their job, which is a great way to opt out of a home, a family, healthcare, dental care and luxuries like food.
I don't think it's entitlement to make a well-mannered complaint about how little choice we actually have when it comes to the whims of the tech giants.
> If you don’t like something, don’t buy it.
The OP's point is that increasingly, we don't have that choice, for example, because AI slop masquerades as if it were authored by human beings (that's, in fact, its purpose!), or because the software applications you rely on suddenly start pushing "AI companions" on you, whether you want them or not, or because you have no viable alternatives to the software applications you use, so you must put up with those "AI companions," whether you want them in your life or not.
AI shit is usually not advertised as such. It's made to look like it was made by a human. So I would have to consider this product carefully beforehand, or return it after buying. That's a hassle. I don't want to spend productive time on this nonsense. For all I care, say it hurts the GDP.
How is this hard to understand? You’re completely missing the point. You’re basically saying if you get a spam text, don’t read it. If you get spam email, don’t read it. If you see an ad modal popup on a website, close it. It’s all still super annoying just like these AI features screaming “use me! click me! type to me!” all over the place in the UI.
I actually use the AI books that litter Kindle Unlimited to teach my daughter how to differentiate and be more sophisticated. I think a feature of all this is that it inoculates a lot of people against AI spew. If it were isolated to the elite and the unscrupulous alone, people would be a lot more vulnerable. By saturating the world with it, people get a true choice by being able to recognize it when they see it and avoid the output. It's not like all our surfaces aren't covered in enshittification as it is; another dose of it won't make it meaningfully worse.
And I know a lot of non-English speakers who really appreciate the AI writing assistants built into email and the AI summaries built into search. Assuming no one finds them beneficial because they litter an already littered experience is a bit close-minded. Many people are otherwise challenged in some way: summaries help dyslexics get through otherwise intractable walls of text, multimodal glasses help the vision-impaired, writing assistants help bilingual workers level the playing field. Just because these don't apply to you doesn't mean they're nothing but a bother.
(Now, should you be able to disable it? Maybe, but as the author points out, that's a product choice made for financial reasons, and there's a market of products that make a different choice. Don't like Google? Don't feel so entitled that every service be free; pay for Kagi.)
Probably no one enjoys AI books, though. I did my best at devil's advocate on that above.
> I don’t want poorly-written (by my standards) books cluttering up bookstores
It's ridiculous to compare bad human books with bad AI books, because there are many human books which are life-changing, but there isn't a single AI book that isn't trash.
For consumer products, sure, don't buy them. For people in office-based careers, they may not get a choice when their company rolls out Copilot, or management decides to buy an AI helpdesk agent, or a vendor pushes AI slop into the next enterprise software version.
How is that different from not liking other technology choices one’s employer makes? I could write a book about how much I hate our expense tool. But it’s never occurred to me that I am entitled to have a different one.
You should consider that yes, maybe you are entitled to a better one
Entitled, probably not, able to communicate frustrations and suggest alternative options, absolutely.
There are plenty of non AI books on Amazon.
The major AI gatekeepers, with their powerful models, are already experiencing capacity and scale issues. This won't change unless the underlying technology (LLMs) undergoes a fundamental shift. As more and more things become AI-enabled, how dependent will we be on these gatekeepers and their computing capacity? And how much will they charge us for prioritised access to these resources? And we haven't really gotten to the wearable devices stage yet.
Also, everyone who requires these sophisticated models now needs to send everything to the gatekeepers. You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models and who receives the data and what is really happening to the data.
This aggregation of power and centralisation of data worries me as much as the shortcomings of LLMs. The technology is still not accurate enough. But we want it to be accurate because we are lazy. So I fear that we will end up with many things of diminished quality in favour of cheaper operating costs — time will tell.
We've been running our own LLM server at the office for a month now, as an experiment (for privacy/infosec reasons), and a single RTX 5090 is enough to serve 50 people for occasional use. We run Qwen3 32b, which in some benchmarks is equivalent to GPT-4.1-mini or Gemini 2.5 Flash. The GPU allows 2 concurrent requests at a time, with 32k context each, at 60 tok/s. At first I was skeptical a single GPU would be enough, but it turns out most people don't use LLMs 24/7.
If those smaller models are sufficient for your use cases, go for it. But for how much longer will companies release smaller models for free? They invested so much; they have to recoup that money. Much will depend on investor pressure and the financial environment (tax deductions, etc.).
Open source endeavors will have a hard time mustering the resources to train models that are competitive. Maybe we will see larger cooperatives, like an Apache Software Foundation for ML?
It's not just about smaller models. I recently bought a MacBook M4 Max with 128GB RAM. You can run surprisingly large models locally with unified memory (albeit somewhat slowly). And now AMD has brought that capability to the x86 world with Strix. But I agree that how long Google, Meta, Alibaba, etc. will continue to release open weight models is a big question. It's obviously just a catch-up strategy aimed at the moats of OpenAI and Anthropic; once they catch up, the incentive disappears.
Even Google and Facebook are releasing distills of their models (Gemma 3 is very good, competitive with Qwen3 if not better sometimes).
There are a number of reasons to do this: You want local inference, you want attention from devs and potential users etc.
Also, the smaller self-hostable models are where most of the improvement happens these days. Eventually they'll catch up with where the big ones are today. At this point I honestly wouldn't worry too much about "gatekeepers."
Pricing for commodities does not allow for “recouping costs”. All it takes is one company seeing models as a complementary good to their core product, worth losing money on, and nobody else can charge more.
I’d support an Apache for ML but I suspect it’s unnecessary. Look at all of the money companies spend developing Linux; it will likely be the same story.
> Open Source endeavors will have a hard time to bear the resources to train models that are competitive.
Perhaps, but see also SETI@home and similar @home/BOINC projects.
"Maybe we will see larger cooperatives, like a Apache Software Foundation for ML?"
I suspect the Linux Foundation might be a more likely source considering its backers and how much those backers have provided LF by way of resources. Whether that's aligned with LF's goals ...
Seems like you don’t have to train from scratch. You can just distil a new model off an existing one by buying API credits and copying the model's behaviour.
"Just" is doing a lot of heavy lifting there. It definitely helps with getting data but actually training your model would be very capital intensive, ignoring the cost of paying for those outputs you're training on.
Your "API credits" don't buy the model. You just buy some resource to use the model that is running somewhere else.
What the parent poster means is that you can use the API to generate many question/answer pairs on which you then train your own model. For a more detailed explanation of this and other related methods, I can recommend this paper: https://arxiv.org/pdf/2402.13116
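To sketch the mechanics (a toy example only: it assumes the `openai` Python package and an OpenAI-compatible endpoint, and the teacher model name, prompts, and output file are all illustrative):

```python
# A minimal sketch of API-based distillation data collection.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Explain LUKS disk encryption in one paragraph.",
    "What does quantizing a KV cache do?",
]

with open("distill.jsonl", "w") as f:
    for p in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o",  # the "teacher"; any API-served model works
            messages=[{"role": "user", "content": p}],
        )
        # One prompt/completion pair per line, ready for a supervised
        # fine-tuning run on a smaller local "student" model.
        pair = {"prompt": p, "completion": resp.choices[0].message.content}
        f.write(json.dumps(pair) + "\n")
```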
You don't understand what Gigachad is talking about. You can buy API credits to gain access to a model in the cloud, and then use that to train your own local model through a process called distillation.
Qwen3 isn't good enough for programming. You need at least Deepseek V3.
That's really great performance! Could you share more details about the implementation (i.e., which quantized version of the model, how much RAM, etc.)?
Model: Qwen3 32b
GPU: RTX 5090 (no ROPs missing), 32 GB VRAM
Quants: Unsloth Dynamic 2.0, it's 4-6 bits depending on the layer.
RAM: 96 GB. More RAM makes a difference even if the model fits entirely in the GPU: filesystem pages containing the model on disk are cached entirely in RAM, so when you switch models (we use other models as well) the unloading/loading overhead is only 3-5 seconds.
The key-value (KV) cache is also quantized to 8 bits (anything less degrades quality considerably).
This gives you 1 generation with 64k context, or 2 concurrent generations with 32k each. Everything takes 30 GB VRAM, which also leaves some space for a Whisper speech-to-text model (turbo & quantized) running in parallel as well.
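The commenter doesn't say which serving stack they use; since Unsloth Dynamic quants are GGUF files, something like llama-cpp-python would be one plausible way to load a comparable setup. A minimal sketch, with a hypothetical model file name:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-32B-UD-Q4_K_XL.gguf",  # an Unsloth Dynamic quant (hypothetical name)
    n_gpu_layers=-1,   # offload every layer to the 32 GB GPU
    n_ctx=32768,       # 32k tokens of context per generation
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize LUKS in two sentences."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

For multiple concurrent users, llama.cpp's llama-server with two parallel slots (as the 2x32k split above suggests) would be the more natural deployment than in-process bindings.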
> The major AI gatekeepers, with their powerful models, are already experiencing capacity and scale issues. This won't change unless the underlying technology (LLMs) undergoes a fundamental shift. As more and more things become AI-enabled, how dependent will we be on these gatekeepers and their computing capacity? And how much will they charge us for prioritised access to these resources? And we haven't really gotten to the wearable devices stage yet.
The scale issue isn't the LLM providers, it's the power grid. Worldwide, electricity supply is about 250 W per capita. Your body runs on 100 W, and you have a duty cycle of about 25% thanks to the 8-hour work day and weekends, so in practice some hypothetical AI trying to replace everyone in their workplaces today would need to be more energy-efficient than the human body.
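Spelling out that back-of-envelope arithmetic (all figures from the paragraph above, all approximate):

```python
# Back-of-envelope version of the numbers above.
grid_w_per_capita = 250      # worldwide electricity supply, watts per person
body_w = 100                 # human metabolic power, watts
duty_cycle = 40 / 168        # ~24%: a 40-hour work week out of 168 hours
human_work_w = body_w * duty_cycle

print(f"{human_work_w:.0f} W of 'on the clock' human power per worker")
print(f"the whole per-capita grid is only {grid_w_per_capita / human_work_w:.0f}x that")
```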
Even with the extraordinarily rapid roll-out of PV, I don't expect this to be a one-for-one replacement for all human workers before 2032, even if the best SOTA model were good enough to do so (and they're not; they've still got too many weak spots for that).
This also applies to open-weights models, which are already good enough to be useful even when SOTA private models are better.
> You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models and who receives the data and what is really happening to the data.
I dispute that it was not already a problem, due to the GDPR consent popups often asking to share my browsing behaviour with more "trusted partners" than there were pupils in my secondary school.
But I agree that the aggregation of power and centralisation of data is a pertinent risk.
"how much will they charge us for prioritised access to these resources"
For the consumer side, you'll be the product, not the one paying in money just like before.
For the creator side, it will depend on how competition in the market sustains itself. Expect major regulatory-capture efforts to eliminate all but a very few 'sanctioned' providers in the name of 'safety'. If only 2 or 3 remain, it might get really expensive.
It's like talking into a void. The issue with AI is that it is too subtle: too easy to get acceptable junk answers from, and too subtle for the majority to realize we've made a universal crib sheet. Software developers are included in that majority, perhaps one of the worst populations due to their extremely weak communication as a community. To be repeatedly successful with AI, one has to exert mental effort to prompt it effectively, but pretty much nobody is willing to even consider that. Attempts to discuss the language aspects of using an LLM get ridiculed as 'prompt engineering is not engineering' and dismissed, while that is exactly what it is: prompt engineering using a new software language, natural language, which the industry refuses to take seriously, but which is in fact an extremely technical programming language, so subtle that few to none of you realize it, nor the power embodied by it within LLMs. They are incredible, and they are subtle, to the degree that the majority think they are a fraud.
Isn't "Engineering" is based on predictability, on repeatability?
LLMs are not very predictable. And that's not just true for the output. Each change to the model impacts how it parses and computes the input. For someone claiming to be a "Prompt Engineer", this cannot work. There are so many variables that are simply unknown to the casual user: training methods, the training set, biases, ...
If I get the feeling I am creating good prompts for Gemini 2.5 Pro, the next version might render those prompts useless. And that might get even worse with dynamic, "self-improving" models.
So when we talk about "Vibe coding", aren't we just doing "Vibe prompting", too?
> LLMs are not very predictable. And that's not just true for the output.
If you run an open-source model from the same seed on the same hardware, it is completely deterministic. It will spit out the same answer every time. So it's not an issue with the technology, and there's nothing stopping you from writing repeatable prompts and prompting techniques.
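As a small illustration of the point (a sketch assuming the Hugging Face transformers library and a small open model; see the caveat about GPU scheduling below):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

name = "gpt2"  # any small open model works for the demonstration
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("Forced AI features are", return_tensors="pt")
for run in range(2):
    set_seed(42)  # reseed before every generation
    out = model.generate(**inputs, do_sample=True, temperature=0.8,
                         max_new_tokens=25)
    # Same seed, same weights, same hardware -> identical tokens each run.
    print(run, tok.decode(out[0], skip_special_tokens=True))
```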
Realistically, how many people do you think have the time, skills and hardware required to do this?
> If you run an open source model from the same seed on the same hardware they are completely deterministic.
Are you sure of that? Parallel scatter/gather operations may still be at the mercy of scheduling variances, due to some forms of computer math not being associative.
By "unpredictability", we mean that AIs will return completely different results if a single word is changed to a close synonym, or an adverb or prepositional phrase is moved to a semantically identical location, etc. Very often this simple change will move you from "get the correct answer 90% of the time" (about the best that AIs can do) to "get the correct answer <10% of the time".
Whenever people talk about "prompt engineering", they're referring to randomly changing these kinds of things, in hopes of getting a query pattern where you get meaningful results 90% of the time.
What you're describing is precisely the subtle nature of LLMs I'm pointing at: that changing a single word to a close synonym is meaningful. Why and how it is meaningful gets pushback from the developer community; they somehow do not see this as a topic, a point of engineering proficiency. It is, but it requires an understanding of how LLMs encode and retrieve data.
The reason changing one word in a prompt to a close synonym changes the reply is that the specific words used in a series are how information is embedded and recovered by LLMs. The 'in a series' aspect is subtle and important. The same topic is in the LLM multiple times, with different levels of treatment, from casual to academic. Each treatment, from casual to formal, uses different words: similar words, but different, and that difference is very meaningful. That difference is how seriously the information is being handled. Using one term versus another causes a prompt to index into one treatment of the subject versus another. The more formal the terms used, meaning the synonyms used by experts in that area of knowledge, the more accurate the replies. The close synonyms, meanwhile, surface replies from outsiders to that knowledge: those not using the same phrases as those with the most expertise, perhaps trying to understand but not there yet.
It is not randomly changing things in one's prompts at all. It's understanding the knowledge space one is prompting within such that the prompts generate accurate replies. This requires knowing the knowledge space one prompts within, so one knows the correct formal terms that unlock accurate replies. Plus, knowing that area, one is in a better position to identify hallucination.
Words are power, and specifically, specific words are power.
Who says the model stays the same and the seed is not random for most of the companies that run AI? There is no drawback to randomness for them.
Predictable does not necessarily follow from deterministic. Hash algorithms, for instance, are valuable specifically because they are both deterministic and unpredictable.
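The hash analogy in one runnable snippet (Python standard library):

```python
import hashlib

# Deterministic: the same input always produces the same digest.
print(hashlib.sha256(b"please write my email").hexdigest())
print(hashlib.sha256(b"please write my email").hexdigest())  # identical

# Unpredictable: a one-word change scrambles the entire output.
print(hashlib.sha256(b"please write my letter").hexdigest())
```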
Relying on model, seed, and hardware to get "repeatable" prompts essentially reduces an LLM to a very lossy natural language decompression algorithm. What other reason would someone have for asking the same question over and over and over again with the same input? If that's a problem you need solve then you need a database, not a deterministic LLM.
The issue is that you have to put in more effort to solve a problem using AI, than to just solve it yourself
If I have to do extensive, subtle prompt engineering and use a lot of mental effort to solve my problem... I'll just solve the problem instead. Programming is a mental discipline: I don't need help typing, and if using an AI means putting in more brainpower, it has fundamentally failed at improving my ability to engineer software.
> The issue is that you have to put in more effort to solve a problem using AI, than to just solve it yourself
Conceding that this may be the case, there are entire categories of problems that I am now able to approach that I have felt discouraged from in the past. Even if the code is wrong (which, for the most part, it isn't), there is value for me in having a team of over-eager puppies fearlessly leading me into the most uninviting problems, and somehow the mess they may or may not create makes solving the problem more accessible to me. Even if I have to clean up almost every aspect of their work (I usually don't), the "get your feet wet" part is often the hardest part for me, even with a design and some prototyping. I don't have this problem at work, really, but for personal projects it's been much more fun to work with the robots than always bouncing around my own head.
This overlooks a new category of developer who operates in natural language, not in syntax.
As with many productivity-boosting tools, it’s slower to begin with, but once you get used to it, and become “fluent”, it’s faster.
If this nondeterministic software engineering had been invented first we'd have built statues of whoever gave us C.
I agree with the general gist of this piece, but the awkward flow of the writing style makes me wonder if it itself was written by AI…
There are open-source or affordable paid alternatives for everything the author mentioned. However, there are many places where you must use these things due to social pressure or lock-in with a service provider (a health insurance co, perhaps), and yes, unfortunately I see some of these things as now, or soon to be, unavoidable.
Another commenter mentioned that ChatGPT is one of the most popular websites on the internet, and that therefore users clearly do want this. I can easily think of two points that refute that:
1. The internet has shown us time and time again that popularity doesn't indicate willingness to pay (which paid social networks achieved strong popularity?).
2. There are many extremely popular websites that users wouldn't want woven throughout the rest of their personal and professional digital lives.
>Everybody wanted the Internet.
I don't think this is true. A lot of people had no interest until smartphones arrived. Doing anything on a smartphone is a miserable experience compared to using a desktop computer, but it's more convenient. "Worse but more convenient" is the same sales pitch as for AI, so I can only assume that AI will be accepted by the masses too.
People didn't even want mobile phones. In The Netherlands, there's a famous video of an interviewer asking people on the street ca. 1997 whether they would want a mobile phone. So not even a smartphone, just a mobile phone. The answer was overwhelmingly negative.
>there's a famous video of an interviewer asking people on the street ca. 1997 whether they would want a mobile phone. So not even a smartphone, just a mobile phone. The answer was overwhelmingly negative.
So people didn't want to be walking around with a tether that allowed the whole world to call them wherever they were? Le Shock!
Now, if they'd asked people whether they'd like a small portable computer they could keep in touch with friends on, and read books, play games, and play music and movies on wherever they went, which also made phone calls, I suspect the answer might have been different.
I’m at the point where a significant part of me wishes they hadn’t been invented.
Yesterday we sat and watched a table of four lads drinking beer, each just watching his phone. At the slightest gap in conversation, out they came.
They're ruining human interaction. (The phones, not the beer-drinking lads.)
Is the problem really the phone, or everything but the actual phoning capability? Mobile phones were a thing twenty years ago, and I don't recall them being pulled out at the slightest gap in the conversation. I feel like the notifications and internet access caused the change, not the phone (or SMS, for that matter).
Interesting you should say that. I found a Substack post earlier today along those lines [0].
I almost never take my phone with me, especially when I'm with my wife and son, as they always have theirs with them. Although, with elderly parents not in the best of health, I really should take it more.
But it's something I see a lot these days. In fact, the latest Vodafone ad in the UK has a bunch of lads sitting outside a pub, and one is laughing at something on his phone. There's also a betting ad where the guy is making bets on his phone (presumably) while in a restaurant with others!
I find this normalized behaviour somewhat concerning for the future.
[0] - https://abysspostcard.substack.com/p/party-like-it-is-1975
Think like an engineer to solve the problem. You could start by adjusting the beer-to-lad ratio and see where that gets you.
In US colleges there is a game known as “Edward Fortyhands” which would solve the problem quite well.
As a kid, I had Internet access from the early '90s. Whenever there was some actual technology to show (Internet, mobile gadgets, etc.), people stood there with big eyes and forgot for a moment that this was the nerdiest stuff ever.
Yes, everyone wanted the internet. It was massively hyped and the uptake was widespread and rapid.
Obviously saying “everyone” is hyperbole. There were luddites and skeptics about it just like with electricity and telephones. Nevertheless the dotcom boom is what every new industry hopes to be.
I was there. There was massive skepticism, endless jokes about internet-enabled toasters and the uselessness and undesirability of connecting everything to the internet, people bemoaning the loss of critical skills like using library card catalogs, all the same stuff we see today.
In 20 years AI will be pervasive and nobody will remember being one of the luddites.
I was there too. You’re forgetting internet addiction, pornography, stranger danger, hacking and cybercrime, etc.
Whether the opposition was massive or not, in proportion to the enthusiasm and optimism about the globally connected information superhighway, isn’t something I can quantify, so I’ll bow out of the conversation.
Toasters in fact don't need internet, and jokes about them are entirely valid. Quite a lot of devices that don't need internet have useless internet slapped onto them.
The Internet of Things was largely BS.
I've seen this bad take over and over again in the last few years, as a response to the public reaction to cryptocurrency, NFTs, and now generative AI.
It's bullshit.
I mean, sure: there were people who hated the Internet. There still are! They were very clearly a minority, and almost exclusively older people who didn't like change. Most of them were also unhappy about personal computers in general.
But the Internet caught on very fast, and was very, very popular. It was completely obvious how positive it was, and people were making businesses based on it left and right that didn't rely on grifting, artificial scarcity, or convincing people that replacing their own critical thinking skills with a glorified autocomplete engine was the solution to all their problems. (Yes, there were also plenty of scams and unsuccessful businesses. They did not in any way outweigh the legitimate successes.)
By contrast, generative AI, while it has a contingent of supporters that range from reasonable to rabid, is broadly disliked by the public. And a huge reason for that is how much it is being pushed on them against their will, replacing human interaction with companies and attempting to replace other things like search.
>But the Internet caught on very fast, and was very, very popular. It was completely obvious how positive it was,
>By contrast, generative AI, while it has a contingent of supporters that range from reasonable to rabid, is broadly disliked by the public.
It is absolutely wild how people can just ignore something staring right at them, plain as day.
ChatGPT.com is the 5th most visited site on the planet, and growing. It's the fastest-growing software product ever, with over 500M weekly active users and over a billion messages per day. Just ChatGPT. This is not information that requires corporate espionage. The barest minimum of effort would have shown you how blatantly false you are.
What exactly is the difference between this and an LLM hallucination?
US public opinion is negative on AI. It’s also negative on Google and Meta (the rest of the top 5.)
No condescension necessary.
ChatGPT is the 5th most-visited website on the planet and growing quickly. That's one of many popular products. I'd hardly call that unwilling. I bet only something like 8% of Instagram users say they would pay for it. Are we to take this to mean that Instagram is an unpopular product that is being forced on an unwilling public?
My 75 year old father uses Claude instead of google now for basically any search function.
All the anti-AI people I know are in their 30s. I think there are many in this age group who got used to nothing changing and are wishing it to stay that way.
A friend of mine is a 65-year-old philosopher who uses it to translate ancient Greek texts or generate arguments between specific philosophers.
Would you like your Facebook feed or Twitter or even Hacker News feed inserted in between your work emails or while you are shopping for clothes on a completely different website?
If you answer no, does that make you an unwilling user of social media? It’s the most visited sites in the world after all, how could randomly injecting it into your GPS navigation system be a poor fit?
I downloaded a Quordle game on Android yesterday. It pushes you to buy a premium subscription, and you know what that gets you? AI chat inside the game.
I'm not unwilling to use AI in places where I choose. But let's not pretend that just because people do use it in one place, they are willing to have it shoved upon them in every other place.
I do think Facebook and Instagram are forced on the public if they want to fully interact with their peers.
I just don't participate in discussions about Facebook marketplace links friends share, or Instagram reels my D&D groups post.
So in a sense I agree with you, forcing AI into products is similar to forcing advertising into products.
If I want to use ChatGPT, I will go and use ChatGPT myself, without a middleman. I don't need every app and website to have its own magical chat interface that is slow, undiscoverable, and makes stuff up half the time.
I actually quite like the AI-for-search use case. I can't load all of a company's support documents and manuals into ChatGPT easily; if they've done that for me, great!
I was searching for something on Omnissa Horizon here: https://docs.omnissa.com/
It has some kind of ChatGPT integration, and I tried it and it found the answer I was looking for straight away, after 10 minutes of googling and manual searching had failed.
Seems to be not working at the moment though :-/
And on an unwilling workforce. Everyone I know is being made to drop what they were working on a year ago and stuff AI into everything.
Some are excited about it. Some are actually making something cool with AI. Very few are both.
I looked for the right term, but force-feeding is what it is. Yesterday I also changed my default search engine from DuckDuckGo to Ecosia, as they seem to be the only one left not providing flaky AI summaries.
In fact, I also tried the communication part (outside of Outlook), but people don't like superficial AI polish.
FWIW noai.duckduckgo.com is a thing
You can also just scroll down
You can completely turn off the AI summaries in DDG.
Yes. https://duckduckgo.com/settings#aifeatures
I currently run DuckDuckGo but don't get the summaries, as my phone browser sets individual domain conditions, and DuckDuckGo will return results without JavaScript, cookies, or the DOM enabled.
Dunno about DDG but on Brave Search you can turn off the AI summaries if you prefer not to have them. Disclosure: I work at Brave.
Yep, seems like every product is cramming its forced slop in everywhere, begging you to use the new AI they spent so much on.
But why are the CEOs insisting so much on AI? Because stock investors prefer to invest in anything with "AI inside". So the "AI business model" will not collapse, because it is what investors want. It is a bubble. It will be bubbly for a while, until it isn't.
It is not just that. Companies that already have lots of users interacting with their platform (Microsoft, Google, Meta, Apple ...) want to capture your AI interactions to generate more training data, get insights in what you want and how you go about it, and A/B test on you. Last thing they want is someone else (Anthropic, Deepseek ...) capturing all that data on their users and improve the competition.
Because it can, will, and has increased productivity in a lot of fields.
Of course it’s a bubble! Most new tech like this is until it gets to a point where the market is too saturated or has been monopolised.
Yeah literally every new tech like this has literally everyone investing in it and trying lots of silly ideas. The web, mobile apps, cryptocurrencies, doesn't mean they are fundamentally useless (though cryptocurrencies have yet to make anything successful beyond Bitcoin).
I bet if you go back to the printing press, telegraph, telephone, etc. you will find people saying "it's only a bubble!".
But that's exactly the problem with proprietary software. It's not force-feeding you anything, it's working exactly as intended.
Software is loyal to the owner. If you don't own your software, software won't be loyal to you. It can be convenient for you, but as time passes and interest changes, if you don't own software it can turn against you. And you shouldn't blame Microsoft or it's utilities. It doesn't owe you anything just because you put effort in it and invested time in it. It'll work according to who it's loyal to, who owns it.
If it bothers you, choose software you can own. If you can't choose software you own now, change your life so you can in the future. And if you just can't, you have to accept the consequences.
Companies didn't ask your opinion when they offshored manufacturing to Asia. They didn't ask your opinion when they offshored support to call centers in Asia. Companies don't ask your opinion; they do what they think is best for their financial interest, and that is how capitalism works.
Once upon a time, not too long ago, there was someone who would bag your groceries and someone who would clean your windshield at the gas station. Now you do self-checkout. Did anyone ask for this? Your quality of life is worse; the companies are automating away humanity into something they think is more profitable for them.
In a society where you don't have government protection for such companies, there would be other companies who provide a better service whose competition would win. But when you have a fat corrupt government, lobbying makes sense, and crony-capitalism births monopolies which cannot have any competition. Then they do whatever they want to you and society at large, and they don't owe you, you owe them. Your tax dollars sponsor all of this even more than your direct payments do.
New Jersey gas stations still do this, and here is a napkin cost calculation:
https://www.sciotoanalysis.com/news/2024/7/12/how-much-do-yo...
The issue really is that the AI isn’t good enough that people actually want it and are willing to pay for it.
It’s like IPv6: if it really were a huge benefit to the end user, we’d have adopted it already.
IPv6 adoption is actually limited by network effect and infrastructure transition costs, not lack of end-user benefits - unlike AI, which faces a value perception problem.
ChatGPT has more than 500M DAU, three years after creation. Is that really a value perception problem?
That value (of one company) is from speculative investment. I don't think it negates that the field has a perception problem.
After seeing something like blockchain run completely afoul/used for the wrong things and embraced by the public for it, I at least agree that AI has a value perception problem.
End users don't choose ipv6 or not - ISPs do
> isn’t good enough that people actually want it and are willing to pay for it.
Just from current ARR announcements: $3B+ Anthropic, $10B+ OpenAI, whatever Google makes, whatever Microsoft makes. Yeah, people are already paying for it.
Given everyone and their mother is putting AI in to their products it makes me wonder how that revenue breaks down between people incidentally paying for it versus deliberately paying for it versus being subsidized by VC. Obviously ultimately all this revenue is being collected at a massive loss but I wonder if that carries on down the value chain.
Amusing the way the argument shifts every time. This one's new though.
"If it was any good, people would pay for it."
"The data shows people are paying for it."
"Aah but they don't know they're paying for it."
I don’t think I’m trying to make that argument, but thanks for putting it in my mouth. I do pay for (or get paid access to, via employment) a lot of products that have AI features I don’t care about, so from personal experience I know that at least some of the value chain is incidental.
There have been multiple crashes, again and again, due to people not actually paying.
And VC investments are distorting markets: unprofitable companies kill profitable ones before crashing.
Huh? I’ve been programming for 20 years now and LLMs/GenAI have replaced search and StackOverflow for me - I’d say that means they are pretty good! They are not perfect, not even close, but they are excellent when used as an assistant and when you know the result you’re expecting and can spot its obvious errors.
So, are there any EU citizens around who are willing to create and run the needed European Citizens' Initiative to get this ball rolling? :)
As a data point, the "Stop Killing Games" one has passed the needed 1M signatures so is in good shape:
https://www.stopkillinggames.com
The UK already responded saying "No, thanks".
https://petition.parliament.uk/petitions/702074/
The UK left the EU.
You don't say.
The point is that thinking number of signatures is a victory is naive.
You can't use this as an example of success until you actually achieve something.
I mostly agree with TFA, with one glaring exception: The quality of Google search results has regressed so badly in the past years (played by SEO experts), that AI was actually a welcome improvement.
I think it was just Google that got bad.
I use Kagi, which returns excellent results, including when I need non-AI verbatim queries.
It didn't get bad for no reason. It needs to be bad for ads to continue to be profitable.
Displaying what you searched for immediately is cannibalizing that market.
I'm guessing ads in AI results are the logical next step.
Yes, that's the next logical step. The only silver lining is that Google currently has less of a moat than last time in the technology in question, so some upstart could always be on their heels in a Kagi-esque way.
LOL. I’ll take declining relevance over (in order of badness) AI results that:
Badly summarise articles.
Outright invent local attractions that don’t exist.
Give subtly wrong, misleading advice about employment rights.
All while coming across as confidently authoritative.
User issue. Every single time this comes up.
People don't know how to search, that's it. Even the HN population.
Every time this gets posted, I ask for one example of a thing you tried to find and the keywords you used. So I'm giving you the same offer: give me one thing you couldn't find easily on Google and the keywords you used, and I'll show you Google search is just fine.
Alright, I had this recently, since I keep forgetting LUKS commands.
How do you set up an encrypted file on linux that can be mounted and accessed same as a hard drive.
(note: LUKS, a few commands)
You will see a nonsensical AI summarization and lots of videos and junk websites being promoted; then you'll likely find a few blogs with the actual commands needed. Nowhere is there a link to a manual for LUKS or similar.
In the past, the same searches returned the no-ad, straightforward blogs as the first links, then some man pages, then other unrelated things. Now I get garbage.
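For reference, the "few commands" being searched for are roughly these (a sketch: file names, sizes, and mount points are illustrative, and most of the commands need root):

```sh
# Create a LUKS-encrypted container file and mount it like a drive.
dd if=/dev/zero of=vault.img bs=1M count=1024   # 1 GiB container file
cryptsetup luksFormat vault.img                 # initialize LUKS, set a passphrase
cryptsetup open vault.img vault                 # unlock as /dev/mapper/vault
mkfs.ext4 /dev/mapper/vault                     # put a filesystem in it
mount /dev/mapper/vault /mnt/vault              # use it like any other drive
# later: umount /mnt/vault && cryptsetup close vault
```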
FWIW, when I put <<linux create file image encrypted file system>> into Google (this was the first thing I tried, though without knowledge that it might be a tricky case I might have been less careful picking keywords) I get what look like plausible results.
At the top there's a "featured snippet" from opensource.com, allegedly from 2021, that begins with: create an empty file (this turns out to mean a file of given size with no useful data in it, not a size-0 file), then make a LUKS volume using cryptsetup, etc.
First actual search result is a question on Ask Ubuntu (the Stack Exchange site dedicated to Ubuntu) headed "How do I create an encrypted filesystem inside a file?" which unless I'm confused is at least the correct question. Top answer there (from 2017) looks plausible and seems to be describing the same steps as the "featured snippet". A couple of other links to Ask Ubuntu are given below that one but they seem worse.
Next search result is a Reddit thread that describes how to do something different but possibly still of interest to someone who wants to do the thing you describe.
Next search result is a question on unix.stackexchange.com that turns out to be about something different; under it are other results from the same site, the first of which has a cryptsetup-based recipe that seems similar to the other plausible ones mentioned above.
Further search results continue to have a good density of plausible-looking answers to essentially the intended question.
This all seems fairly satisfactory assuming the specific answers don't turn out to be garbage, which doesn't look very likely; it seems like Google has done a decent job here. It doesn't specifically turn up the LUKS manual, but then that wasn't the question you actually asked.
Having done that search to find that the relevant command seems to be cryptsetup and the underlying facility is called LUKS, searches for <<cryptsetup manual>> and <<luks documentation>> (again, the first search terms that came to mind) look to me like they find the right things.
(Google isn't my first-choice search engine at present; DuckDuckGo provides similar results in all these cases.)
I am not taking any sides on the broader question of whether in general Google can give good search results if one picks the right words for it, but in this particular case it seems OK.
I asked Google that exact question, and I got an AI summary that looks alright? Please verify if those steps make sense, I pasted them into a text service, it's too much for an HN comment: https://justpaste.it/63eiz
It showed 25 or so URLs as the source.
That wasn't the question. The complaint is the poster can't find anything on Google because the results are now so poor, and your response is "but here's some AI generated slop, which may or may not make any sense."
Is "How do you set up an encrypted file on linux that can be mounted and accessed same as a hard drive." literally what you put into the search bar? if so, that's the problem.
try "mount luks encrypted file" or "luks file mount". too many words and any grammar at all will degrade your results. it's all about keywords
edit: after trying it myself i quickly realized the problem - luks related articles are usually about drives or partitions, not about files. this search got me what i wanted: "luks mount file -partition -filesystem" i found this article[1], which is in german (my native tongue), but contained the right information.
1: https://blog.netways.de/blog/2018/07/25/verschluesselten-fil...
Your version assumes that the user knows that luks exists in the first place, OP's does not.
Google hasn't really worked like you imagine for a decade.
Google is nearly useless for recipes. Try finding a recipe for beef bourguignon. They exist, but with huge prefaces and elaboration that mean endless scrolling on a phone, all in the name of maximizing time spent on page (which is a search ranking criterion).
I've also heard third-hand claims that the authors of those recipes don't vet what they've written, e.g., what the true prep/cooking times are.
I still find online recipes convenient, but I don't blindly trust details like cooking time and temperature. (I mean, those things are always subject to variability, but now I don't trust the times to even be in the right ballpark.)
Happily, there are some cooks that I think deserve our trust, e.g. Chef John.
I feel an urge to build personal local AI bots that would be personal spam filters. AI filtering AI, fight fire with fire. Mostly because the world OP wants is never coming back. Everything will be AI and it's everywhere.
I also feel an urge to build spaces in the internet just for humans, with some 'turrets' to protect against AI invasion and exploitation. I just don't know what content would be shared in those spaces because AI is already everywhere in content production.
This already existed around 20 years ago, and it didn't consume as many resources as an AI bot would: Bayesian filters.
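For illustration, the classic Bayesian-filter idea fits in a few lines (a toy sketch assuming scikit-learn; the training data is made up and far too small for real use):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = [
    "limited offer click now to win",      # spam/slop
    "meeting notes attached for review",   # ham
    "congratulations you won a prize",     # spam/slop
    "are we still on for lunch tomorrow",  # ham
]
labels = [1, 0, 1, 0]  # 1 = spam/slop, 0 = ham

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(texts), labels)

print(clf.predict(vec.transform(["click to claim your free prize"])))  # -> [1]
```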
I noticed that some of his choices contributed to his problem. I haven't been forced into accepting AI (so far) while I've been using duckduckgo for search, libreoffice, protonmail, and linux.
Even DDG has integrated AI now, and while it can be disabled, the privacy aspect seems to mean that DDG regularly forgets my settings and re-enables the AI features.
Maybe I'm doing something wrong here, but even DDG is annoying me with this.
I agree it’s annoying that the setting seem to change all the time, but you can use noai.duckduckgo.com
> Before proceeding let me ask a simple question: Has there ever been a major innovation that helped society, but only 8% of the public would pay for it?
Highways.
> Highways.
In my European country, you have to pay a toll to use a highway. Most people opt to use them, instead of taking the old two-lane road that existed before the highway and is still free.
pretty much the whole population pays taxes
To be fair...
pretty much the whole population also wants tax cuts.
It's kind of insane out there in tax land.
I think there’s a difference between the tool that helps you do work better and the service that generates the end result.
People would be less upset if AI were shown to support the person. This also allows that person to curate the output and ignore it if needed before sharing it, so it's a win/win.
But is the big money in revolution?
Just a quick quibble…the subtitle of the article calls this problem tyranny.
Tyranny is a real thing which exists in the world and is not exemplified by “product manager adding text expansion to word processor.”
The natural state of capitalism is trying things which get voted on by money. It’s always subject to boom-bust cycles and we are in a big boom. This will eventually correct itself once the public makes its position clear and the features which truly suck will get fixed or removed.
I agree Copilot for answering emails is negative value. But I find Google AI search results very useful. I can't see how they will monetise this, but I can't complain for now.
Excellent Frank Zappa reference in The Famous Article: "I'm the Slime" [1].
The thing that really chafes me about this AI, irrespective of whether it is awesome or not, is that it emits all of your information to some unknown server. To go with another Zappa reference, AI becomes The Central Scrutinizer [2].
I predict an increasing use of Free Software by discerning people who want to maintain more control of their information.
[1] https://www.youtube.com/watch?v=JPFIkty4Zvk
[2] https://en.wikipedia.org/wiki/Joe%27s_Garage#Lyrical_and_sto...
Are you not concerned that force-feeding might be unduly disparaged by your comparison?
I assume you've been happy with the other slop Microsoft and Google fed you for years.
"Shut up, buddy, and chew on your rock."
It’s not force-feeding. It’s rape and assault.
I said no. Respect my preferences.
Why do people who attempt to critique AI lean on the "no one wants this, everyone hates this" instead of just making their point. If your arguments are strong you don't need to wrap them in false statistics.
> no one wants this, everyone hates this
Is not false statistics. "Nobody wanted or asked for this" is literally true.
Proof by counterexample: I want this.
You probably want the version of it they sold you in the advertising. Or are you actually happy with the slop they're currently shipping?
Yes, I use Cursor every day. It has changed my life.
But this is one thing that gen AI is genuinely good at: constructing computer programs under close human supervision. It's also the most profitable use (though not enough to justify valuations). Also, it may be a big thing here, but it's pretty niche in the larger scheme of things.
The article is about it encroaching in the domain of human communications. Mass adoption is the only way to justify the incredible financial promises.
I use Claude at least weekly to help write documents for me. And I'm a good writer, who spent a lot of time and energy getting that way. I have a friend who is a terrible writer, whom I proofread for. He uses ChatGPT, and it's made a world of difference for him in getting things accomplished and communicating what he wants.
I think there are lots of valid arguments against LLM usage, but it's extremely tiring to hear how it's not useful when I get so much use out of it.
I still remember how the very first Office Copilot videos (mockups?) and ads had people very excited. When they finally got it, it was meh for most.
This guy calls himself the Honest Broker, but his articles are just expressions of status anxiety. The kind of media he loves to write about is becoming less relevant, and so he lashes out at everything new, from AI to TikTok.
I’ve observed the opposite—not enough people are leveraging AI, especially in government institutions. Critical time and taxpayer money are wasted on tasks that could be automated with state-of-the-art models. Instead of embracing efficiency, these organizations perpetuate inefficiency at public expense.
The same issue plagues many private companies. I’ve seen employees spend days drafting documents that a free tool like Mistral could generate in seconds, leaving them 30-60 minutes to review and refine. There's a lot of resistance from the public. They're probably thinking that their job will be saved if they refuse to adopt AI tools.
> I’ve seen employees spend days drafting documents that a free tool like Mistral could generate in seconds, leaving them 30-60 minutes to review and refine.
What I have seen is employees spending days asking the model again and again to actually generate the document they need, and then submit it without reviewing it, only for a problem to explode a month later because no one noticed a glaring absurdity in the middle of the AI-polished garbage.
AI is the worst kind of liar: a bullshitter.
You're describing incompetence or laziness—I’ve encountered those kinds of people as well. But I’ve also seen others who are 2-3 times more productive thanks to AI. That said, I’m not suggesting AI should be used for every single task, especially if the output is garbage. If someone blindly relies on AI without adding any real value beyond typing prompts, then they’re not contributing anything meaningful.
Days to write a document, but you think that it'll only take 30-60 minutes to review AI slop that may, or may not, bear any relationship to the truth?
Yeah, no, you can't see that yet. What you see is a comparison between your own super-optimistic imagined idea of useful AI and either reality, or even the knee-jerk "government is stupid and wasteful because Musk said so".
The irony is it’ll likely be the opposite.
The thing is, though, that time wasn’t wasted. It was spent fully understanding what they were actually trying to say, the context, the connotations of various different phrasings etc. It was spent mapping the territory. Throwing your initial, unexamined description into a prompt might generate something that looks enough like the email they’d have written, but it’s not been thought through. If the 10 minutes’ thought spent on the prompt was sufficient, the final email wouldn’t be taking days to do by hand.