They don't need AI to turn a profit.
They need AI to be seen as widely adopted and "a part of life".
They need certain categories of folks (CEOs, CIOs, Boards of Directors) to see AI as valuable enough to invest in.
They need to keep up the veneer of success long enough to make their investments attractive to acquisition by Private Equity or to an IPO.
They need to juice the short-term stock price.
Their goal isn't to build a long-term business; their goal is to raise short-term returns to the point that the investors get a nice return on their investment, and then it becomes someone else's problem.
Just not whoever ends up with the bag of now far less valuable stock.
https://www.cnbc.com/video/2025/12/03/microsoft-have-not-low...
Even if the Microsoft spokesperson is being completely honest, lower growth targets are still evidence of weakness in the AI bubble.
Ultimately, you can spend what you want; if the product is bad people won’t use it.
Think about it via a manufacturing analogy. I think we can all agree that modern CNC machining is much better for mass manufacturing than needing to hire an equivalent number of skilled craftspeople to match that throughput.
Imagine we had a massive run-up of innovation in the CNC manufacturing industry all in one go. We went from CNC lathes to 2D routing tables to 3-, then 4-, then 5-axis machining all in the span of three years. Investment was so sloshy that companies were just releasing their designs as open source, with the hope that they'd attract the best designers and engineers in the global race to create the ultimate manufacturing platform. They were imagining being able to design and manufacture the next generations of super-advanced fighter jets all in one universal machine.
Now these things are great at manufacturing fully custom one-off products, and the technicians who can manipulate them to their fullest are awestruck by the power they now have at their fingertips. They can design absolutely anything they could imagine.
But you know what people really want? Not fighter jets, but cheap furniture. Do you know what it takes to make cheap furniture? Slightly customized variants of the early iterations that were released as open source. Variants that can't be monetized by the companies that spent millions on designing and releasing them.
The tech might work great, but that doesn't mean the investment pays off with the desired returns.
The glaring difference is that specialised machines, usually invented to do an existing task better, faster or more safely, do indeed revolutionise the world. As you pointed out, they perform necessary functions better, faster, and / or more safely.
Note that Segways, that weird juice machine etc. were not built to fill a gap or to perform a task better, faster or more safely. Neither were pet rocks or see-through phones. Nobody was sitting around before the Metaverse going 'man, I wish Minecraft could be pre-made and corporate with my work colleagues', and when these things launched the sales pitches were all about "look at the awesome things this tech can do, isn't it great?!", rather than "look at the awesome things this tech will allow you / help you to do, aren't they great?!".
LLMs are really impressive tech. So are Segways and those colour-changing t-shirts we had in the 80s. They looked awesome, the tech was awesome, and there were genuine practical applications for them, somewhere.
But they do not allow the average person to do anything awesome. They don't make arduous tasks faster, better or safer without heavily sacrificing quality, privacy, and sanity. They do not fill a gap in anybody's life.
That's the difference. Most AI is currently a really cool technology that can do a bunch of things and is very exciting to look at, just like the Segway and the Metaverse. And, really, an ant, or a Furby. They are not going to revolutionise anything, because they were never built to. They weren't built to summarise your emails or to improve your coding (there are many pieces of software that were built to assist with coding, and they are pretty good) or to perform any arduous or dangerous tasks. They were built to experiment, to push boundaries, to impress, and to sell.
So yes, I 100% agree with you, and would take your point a little further: it's not even that LLMs are too high-tech and fancy for most people. I don't even think that they're products. They are components, or add-ons, being sold as products, like extension power cables 50 years before the invention of the plug socket, or flexible silicone phone cases being sold in the era of landlines and phone boxes.
And I'm legit still baffled that so many people seem to have jobs that revolve around reading and writing emails or producing boilerplate code, who are not able to confidently do those things, but aren't just looking for a new job.
Like, it's a tough market, but if you haven't learned to skim-read an email by now, do yourself a favour and find a job that doesn't involve so much skim reading of emails. I don't get it.
I could see US-built AI being a national security concern.
And if that happens, will the taxpayer be on the hook to make investors whole? We shouldn't. If it is nationalized, it needs to be done at a small fraction of the private investment.
If one or more of the AI companies fail, the government would pay what they feel is the market value for the graphics cards, warehouses, and standing desks and it will surely be way less than what the investors have put in.
they failed to grow their capital and they can hold the bag
That being said, my daily driver is macOS ever since Apple Silicon was released, purely due to the laptop hardware. I keep a reasonably powerful Beelink mini PC mounted under my desk running Ubuntu Server, and most of my work happens there over SSH with Tailscale. If you're primarily a laptop user, I'd definitely recommend this setup (or something similar); you get the best of both worlds.
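If anyone wants to copy the setup, the glue is really just Tailscale plus plain SSH; the hostname and user below are made-up placeholders:

    # On the mini PC (Ubuntu server); --ssh enables Tailscale's built-in SSH
    sudo tailscale up --ssh

    # From the laptop, anywhere on the tailnet (MagicDNS hostname is a placeholder)
    ssh me@beelink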
The main pain-point was that the remote backup service had no Linux client. I ended up solving it with restic, but I acknowledge that isn't a turnkey solution for archetypal Aunt Tillie.
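For anyone in the same boat, the restic side is only a handful of commands; the repository URL and paths below are placeholders, and you'd normally wire the backup step into a cron job or systemd timer:

    # One-time repository setup (SFTP backend as an example; any restic backend works)
    restic -r sftp:backup@nas.example.com:/srv/restic init

    # Back up a directory
    restic -r sftp:backup@nas.example.com:/srv/restic backup ~/Documents

    # Periodic maintenance: verify the repo and prune old snapshots
    restic -r sftp:backup@nas.example.com:/srv/restic check
    restic -r sftp:backup@nas.example.com:/srv/restic forget --keep-daily 7 --keep-weekly 4 --prune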
EDIT: I was also able to connect to my solar panel gateway trivially from the CLI just a few days ago.
Sleep/Hibernate on the other hand; well, let's just say that fast boot times "solved" those issues.
Well, unless someone gets recommended Arch Linux as a first Linux experience
I do not participate in the Microsoft ecosystem except when needed. And every time I have to buy something on someone’s behalf, I can’t just buy or subscribe to The Thing, I have to get all of Office and cloud storage when all they need is an email box.
Then Australia slapped Microsoft over the head and forced them to apologize: https://news.microsoft.com/source/asia/2025/11/06/an-apology...
I am genuinely curious, because advances in LLMs, VLMs, and generative AI are proving useful, but the societal impact, at this scale and at the desired rate, is not revealing itself.
That is something I would possibly pay for, but as the failures on complex tasks are so expensive, this seems to be a major use case that will just become a commodity.
Creating the scaffolding for a JWT or other similar tasks will be a race to the bottom IMHO, although valuable and tractable.
IMHO they are going to have to find ways to build a moat, and what these tools are really bad at is the problem domains that make your code valuable.
Basically anything that can be vibe coded can be trivially duplicated and the big companies will just kill off the small guys who are required to pay the bills.
Something like surveillance capitalism will need to be found to generate revenue needed for the scale of Microsoft etc…
NPUs seem to be targeted towards running tiny ML models at very low power, not running large AI models.
Of course, the market segment that would be most interested probably has the expertise and funds to set up something with better horsepower than could be offered in a one-size-fits-all solution.
Still, glad to see someone is making the product.
But if you follow the podman instructions for CUDA, the llama.cpp documentation shows you how to use their plugin here
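Roughly, and with the caveat that the image tag and model path are assumptions on my part, it boils down to generating a CDI spec with the NVIDIA Container Toolkit and then passing the GPU device to podman:

    # Generate the CDI spec once (needs nvidia-container-toolkit installed)
    sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

    # Run the llama.cpp server with GPU access via CDI
    podman run --rm -p 8080:8080 \
      --device nvidia.com/gpu=all \
      -v ~/models:/models:Z \
      ghcr.io/ggml-org/llama.cpp:server-cuda \
      -m /models/model.gguf --host 0.0.0.0 --port 8080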
Do you want to walk us through that math?
I say this as someone who once had the bright idea of sending deadline reminders, complete with full names of cases, to my smart watch. It worked great and made me much more organised until my managers had to have a little chat about data protection and confidentiality and 'sorry, what the hell were you thinking?'. I am no stranger to embarrassing attempts to jump the technological gun, or the wonders of automation in time saving.
But absolutely nobody in any professional legal context in the UK, that I can imagine, would use LLMs with any more gusto and pride than an industrial pack of diarrhoea relief pills or something - if you ever saw it in an office, you'd just hope it was for personal use and still feel a bit funny about shaking their hands.
https://www.reuters.com/legal/government/judge-disqualifies-...
What does Lorraine Williams have to do with this?
Windows XP was the pinnacle, with everything working just as it should.
(yes, hardly anybody remembers that there was a Windows version between 7 and 10 - but it did exist, I'm not making it up, saw it with me own eyes on a coworker's PC once).
If you count paying for ESM, someone could have gone from XP->7->11 and still been within support the whole time. Or from vista straight to 10.
Fun fact: Windows for Workgroups 3.11 was supported all the way to 2008. I believe it was the longest supported version of Windows.
Also indirectly: DirectX saved Linux gaming ;)
Nah, if Windows had stayed with OpenGL instead of inventing its own API, gaming on Linux would have been far easier for decades.
But it is a bit funny that the Win32 API turned out to be the most stable way to make apps that run on Linux.
The problem with OpenGL is that it is a complete mess compared to the D3D APIs (D3D was the better-designed API since at least D3D9, arguably even D3D7). Also, DirectX wasn't just about rendering; it also covered sound, input and networking, although most of that has been dissolved into regular Windows APIs quite a while ago.
Also, Vulkan repeats some of the same problems that OpenGL had, but at least Vulkan is an up-to-date mess, not a deprecated mess like GL.
Microsoft platforms move too slowly to keep up with the pace of innovation, and suffer from classic platform restrictions when it comes to building useful, relevant, and *reliable* integrations into business systems.
My advice is to always start from scratch with AI, e.g. "build your own agent", and focus intimately on the rules/guardrails and custom tools you need for that agent to create value. A platform can't do that for you today.
MSFT needs to stay focused on O365 and coding tools with very simple UX wins. Not introduce custom agent platforms and auto-embed intrusive agents where no one asked for them.
1) Incomplete integration. Often I just want to write a prompt to create structured data from unstructured data. e.g. read an email and create a structured contact record. There's a block for this in Power Platform, but I can't access it. Studio can do this pretty well, but...
2) Copilot Studio sucks at determinism. You really need to create higher-level tools in Power Automate and call them from Studio. Because of (1) this makes it hard to compose complex systems.
3) Permissions. We haven't been able to figure out a secure way for people to share Copilot Studio agents. This means you need to log into studio and use the debug chat instead of turning the agent on in the main Copilot interface.
4) IDE. Copilot Studio bogs down real fast. The UI gets super laggy, creating a terrible DX. There should be a way to write agents in VScode, push the definitions to source control, and deploy to Copilot, but it isn't obvious.
5) Dumb By Default. The Power Platform has hooks into Outlook and Active Directory. Copilot has access to the latest OpenAI models. Copilot Studio has an MCP server for Calendar. Out of the box I should be able to tell Copilot "schedule a 30min meeting with Joe and Larry next week." Nope. Maybe if I struggle through Copilot Studio to create an agent? Still no. WTF Microsoft.
I guess I'll stop there. I really wanted to like Copilot studio, but it just didn't deliver. Maybe I'll circle back in a couple months, but for now I'm exploring other platforms.
PS don't even get me started on how we were so excited to retire our home-grown chat front end for the Azure OpenAI Service in favor of Copilot, only to have our users complain that Copilot was a downgrade.
PPS also don't talk to me about how Copilot is now integrated into Windows and SIGNS YOU INTO THE FREE COMMERCIAL SERVICE BY DEFAULT. Do you know how hard it is to get people to use the official corporate AI tools instead of shadow AI? Do you know how important it is to keep our proprietary data out of AI training sets? Apparently not.
Answer: 60 shots of generation/summarization etc. per month, i.e. way below even casual use.
Ok so maybe the copilot chat works well if I’m logged in with my paid account then? Nope. Slow and often generates empty code cells. (The enterprise version at work never has issues).
I.e., between the low quota and the broken tech, their consumer-level Office AI is literally of no use to me.
Microsoft denies report of lowering targets for AI software sales growth