We are launching Phind 3 (https://www.phind.com), an AI answer engine that instantly builds a complete mini-app to answer and visualize your questions in an interactive way. A Phind mini-app appears as a beautiful, interactive webpage — with images, charts, diagrams, maps, and other widgets. Phind 3 doesn’t just present information more beautifully; interacting with these widgets dynamically updates the content on the page and enables new functionality that wasn’t possible before.
For example, asking Phind for “options for a one-bedroom apartment in the Lower East Side” (https://www.phind.com/search/find-me-options-for-a-72e019ce-...) gives an interactive apartment-finding experience with customizable filters and a map view. And asking for a “recipe for bone-in chicken thighs” gives you a customizable recipe where changing the seasoning, cooking method, and other parameters will update the recipe content itself in real-time (https://www.phind.com/search/make-me-an-recipe-for-7c30ea6c-...).
Unlike Phind 2 and ChatGPT apps, which use brittle pre-built widgets that can’t truly adapt to your task, Phind 3 is able to create tools and widgets for itself in real time. We learned this lesson the hard way with our previous launch – the pre-built widgets made the answers much prettier, but they didn’t fundamentally enable new functionality. For example, asking for “Give me round-trip flight options from JFK to SEA on Delta from December 1st-5th in both miles and cash” (https://www.phind.com/search/give-me-round-trip-flight-c0ebe...) is something neither Phind 2 nor ChatGPT apps can handle, because ChatGPT’s Expedia widget can only display cash fares, not fares paid with points. We realized that Phind needs to be able to create and consume its own tools, with schemas it designs, all in real time. Because Phind 3 designs and builds fully custom widgets on the fly, it can answer these questions where those other tools can’t. Phind 3 now generates raw React code and can create any tool it needs to harness its underlying AI answer, search, and code execution capabilities.
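To make that concrete, here is a rough, purely hypothetical sketch of the kind of self-contained React widget Phind 3 might generate for the flight query above. The component name, props, and fare schema are illustrative assumptions, not Phind’s actual output:

    // Hypothetical sketch only: the FareOption schema and component are
    // illustrative, not what Phind actually emits.
    import { useState } from "react";

    type FareOption = {
      flight: string;   // e.g. "DL 423 JFK -> SEA"
      cashUsd: number;  // round-trip cash fare
      miles: number;    // round-trip award price in miles
      taxesUsd: number; // taxes/fees due on the award fare
    };

    export function FareComparison({ options }: { options: FareOption[] }) {
      const [mode, setMode] = useState<"cash" | "miles">("cash");
      return (
        <div>
          <button onClick={() => setMode(mode === "cash" ? "miles" : "cash")}>
            Show {mode === "cash" ? "miles" : "cash"} pricing
          </button>
          <ul>
            {options.map((o) => (
              <li key={o.flight}>
                {o.flight}:{" "}
                {mode === "cash"
                  ? `$${o.cashUsd}`
                  : `${o.miles.toLocaleString()} miles + $${o.taxesUsd}`}
              </li>
            ))}
          </ul>
        </div>
      );
    }

The point is that the miles/cash toggle and its data shape are invented per query, rather than coming from a fixed widget library.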
Building on our history of helping developers solve complex technical questions, Phind 3 is able to answer and visualize developers’ questions like never before. For example, asking to “visualize quicksort” (https://www.phind.com/search/make-me-a-beautiful-visualizati...) gives an interactive step-by-step walkthrough of how the algorithm works.
Phind 3 can help visualize and bring your ideas to life in seconds — you can ask it to “make me a 3D Minecraft simulation” (https://www.phind.com/search/make-me-a-3d-minecraft-fde7033f...) or “make me a 3D roller coaster simulation” (https://www.phind.com/search/make-me-a-3d-roller-472647fc-e4...).
Our goal with Phind 3 is to usher in the era of on-demand software. You shouldn’t have to compromise between settling for text-based AI conversations and using pre-built webpages that weren’t customized for you. With Phind 3, we create a “personal internet” for you, combining the visualization and interactivity of the web with the customization that AI makes possible. We think the current “chat” era of AI is akin to the era of text-only interfaces in computers. The Mac bringing the GUI mainstream in 1984 didn’t just make computer outputs prettier — it opened up a whole new era of interactivity and possibilities. We aim to do the same now with AI.
On a technical level, we are particularly excited about:
- Phind 3’s ability to create its own tools with its own custom schema and then consume them
- Significant improvements in agentic searching and a new deep research mode to surface hard-to-access information
- All-new custom Phind models that blend speed and quality. The new Phind Fast model is based on GLM-4.5-Air, while the new Phind Large model is based on GLM-4.6. Both models are state-of-the-art for reliable code generation, producing over 70% fewer errors than GPT-5.1-Codex (high) on our internal mini-app generation benchmark. Furthermore, we trained custom Eagle3 speculative-decoding heads for both Phind Fast and Phind Large for fast inference. Phind Fast runs at up to 300 tokens per second, and Phind Large runs at up to 200 tokens per second, making them the fastest Phind models ever.
While we have done Show HNs before for previous Phind versions, we’ve never actually done a proper Launch HN for Phind. As always, we can’t wait to hear your feedback! We are also hiring, so please don’t hesitate to reach out.
– Michael
For example, it feels like Google's featured snippet (quick answer box) but expanded. The thing is, many people don't like the featured snippet, and there's a reason it doesn't appear for many queries - it doesn't add meaningful value for them.
This functionality does exactly the opposite of what building good web apps is about: rather than "unpacking functionality" and making it specific to an audience, it "packs" all functionality into a generalized use case, at the cost of becoming extremely mediocre for each use case - which makes it worse than whatever dedicated tool you'd otherwise use for that job.
As a specific example, I clicked your apartments in LES search (https://www.phind.com/search/find-me-options-for-a-72e019ce-...) and it shows us just 4 listings...? It shows some arbitrary subset of all things I could find on StreetEasy, and then provides a subset of the search functionality, losing things such as days on market, neighborhood, etc.
It's a cool demo, but "on-demand software" is exactly "Solution-In-Search-of-a-Problem".
The difficult question you need to ask is, just as with the featured snippet: which questions are worth solving with this, and is the pain point big enough that it's worth solving?
I offer this in the spirit of feeling like I’m missing something, not out of negativity—I just genuinely don't understand the proposition.
What’s the advantage of trying to extract and normalize features from already-messy data sources, then provide controls that duplicate the query, rather than just applying the query and returning the results? Isn’t the user turning to a natural-language LLM specifically to avoid operating idiosyncratic UI controls?
For that matter, it takes time to learn to use an interface effectively. To understand how what it says it’s doing connects to what it’s actually doing. I know I can always trust McMaster Carr’s filter controls, and I know I can never trust Amazon’s wacky random ones.
It seems to me that it’s much harder to pick the right controls and make them work correctly than it is to throw some controls in an interface. Maybe that’s what I’m missing: that just wiring in controls in the first place is the hard part for most people who don’t work in this space.
Is the idea here that I’d need to learn a brand new interface, and figure out whether I can trust it, with every query?
For example, here's a result for a "day trip plan in Bristol" that contains a canonical itinerary (directly based on the query), but also a customization widget that presents options you might not have thought about if you were just doing a text-based follow-up.
https://www.phind.com/search/make-me-a-day-plan-ac8c583b-ce6...
Many years ago in college I worked on building Java applets that let kids visualize math-related concepts. Sliders make things like sine/cosine and all sorts of other cool stuff way, way more intuitive. We had an applet that let you do ridiculous comparisons, like visualizing how many Empire State Buildings long a marathon is. We had a primitive 'engine' simulator that let you adjust inputs on a steam engine. Stuff like that.
My most common use case now is "give me a quick answer because I don't want to wade through the search engine results page and then wade through the blog post to get my one-liner." E.g.: "what's the command line to untar an xz over ssh?"
If your reports include a small error, it could be catastrophic.
Rough edges:
- aspect ratios on photos (maybe because I was on mobile, cropping was weird)
- map was very hard to read (again, mobile)
- some formatting problems with tables
- it tried to show an embedded Gmap for one location but must have gotten the location wrong, was just ocean
Low-hanging-fruit feedback that would really improve the experience for many: I haven't been able to pinpoint it, but there seems to be a double scrollbar - one on the container of the whole page (top navbar and "non-content") and the actual intended scrollbar inside the rendered content. Because of this, especially on mobile, when I try to scroll, the outside scrollbar "captures" my scroll input and I can never get past the headline.
Might be a missing height: 100vh; or overflow-y: hidden in your min-h-screen class. Cheers!
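For what it's worth, here's a minimal sketch of the scroll-container pattern being suggested, assuming a React/Tailwind layout along the lines the min-h-screen class implies (component and class names are illustrative, not Phind's actual markup):

    // Illustrative only: pin the outer shell to the viewport height and let a
    // single inner element own vertical scrolling, so nested scrollbars don't
    // fight over touch input on mobile.
    import type { ReactNode } from "react";

    export function AppShell({ children }: { children: ReactNode }) {
      return (
        <div className="flex h-screen flex-col overflow-hidden">
          <header className="shrink-0">{/* top navbar */}</header>
          <main className="flex-1 overflow-y-auto">{children}</main>
        </div>
      );
    }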
I could see something like this being especially killer in ecommerce, where comparisons, faceted search, and heavy use of video/photos/3D models are important. Also, ecommerce brands love to have control over the aesthetic of their experiences. Their UI is constantly evolving due to sales, up-sells/cross-sells, recommendations, personalization, A/B testing, etc.
Dey phucked up phind.
Then they pivoted to flowcharts and now to "one-off apps". It is a bit weird because the concepts are not necessarily bad, but instead of adding features to their product incrementally they decided to make that one feature their product. The problem imo is that there is no one-size-fits-all. Flowcharts or interactive content can be great, but they are not always the best fit for a search query. I do believe that the chat-based UI is not optimal, and flowcharts and more complex interactive/dynamic functionality are great improvements for some use cases, but I am not sure what to make of this product. If it were a feature in a more general product where I could still get the same functionality as before, it might have been more appealing. Now it feels like they are building a new hammer every time and looking for nails that fit that one hammer.
I used it quite often, even instead of GH Copilot.
Now, it's much slower and has some kind of solution view that gets updated with every new message.
Found myself resorting to GH Copilot chat quite often today, because Phind felt like a different/worse service.
Before, I got a proper summary and an arch diagram. It felt like its own work. Now it spun in circles for a good while and then regurgitated the continue.dev website in weird topic boxes, which wasn't helpful at all.
It does seem that it’s SOTA for LLMs to take forever to respond. In that sense it’s in good company. More and more, lately, I send a prompt to an LLM and switch tabs because it’s likely to be 20-60 seconds at best.
A curious regression, even if I understand why.
At least to me, this is a totally fresh take on AI and providing answers. OpenAI is burning through billions without trying to make a nicer interface or come up with some innovation in how to train models (the way Qwen and Minimax have). Unlike Claude, which tries to smother you with content and emojis, I got a clean and focused answer to my query, and an app.
Again, love it, thank you. If you have to sell yourself, make sure you get a lot of billions.
if every response starts with "You're absolutely right -- ..." you know phind is hallucinating and you can immediately close the tab.
anyway I think you need better QA processes
Application error: a client-side exception has occurred while loading www.phind.com (see the browser console for more information).
Getting this error on the homepage. In the browser console I am just seeing Content-Security-Policy: (Report-Only policy) The page’s settings would block a script (script-src-elem) at https://www.phind.com/_next/static/chunks/c857e369-746618a9672c8ed0.js?dpl=dpl_4dLj9qrNQMh6evFNeDZbEJjTnT9B from being executed because it violates the following directive: “script-src 'none'”
GET https://www.phind.com/_next/static/chunks/4844-90bb89386b9ed987.js?dpl=dpl_4dLj9qrNQMh6evFNeDZbEJjTnT9B [HTTP/1.1 403 403 Forbidden 716ms]
The other links you shared seem to work though.
Prompt:
"I want to build a V-plotter (carriage hanging from two points, connected by light chain or belts). How can I figure out the dimensions of the printable area that will have good print quality? Good quality requires that there is enough, but not too much pull on both chains."
Result:
https://www.phind.com/search/i-want-to-build-a-e402fb56-8e69...
Yes, it has some form elements to adjust values. But it's not really interactive and the "map" it talks about is not showing. Also "Keep both chains between 0.5 m g and 1.5 m g" sounds like nonsense.
You also get the usual LLM crap like "Loose belts cause skipped steps and misaligned layers" where "layers" clearly refers to 3D printers and has no meaning here.
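For readers unfamiliar with the setup, here's a small TypeScript sketch (purely illustrative; the geometry and variable names are my assumptions) of the static chain-tension calculation a good answer would need to cover - tensions are typically quoted as multiples of the carriage weight m*g:

    // Illustrative statics for a V-plotter: anchors a distance D apart on a
    // horizontal rail, carriage of mass m at horizontal offset x and depth h
    // below the rail. Returns the tension in each chain, in newtons.
    const G = 9.81; // m/s^2

    function chainTensions(D: number, x: number, h: number, m: number) {
      const l1 = Math.hypot(x, h);     // left chain length
      const l2 = Math.hypot(D - x, h); // right chain length
      // Horizontal and vertical force balance on the carriage gives:
      const t1 = (m * G * (D - x) * l1) / (h * D); // left chain tension
      const t2 = (m * G * x * l2) / (h * D);       // right chain tension
      return { t1, t2 };
    }

    // Example: 1 m anchor spacing, carriage centred 0.5 m below the rail.
    // Tensions blow up near the top of the canvas (chains nearly horizontal)
    // and the far chain goes slack near the bottom corners, which is what
    // limits the usable print area.
    console.log(chainTensions(1.0, 0.5, 0.5, 0.4));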
What I expected:
When I told Phind I'm a complete novice, it came up with very detailed instructions and troubleshooting tips.
and after about 90 seconds the mini-app was created, which had a few sliders for cardamom, cinnamon, and ginger, which was really confusing; then it showed a bunch of other stuff which was also completely useless. I did the same search on Google (https://tinyurl.com/47sh4eah) and did not dislike the answer because I know it didn't burn thousands of tokens for that query. Sorry for being a bit harsh, but I have never seen a waste of resources as bad as this.
>A geometry app with nodes which interact based on their coordinates which may be linked to describe lines or arcs with side panels for variables and programming constructs.
which resulted in:
https://www.phind.com/search/a-geometry-app-with-nodes-ed416...
which didn't seem workable at all, and notably lacked a side panel.
https://www.phind.com/search/i-want-to-find-out-d79b4dca-bac...
I tried to make it generate an explainer page and it created an unrelated page: https://www.phind.com/search/explain-to-me-how-dom-66e58f3f-...
I tried generating your answer again: https://www.phind.com/search/explain-to-me-how-dom-78d20f04-....
I tried it out with a relatively basic Medicinal Chem/Pharmacology question, asking for an interactive Structure-Activity-Relationship viewer:
> "Build an interactive app showing SAR for a congeneric series. Use simple beta-2 agonists (salbutamol -> formoterol -> salmeterol). Display the common phenethylamine scaffold with R-group positions highlighted, and let me toggle substituents to see how logP, receptor binding affinity, and duration of action change."
It did not quite get it right. It put a bunch of pieces together, but the interactivity/functionality didn't work and the choice of visualization was poor for the domain: https://www.phind.com/search/find-me-options-for-a-72e019ce-...
It gave me a decent introduction to biology, it defined what life is, then quizzed me. The problem is, it says to select the appropriate answer, but selection does not work.
It reminds me of the game developer behind "Another World". He made some good games and was able to raise money from early game investors. He thought he could make a game maker: he would develop it once, and it would make all sorts of games. So he pitched it, and investors were more interested than ever. Obviously he realized that such a concept would never work. Today we have similarly overambitious ideas, but people ship them anyway.
I was hoping to get a map with arrows like "$35B in agriculture" from China to USA. I wasn't able to make it do that, but the information was still there presented in a reasonable way!
Since we’re sharing related work, I’ve been building something at a very different layer of the stack. Shameless plug warning!
Where Phind gives you an interactive answer right now, I built SageNet for the opposite problem: when you want to go from zero → actually good at something over weeks/months, not just get a one-shot result.
SageNet:
- builds a personalized learning plan
- adapts as you progress
- generates short audio lessons
- gives real projects
- has a daily voice check-in agent
- lets you share a public progress dashboard
If anyone wants to try it: https://www.sagenet.club
I’m curious to see how it evolves with more complex, multi-step queries.
First: my sense is that for most use cases, this will begin to feel gimmicky rather quickly and that you will do better by specializing rather than positioning yourself next to ChatGPT, which answers my questions without too much additional ceremony.
If you have any diehard users, I suspect they will cluster around very particular use cases - say business users trying to create quick internal tools, users who want to generate a quick app on mobile, or scientists who want quick apps to validate data. Focusing on those clusters (your actual ones, not these specific examples) and building something optimized for their use cases seems like a stronger long-term play for you.
Secondly, I asked it to prove a theorem, and it gave me a link to a proof. This is fine, since LLM-generated math proofs are a bit of a mess, but I was surprised that it didn't offer any visualizations or anything further. I then asked it for numerical experiments that support the conjecture, and it just showed me some very generic code and print statements for a completely different problem, unrelated to what I asked about. Not very compelling.
Finally, and least important really: please stop submitting my messages when I hit return/enter! Many of us like to send more complex multi-line queries to LLMs.
Good luck
But, assuming you are trying to be in between Lovable and Google, how are you not going to be steamrolled by Google or Perplexity etc. the moment you get solid traction? Like, if your insight for v3 was that the model should make its own tools, so even less is hardcoded, then I just don't see a moat or any vertical direction. What really is the difference?
Our long-term vision is to build a fully personalized internet. For Google this is an innovator's dilemma, as Google currently serves as a portal to the current internet.
Phind user for ~2 years.
I want to build an app. Not to enter an email in a textbox.
I was surprised not to see a share and embed button. I would expect that could be huge for growth.
We tried to do this for learning purposes with Reasonote, but the tech wasn't quite there yet.
I'm excited to dig back in with some newer models.
I suppose this is one interesting pattern for that.
_1: (explored a bit here at https://world.hey.com/akumar/software-s-blog-era-2812c56c)_
Congrats on the launch and keep up the great work.