Show HN: I built an LLM chat app because we shouldn't need 10 AI subscriptions
prismharmony.com

I'm lost between ChatGPT vs Claude vs Gemini... which subscription should I take? With Cursor and all these specialized AI tools, I just wanted one simple chat app where I can use any model and pay only when I use it.
Couldn't find one, so I built one.
Pay only for what you use. Your prompts, docs, and knowledge bases work with every model - no more copy-pasting between apps.
Started as a personal project, but thought someone else might benefit from this too.
What do you think?
I like the idea of a single chat with many models. Pre-AI-everything, I was already a Kagi user, so already paying for that. I've started using the Kagi Assistant[0] to solve this for myself. I pay $10/month, as I always did, and that's my credit limit to the various LLMs. They added an AI cost tracker to create transparency to those costs. So far this month I've used $0.88 of my $10.00 allotment. So I don't feel like I'm in any danger of going over. If I wasn't already paying for this, I'd be pretty interested in an option that was pay-as-you-go.
Looking at your pricing, I find the credit model a bit confusing. It feels like credit card points, and I don't really have a concept of what that will get me. Tokens are a bit abstract, but that's the currency of AI, so it is what it is. Adding credits as an intermediary between tokens and dollars may have been done with the goal of simplifying things, but in my head it makes it harder to understand and leaves more places for hidden fees to hide.
Giving some idea of how much usage someone could expect to get out of 1,000 tokens, or 100 credits, or $1 would be useful. I can do the math and see I can do 20 web searches for $1, but does that include follow-up questions? Is every question a web search? Kagi shows that I've used 15 searches so far today, and it's cost me less than 2¢ for almost 19k tokens. So I'm a bit confused.
More generally, on the chat-only umbrella tools, I do miss some of the nice-to-have options of going directly with the big players (interactive code editors, better image generation, etc.), but not enough to be paying $20+/month/service.
[0] https://kagi.com/assistant
I've been using openrouter.ai to use "all llm's". No subscription, and can be tied to your editor of choice
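For anyone who hasn't tried it: OpenRouter exposes an OpenAI-compatible chat completions endpoint, so switching a tool over is mostly a matter of changing the base URL and key. A minimal sketch of building such a request (the key and model slug below are placeholders, not real values):

```python
import json
import urllib.request

API_KEY = "sk-or-..."  # placeholder OpenRouter key
MODEL = "anthropic/claude-3.5-sonnet"  # any model slug OpenRouter lists

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request against OpenRouter."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("Hello")
# urllib.request.urlopen(req) would send it, given a valid key.
```

Because the shape matches OpenAI's API, most editor integrations that accept a custom base URL work unchanged.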
For free? How's that possible when one AI prompt uses 10x more energy than a Google search [1]?
[1] Source: https://kanoppi.co/search-engines-vs-ai-energy-consumption-c...
10 google searches are also free
You didn't click on the link I shared. I'm talking about the cost to produce the response, not the request. One AI prompt uses around 10 times more CPU and energy than a Google search.
If ChatGPT handles 1 billion queries a day, that's like the energy cost of 10 billion Google searches every single day.
Someone has to pay the electricity bill. We all know it's not free like you claim.
you also didn't click on the link the poster you replied to shared...
Seconding openrouter and fal. Having to muck around with the idiosyncrasies of each vendor just to try their "bestest model", only to find out it doesn't satisfy your requirements, is a chore.
> Couldn't find one, so I built one.
there are no fewer than 100 of these.
They couldn't find one of their own. All those 100 others were built by others!
Librechat seems perfect for your use case. It’s open source as well. Used by many of the big techcos to solve the problem you’re describing, so it’s battle tested https://www.librechat.ai
It only has some models, but https://Duck.Ai is free
It's Duckduckgo's offering.
Good idea. I explored this a year ago and also started building one, recognising a gap in the market for a solution that supports multiple LLMs but also provides small businesses with a centralised managed AI client - billing, monitoring, logging, company prompts, etc.
Ultimately, I discovered https://www.typingmind.com, which offers all of these features. I'm sure there are others - I was amazed that more of these didn't come out. Might be worth seeing what they've built. The more of these that come out the better - it's a whole new market.
> Couldn't find one, so I built one.
> What do you think?
You were lost between all the AI stuff... but have you not tried to simply use Google to find a bunch of similar services?
AnythingLLM is great for this, it even enables you to setup RAG from your own set of docs, which it then retrieves via embedding and adds as context.
I use that + OpenRouter which gives me API access to more models as well. Huge fan of this approach.
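For anyone unfamiliar with the retrieval step described above, the core idea is small. A toy sketch with a stand-in "embedding" (real setups like AnythingLLM use a learned embedding model and a vector store; this just illustrates the rank-by-similarity-then-prepend flow):

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': bag-of-words counts. Real RAG uses a learned model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank docs by similarity to the query; return the top k as context."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Invoices are due within 30 days of receipt.",
    "The office is closed on public holidays.",
]
context = retrieve("when are invoices due", docs)
# The retrieved context is then prepended to the prompt sent to the model.
```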
I also built VT (https://vtchat.io.vn), a secure, privacy-first AI chat platform focused on data sovereignty. It supports BYOK (Bring Your Own Key) models, OpenRouter integration, and local models through LM Studio and Ollama (currently in beta).
You should have used ChatGPT deep research to avoid wasting time building the 41st version of this same product.
Here's all the main competitors:
1. You.com
2. Poe
3. Mammouth
4. Magai
5. TeamAI
6. TypingHive
7. WritingMate
8. ChatHub
9. Monica
Isn't this what openrouter is for?
What's openrouter?
https://openrouter.ai/
edited to remove statement saying API only, as per comments.
They have a chat feature allowing you to talk with multiple models
Not sure if you noticed, but the first CTA on the link you posted is a prompt to "Start a message", which then opens a multi-model capable chat interface.
Openrouter has a chat built in as well. Also, since librechat was mentioned, the self hosted option I currently prefer is openwebui - connected to openrouter and gemini here.
There is an open source alternative
https://github.com/open-webui/open-webui
Anyone could run this themselves with Open WebUI and LiteLLM; in fact, that's the stack I've been using.
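For reference, a minimal LiteLLM proxy config for that stack might look something like this (the model slugs and env-var names are illustrative; check the LiteLLM docs for the providers you actually use):

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
```

Open WebUI is then pointed at the proxy's OpenAI-compatible endpoint, and every model in `model_list` shows up in its model picker.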
There's also https://t3.chat/
Tangentially related, but is there any sort of chat bot that will look at the query and suggest which LLM might be the best for that particular task?
Have a look at OpenRouter's Auto Router[0].
[0]: https://openrouter.ai/openrouter/auto
Cool and hats off to you for building an app ... but how come you couldn't find one?! There are so many. I use Raycast, which has access to all the models, there's Poe (from Quora) which is popular, Perplexity offers access to many models, and there are many more...
I'll bite: what is the web search story like? This is the killer feature of e.g. ChatGPT that none of the alternative or OSS options offer. Having the search be fast (i.e. not round-tripping to the client on every search) and integrated into the thinking process is unbeatable; I use it constantly. The big API providers all offer search options in their APIs now, but they're very quirky - OpenAI doesn't allow search via their API with thinking models, Gemini doesn't let you use search with any other tools enabled, and Claude's just doesn't seem to work that well, even in their own web UI. I even paid for TypingMind, which is nice, but I never use it and always just end up paying for ChatGPT again because of this.
Question to people talking about the various alternatives that already exist for this: does anyone know if there's something like OpenRouter that's open source and that, either as the only interface or preferably as an optional alternative to a web interface, lets you use a standard non-AI chat app (ideally Signal, more likely WhatsApp) as the interface?
Edit: I'd still be grateful for a reply with any recommendations or other options, but ChatGPT has given me a few things to look into when I'm at my PC - https://chatgpt.com/share/6873a9b5-ea8c-800c-b111-96b5f27a09...
"If you're on the free plan, there's an 19.8% platform fee"
I think you mean the "pay as you go" plan? If not, that's pretty confusing, and 19.8% of "free" should still be "free" :-)
I think in theory this type of approach is good, but you know what's going to happen eventually. Companies like OpenAI are already gatekeeping their best models. You need to pay and tier up on their platform to even use them. There's no free lunch in the AI landscape.
I think this does the same as well https://mammouth.ai/
Or this
https://chatllm.abacus.ai/
Off-topic: anyone aware of a service where I could plug in my API keys for OpenAI/Gemini/Claude and it asks the same question to all 3 and refines the answer by using one of them as the final arbiter?
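The fan-out-then-arbitrate pattern being asked about is simple to sketch. The three "models" below are stubs for illustration; in practice each would be an API call to OpenAI, Gemini, and Claude with your own keys:

```python
def ask_all(prompt, models):
    """Send the same prompt to every model and collect the answers."""
    return {name: fn(prompt) for name, fn in models.items()}

def arbitrate(prompt, answers, judge):
    """Ask one model (the arbiter) to merge/refine the candidate answers."""
    combined = "\n\n".join(f"[{name}]\n{text}" for name, text in answers.items())
    return judge(
        f"Question: {prompt}\n\nCandidate answers:\n{combined}\n\n"
        "Merge these into one best answer."
    )

# Stub models for illustration only; replace with real API calls.
models = {
    "openai": lambda p: f"openai says: {p}",
    "gemini": lambda p: f"gemini says: {p}",
    "claude": lambda p: f"claude says: {p}",
}
answers = ask_all("What is 2+2?", models)
final = arbitrate("What is 2+2?", answers, judge=lambda p: p.upper())
```

The arbiter is just another model invocation, so any of the three (or a fourth) can play that role.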
I also created my own. Frontend-only, no signup, based on openrouter, written in rust+leptos (wasm):
https://github.com/fdietze/tomatic
https://tomatic.app
I believe there's a couple of similar apps like https://msty.app and https://jan.ai that do the same and allow you to plug in your own API keys.
The pricing doesn’t render properly on mobile. The responsive design puts what is presumably a table into a vertical stack, which means I have no idea which price corresponds to which feature.
I use Librechat and OpenWebUI for that, but lately I've gotten a Claude subscription because A) deep research is amazing, B) their "canvas" UI is great, and C) Claude Code.
If you truly couldn’t find anything then you clearly don’t know how to use a web search. Poe.com brings many models under one subscription, and they have you completely beat on features and the number of models they offer. They are owned by Quora, so have a decently large group backing them. You.com lets you use multiple models, and also offers more features and models than you. Perplexity lets you use multiple models to chat. Merlin.ai lets you use multiple models. The list goes on, but there are a variety of established players in this space.
It looks like the only thing you offer over them is a “pay-as-you-go” option, where they are subscription-based only. You kind of cheapen your differentiating factor by also offering a subscription. You need to show how you’re different from the competitors in this space, otherwise your growth will be very slow. You’re competing against OpenAI and Anthropic, who are trying to sell their own chat interfaces, along with the other aggregator websites that have been around longer and have been developing features and integrating models from various providers this whole time. Do you think your pricing model will be enough, or do you have some killer features planned?
> If you truly couldn’t find anything then...
...you are clearly an engineer who had already decided to write their own, so wasn't looking too hard, or did find something but went "I could do this better". Pretty much how many projects start.
OP said, in their post, they “Couldn't find one”
It’s totally fine to build your own implementation of something, especially if it’s for personal use or you’re not charging. OP is pitching a paid product. It’s not okay to ignore the sea of competitors and pitch your product on HN with a marketing blurb that isn’t true.
Anybody can slap together a chat UI and integrate a few LLM APIs. We need more than that if you’re charging money.
In the age of LLMs, why use something someone else built when an LLM could just build you your own!
Just tried it but the messages keep disappearing after submission and streaming.
You should integrate with AWS bedrock, you'd be able to offer 60 models instantly without having to spend any time integrating with separate APIs: https://docs.aws.amazon.com/bedrock/latest/userguide/models-...
This already exists with things like Poe, I mean for normal users, with a subscription for multiple models. Same with frontends for API keys.
> Couldn't find one, so I built one.
I’m sorry, but I find it hard to believe that you didn’t find any. I personally know at least 5 services that offer this.
One improvement that would be interesting, if it's not in there already, is to compare the answers from the different LLMs and then combine them into the highest-probability statement.
Reminds me of the xkcd https://xkcd.com/927/
This thread has Cunningham's Law¹ written all over it, too
¹https://en.wikipedia.org/wiki/Cunningham%27s_Law