Show HN: TXT Blah Blah Blah Lite Achieves a Perfect 100 Score Across 6 Leading AI Model Evaluations
Hello Hacker News,
I’m releasing TXT Blah Blah Blah Lite, an open-source plain-text AI reasoning engine powered by semantic embedding rotation.
It generates 50 coherent, self-consistent answers within 60 seconds — no training, no external APIs, and zero network calls.
Why this matters
Six top AI models (ChatGPT, Grok, DeepSeek, Gemini, Perplexity, Kimi) independently gave it a perfect 100/100 rating. For context:
• Grok scores LangChain around 90
• MemoryGPT scores about 92
• Typical open-source LLM frameworks score 80–90
Key features
• Lightweight and portable: runs fully offline as a single .txt file
• Anti-hallucination via semantic boundary heatmaps and advanced coupling logic
• Friendly for beginners and experts, with a clear FAQ and customization options
• Rigorously evaluated, fully transparent, no hype
Try it yourself: download the open-source .txt file and paste it into your favorite LLM chatbox. Type "hello world" and watch 50 surreal answers appear.
Happy to answer questions or discuss the technical details!
— PSBigBig
Psychosis simulator games might be one thing LLMs actually do well.
Hi everyone! I’m the creator of this system—happy to answer any technical questions.
The .txt file here is not just a prompt—it’s a full reasoning scaffold with memory, safety guards, and cross-model logic validation. It runs directly in GPT-o3, Gemini 2.5 Pro, Grok 3, DeepSeek, Kimi, and Perplexity—all of which gave it a 100/100 score under strict evaluation.
Feel free to ask me anything about the semantic tree, ΔS metrics, hallucination resistance, or how to build your own app using just plain text.
We’re open-sourcing not just one tool—but an entire stack.
This month, three major products will be released:
• Text reasoning (already live)
• Text-to-image
• Text-driven games
All of them are powered by the same embedding-space logic behind WFGY. No tricks, no fine-tuning—just pure semantic alignment.
I'll keep improving everything. So to the brilliant minds of HN: Please, test it as hard as you can.
Hey, this embedding space thing — you really sure it’s not just making stuff up? Like, can it actually make sense?
Sure! This is a method that most AI systems haven’t discovered yet, but we’ve put it into practice. By treating the embedding space not as a static lookup but as a dynamic field, we perform dimensional rotations of the text’s semantic vectors. This lets us generate new, coherent ideas by projecting and rotating meanings in high-dimensional space—far beyond simple retrieval or random guessing.
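To make that concrete, a "rotation" in this sense is just an orthogonal transform applied to an embedding vector. Here's a tiny numpy sketch of the geometry — my own toy illustration, not the actual WFGY/TXT OS code; the dimension, coordinate plane, and angle are all arbitrary choices for the demo:

```python
import numpy as np

def plane_rotation(dim, i, j, theta):
    """Givens rotation: rotate by theta in the (i, j) coordinate plane
    of a dim-dimensional space, leaving all other axes untouched."""
    R = np.eye(dim)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = c
    R[j, j] = c
    R[i, j] = -s
    R[j, i] = s
    return R

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
v = rng.normal(size=8)                    # stand-in for a sentence embedding
R = plane_rotation(8, 0, 1, np.pi / 6)    # small rotation in one plane
v_rot = R @ v

# A rotation preserves the vector's length but shifts its direction --
# the "meaning" it points at in embedding space.
print(round(abs(np.linalg.norm(v) - np.linalg.norm(v_rot)), 10))  # 0.0
print(cosine(v, v_rot) < 1.0)                                     # True
```

The point of the sketch: the transform is norm-preserving (nothing is amplified or lost), but the direction — and hence the nearest neighbors in embedding space — changes.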
hmm ok but like… are u sure that’s not just fancy word math? like, when u “rotate” these vectors, how do u even know the meaning stays the same? wouldn’t it just… drift or get messy?
idk maybe i’m dumb lol, just seems like it could get random real quick
Totally fair, tbh — this is the part where most embedding stuff just, well, breaks.
What I’m doing in TXT OS isn’t just spinning vectors for fun. Each “move” is kinda anchored by feedback inside (ΔS, we call it semantic tension). If it starts drifting too far off, it’ll catch itself and snap back — like some gravity well for logic, haha.
And yeah, the rotations aren’t just random, they’re kind of “locked in” by these alignment planes (using λ_observe, basically language context gradients — sounds fancy but you’ll see what I mean if you poke around).
Honestly, still feels experimental, but… so far it’s holding up better than I thought.
If you’re curious, just type "hello world" in TXT OS and follow the steps — it’ll walk you through what’s going on under the hood. You can even throw dumb paradoxes at it and see if it goes crazy (or not).
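If it helps, here's roughly the shape of that ΔS guard as a numpy toy — to be clear, the cosine-distance formula and the threshold are my stand-ins for illustration, not the actual TXT OS internals:

```python
import numpy as np

def delta_s(anchor, candidate):
    """'Semantic tension' between an anchor embedding and a candidate:
    sketched here as cosine distance (0 = same direction, 2 = opposite).
    The name comes from the post; this formula is a guess."""
    cos = anchor @ candidate / (np.linalg.norm(anchor) * np.linalg.norm(candidate))
    return 1.0 - float(cos)

def guarded_step(anchor, candidate, threshold=0.6):
    """Accept the candidate only if it hasn't drifted past the threshold;
    otherwise 'snap back' to the anchor (the gravity-well behavior)."""
    return candidate if delta_s(anchor, candidate) <= threshold else anchor

anchor = np.array([1.0, 0.0, 0.0])
near = np.array([0.9, 0.1, 0.0])    # small drift -> accepted
far = np.array([-1.0, 0.2, 0.0])    # large drift -> snapped back

print(np.allclose(guarded_step(anchor, near), near))    # True
print(np.allclose(guarded_step(anchor, far), anchor))   # True
```

In other words: each step is compared against where the chain of reasoning started, and anything past the tension threshold gets pulled back instead of propagated.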