How to wrangle non-deterministic AI outputs into conventional software? (2025)
3 by druther | 2 comments on Hacker News.
Friday, January 16, 2026
New top story on Hacker News: Show HN: Aventos – An experiment in cheap AI SEO
3 by JimsonYang | 0 comments on Hacker News.
Hi HN, we built Aventos, a cheap way to track company mentions in LLMs. Aventos is an experiment we're doing after spending ~6 weeks working on various projects in the AI search / GEO / AEO space.

One thing that surprised us is how most tools in this category work. Traditionally, they simulate ChatGPT or Perplexity queries by attempting to reverse engineer the search process. Over the past year, many have shifted to scraping live ChatGPT results instead, since those are significantly cheaper and reflect real outputs more closely. Building and maintaining scrapers is tedious and fragile, so recently a number of SaaS products have emerged that effectively wrap a small number of third-party ChatGPT/Perplexity/Google AIO/etc. scraping APIs. What felt odd to us is that many of these tools still charge $70–$200+ per month, despite largely being wrappers around the same underlying data providers.

So we wanted to test a simple idea: if the core cost is just API usage and commodity infrastructure, and software costs are lower because of AI, can we be a successful startup if we price near our costs?

What we have so far:
1. Analytics similar to other tools (tracking AI citations, AI search results, and competitor mentions)
2. Content creation features (early and still being improved)

We'd love feedback, especially from a non-marketing perspective, on:
* bugs
* confusing terminology or tabs
* anything that feels hand-wavy or misleading

There's a demo account available if you want to poke around: username: divit.endal4@gmail.com password: password

Happy to answer questions about what else we've built in the space, how these tools work, etc.
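The core analytics step the post describes, scraping AI answers and counting how often a brand appears, can be sketched as a small scoring function. This is a minimal illustration, not Aventos's actual code; the answer texts and brand names below are made-up placeholders:

```python
import re

def mention_rate(answers, brand):
    """Fraction of AI answers that mention the brand (case-insensitive, whole word)."""
    pattern = re.compile(rf"\b{re.escape(brand)}\b", re.IGNORECASE)
    hits = sum(1 for text in answers if pattern.search(text))
    return hits / len(answers) if answers else 0.0

# Hypothetical scraped answers for a query like "best AI SEO tools"
answers = [
    "Aventos and Semrush both track AI citations.",
    "Popular options include Semrush and Ahrefs.",
    "Aventos is a budget pick for LLM mention tracking.",
]
print(mention_rate(answers, "Aventos"))  # 2 of 3 answers mention it
```

A real tracker would feed this with answers pulled from a scraping API on a schedule and chart the rate per query over time, but the counting itself stays this simple.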
New top story on Hacker News: The Alignment Game
4 by dmvaldman | 0 comments on Hacker News.
https://docs.google.com/spreadsheets/d/1BYh9ZtEv4k7xoSXmtf1q...
New top story on Hacker News: Show HN: 1Code – Open-source Cursor-like UI for Claude Code
15 by Bunas | 4 comments on Hacker News.
Hi, we're Sergey and Serafim. We've been building dev tools at 21st.dev and recently open-sourced 1Code ( https://1code.dev ), a local UI for Claude Code. Here's a video of the product: https://www.youtube.com/watch?v=Sgk9Z-nAjC0

Claude Code has been our go-to for 4 months. When Opus 4.5 dropped, parallel agents stopped needing so much babysitting. We started trusting it with more: building features end to end, adding tests, refactors - stuff you'd normally hand off to a developer. We started running 3-4 agents at once. Then the CLI became annoying: too many terminals, hard to track what's where, diffs scattered everywhere.

So we built 1Code.dev, an app to run your Claude Code agents in parallel that works on Mac and Web. On Mac: run locally, with or without git worktrees. On Web: run in remote sandboxes with live previews of your app, mobile included, so you can check on agents from anywhere. Running multiple Claude Code sessions in parallel has dramatically sped up how we build features.

What's next:
* a bug bot for identifying issues based on your changes
* a QA agent that checks that new features don't break anything
* support for OpenCode, Codex, and other models and coding agents
* an API for starting Claude Code sessions programmatically in remote sandboxes

Try it out! We're open source, so you can just build it with bun. If you want something hosted, Pro ($20/mo) gives you the web version with live browser previews hosted on remote sandboxes. We'd love to hear your feedback!
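The "with or without worktrees" option refers to git's worktree feature, which is the standard way to give each parallel agent its own working directory on its own branch so their edits never collide. A minimal sketch of that setup (an assumed workflow, not 1Code's actual implementation; the repo and agent names are placeholders):

```shell
set -e
# Throwaway repo standing in for your project
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# One worktree per agent, created as a sibling directory of the main
# checkout; each gets its own branch, so agents commit independently.
for agent in agent-1 agent-2 agent-3; do
  git worktree add -q -b "$agent" "$repo-$agent"
done

git worktree list  # main checkout plus one line per agent
```

Each agent is then launched with its worktree as the working directory, and finished branches are merged back (and cleaned up with `git worktree remove`) once their diffs are reviewed.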