I canceled my book deal
68 by azhenley | 25 comments on Hacker News.
Wednesday, December 31, 2025
Tuesday, December 30, 2025
New top story on Hacker News: Show HN: Tidy Baby is a SET game but with words
Show HN: Tidy Baby is a SET game but with words
10 by brgross | 1 comment on Hacker News.
Hi HN — Tidy Baby is a new game made by me and Wyna Liu (of NYT Connections!) that is inspired by the legendary card-based game SET that we assume many of you love (we too love SET).

In SET, you’ve got four dimensions: shape, number, color, and shading, each with three variants. In Tidy Baby you only have to deal with three dimensions:
- word length (3, 4, or 5 letters)
- part of speech (noun, verb, or adjective)
- style (bold, underline, or italic)

Like in SET, you are trying to form sets of three cards where, along each dimension, the set is either all the same or all different. If you’ve never played SET there are more details/examples at “how to play” in the game.

The mechanics of Tidy Baby are sort of inspired by a solitaire/practice version of SET I sometimes play where you draw two random cards and have to name the third card that would make a valid set. In Tidy Baby you are presented with two “game cards” and a grid of up to nine candidates to complete a valid set – your job is to pick the right one before the clock runs out. Unlike in SET, you get points for “partial” sets where your set is valid on one or two dimensions (but not all three). It’s actually a pretty fun challenge to try to get only sets that are invalid along all three dimensions.

In building the game, we were sort of surprised that the biggest challenge was ensuring that all words were unambiguously one part of speech. You’d be surprised how hard it is to find three-letter adjectives that are not also common verbs or nouns. We did our best!

We’ve got three “paces” in the game: Steady, Strenuous, and Grueling (s/o MECC!). Let us know what you think!
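The all-same-or-all-different rule described above is easy to state in code. This is an illustrative sketch, not the game's actual implementation; the dimension names and card representation are made up for the example:

```python
# Sketch of the core Tidy Baby/SET rule: a trio of cards is valid when,
# for every dimension, the three values are either all the same or all
# different. (Hypothetical card representation, not the game's real code.)

DIMENSIONS = ("length", "pos", "style")  # word length, part of speech, style

def dimension_ok(values):
    """All same (1 distinct value) or all different (3 distinct values)."""
    return len(set(values)) in (1, 3)

def valid_set(cards):
    """cards: three dicts, each with the keys in DIMENSIONS."""
    return all(dimension_ok([c[d] for c in cards]) for d in DIMENSIONS)

cards = [
    {"length": 3, "pos": "noun", "style": "bold"},
    {"length": 4, "pos": "noun", "style": "italic"},
    {"length": 5, "pos": "noun", "style": "underline"},
]
print(valid_set(cards))  # all different, all same, all different -> True
```

A trio with lengths 3, 3, 4 would fail: two values match and one differs, which is neither all-same nor all-different.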
New top story on Hacker News: Igniting the GPU: From Kernel Plumbing to 3D Rendering on RISC-V
Igniting the GPU: From Kernel Plumbing to 3D Rendering on RISC-V
23 by michalwilczynsk | 1 comment on Hacker News.
Monday, December 29, 2025
Sunday, December 28, 2025
New top story on Hacker News: Ask HN: Best Podcasts of 2025?
Ask HN: Best Podcasts of 2025?
27 by adriancooney | 21 comments on Hacker News.
The Rest is Politics, Leading, Philosophize This and Stratechery (paid) are the podcasts that stood out the most in 2025. Curious what other HNers listen to.
Saturday, December 27, 2025
Friday, December 26, 2025
Thursday, December 25, 2025
New top story on Hacker News: Show HN: Lamp Carousel – DIY kinetic sculpture powered by lamp heat
Show HN: Lamp Carousel – DIY kinetic sculpture powered by lamp heat
8 by Evidlo | 0 comments on Hacker News.
I wanted to share this fun craft activity for the holidays that I've been doing with my family over the last few years. I came up with these while cutting up some cans trying to make an aluminum version of paper spinners. There are a variety of shapes that work, but generally bigger+lighter spinners are better. Also incandescent bulbs are the best, but LEDs work too. They remind me of candle carousels I would see at my grandparents' house during Christmas. Let me know what you think!
Wednesday, December 24, 2025
New top story on Hacker News: Show HN: Vibium – Browser automation for AI and humans, by Selenium's creator
Show HN: Vibium – Browser automation for AI and humans, by Selenium's creator
40 by hugs | 25 comments on Hacker News.
I started the Selenium project 21 years ago. Vibium is what I'd build if I started over today with AI agents in mind. There's a Go binary under the hood (it handles the browser, BiDi, and MCP), but devs never see it: just npm install vibium. Python/Java coming. For Claude Code: claude mcp add vibium -- npx -y vibium. v1 ships today. AMA.
Tuesday, December 23, 2025
Monday, December 22, 2025
Sunday, December 21, 2025
Saturday, December 20, 2025
New top story on Hacker News: Show HN: HN Wrapped 2025 - an LLM reviews your year on HN
Show HN: HN Wrapped 2025 - an LLM reviews your year on HN
19 by hubraumhugo | 7 comments on Hacker News.
I was looking for some fun project to play around with the latest Gemini models and ended up building this :) Enter your username and get:
- Generated roasts and stats based on your HN activity in 2025
- Your personalized HN front page from 2035 (inspired by a recent Show HN [0])
- An xkcd-style comic of your HN persona

It uses the latest gemini-3-flash and gemini-3-pro-image (nano banana pro) models, which deliver pretty impressive and funny results. A few examples:
- dang: https://ift.tt/5HN4WXY
- myself: https://ift.tt/xHt3Xgy

Give it a try and share yours :) Happy holidays!

[0] https://ift.tt/RtSOydP
New top story on Hacker News: Show HN: Claude Code Plugin to play music when waiting on user input
Show HN: Claude Code Plugin to play music when waiting on user input
16 by Sevii | 7 comments on Hacker News.
Claude Code tends to be just slow enough that you have time to tab away and get distracted. This plugin uses Claude Code's hooks to play music while Claude is waiting for user input, so you don't just leave it sitting for 15 minutes.
Friday, December 19, 2025
New top story on Hacker News: Show HN: Linggen – A local-first memory layer for your AI (Cursor, Zed, Claude)
Show HN: Linggen – A local-first memory layer for your AI (Cursor, Zed, Claude)
6 by linggen | 2 comments on Hacker News.
Hi HN,

Working with multiple projects, I got tired of re-explaining our complex multi-node system to LLMs. Documentation helped, but plain text is hard to search without indexing and doesn't work across projects. I built Linggen to solve this.

My workflow: I use the Linggen VS Code extension to "init my day." It calls the Linggen MCP to load memory instantly. Linggen indexes all my docs like it’s remembering them—it is awesome. One click loads the full architectural context, removing the "cold start" problem.

The tech:
- Local-first: Rust + LanceDB. Code and embeddings stay on your machine. No accounts required.
- Team memory: Index knowledge so teammates' LLMs get context automatically.
- Visual map: See file dependencies and refactor "blast radius."
- MCP-native: Supports Cursor, Zed, and Claude Desktop.

Linggen saves me hours. I’d love to hear how you manage complex system context!

Repo: https://ift.tt/WIbsB3h
Website: https://linggen.dev
Thursday, December 18, 2025
New top story on Hacker News: Show HN: Paper2Any – Open tool to generate editable PPTs from research papers
Show HN: Paper2Any – Open tool to generate editable PPTs from research papers
7 by Mey0320 | 0 comments on Hacker News.
Hi HN,

We are the OpenDCAI group from Peking University. We built Paper2Any, an open-source tool designed to automate the "Paper to Slides" workflow based on our DataFlow-Agent framework.

The problem: Writing papers is hard, but creating professional architecture diagrams and slides (PPTs) is often more tedious. Most AI tools just generate static images (PNGs) that are impossible to tweak for final publication.

The solution: Paper2Any takes a PDF, text, or sketch as input, understands the research logic, and generates fully editable PPTX (PowerPoint) files and SVGs. We prioritize flexibility and fidelity—allowing you to specify page ranges, switch visual styles, and preserve original assets.

How it works:
1. Multimodal reading: Extracts text and visual elements from the paper. You can now specify page ranges (e.g., Method section only) to focus the context and reduce token usage.
2. Content understanding: Identifies core contributions and structural logic.
3. PPT generation: Instead of generating one flat image, it generates independent elements (blocks, arrows, text) with selectable visual styles and organizes them into a slide layout.

Links:
- Demo: http://dcai-paper2any.cpolar.top/
- Code (DataFlow-Agent): https://ift.tt/O4X9cqk

We'd love to hear your feedback on the generation quality and the agent workflow!
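The "independent elements instead of one flat image" idea can be sketched in a few lines. This is a hypothetical illustration of the data model, not Paper2Any's actual code; the function and field names are invented for the example:

```python
# Illustrative sketch: represent a slide as a list of editable primitives
# (title text, step blocks, connecting arrows) rather than a single rendered
# image, so each element can be moved or restyled after generation.

def make_method_slide(title, steps):
    """Build a slide as independent elements: one title, one block per
    step, and arrows connecting consecutive blocks."""
    elements = [{"type": "text", "role": "title", "text": title, "x": 40, "y": 20}]
    for i, step in enumerate(steps):
        elements.append({"type": "block", "text": step, "x": 40 + 180 * i, "y": 120})
    for i in range(len(steps) - 1):
        # arrows reference block indices, so moving a block keeps them attached
        elements.append({"type": "arrow", "src": i, "dst": i + 1})
    return elements

slide = make_method_slide("Pipeline", ["Read PDF", "Understand", "Generate PPT"])
print(len(slide))  # 1 title + 3 blocks + 2 arrows = 6
```

An exporter could then map each primitive to a native PPTX shape, which is what keeps the output editable.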
Wednesday, December 17, 2025
Tuesday, December 16, 2025
New top story on Hacker News: Show HN: Zenflow – orchestrate coding agents without "you're right" loops
Show HN: Zenflow – orchestrate coding agents without "you're right" loops
7 by andrewsthoughts | 2 comments on Hacker News.
Hi HN, I’m Andrew, founder of Zencoder.

While building our IDE extensions and cloud agents, we ran into the same issue many of you likely face when using coding agents in complex repos: agents getting stuck in loops, apologizing, and wasting time. We tried to manage this with scripts, but juggling terminal windows and copy-paste prompting was painful. So we built Zenflow, a free desktop tool to orchestrate AI coding workflows. It handles the things we were missing in standard chat interfaces:
- Cross-model verification: You can have Codex review Claude’s code, or run them in parallel to see which model handles the specific context better.
- Parallel execution: Run five different approaches on a backlog item simultaneously, mixing "Human-in-the-Loop" for hard problems with "YOLO" runs for simple tasks.
- Dynamic workflows: Configured via simple .md files. Agents can actually "rewire" the next steps of the workflow dynamically based on the problem at hand.
- Project list/kanban views across all workloads.

What we learned building this: To tune Zenflow, we ran 100+ experiments across public benchmarks (SWE-Bench-*, T-Bench) and private datasets. Two major takeaways that might interest this community:
- Benchmark saturation: Models are becoming progressively overtrained on all versions of SWE-Bench (even Pro). We found public results are diverging significantly from performance on private datasets. If you are building workflows, you can't rely on public benches.
- The "Goldilocks" workflow: In autonomous mode, heavy multi-step processes often multiply errors rather than fix them. Massive, complex prompt templates look good on paper but fail in practice. The most reliable setups landed in a narrow “Goldilocks” zone of just enough structure without over-orchestration.

The app is free to use and supports Claude Code, Codex, Gemini, and Zencoder. We’ve been dogfooding this heavily, but I'd love to hear your thoughts on the default workflows and whether they fit your mental model for agentic coding.

Download: https://ift.tt/1mSKjlg
YT flyby: https://www.youtube.com/watch?v=67Ai-klT-B8
Monday, December 15, 2025
New top story on Hacker News: Show HN: 100 Million splats, a whole town, rendered in M2 MacBook Air
Show HN: 100 Million splats, a whole town, rendered in M2 MacBook Air
21 by Arun_Kurian | 3 comments on Hacker News.
Written natively from scratch in Metal and Swift. Built for the AirVis app.
Sunday, December 14, 2025
Saturday, December 13, 2025
Friday, December 12, 2025
Thursday, December 11, 2025
New top story on Hacker News: Show HN: SIM – Apache-2.0 n8n alternative
Show HN: SIM – Apache-2.0 n8n alternative
28 by waleedlatif1 | 2 comments on Hacker News.
Hey HN, Waleed here. We're building Sim ( https://sim.ai/ ), an open-source visual editor to build agentic workflows. Repo here: https://ift.tt/JzQ28yD . Docs here: https://docs.sim.ai . You can run Sim locally using Docker, with no execution limits or other restrictions.

We started building Sim almost a year ago after repeatedly troubleshooting why our agents failed in production. Code-first frameworks felt hard to debug because of implicit control flow, and workflow platforms added more overhead than they removed. We wanted granular control and easy observability without piecing everything together ourselves.

We launched Sim [1][2] as a drag-and-drop canvas around 6 months ago. Since then, we've added:
- 138 blocks: Slack, GitHub, Linear, Notion, Supabase, SSH, TTS, SFTP, MongoDB, S3, Pinecone, ...
- Tool calling with granular control: forced, auto
- Agent memory: conversation memory with sliding window support (by last n messages or tokens)
- Trace spans: detailed logging and observability for nested workflows and tool calling
- Native RAG: upload documents, we chunk, embed with pgvector, and expose vector search to agents
- Workflow deployment versioning with rollbacks
- MCP support, Human-in-the-loop block
- Copilot to build workflows using natural language (just shipped a new version that also acts as a superagent and can call into any of your connected services directly, not just build workflows)

Under the hood, the workflow is a DAG with concurrent execution by default. Nodes run as soon as their dependencies (upstream blocks) are satisfied. Loops (for, forEach, while, do-while) and parallel fan-out/join are also first-class primitives. Agent blocks are pass-through to the provider. You pick your model (OpenAI, Anthropic, Gemini, Ollama, vLLM), and we pass through prompts, tools, and response format directly to the provider API. We normalize response shapes for block interoperability, but we're not adding layers that obscure what's happening.

We're currently working on our own MCP server and the ability to deploy workflows as MCP servers. Would love to hear your thoughts and where we should take it next :)

[1] https://ift.tt/6YtMFBW
[2] https://ift.tt/VH4PXnc
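The execution model described ("nodes run as soon as their upstream dependencies are satisfied") can be sketched in a few lines. This is a minimal illustration of the general technique, not Sim's actual engine; the function names and example graph are invented:

```python
# Minimal sketch of dependency-driven concurrent DAG execution: on each
# pass, every node whose upstream dependencies have all finished is
# dispatched to a thread pool in the same wave.
from concurrent.futures import ThreadPoolExecutor

def run_dag(deps, action):
    """deps: node -> set of upstream nodes; action(node) does the work.
    Returns nodes in completion order."""
    remaining = {n: set(d) for n, d in deps.items()}
    done, order = set(), []
    with ThreadPoolExecutor() as pool:
        while remaining:
            ready = [n for n, d in remaining.items() if d <= done]
            if not ready:
                raise ValueError("dependency cycle")
            for n in ready:
                del remaining[n]
            # all ready nodes run concurrently in this wave
            for n, _ in zip(ready, pool.map(action, ready)):
                done.add(n)
                order.append(n)
    return order

deps = {"fetch": set(), "parse": {"fetch"}, "notify": {"parse"}, "log": {"fetch"}}
print(run_dag(deps, lambda n: None))
```

Here "parse" and "log" both depend only on "fetch", so they run in the same wave once "fetch" finishes, while "notify" waits for "parse".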
Wednesday, December 10, 2025
Tuesday, December 9, 2025
Monday, December 8, 2025
Sunday, December 7, 2025
Saturday, December 6, 2025
Friday, December 5, 2025
Thursday, December 4, 2025
Wednesday, December 3, 2025
New top story on Hacker News: Show HN: Fresh – A new terminal editor built in Rust
Show HN: Fresh – A new terminal editor built in Rust
9 by _sinelaw_ | 7 comments on Hacker News.
I built Fresh to challenge the status quo that terminal editing must require a steep learning curve or endless configuration. My goal was to create a fast, resource-efficient TUI editor with the usability and features of a modern GUI editor (like a command palette, mouse support, and LSP integration).

Core philosophy:
- Ease of use: Fundamentally non-modal. Prioritizes standard keybindings and a minimal learning curve.
- Efficiency: Uses a lazy-loading piece tree to avoid loading huge files into RAM; reads only what's needed for user interactions. Coded in Rust.
- Extensibility: Uses TypeScript (via Deno) for plugins, making it accessible to a large developer base.

The performance challenge: I focused on resource consumption and speed, with large file support as a core feature. I did a quick benchmark loading a 2GB log file with ANSI color codes. Here is the comparison against other popular editors:
- Fresh: load time ~600ms | memory ~36 MB
- Neovim: load time ~6.5 seconds | memory ~2 GB
- Emacs: load time ~10 seconds | memory ~2 GB
- VS Code: load time ~20 seconds | memory: OOM killed (~4.3 GB available)
(Only Fresh rendered the ANSI colors.)

Development process: I embraced Claude Code and made an effort to get good mileage out of it. I gave it strong, specific directions, especially in architecture, code structure, and UX-sensitive areas. It required constant supervision and re-alignment, especially in the performance-critical areas. I added very extensive tests (compared to my normal standards) to keep it aligned as the code grows, focusing especially on end-to-end testing where I could easily enforce a specific behavior or user flow.

Fresh is an open-source project (GPL-2) seeking early adopters. You're welcome to send feedback, feature requests, and bug reports.

Website: https://sinelaw.github.io/fresh/
GitHub repository: https://ift.tt/RaYq3yz
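The lazy-loading idea behind the piece tree (read only the bytes the viewport needs, never the whole file) can be sketched simply. This toy example is not Fresh's actual data structure; the class and method names are made up, and a real piece tree additionally tracks edits as separate pieces:

```python
# Toy sketch of lazy file access: keep the file on disk and fetch only the
# byte range the editor viewport asks for, so a 2 GB file never has to be
# resident in memory.
import os
import tempfile

class LazyBuffer:
    def __init__(self, path):
        self.path = path
        self.size = os.path.getsize(path)  # metadata only; no content read

    def read_span(self, offset, length):
        """Read just the requested span via seek, not the whole file."""
        with open(self.path, "rb") as f:
            f.seek(offset)
            return f.read(length)

# Demo on a small temporary file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"line one\nline two\nline three\n")
    path = f.name

buf = LazyBuffer(path)
span = buf.read_span(9, 8)  # bytes 9..16, i.e. "line two"
print(span)
os.unlink(path)
```

The same access pattern is why load time stays near-constant regardless of file size: opening the buffer touches only metadata.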
Tuesday, December 2, 2025
Monday, December 1, 2025
Sunday, November 30, 2025
Saturday, November 29, 2025
Friday, November 28, 2025
Thursday, November 27, 2025
New top story on Hacker News: Tell HN: Happy Thanksgiving
Tell HN: Happy Thanksgiving
38 by prodigycorp | 6 comments on Hacker News.
I’ve been a part of this community for fifteen years. Despite the yearly bemoaning of HN’s quality compared to its mythical past, I’ve found that it’s the one community that has remained steadfast as a source of knowledge, cattiness, and good discussion. Thank you @dang and @tomhow. Here's to another year.
Wednesday, November 26, 2025
Tuesday, November 25, 2025
Monday, November 24, 2025
Sunday, November 23, 2025
Saturday, November 22, 2025
Friday, November 21, 2025
Thursday, November 20, 2025
Wednesday, November 19, 2025
New top story on Hacker News: Show HN: DNS Benchmark Tool – Compare and monitor resolvers
Show HN: DNS Benchmark Tool – Compare and monitor resolvers
7 by ovo101 | 1 comment on Hacker News.
I built a CLI to benchmark DNS resolvers after discovering DNS was adding 300ms to my API requests. v0.3.0 just released with new features:
- compare: Test single domain across all resolvers
- top: Rank resolvers by latency/reliability/balanced
- monitor: Continuous tracking with threshold alerts

1,400+ downloads in first week.

Quick start:
pip install dns-benchmark-tool
dns-benchmark compare --domain google.com

CLI stays free forever. Hosted version (multi-region, historical tracking, alerts) coming Q1 2026.

GitHub: https://ift.tt/RuPFkNU
Feedback: https://forms.gle/BJBiyBFvRJHskyR57

Built with Python + dnspython. Open to questions and feedback!
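The "top" ranking described above (latency/reliability/balanced) can be sketched without touching the network. This is a hypothetical illustration of one plausible scoring scheme, not the tool's actual code; the scoring formula and sample data are invented:

```python
# Illustrative sketch of a balanced resolver ranking: given latency samples
# per resolver (None marks a failed/timed-out query), rank by median latency
# inflated by the failure rate, so unreliable resolvers sink in the list.
from statistics import median

def rank_resolvers(samples):
    """samples: resolver -> list of latencies in ms (None for failures).
    Returns resolver names, best first."""
    scored = []
    for name, lats in samples.items():
        ok = [l for l in lats if l is not None]
        reliability = len(ok) / len(lats)
        med = median(ok) if ok else float("inf")
        # lower score is better; failures inflate the effective latency
        scored.append((med / max(reliability, 1e-9), name))
    return [name for _, name in sorted(scored)]

samples = {
    "1.1.1.1": [12, 14, 13, 12],
    "8.8.8.8": [11, 12, None, 11],   # fast but one timeout
    "9.9.9.9": [30, 28, 29, 31],
}
print(rank_resolvers(samples))  # ['1.1.1.1', '8.8.8.8', '9.9.9.9']
```

With these numbers, 8.8.8.8's timeout pushes its effective score (11 / 0.75 ≈ 14.7) just behind 1.1.1.1's clean 12.5 ms median.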
New top story on Hacker News: Larry Summers resigns from OpenAI board
Larry Summers resigns from OpenAI board
56 by koolba | 30 comments on Hacker News.
https://ift.tt/r5ONGja... , https://ift.tt/Uw79m4s
Tuesday, November 18, 2025
Monday, November 17, 2025
Sunday, November 16, 2025
Saturday, November 15, 2025
Friday, November 14, 2025
Thursday, November 13, 2025
Wednesday, November 12, 2025
Tuesday, November 11, 2025
Monday, November 10, 2025
Sunday, November 9, 2025
New top story on Hacker News: Ask HN: How do you get over the fear of sharing code?
Ask HN: How do you get over the fear of sharing code?
11 by sodokuwizard | 15 comments on Hacker News.
I'm a junior. Truth be told, I don't really care if professionals/adults see my code or pick it apart/mock it/fork it or whatever. All my repos are private just because I worry about other students being lazy and just ripping off my hard work and claiming it as their own. It really pisses me off when I hear horror stories like that. Is this unfounded? Or do I have a right to some concern? It's obviously easier for viewers to just see public code repos and browse without ever requesting access, so I know I'm losing some traffic (from my portfolio site). I was thinking the alternative would be just linking my demo on my portfolio site as a proof of concept that yes, I made it, yes, it works, and if you're curious, here's a link to the code you can request independently of GitHub. Thank you in advance.
Saturday, November 8, 2025
Friday, November 7, 2025
Thursday, November 6, 2025
New top story on Hacker News: Supply chain attacks are exploiting our assumptions
Supply chain attacks are exploiting our assumptions
14 by crescit_eundo | 2 comments on Hacker News.
Wednesday, November 5, 2025
Tuesday, November 4, 2025
Sunday, November 2, 2025
Saturday, November 1, 2025
Friday, October 31, 2025
Thursday, October 30, 2025
Wednesday, October 29, 2025
Tuesday, October 28, 2025
Monday, October 27, 2025
New top story on Hacker News: 10M people watched a YouTuber shim a lock; the lock company sued him – bad idea
10M people watched a YouTuber shim a lock; the lock company sued him – bad idea
88 by Brajeshwar | 36 comments on Hacker News.
https://www.youtube.com/shorts/YjzlmKz_MM8
Sunday, October 26, 2025
Saturday, October 25, 2025
Friday, October 24, 2025
Thursday, October 23, 2025
Wednesday, October 22, 2025
New top story on Hacker News: Show HN: Create interactive diagrams with pop-up content
Show HN: Create interactive diagrams with pop-up content
5 by ttd | 0 comments on Hacker News.
This is a recent addition to Vexlio which I think the HN crowd may find interesting or useful. TL;DR: easy creation of interactive diagrams, meaning diagrams that have mouse click/hover hooks that you can use to display pop-up content. The end result can be shared with a no-sign-in-required web link. My thought is that this is useful for system docs, onboarding or user guides, presentations, etc. Anything where there is a high-level view that should remain uncluttered + important metadata or details that still need to be available somewhere. You can try it out without signing up for anything, just launch the app here ( https://app.vexlio.com/ ), create a shape, select it with the main pointer tool and then click "Add popup" on the context toolbar. I'd be grateful for any and all feedback!
Tuesday, October 21, 2025
Monday, October 20, 2025
Sunday, October 19, 2025
Saturday, October 18, 2025
Friday, October 17, 2025
Thursday, October 16, 2025
Wednesday, October 15, 2025
Tuesday, October 14, 2025
Monday, October 13, 2025
Sunday, October 12, 2025
New top story on Hacker News: Show HN: I built a simple ambient sound app with no ads or subscriptions
Show HN: I built a simple ambient sound app with no ads or subscriptions
11 by alpaca121 | 3 comments on Hacker News.
I’ve always liked having background noise while working or falling asleep, but I got frustrated that most “white noise” or ambient sound apps are either paywalled, stuffed with ads, or try to upsell subscriptions for basic features. So I made Ambi, a small iOS app with a clean interface and a set of freely available ambient sounds — rain, waves, wind, birds, that sort of thing. You can mix them, adjust volume levels, and just let it play all night or while you work. Everything works offline and there are no hidden catches. It’s something I built for myself first, but I figured others might find it useful too. Feedback, bugs, and suggestions are all welcome. https://ift.tt/qrJE0mb...
Saturday, October 11, 2025
Friday, October 10, 2025
Thursday, October 9, 2025
Wednesday, October 8, 2025
New top story on Hacker News: Show HN: I built a local-first podcast app
Show HN: I built a local-first podcast app
19 by aegrumet | 4 comments on Hacker News.
I worked on early podcast software in 2004 (iPodder/Juice) and have been a heavy podcast consumer ever since. I wanted a podcast app that respects your privacy and embraces the open web—and to explore what's possible in the browser. The result is wherever.audio, which you can try right now at the link above. How it works: It's a progressive web app that stores all your subscriptions and data locally in your browser using IndexedDB. Add it to your home screen and it feels native. Works offline with downloaded episodes. No central server storing your data—just some Cloudflare/AWS helpers to smooth out browser limitations. What makes it different: - True local-first: Your data stays on your device - Custom feeds: Add any RSS feed, not just what's in a directory - On-device search: Search across all feeds and episodes, including your custom ones - Podcasting 2.0 support: Chapters, transcripts, funding tags, and others - Auto-generated chapters: For popular shows that don't have them - AI-powered discovery: Ask questions to find shows and episodes (this feature does send queries to a 3rd party API, and also uses anonymized analytics while we work out the prompts) - Audio-guided tutorials: Interactive walkthroughs with voice guidance and visual cues The basics work well too: Standard playback features, queue management, speed controls, etc. I'm really interested in feedback—this is more passion project than business right now. I've been dogfooding it as my daily podcast app for over a year, and I'm open to exploring making it a business if people find it valuable. Curious if there are unmet needs that a privacy-focused, open web approach could address.
Tuesday, October 7, 2025
New top story on Hacker News: Show HN: Arc – high-throughput time-series warehouse with DuckDB analytics
Show HN: Arc – high-throughput time-series warehouse with DuckDB analytics
6 by ignaciovdk | 4 comments on Hacker News.
Hi HN, I’m Ignacio, founder at Basekick Labs. Over the past months I’ve been building Arc, a time-series data platform designed to combine very fast ingestion with strong analytical queries. What Arc does: ingests via a binary MessagePack API (fast path), stays compatible with Line Protocol for existing tools (like InfluxDB; I'm an ex-Influxer), stores data as Parquet with hourly partitions, and queries via the DuckDB engine using SQL. Why I built it: many systems force you to trade retention, throughput, or complexity. I wanted something where ingestion performance doesn’t kill your analytics. Performance & benchmarks so far: write throughput of ~1.88M records/sec (MessagePack, untuned) on my M3 Pro Max (14 cores, 16 GB RAM); ClickBench on AWS c6a.4xlarge: 35.18 s cold, ~0.81 s hot (43/43 queries succeeded). In those runs, caching was disabled to match benchmark rules; enabling cache in production gives ~20% faster repeated queries. I’ve open-sourced the Arc repo so you can dive into the implementation, benchmarks, and code. Would love your thoughts, critiques, and use-case ideas. Thanks!
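"Parquet with hourly partitions" typically means each record's timestamp determines a directory bucket, which an engine like DuckDB can then scan with a glob. The post doesn't show Arc's actual on-disk layout, so the path scheme below is purely illustrative:

```python
from datetime import datetime, timezone

def partition_path(measurement: str, epoch_seconds: float) -> str:
    """Bucket a record into an hourly partition directory by UTC timestamp."""
    t = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    return f"{measurement}/{t:%Y/%m/%d/%H}/data.parquet"
```

A layout like this is what makes the DuckDB side cheap: a query such as `SELECT ... FROM read_parquet('cpu/*/*/*/*/data.parquet')` can prune whole hours by path before touching any file contents.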
Monday, October 6, 2025
Sunday, October 5, 2025
Saturday, October 4, 2025
New top story on Hacker News: Show HN: Run – a CLI universal code runner I built while learning Rust
Show HN: Run – a CLI universal code runner I built while learning Rust
5 by esubaalew | 0 comments on Hacker News.
Hi HN — I’m learning Rust and decided to build a universal CLI for running code in many languages. The tool, Run, aims to be a single, minimal-dependency utility for: running one-off snippets (from CLI flags), running files, reading and executing piped stdin, and providing language-specific REPLs that you can switch between interactively. I designed it to support both interpreted languages (Python, JS, Ruby, etc.) and compiled languages (Rust, Go, C/C++). It detects languages from flags or file extensions, can compile temporary files for compiled languages, and exposes a unified REPL experience with commands like :help, :lang, and :quit. Install: cargo install run-kit (or use the platform downloads on GitHub). Source & releases: https://ift.tt/QAgyL0o I used Rust while following the official learning resources and used AI to speed up development, so I expect there are bugs and rough edges. I’d love feedback on: usability and UX of the REPL, edge cases for piping input to language runtimes, security considerations (sandboxing/resource limits), and packaging and cross-platform distribution. Thanks — I’ll try to answer questions and share design notes.
Friday, October 3, 2025
Thursday, October 2, 2025
Wednesday, October 1, 2025
New top story on Hacker News: Show HN: Glide, an extensible, keyboard-focused web browser
Show HN: Glide, an extensible, keyboard-focused web browser
38 by probablyrobert | 6 comments on Hacker News.
Tuesday, September 30, 2025
Monday, September 29, 2025
Sunday, September 28, 2025
Saturday, September 27, 2025
New top story on Hacker News: Americans Are Using PTO to Sleep, Not for Vacation–Report
Americans Are Using PTO to Sleep, Not for Vacation–Report
21 by randycupertino | 10 comments on Hacker News.
Friday, September 26, 2025
Thursday, September 25, 2025
Wednesday, September 24, 2025
Tuesday, September 23, 2025
Monday, September 22, 2025
Sunday, September 21, 2025
Saturday, September 20, 2025
Friday, September 19, 2025
Thursday, September 18, 2025
Wednesday, September 17, 2025
Tuesday, September 16, 2025
Monday, September 15, 2025
New top story on Hacker News: Show HN: AI-powered web service combining FastAPI, Pydantic-AI, and MCP servers
Show HN: AI-powered web service combining FastAPI, Pydantic-AI, and MCP servers
5 by Aherontas | 1 comments on Hacker News.
Hey all! I recently gave a workshop talk at PyCon Greece 2025 about building production-ready agent systems. To accompany the workshop, I put together a demo repo (I'll also add the slides to my blog soon: https://ift.tt/pH2RFUX ): https://ift.tt/GM1DKHZ... The idea was to show how multiple AI agents can collaborate using FastAPI + Pydantic-AI, with protocols like MCP (Model Context Protocol) and A2A (Agent-to-Agent) for safe communication and orchestration. Features: - Multiple agents running in containers - MCP servers (Brave search, GitHub, filesystem, etc.) as tools - A2A communication between services - Minimal UI for experimenting with tech-trend and repo analysis I built this repo because most agent frameworks look great in isolated demos, but fall apart when you try to glue agents together into a real application. My goal was to help people experiment with these patterns and move closer to real-world use cases. It’s not production-grade, but I'd love feedback, criticism, or war stories from anyone who’s tried building actual multi-agent systems. Big questions: do you think agent-to-agent protocols like MCP/A2A will stick, or will the future be mostly single powerful LLMs with plugin stacks? Thanks — excited to hear what the HN crowd thinks!
Sunday, September 14, 2025
Saturday, September 13, 2025
Friday, September 12, 2025
Thursday, September 11, 2025
Wednesday, September 10, 2025
Tuesday, September 9, 2025
Monday, September 8, 2025
Sunday, September 7, 2025
Saturday, September 6, 2025
Friday, September 5, 2025
Thursday, September 4, 2025
New top story on Hacker News: A high schooler writes about AI tools in the classroom
A high schooler writes about AI tools in the classroom
76 by dougb5 | 69 comments on Hacker News.
https://ift.tt/pCBoTEe
Wednesday, September 3, 2025
New top story on Hacker News: Vector search on our codebase transformed our SDLC automation
Vector search on our codebase transformed our SDLC automation
9 by antonybrahin | 0 comments on Hacker News.
Hey HN, In software development, the process of turning a user story into detailed documentation and actionable tasks is critical. However, this manual process can often be a source of inconsistency and a significant time investment. I was driven to see if I could streamline and elevate it. I know this is a hot space, with big players like GitHub and Atlassian building integrated AI, and startups offering specialized platforms. My goal wasn't to compete with them, but to see what was possible by building a custom, "glass box" solution using the best tools for each part of the job, without being locked into a single ecosystem. What makes this approach different is the flexibility and full control. Instead of a pre-packaged product, this is a resilient workflow built on Power Automate, which acts as the orchestrator for a sequence of API calls: Five calls to the Gemini API for the core generation steps (requirements, tech spec, test strategy, etc.). One call to an Azure OpenAI model to create vector embeddings of our codebase. One call to Azure AI Search to perform the Retrieval-Augmented Generation (RAG). This was the key to getting context-aware, non-generic outputs. It reads our actual code to inform the technical spec and tasks. A bunch of direct calls to the Azure DevOps REST API (using a PAT) to create the wiki pages and work items, since the standard connectors were a bit limited. The biggest challenge was moving beyond simple prompts and engineering a resilient system. Forcing the final output into a rigid JSON schema instead of parsing text was a game-changer for reliability. The result is a system that saves us hours on every story and produces remarkably consistent, high-quality documentation and tasks. The full write-up with all the challenges, final prompts, and screenshots is in the linked blog post. I’m here to answer any questions. Would love to hear your feedback and ideas!
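The RAG step described above — embed the codebase once, then retrieve the most relevant chunks for each user story — reduces to nearest-neighbor search over vectors. The post delegates this to Azure AI Search; the in-memory cosine-similarity sketch below only illustrates the retrieval idea, not that service's API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, chunks, k=3):
    """Return the texts of the k chunks most similar to the query embedding.

    Each chunk is a dict with a "text" and a precomputed embedding "vec".
    """
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return [c["text"] for c in ranked[:k]]
```

In the workflow above, `query_vec` would come from embedding the user story, and the retrieved chunk texts would be spliced into the Gemini prompts that generate the tech spec and tasks.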
Tuesday, September 2, 2025
Monday, September 1, 2025
Sunday, August 31, 2025
Saturday, August 30, 2025
Friday, August 29, 2025
Thursday, August 28, 2025
New top story on Hacker News: Show HN: Grammit – Local-only AI grammar checker (Chrome extension)
Show HN: Grammit – Local-only AI grammar checker (Chrome extension)
9 by scottfr | 0 comments on Hacker News.
Hey HN, I wanted a grammar checker that didn’t send my writing to someone's servers, so we built Grammit, a Chrome extension that runs grammar checks locally using an LLM. Your text never leaves your computer during checking. Here’s a 2-minute overview: https://ift.tt/sey7EpU Because it uses an LLM, it catches more than spelling and grammar. For example, it can correct some wrong statements like “The first US president was Benjamin Franklin.” Grammit also includes an in-page writing assistant that can rephrase or draft new text. It also uses the local LLM. We used many new web features to build this, such as: - Chrome’s new Prompt API to talk to the local model. - Anchor Positioning API to place the UI with minimal impact on the DOM. - CSS Custom Highlights API for inline error marking. - The new CSS sign() function to create CSS-driven layout with discontinuities. Part of the fun of being early adopters of bleeding edge tech is we’re discovering new Chrome bugs (e.g., https://ift.tt/L9bwa7D , https://ift.tt/K9zurEX ). I’d love your feedback on: - Where the UX feels rough - What do you think of the corrections and suggestions Happy to answer questions about the tech or the Prompt API. Thanks for trying it out! Chrome Web Store extension link: https://ift.tt/IUF8wnQ...
Wednesday, August 27, 2025
New top story on Hacker News: Malicious versions of Nx and some supporting plugins were published
Malicious versions of Nx and some supporting plugins were published
93 by longcat | 242 comments on Hacker News.
See also: https://ift.tt/DVJFk9P... https://ift.tt/wcX8qaI...
Tuesday, August 26, 2025
New top story on Hacker News: Show HN: SecretMemoryLocker – File Encryption Without Static Passwords
Show HN: SecretMemoryLocker – File Encryption Without Static Passwords
4 by YuriiDev | 0 comments on Hacker News.
I built SecretMemoryLocker ( https://ift.tt/boKeGJa ), a file encryption tool that generates keys dynamically from your answers to personal questions instead of using a static master password. This makes offline brute-force attacks much more difficult. Think of it as a password manager that meets mnemonic seed recovery, but without storing any sensitive keys on disk. Why? I kept losing master passwords and wanted a solution that wasn't tied to a single point of failure. I also wanted to create a "digital legacy" that my family could access only under specific conditions. The core principle is knowledge-based encryption: the key only exists in memory when you provide the correct answers. Status: * MVP is ready for Windows (.exe). * Linux and macOS support is planned. * UI is available in English, Spanish, and Ukrainian. Key Features: * No Static Secrets: No master password or seed phrase is ever stored. The key is reconstructed on the fly. * Knowledge-Based Key Generation: The final encryption key is derived from a combination of your personal answers and file metadata. * Offline Brute-Force Resistance: Uses MirageLoop, a decoy system that activates when incorrect answers are entered. Instead of decrypting real data, it generates an endless sequence of AI-created questions from a secure local database, creating an illusion of progress while keeping your real data untouched. * Offline AI Generation Mode: Optional offline Q&A generator (prototype). How It Works (Simplified): 1) Files are packed into an AES-256 encrypted ZIP archive. 2) A JSON key file stores the questions in an encrypted chain. Each subsequent question is encrypted with a key derived from the previous correct answer and the file's hash. This forces you to answer them sequentially. 3) The final encryption key for the ZIP file is derived by combining the hashes of all your correct answers. The key derivation formula looks like this: K_final = SHA256(H(answer1+file_hash) + H(answer2+file_hash) + ...) 
(Note: We are aware that a fast hash like SHA256 is not ideal for a KDF. We plan to migrate to Argon2 in a future release to further strengthen resistance against brute-force attacks.) To encrypt, you provide a file. This creates two outputs: your_file.txt → your_file_SMLkey.json + your_file_SecretML.zip To decrypt, you need both files and the correct answers. Install & Quick Start: Download the EXE from GitHub Releases (no dependencies needed): https://ift.tt/VcifBEl Encrypt: SecretMemoryLocker.exe --encrypt "C:\docs\important.pdf" Decrypt: SecretMemoryLocker.exe --decrypt "C:\docs\important_SMLkey.json" I would love to get your feedback on the concept, the user experience, and any security assumptions I've made. Thanks!
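The stated derivation formula can be sketched directly. This mirrors only the formula as written in the post (the shipped tool's exact byte encoding will differ, and the author plans to replace the fast SHA-256 step with Argon2):

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest, the H(...) in the post's formula."""
    return hashlib.sha256(data).digest()

def derive_key(answers, file_hash: bytes) -> bytes:
    """K_final = SHA256(H(answer1+file_hash) + H(answer2+file_hash) + ...)."""
    concatenated = b"".join(h(a.encode("utf-8") + file_hash) for a in answers)
    return h(concatenated)
```

Two properties follow from the construction: any single wrong answer changes its per-answer hash and therefore the final key, and binding `file_hash` into every term ties the key to one specific archive.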
Monday, August 25, 2025
New top story on Hacker News: Show HN: Stagewise – frontend coding agent for real codebases
Show HN: Stagewise – frontend coding agent for real codebases
3 by glenntws | 1 comments on Hacker News.
Hey HN, we're Glenn and Julian, and we're building stagewise ( https://stagewise.io ), a frontend coding agent that runs inside your app’s dev mode and makes changes in your local codebase. We’re compatible with any framework and any component library. Think of it like v0 or Lovable, but working locally and with any existing codebase.

You can spawn the agent into locally running web apps in dev mode with `npx stagewise` from the project root. The agent then lets you click on HTML elements in your app, enter prompts like 'increase the height here', and it will implement the changes in your source code.

Before stagewise, we were building a vertical SaaS for logistics from scratch and loved using prototyping tools like v0 or Lovable to get to the first version. But when switching from v0/Lovable to Cursor for local development, we felt like the frontend magic was gone. So we decided to build stagewise to bring that same magic to local development. The first version of stagewise just forwarded a prompt with browser context to existing IDEs and agents (Cursor, Cline, ..) and went viral on X after we open-sourced it. However, the APIs of existing coding agents were very limiting, so we figured that building our own agent would unlock the full potential of stagewise.

Since our last Show HN ( https://ift.tt/wRjQTm1 ), we launched a few very important features and changes: you now have a proprietary chat history with the agent, an undo button to revert changes, and we increased the amount of free credits AND reduced the pricing by 50%. We made a video about all these changes, showing you how stagewise works: https://ift.tt/jI8L6Xi... . So far, we've seen great adoption from non-technical users who wanted to continue building their Lovable prototype locally. We personally use the agent almost daily to make changes to our landing page and to build the UI of new features on our console ( https://ift.tt/uscD0YJ ).
If you have an app running in dev mode, simply `cd` into the app directory and run `npx stagewise` - the agent should appear, ready to play with. We're very excited to hear your feedback!
Sunday, August 24, 2025
Saturday, August 23, 2025
Friday, August 22, 2025
Thursday, August 21, 2025
Wednesday, August 20, 2025
Tuesday, August 19, 2025
New top story on Hacker News: CRLite: Certificate Revocation Checking in Firefox
CRLite: Certificate Revocation Checking in Firefox
11 by TangerineDream | 0 comments on Hacker News.
Monday, August 18, 2025
New top story on Hacker News: Show HN: Whispering – Open-source, local-first dictation you can trust
Show HN: Whispering – Open-source, local-first dictation you can trust
13 by braden-w | 5 comments on Hacker News.
Hey HN! Braden here, creator of Whispering, an open-source speech-to-text app.

I really like dictation. For years, I relied on transcription tools that were almost good, but they were all closed-source. Even a lot of them that claimed to be “local” or “on-device” were still black boxes that left me wondering where my audio really went. So I built Whispering. It’s open-source, local-first, and most importantly, transparent with your data. All your data is stored locally on your device.

For me, the features were good enough that I left my paid tools behind (I used Superwhisper and Wispr Flow before). Productivity apps should be open-source and transparent with your data, but they also need to match the UX of paid, closed-source alternatives. I hope Whispering is near that point. I use it for several hours a day, from coding to thinking out loud while carrying pizza boxes back from the office. Here’s an overview: https://www.youtube.com/watch?v=1jYgBMrfVZs , and here’s how I personally am using it with Claude Code these days: https://www.youtube.com/watch?v=tpix588SeiQ .

There are plenty of transcription apps out there, but I hope Whispering adds some extra competition from the OSS ecosystem (one of my other OSS favorites is Handy https://ift.tt/EOmZBAY ). Whispering has a few tricks up its sleeve, like a voice-activated mode for hands-free operation (no button holding), and customizable AI transformations with any prompt/model.

Whispering used to be in my personal GH repo, but I recently moved it into a larger project called Epicenter ( https://ift.tt/xl0ARmP ), which I should explain a bit... I’m basically obsessed with local-first open-source software. I think there should be an open-source, local-first version of every app, and I would like them all to work together. The idea of Epicenter is to store your data in a folder of plaintext and SQLite, and build a suite of interoperable, local-first tools on top of this shared memory.

Everything is totally transparent, so you can trust it. Whispering is the first app in this effort. It’s not there yet regarding memory, but it’s getting there. I’ll probably write more about the bigger picture soon, but mainly I just want to make software and let it speak for itself (no pun intended in this case!), so this is my Show HN for now.

I just finished college and was about to move back in with my parents to work on this instead of getting a job…and then I somehow got into YC. So my current plan is to cover my living expenses and use the YC funding to support maintainers, our dependencies, and people working on their own open-source local-first projects. More on that soon.

Would love your feedback, ideas, and roasts. If you would like to support the project, star it on GitHub here ( https://ift.tt/xl0ARmP ) and join the Discord here ( https://ift.tt/aitfUMo ). Everything’s MIT licensed, so fork it, break it, ship your own version, copy whatever you want!