AMD Disables Zen 4's Loop Buffer
14 by luyu_wu | 1 comment on Hacker News.
Saturday, November 30, 2024
Friday, November 29, 2024
Thursday, November 28, 2024
Wednesday, November 27, 2024
Tuesday, November 26, 2024
Monday, November 25, 2024
New top story on Hacker News: Show HN: Minimal, customizable new tab for Chrome/Firefox
Show HN: Minimal, customizable new tab for Chrome/Firefox
13 by georg-stone | 6 comments on Hacker News.
Hello HN! Flowtide is a project I have been working on for about two months now. It is a customizable new tab page for Firefox and Chrome. By default it is configured with a minimal set of features, but it can be extended with a clock, a to-do list, or even soundscapes. Install: https://flowtide.app/ GitHub: https://ift.tt/QJx0X7K
Sunday, November 24, 2024
Saturday, November 23, 2024
Friday, November 22, 2024
Thursday, November 21, 2024
Wednesday, November 20, 2024
Urban voters disappoint in Maharashtra: Mumbai, Pune and Thane record the lowest turnout; see the figures
Maharashtra assembly elections 2024
New Delhi: Voting in the single (first and final) phase of the Maharashtra assembly elections took place today, and the figures released after polling closed at 6 pm are not very encouraging. According to the Election Commission, voter participation was low in cities such as Mumbai, Pune and Thane. By 5 pm, turnout stood at 58.22 percent in Maharashtra and 67.59 percent in Jharkhand; in 2019, these Jharkhand assembly seats had recorded 67.04 percent turnout.
Tuesday, November 19, 2024
Monday, November 18, 2024
New top story on Hacker News: Show HN: FastGraphRAG – Better RAG using good old PageRank
Show HN: FastGraphRAG – Better RAG using good old PageRank
22 by liukidar | 5 comments on Hacker News.
Hey there HN! We’re Antonio, Luca, and Yuhang, and we’re excited to introduce Fast GraphRAG, an open-source RAG approach that leverages knowledge graphs and the 25-year-old PageRank algorithm for better information retrieval and reasoning.

Building a good RAG pipeline these days takes a lot of manual optimization. Most engineers intuitively start from naive RAG: throw everything into a vector database and hope that semantic search is powerful enough. This can work for use cases where accuracy isn’t too important and hallucinations are tolerable, but it doesn’t work for more difficult queries that involve multi-hop reasoning or deeper domain understanding. It is also nearly impossible to debug. To address these limitations, many engineers find themselves adding extra layers like agent-based preprocessing, custom embeddings, reranking mechanisms, and hybrid search strategies. Much like the early days of machine learning, when we manually crafted feature vectors to squeeze out marginal gains, building an effective RAG system often becomes an exercise in engineering “hacks.”

Earlier this year, Microsoft seeded the idea of using knowledge graphs for RAG and published GraphRAG. We believe there is incredible potential in this idea, but existing implementations are naive in how they create and explore the graph. That’s why we developed Fast GraphRAG, with a new algorithmic approach built on good old PageRank.

There are two main challenges when building a reliable RAG system: (1) Data noise: real-world data is often messy. Customer support tickets, chat logs, and other conversational data can include a lot of irrelevant information. If you push noisy data into a vector database, you’re likely to get noisy results. (2) Domain specialization: for complex use cases, a RAG system must understand the domain-specific context.
This requires representations that capture not just the words but the deeper relationships and structures within the data.

Our solution builds on these insights by incorporating knowledge graphs into the RAG pipeline. Knowledge graphs store entities and their relationships, and can help structure data in a way that enables more accurate and context-aware information retrieval. Twelve years ago Google announced the Knowledge Graph we all know about [1]. It was a pioneering move. Now we have LLMs, meaning people can finally do RAG on their own data with tools that can be as powerful as Google’s original idea. Before we built this, Antonio was at Amazon, while Luca and Yuhang were finishing their PhDs at Oxford. We had been thinking about this problem for years, and we always loved the parallel between PageRank and human memory [2]. We believe that searching for memories is remarkably similar to searching the web.

Here’s how it works:
- Entity and relationship extraction: Fast GraphRAG uses LLMs to extract entities and their relationships from your data and stores them in a graph format [3].
- Query processing: when you make a query, Fast GraphRAG starts by finding the most relevant entities using vector search, then runs a personalized PageRank algorithm to determine the most important “memories,” or pieces of information, related to the query [4].
- Incremental updates: unlike other graph-based RAG systems, Fast GraphRAG natively supports incremental data insertion, so you can continuously add new data without reprocessing the entire graph.
- Speed: these design choices make our algorithm faster and more affordable to run than other graph-based RAG systems, because we eliminate the need for communities and clustering.

Suppose you’re analyzing a book and want to focus on character interactions, locations, and significant events:

    from fast_graphrag import GraphRAG

    DOMAIN = ("Analyze this story and identify the characters. Focus on how "
              "they interact with each other, the locations they explore, "
              "and their relationships.")

    EXAMPLE_QUERIES = [
        "What is the significance of Christmas Eve in A Christmas Carol?",
        "How does the setting of Victorian London contribute to the story's themes?",
        "Describe the chain of events that leads to Scrooge's transformation.",
        "How does Dickens use the different spirits (Past, Present, and Future) to guide Scrooge?",
        "Why does Dickens choose to divide the story into \"staves\" rather than chapters?",
    ]

    ENTITY_TYPES = ["Character", "Animal", "Place", "Object", "Activity", "Event"]

    grag = GraphRAG(
        working_dir="./book_example",
        domain=DOMAIN,
        example_queries="\n".join(EXAMPLE_QUERIES),
        entity_types=ENTITY_TYPES,
    )

    with open("./book.txt") as f:
        grag.insert(f.read())

    print(grag.query("Who is Scrooge?").response)

This code creates a domain-specific knowledge graph based on your data, example queries, and specified entity types. You can then query it in plain English while it automatically handles data fetching, entity extraction, co-reference resolution, memory election, and so on. When you add new data, locking and checkpointing are handled for you as well. This is the kind of infrastructure that GenAI apps need to handle large-scale real-world data. Our goal is to give you that infrastructure so you can focus on what’s important: building great apps for your users without having to hand-engineer a retrieval pipeline.

In the managed service, we also have a suite of UI tools for exploring and debugging your knowledge graph. We have a free hosted solution with up to 100 monthly requests. When you’re ready to grow, we have paid plans that scale with you, and of course you can self-host our open-source engine. Give us a spin today at https://circlemind.co and see our code at https://ift.tt/lXzjWo8 We’d love feedback :)

[1] https://ift.tt/Ow8FjoM...
[2] Griffiths, T. L., Steyvers, M., & Firl, A. (2007). Google and the Mind: Predicting Fluency with PageRank. Psychological Science, 18(12), 1069–1076. https://ift.tt/OZ0R9fb
[3] Similar to Microsoft’s GraphRAG: https://ift.tt/W6YFs4a
[4] Similar to OSU’s HippoRAG: https://ift.tt/numkr9D
https://ift.tt/a0C84ek
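The query-processing step the post describes (vector-matched seed entities followed by a personalized PageRank over the knowledge graph) can be sketched in a few lines of plain Python. This is an illustrative toy, not Fast GraphRAG's actual implementation; the graph, entity names, and seed weights are all invented for the example:

```python
# Toy personalized PageRank: random walks restart at the seed entities
# (stand-ins for vector-search hits), so scores concentrate around them.
def personalized_pagerank(graph, seeds, alpha=0.85, iters=50):
    """graph: {node: [neighbors]} (no dangling nodes), seeds: {node: weight}."""
    nodes = list(graph)
    total = sum(seeds.values())
    restart = {n: seeds.get(n, 0.0) / total for n in nodes}
    scores = dict(restart)
    for _ in range(iters):
        # Each node keeps (1 - alpha) of the restart mass...
        nxt = {n: (1 - alpha) * restart[n] for n in nodes}
        # ...and spreads alpha of its current score evenly to its neighbors.
        for n in nodes:
            for m in graph[n]:
                nxt[m] += alpha * scores[n] / len(graph[n])
        scores = nxt
    return scores

# Invented mini knowledge graph of entities from "A Christmas Carol".
graph = {
    "Scrooge": ["Marley", "Counting House", "Bob Cratchit"],
    "Marley": ["Scrooge", "Counting House"],
    "Counting House": ["Scrooge", "Marley", "Bob Cratchit"],
    "Bob Cratchit": ["Scrooge", "Counting House", "Tiny Tim"],
    "Tiny Tim": ["Bob Cratchit"],
}

# Suppose vector search matched the query "Who is Scrooge?" to this entity.
scores = personalized_pagerank(graph, seeds={"Scrooge": 1.0})
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # the seed and its neighborhood dominate the ranking
```

The highest-scoring entities play the role of the "memories" returned for the query; biasing the restart distribution toward the seeds is what makes the walk personalized rather than global.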
Sunday, November 17, 2024
Saturday, November 16, 2024
Friday, November 15, 2024
Thursday, November 14, 2024
Wednesday, November 13, 2024
New top story on Hacker News: Show HN: Konga Beat – A custom track editor for Donkey Konga 2 and 3
Show HN: Konga Beat – A custom track editor for Donkey Konga 2 and 3
31 by CIARobotFish | 7 comments on Hacker News.
Howdy HN! For those who don't know, back in the early 2000s, Nintendo and Namco developed a series of music rhythm games for the GameCube featuring Donkey Kong called Donkey Konga: https://ift.tt/RhCuPST The Donkey Konga games borrowed heavily from Taiko no Tatsujin (another music rhythm game by Namco). However, instead of taiko drums, the player would use DK Bongos to jam along with music from different eras and genres. Long story short, I figured out how to add custom tracks to some of the Donkey Konga games (Donkey Konga 2 and 3) but found the entire process cumbersome, so I decided to make a dedicated editor. It was a lot of fun to make, and I hope others get some enjoyment out of it too!
Tuesday, November 12, 2024
New top story on Hacker News: Large Language Models in National Security Applications
Large Language Models in National Security Applications
34 by bindidwodtj | 9 comments on Hacker News.
Monday, November 11, 2024
Sunday, November 10, 2024
Saturday, November 9, 2024
Friday, November 8, 2024
New top story on Hacker News: Pirating "The Pirate Bay" TV Series Is Ironically Difficult
Pirating "The Pirate Bay" TV Series Is Ironically Difficult
20 by HieronymusBosch | 5 comments on Hacker News.
Thursday, November 7, 2024
Wednesday, November 6, 2024
New top story on Hacker News: Launch HN: Midship (YC S24) – Turn unstructured documents into usable data
Launch HN: Midship (YC S24) – Turn unstructured documents into usable data
6 by maxmaio | 1 comment on Hacker News.
Hey HN, we are Max, Kieran, and Aahel from Midship ( https://midship.ai ). Midship makes it easy to extract data from unstructured documents like PDFs and images. Here’s a video showing it in action: https://ift.tt/W4wFRue?... , and a demo playground (no signup required!) to test it out: https://ift.tt/QRsAd1b We started five months ago, initially trying to make an AI natural-language workflow builder that would be a simpler alternative to Zapier or Make.com. However, most of our users seemed much more interested in the basic (and not very good) document extraction feature we had. Seeing how people were spending hours a day manually extracting data from PDFs inspired us to build what has become Midship! The problem is that, despite all our progress in software, huge amounts of business data still live in PDFs and images. Sure, you can OCR them, but getting clean, structured data out is still painful. Most existing tools just give you a blob of markdown, leaving you to figure out which parts matter and how they relate. We've found that combining OCR with language models lets us do something more useful: extract the specific fields and tables that users actually care about. The LLMs help correct OCR mistakes and understand context (like knowing that "Inv#" and "Invoice Number" mean the same thing). We have two main kinds of users today: non-technical users who extract data via our web app, and developers who use our extraction API. We were initially focused on the former, as they seemed like an underserved part of the market, but we’ve received a lot of interest from developers who face the same issues. For pricing, we currently charge a monthly SaaS fee per seat for the web app and volume-based pricing for the API. We’re really excited to share what we’ve built so far and look forward to any feedback from the community!
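The core idea the post describes, mapping the messy field labels a document actually uses onto one fixed schema so that "Inv#" and "Invoice Number" land in the same structured field, can be sketched without any OCR or LLM machinery. This is a hypothetical toy, not Midship's API: in practice an LLM would do the label matching, where this sketch uses a hard-coded synonym table:

```python
# Target schema: canonical field name -> label variants seen in the wild.
# Both the schema and the sample document lines are invented for illustration.
SCHEMA = {
    "invoice_number": ["invoice number", "inv#", "inv no", "invoice #"],
    "total": ["total", "amount due", "total due"],
}

def extract_fields(ocr_lines):
    """Match 'Label: value' lines against the schema's known label variants."""
    out = {}
    for line in ocr_lines:
        if ":" not in line:
            continue  # not a label/value pair; a real system would do more here
        label, value = line.split(":", 1)
        key = label.strip().lower()
        for field, variants in SCHEMA.items():
            if key in variants:
                out[field] = value.strip()
    return out

# Pretend these lines came out of an OCR pass over an invoice.
doc = ["Inv#: 12345", "Amount Due: $99.50", "Notes: thanks!"]
print(extract_fields(doc))  # {'invoice_number': '12345', 'total': '$99.50'}
```

An LLM replaces the synonym table with judgment: it can match labels it has never seen and fix OCR misreads, but the output contract, document in, fixed schema out, is the same.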
