Show HN: Tinder, but to Decide What to Eat (Score: 151+ in 11 hours)
Link:
Comments:
Hello HN,
My girlfriend and I waste too much energy deciding what to eat. Every day we would text each other "what do we eat tonight?" messages, go over the options, and often spend far too long deciding. I'm an indie dev, so I created this app to solve my own problem: deciding with my girlfriend what to eat for dinner.
Initially I created a simple app in which we listed all the recipes we had ever prepared, and it would randomly propose three of them; we would then choose one together. This app[0] turned into a Tinder-like app that proposes a set of recipes to my girlfriend and me every day - we swipe and go with the first match.
If you have some time, give it a try - feedback is very much appreciated!
Cheers,
Kiru
[0]
11/4/2024, 8:40:06 AM
Ask HN: What would you preserve if the internet were to go down tomorrow? (Score: 150+ in 1 day)
Link:
thought experiment: if the internet were to go down tomorrow for an indefinite period, what content would you most want to download and preserve?
11/4/2024, 7:10:19 AM
Tell HN: We (Causal) got acquired – thank you HN (Score: 150+ in 13 hours)
Link:
Hi HN, I'm the co-founder/CEO of Causal. We just announced our acquisition:
Just wanted to say a big thank you to the HN community —
Everything started with this post from over 5 years ago [1] which gave me and Lukas (CTO) enough conviction to quit our jobs to work on Causal full-time.
A few months later we launched an Excel sensitivity tool [2]. In 2022 we shared how we scaled our calculation engine to billions of cells [3], and a few months ago we had a successful Show HN of Causal 2.0 [4].
The product is now used by 100s of startups (many YC companies) and we're excited for this next phase of growth within Lucanet :)
[1]
[2]
[3]
[4]
11/2/2024, 6:30:18 AM
Ask HN: Who is hiring? (November 2024) (🔥 Score: 152+ in 3 hours)
Link:
NEW RULE: Please only post a job if you actually intend to fill a position
and are committed to responding to everyone who applies.
----
Please state the location and include REMOTE for remote work, REMOTE (US)
or similar if the country is restricted, and ONSITE when remote work is not an option.
Please only post if you personally are part of the hiring company—no
recruiting firms or job boards. One post per company. If it isn't a household name,
explain what your company does.
Commenters: please don't reply to job posts to complain about
something. It's off topic here.
Readers: please only email if you are personally interested in the job.
Searchers: try , ,
, , .
Don't miss these other fine threads:
Who wants to be hired?
Freelancer? Seeking freelancer?
11/1/2024, 6:40:02 PM
Show HN: Trench – Open-source analytics infrastructure (❄️ Score: 150+ in 4 days)
Link:
Comments:
Hey HN! I want to share a new open source project I've been working on called Trench (). It's open source analytics infrastructure for tracking events, page views, and identifying users, and it's built on top of ClickHouse and Kafka.
I built Trench because the Postgres table we used for tracking events at our startup () was getting expensive and becoming a performance bottleneck as we scaled to millions of end users.
Many companies run into the same problem as us (e.g. Stripe, Heroku: ). They often start by adding a basic events table to their relational database, which works at first, but can become an issue as the application scales. It’s usually the biggest table in the database, the slowest one to query, and the longest one to back up.
With Trench, we’ve put together a single Docker image that gives you a production-ready tracking event table built for scale and speed. When we migrated our tracking table from Postgres to Trench, we saw a 42% reduction in cost to serve on our primary Postgres cluster and all lag spikes from autoscaling under high traffic were eliminated.
Here are some of the core features:
* Fully compliant with the Segment tracking spec, e.g. track(), identify(), group(), etc. (see the sketch after this list)
* Can handle thousands of events per second on a single node
* Query tracking data in real-time with read-after-write guarantees
* Send data anywhere with throttled and batched webhooks
* Single production-ready Docker image. No need to roll and manage your own Kafka/ClickHouse/Node.js/etc.
* Easily plugs into any cloud hosted ClickHouse and Kafka solutions e.g. ClickHouse Cloud, Confluent
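To make the Segment-style spec concrete, here is a minimal sketch of sending a track() event to a self-hosted Trench instance over HTTP. The endpoint path, port, bearer-token header, and the {"events": [...]} batching shape are assumptions for illustration rather than Trench's documented API; only the payload fields follow the public Segment tracking spec.

    # Minimal sketch: send a Segment-spec track() event to a Trench instance.
    # NOTE: the URL, path, and auth header are assumptions for illustration;
    # only the payload shape follows the public Segment tracking spec.
    import datetime
    import requests

    TRENCH_URL = "http://localhost:4000/events"   # hypothetical endpoint
    API_KEY = "your-public-api-key"               # hypothetical credential

    event = {
        "type": "track",
        "userId": "user-123",
        "event": "ConnectedAccount",
        "properties": {"provider": "google", "plan": "pro"},
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

    resp = requests.post(
        TRENCH_URL,
        json={"events": [event]},                 # batching shape is also assumed
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=5,
    )
    resp.raise_for_status()

In principle, sticking to the Segment spec means existing client libraries and event schemas can be pointed at a tracking table like this with minimal changes.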
Trench can be used for a range of use cases. Here are some possibilities:
1. Real-Time Monitoring and Alerting: Set up real-time alerts and monitoring for your services by tracking custom events like errors, usage spikes, or specific user actions and sending that data anywhere with Trench’s webhooks
2. Event Replay and Debugging: Capture all user interactions in real-time for event replay
3. A/B Testing Platform: Capture events from different users and groups in real time. Segment users by querying in real time and serve the right experiences to the right users
4. Product Analytics for SaaS Applications: Embed Trench into your existing SaaS product to power user audit logs or tracking scripts on your end-users’ websites
5. Build a custom RAG model: Easily query event data and give users answers in real time. LLMs are really good at writing SQL.
The project is open-source and MIT-licensed. If there's interest, we're thinking about adding support for Elasticsearch, direct data integrations (e.g. Redshift, S3, etc.), and an admin interface for creating queries, webhooks, etc.
Have you experienced the same issues with your events tables? I'd love to hear what HN thinks about the project.
10/30/2024, 12:30:04 PM
Show HN: Unforget, the note-taking app I always wanted: offline first, encrypted (Score: 150+ in 13 hours)
Link:
Comments:
Hi HN! I created Unforget out of years of frustration with Google Keep and the lack of alternatives that met all my needs. I hope you find it useful too!
Features include:
- import from Google Keep
- offline first including search
- sync when online
- own your data and fully encrypted
- Desktop, mobile, web
- lightweight, progressive web app without Electron.js
- markdown support
- programmable with public APIs
- open source [1]
While I still use org mode for long-form notes with lots of code, Unforget has become my go-to for quickly jotting down ideas and to-do lists after migrating the thousands of notes I had on Google Keep.
In addition, I'm thrilled to announce the opening of our software agency, Computing Den [2]. We specialize in helping businesses transition from legacy software, manual workflows, and Excel spreadsheets to modern, automated systems. Please get in touch to discuss how we can help you, or if you wish to join our team.
[1]
[2]
6/12/2024, 2:40:00 AM
ARC Prize – a $1M+ competition towards open AGI progress (Score: 151+ in 5 hours)
Link:
Comments:
Hey folks! Mike here. Francois Chollet and I are launching ARC Prize, a public competition to beat and open-source the solution to the ARC-AGI eval.
ARC-AGI is (to our knowledge) the only eval which measures AGI: a system that can efficiently acquire new skills and solve novel, open-ended problems. Most AI evals measure skill directly rather than the acquisition of new skills.
Francois created the eval in 2019; SOTA was 20% at inception and is only 34% today. Humans score 85-100%. 300 teams attempted ARC-AGI last year, and several of the bigger labs have tried it as well.
While most other skill-based evals have rapidly saturated to human level, ARC-AGI was designed to resist "memorization" techniques (e.g. LLMs).
Solving ARC-AGI tasks is quite easy for humans (even children) but impossible for modern AI. You can try ARC-AGI tasks yourself here:
ARC-AGI consists of 400 public training tasks, 400 public test tasks, and 100 secret test tasks. Every task is novel. SOTA is measured against the secret test set which adds to the robustness of the eval.
Solving ARC-AGI tasks requires no world knowledge, no understanding of language. Instead each puzzle requires a small set of “core knowledge priors” (goal directedness, objectness, symmetry, rotation, etc.)
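To make the task format concrete, here is a small sketch of an ARC-AGI task's structure as Python data. The grids below are made up for illustration (a simple mirror-the-grid rule); real tasks are JSON files with the same train/test layout, where each grid cell is an integer 0-9 denoting a color.

    # Illustrative sketch of an ARC-AGI task's structure.
    # The grids here are invented (rule: mirror the input left-to-right);
    # actual tasks are JSON with the same "train"/"test" layout.
    task = {
        "train": [
            {"input": [[1, 0, 0],
                       [0, 2, 0]],
             "output": [[0, 0, 1],
                        [0, 2, 0]]},
            {"input": [[3, 3, 0],
                       [0, 0, 4]],
             "output": [[0, 3, 3],
                        [4, 0, 0]]},
        ],
        "test": [
            {"input": [[5, 0, 0],
                       [0, 0, 6]],
             "output": [[0, 0, 5],
                        [6, 0, 0]]},  # outputs are hidden on the secret test set
        ],
    }

    # A solver must infer the transformation from the train pairs alone
    # and produce the output grid for each test input.
    def mirror(grid):
        return [row[::-1] for row in grid]

    for pair in task["train"]:
        assert mirror(pair["input"]) == pair["output"]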
At minimum, a solution to ARC-AGI opens up a completely new programming paradigm where programs can perfectly and reliably generalize from an arbitrary set of priors. At maximum, it unlocks the tech tree towards AGI.
Our goals with this competition are:
1. Increase the number of researchers working on frontier AGI research (vs tinkering with LLMs). We need new ideas and the solution is likely to come from an outsider!
2. Establish a popular, objective measure of AGI progress that the public can use to understand how close we are to AGI (or not). Every new SOTA score will be published here:
3. Beat ARC-AGI and learn something new about the nature of intelligence.
Happy to answer questions!
6/11/2024, 11:00:02 PM
Show HN: Thread – AI-powered Jupyter Notebook built using React (Score: 150+ in 1 day)
Link:
Comments:
Hey HN, we're building Thread (), an open-source Jupyter Notebook that has a bunch of AI features built in. The easiest way to think of Thread is as the chat interface of OpenAI's code interpreter fused into a Jupyter Notebook development environment, where you can still edit code or re-run cells. To check it out, you can see a video demo here:
We initially got the idea when building Vizly (), a tool that lets non-technical users ask questions about their data. While Vizly is powerful at performing data transformations, as engineers we often felt that natural language didn't give us enough freedom to edit the generated code or to explore the data further ourselves. That is what gave us the inspiration to start Thread.
We made Thread a pip package (`pip install thread-dev`) because we wanted to make it as easily accessible as possible. While there are a lot of tools that improve on the notebook development experience, they are often cloud-hosted and hard to access as an individual contributor unless your company has signed an enterprise agreement.
With Thread, we are hoping to bring the power of LLMs to the local notebook development environment while blending in the editing experience you get in a cloud-hosted notebook. We have many ideas on the roadmap, but instead of building in a vacuum (a mistake we have made before), we wanted to get some initial feedback and see if others are as interested in a tool like this as we are.
Would love to hear your feedback and see what you think!
6/11/2024, 10:40:06 PM
Ask HN: What macOS apps/programs do you use daily and recommend? (Score: 150+ in 1 day)
Link:
I'm converting my unused gaming PC into a NAS/Docker container server and my personal device will now be a MacBook Air.
I've got Magnet for easier window management, otherwise not much else and looking for recommendations on other apps to check out.
So, what applications do you use daily on macOS, and why do you love them?
6/11/2024, 6:00:09 AM
Ask HN: I have many PDFs – what is the best local way to leverage AI for search? (Score: 153+ in 8 hours)
Link:
As the title says, I have many PDFs - mostly scans via ScanSnap - but also non-scans. These are sensitive in nature, e.g. bills, documents, etc. I would like a local-first AI solution that allows me to say things like "show me all tax documents for August 2023" or "show my home title". Ideally it is Mac software that can access iCloud too, since that's where I store it all. I would prefer not to do any tagging. I would like to optimize for recall over precision, so false positives in the search results are OK. What are modern approaches to do this, without hacking one up on my own?
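For reference, the "modern approach" usually alluded to here is local embedding-based semantic search over the extracted PDF text. Below is a minimal sketch assuming pypdf and sentence-transformers and a folder of already-OCR'd PDFs; the library choices and paths are illustrative, not something from the post.

    # Minimal sketch of local-first semantic search over PDFs: extract text,
    # embed it locally, and rank by cosine similarity. Library choices
    # (pypdf, sentence-transformers) are illustrative; scanned PDFs need an
    # OCR text layer first (e.g. the one ScanSnap can embed).
    from pathlib import Path
    from pypdf import PdfReader
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # runs fully locally

    docs, names = [], []
    for pdf in Path("~/Documents/scans").expanduser().glob("*.pdf"):
        text = " ".join((page.extract_text() or "") for page in PdfReader(pdf).pages)
        if text.strip():
            docs.append(text[:2000])   # embed only the first chunk for brevity
            names.append(pdf.name)

    doc_emb = model.encode(docs, convert_to_tensor=True)

    query = "tax documents for August 2023"
    query_emb = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, doc_emb, top_k=10)[0]
    for hit in hits:                   # recall-oriented: show the top 10 even if noisy
        print(names[hit["corpus_id"]], round(hit["score"], 3))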
5/31/2024, 5:10:08 AM
Show HN: I built a tiny-VPS friendly RSS aggregator and reader (Score: 152+ in 16 hours)
Link:
Comments:
Hi, folks.
As an RSS user, I tried Inoreader and Feedly, then ended up self-hosting a Miniflux instance on my homelab. A few months ago, I moved to another city and had to shut down my homelab for a long time, so I couldn't access my local Miniflux. It was quite inconvenient. I decided to self-host my RSS aggregator on a tiny VPS or a PaaS such as . However, Miniflux requires a PostgreSQL database, which may not be suitable for a tiny VPS instance.
So I built fusion with Golang and SQLite. It has basic features such as groups, bookmarks, search, automatic feed sniffing, OPML import/export, etc. It uses about 80 MB of memory and negligible CPU (metrics here: ).
Feel free to share your questions and suggestions.
BTW, I also built an online tool to sniff RSS links from a URL. ()
5/31/2024, 3:40:02 AM
Show HN: Serverless Postgres (Score: 150+ in 1 day)
Link:
Comments:
This is an MVP for Serverless Postgres.
1/ It uses [0], which can automatically pause your database after all connections are released (and start it again when new connections join).
2/ It uses Oriole[1], a Postgres extension with experimental support for S3 / Decoupled Storage[2].
3/ It uses Tigris[3], Globally Distributed S3-Compatible Object Storage. Oriole will automatically backup the data to Tigris using background workers.
I wouldn't recommend using this in production, but I think it's in a good spot to provoke some discussion and ideas. You can get it running on your own machine with the steps provided - connecting to a remote Tigris bucket (can also be an AWS S3 bucket).
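As a rough illustration of what "serverless" means on the client side, the sketch below just opens an ordinary Postgres connection with psycopg, runs a query, and closes it. The DSN is a made-up placeholder, and the pause/resume behavior comes entirely from the proxy layer in point 1 above, not from anything in this code.

    # Minimal sketch of the client-side experience: a plain Postgres connection.
    # The DSN below is a placeholder; nothing here is specific to this project.
    # When the last connection is released, the proxy layer can pause the
    # database, and a later connect() transparently resumes it (the first
    # query may be slower while the instance wakes up).
    import psycopg

    DSN = "postgresql://postgres:postgres@localhost:5432/postgres"  # placeholder

    with psycopg.connect(DSN) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT now()")
            print(cur.fetchone()[0])
    # Connection closed here; with no remaining connections, the database
    # can be paused until the next client connects.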
[0]
[1]
[2] Oriole experimental S3:
[3] Tigris:
5/31/2024, 2:00:18 AM