I thought the idea of self-hosting an LLM server for personal use would be fun, and besides, I don’t like the idea of a company storing my ChatGPT prompts or whatever. The cost isn’t fun, though, and I don’t think it’s worth upgrading my old GTX 1650 until it breaks.
An alternative would be calling an LLM via API and storing my prompts myself. I tested out some free models on OpenRouter to see if they “remember” me, and they don’t. I guess that’s good enough.
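Roughly what I have in mind, sketched in TypeScript (Node 18+ for the built-in fetch): hit OpenRouter’s chat completions endpoint and append every prompt/response pair to a file on my own disk. The model id and file name here are placeholders, not final choices.

```typescript
// sketch: call OpenRouter, but keep the prompt history on my machine
import { appendFileSync } from "node:fs";

const OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions";
const MODEL = "provider/some-free-model"; // placeholder, picked later

async function ask(prompt: string): Promise<string> {
  const res = await fetch(OPENROUTER_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: MODEL,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  const answer: string = data.choices[0].message.content;

  // the whole point: my prompts live in a local file, not on someone else's dashboard
  appendFileSync(
    "prompts.jsonl",
    JSON.stringify({ ts: Date.now(), prompt, answer }) + "\n"
  );
  return answer;
}
```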
Where would I use an LLM? I wouldn’t use one for coding (yet), because LLMs have only helped me maybe 10% of the time, if at all. Coding with an LLM is great for creating templates and small prototypes, but I don’t see them solving the obscure problems that software engineers like me run into regularly.
I have yet to encounter an LLM that’s good at math (maybe one already exists and I just haven’t found it). More than once I’ve caught myself arguing with an LLM over a math problem, because sometimes, I swear to God, the reasoning is just flawed.
I’m using an LLM to help with my research projects, mostly to narrow down sources (I’ll definitely make sure they are REAL sources, not generated ones) or to create outlines. I’m also open to using LLMs to improve my side-project workflow and productivity, but that’s something I’ll explore separately.
For the LLM, I chose Deepseek R1. It’s a free model and, political differences aside, it has pretty decent reasoning compared to other free models I’ve tested.
Time to create the UI. I didn’t want to bother creating a design, so I just cloned this React ChatGPT clone and tweaked a bit of the code. It’s pretty decent, despite some text-formatting issues.

Pretty ironic considering I’m calling Deepseek 😀
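The formatting issues look like raw markdown being dumped straight into the chat bubble, though I haven’t dug into the clone’s code enough to be sure. If that’s it, the tweak would look roughly like this (the component name is made up, and react-markdown is my assumption, not something the clone ships with):

```typescript
// sketch: render the assistant's reply as markdown instead of a raw string
import ReactMarkdown from "react-markdown";

type Message = { role: "user" | "assistant"; content: string };

// hypothetical component; the real clone structures this differently
function ChatMessage({ msg }: { msg: Message }) {
  return (
    <div className={`bubble ${msg.role}`}>
      <ReactMarkdown>{msg.content}</ReactMarkdown>
    </div>
  );
}
```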
Minor UI issues aside, this thing is still a generic app that calls an LLM. I imagine a minimum viable product would have its own local database where I can CRUD my prompts, plus automatic document generation for my upcoming research projects.
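For the prompt-CRUD part, this is roughly what I’m picturing, sketched with better-sqlite3 (the library choice and schema are just assumptions for now; the document-generation piece is a separate problem):

```typescript
// sketch: local prompt store for the MVP
import Database from "better-sqlite3";

const db = new Database("prompts.db");
db.prepare(
  "CREATE TABLE IF NOT EXISTS prompts (id INTEGER PRIMARY KEY, text TEXT NOT NULL, created_at INTEGER NOT NULL)"
).run();

// create
export const createPrompt = (text: string) =>
  db.prepare("INSERT INTO prompts (text, created_at) VALUES (?, ?)").run(text, Date.now());

// read
export const listPrompts = () =>
  db.prepare("SELECT * FROM prompts ORDER BY created_at DESC").all();

// update
export const updatePrompt = (id: number, text: string) =>
  db.prepare("UPDATE prompts SET text = ? WHERE id = ?").run(text, id);

// delete
export const deletePrompt = (id: number) =>
  db.prepare("DELETE FROM prompts WHERE id = ?").run(id);
```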