In my constant pursuit of staying fully off-grid capable, I have amassed a large collection of books, music, games, movies, food, and survival-related articles. This includes a massive collection of digital files, closing in on 18TB of data, to keep my family entertained and educated for hours during a full utility outage or grid-out event. With that much data amassed, I have a hard time sorting, titling, and making sure all of it stays easy to reach. A NAS makes the project more tolerable, since everyone can reach the data at any time, but loading and reading all the variations of data downloaded onto individual mediums, like Wikipedia archives and PDF survival manuals, can be tough. Even reaching back into my own articles to remember what, or specifically how, I wrote about something can be difficult. So... I localized and uploaded everything with an AI.
Ollama is a program that allows you to deploy Large Language Models, or LLMs, like Llama, Gemma, and Qwen on a local system. Ollama's primary goal is to remove the complexity of setting up and running these massive AI models. Traditionally, running LLMs required significant technical expertise, powerful hardware, and a lot of manual configuration, which those of us in the privacy space may not have access to...
EXAMPLE: I run a handful of older ThinkPads and the occasional Dell tower; nothing powerful.
Ollama abstracts away almost all of that behind a simple, clear Command Line Interface. It also supports a growing list of popular open-source LLMs, including Llama, Gemma, Qwen, and Mistral.
Downloading and running a model takes only a few commands: you can pull the model you want and start using it immediately, with complete local execution (see the short example below). The models run directly on your computer, meaning your data stays private and you do not rely on internet connectivity once the model is loaded. I was immediately hooked once I tinkered with it for a few days. What sold me on Ollama was...
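For reference, a typical first session from the terminal looks something like this. I am using Llama as the example model here; substitute whichever model you actually want to run:

    ollama pull llama3    # download the model to your machine
    ollama run llama3     # start an interactive chat with it
    ollama list           # see which models are stored locally

That is genuinely all it takes to go from nothing to chatting with a model on your own hardware.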
Additional Resources
MSTY is an additional layer on top of local LLMs that provides a fully featured Graphical User Interface, in case Ollama's CLI is not what you are looking for. The main highlight of MSTY is that it is not just a launcher for LLMs; it is a complete ecosystem for working with them, with support for popular open source models such as Mistral and the Knowledge Stack feature described below.
Ollama and MSTY are not competing projects; they can be used together. You can use Ollama to easily download and run a model such as Mistral, and then layer MSTY's interface and features on top of it.
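For instance, assuming Ollama is already installed, grabbing Mistral takes two commands (mistral is the model's tag in the Ollama library):

    ollama pull mistral   # download the Mistral model
    ollama run mistral    # chat with it locally from the terminal

From there you can move over to MSTY's GUI for day-to-day use, which is exactly the "used together" workflow described above.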
Adding your local files as a knowledge stack is simple in MSTY as well. Click the Knowledge Stack button in the left sidebar to begin; it has options to add PDFs, Word documents, text files, code files, spreadsheets, EPUBs, and RTF documents.
Clicking the gear or settings icon allows you to adjust the parameters of your Knowledge Stack.
Once complete, you can compose your stack, which may take a while (mine took roughly 8 hours to compose for my basic files and about a week for my collection of books). Make sure it is enabled in the left sidebar, and you can now ask basic questions about your files and ask it to recall information from them.