Getting Started with Local AI/LLMs in Three Easy Steps

Rune Berg
3 min read · Aug 23, 2024


Curious about running your own AI models locally?

I've been tinkering with this lately, and like any emerging technology, it can feel chaotic and overwhelming to sort through all the jargon and figure out where to begin.

That’s why I’ve put together this 3-step guide to document the simplest way I found to dive in and start experimenting right away — no coding, no containers, and no unnecessary dependencies required.

Step 1: Install a Backend to Manage Your Models

The first thing you need is a backend to manage your AI models. Arguably the easiest and most beginner-friendly option is Ollama, which offers highly optimized models that can run even on machines with limited resources.

To get started, download Ollama and run the installation package or script on your machine.
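On Linux, for example, the install is a one-line script; this is the command published on ollama.com at the time of writing, so double-check it there before running it:

curl -fsSL https://ollama.com/install.sh | sh

On Windows and macOS, simply run the downloaded installer.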

That’s it! You’ve got your LLM backend up and running.

Step 2: Download an AI Model

Now that you have a backend, you’ll need a model to work with. A great starting point is Meta’s Llama — it’s both powerful and flexible.

Open a console on your machine (e.g., PowerShell, Bash, Terminal) and type:

ollama pull llama3.1

The download will begin. It's a big file, so grab a coffee while you wait. You can also explore other models supported by Ollama in their Library.
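Once the download finishes, you can confirm the model is available locally with:

ollama list

This prints the models Ollama currently has on disk, along with their tags and sizes.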

Step 3: Interact with Your Model

With your backend and model ready, it’s time to start interacting with them. I created ConfiChat to provide a straightforward way to work with these models. It’s a lightweight, standalone app with no dependencies, designed to work across multiple platforms (including mobile) for easy access.

Simply download ConfiChat from the Releases section, extract the files, and run the executable on your machine (there may be a warning prompt the first time since the binaries are unsigned).

And that’s it — you’re all set to start having some fun and experimenting :)
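If you'd like a quick sanity check before opening any app, you can also chat with the model straight from the terminal:

ollama run llama3.1

Type a prompt at the >>> prompt, and enter /bye to exit.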

If your downloaded models aren’t appearing in the top-right dropdown, try running this from the command line:

ollama serve
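This starts the local Ollama server, which listens on port 11434 by default. To verify it's responding, you can query its API, for example:

curl http://localhost:11434/api/tags

which should return a JSON list of your installed models.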

What’s Next?

Now that you’re up and running, you can explore tuning and application settings. I’ve included some guiding text within ConfiChat to help you get started with these options.
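As one example of model-side tuning with Ollama (the variant name llama3.1-warm below is just a placeholder of my own), you can create a custom version of a model with a Modelfile that sets parameters such as temperature. In Bash, for instance:

cat <<'EOF' > Modelfile
FROM llama3.1
PARAMETER temperature 0.9
EOF
ollama create llama3.1-warm -f Modelfile

The new variant should then appear alongside your other models.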

Tip: If you want to use images, you'll need a multimodal model. LLaVA is well suited for this; download it with:

ollama pull llava

Then, in ConfiChat, select this model from the dropdown in the upper-right corner of the interface. You can then drag and drop images into the prompt for the model to process them.
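If you'd like to try image input outside ConfiChat as well, the Ollama CLI accepts an image path inside the prompt for multimodal models (the file name here is just a placeholder):

ollama run llava "What is in this image? ./example.png"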
