
# TuringPi Llama.cpp Chart

Deploys the Llama.cpp server onto your TuringPi cluster, complete with a persistent volume for model files, replication, and an ingress. Assumes you have followed the instructions at docs.turingpi.com to configure Longhorn, MetalLB, and Traefik. By default, the chart uses the lmstudio-ai/gemma-2b-it-GGUF model, but this can be overridden with custom values, as sketched below.
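For example, a values override pointing the chart at a different GGUF model might look like the following sketch. The key names (`model`, `url`, `replicas`) are illustrative assumptions, not the chart's documented schema; check the chart's own values.yaml for the actual keys.

```yaml
# my-values.yaml -- illustrative only; the real keys are defined by the
# chart's values.yaml and may differ from the names used here.
model:
  # URL of the GGUF file to download onto the persistent volume
  # (example path; verify the exact filename on Hugging Face)
  url: https://huggingface.co/lmstudio-ai/gemma-2b-it-GGUF/resolve/main/gemma-2b-it-q4_k_m.gguf
# Number of server replicas behind the ingress
replicas: 2
```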

## Installation

```shell
helm install llama-cpp https://elepedus.github.io/llama-cpp/llama-cpp-0.0.1.tgz --namespace=llama-cpp
```
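To install with custom values, pass a values file with `-f`; `--create-namespace` creates the namespace if it does not already exist. Here `my-values.yaml` refers to the hypothetical override file sketched above:

```shell
# Install the chart with a custom values file
helm install llama-cpp https://elepedus.github.io/llama-cpp/llama-cpp-0.0.1.tgz \
  --namespace=llama-cpp --create-namespace -f my-values.yaml
```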

## Usage

By default, the ingress exposes the web UI at `llama.cluster.local`, on the same IP address you configured for `cluster.local`. Make sure to update your `/etc/hosts` file so the new subdomain resolves:

```
10.0.0.70 turing-cluster turing-cluster.local llama.cluster llama.cluster.local
```
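Once the pods are running, you can sanity-check the deployment from any machine with the hosts entry above. This sketch assumes the ingress forwards requests to the llama.cpp server's built-in HTTP API, which exposes a `/completion` endpoint:

```shell
# Request a short completion via the llama.cpp HTTP API.
# Assumes the ingress routes llama.cluster.local to the server pods.
curl http://llama.cluster.local/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 64}'
```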