Integrate Local Models Deployed by LocalAI
Deploying LocalAI
Starting LocalAI
$ git clone https://github.com/go-skynet/LocalAI
$ cd LocalAI/examples/langchain-chroma
$ wget https://huggingface.co/skeskinen/ggml/resolve/main/all-MiniLM-L6-v2/ggml-model-q4_0.bin -O models/bert
$ wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j
$ mv .env.example .env
# start with docker-compose
$ docker-compose up -d --build
# tail the logs & wait until the build completes
$ docker logs -f langchain-chroma-api-1
7:16AM INF Starting LocalAI using 4 threads, with models path: /models
7:16AM INF LocalAI version: v1.24.1 (9cc8d9086580bd2a96f5c96a6b873242879c70bc)
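Once the logs show LocalAI has started, you can verify the service is reachable before wiring it into your application. The sketch below assumes LocalAI is listening on port 8080 (the default in this example's docker-compose setup; adjust the host and port if your compose file maps them differently) and uses the OpenAI-compatible endpoints that LocalAI exposes:

```shell
# Base URL of the LocalAI service -- port 8080 is an assumption; check the
# port mapping in docker-compose.yaml for your deployment.
LOCALAI_BASE="http://localhost:8080"

# List the loaded models to confirm both files downloaded above were picked up.
curl -s "$LOCALAI_BASE/v1/models"

# Send a test chat completion against the ggml-gpt4all-j model.
curl -s "$LOCALAI_BASE/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model": "ggml-gpt4all-j", "messages": [{"role": "user", "content": "Hello"}]}'
```

If both requests return JSON (rather than a connection error), the deployment is ready and the same base URL can be supplied wherever your tooling asks for an OpenAI-compatible API endpoint.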