Integrating Local Models Deployed with LocalAI
Deploying LocalAI
Notes before use
Start the deployment
# clone the LocalAI repo and enter the langchain-chroma example
$ git clone https://github.com/go-skynet/LocalAI
$ cd LocalAI/examples/langchain-chroma

# download an embedding model (all-MiniLM-L6-v2) and an LLM (ggml-gpt4all-j) into ./models
$ wget https://huggingface.co/skeskinen/ggml/resolve/main/all-MiniLM-L6-v2/ggml-model-q4_0.bin -O models/bert
$ wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

# prepare the environment file
$ mv .env.example .env

# start with docker-compose
$ docker-compose up -d --build

# tail the logs & wait until the build completes
$ docker logs -f langchain-chroma-api-1
7:16AM INF Starting LocalAI using 4 threads, with models path: /models
7:16AM INF LocalAI version: v1.24.1 (9cc8d9086580bd2a96f5c96a6b873242879c70bc)

 ┌───────────────────────────────────────────────────┐
 │                   Fiber v2.48.0                   │
 │               http://127.0.0.1:8080               │
 │       (bound on host 0.0.0.0 and port 8080)       │
 │                                                   │
 │ Handlers ............ 55  Processes ........... 1 │
 │ Prefork ....... Disabled  PID ................ 14 │
 └───────────────────────────────────────────────────┘
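Once the log shows the server bound on port 8080, you can smoke-test the deployment through LocalAI's OpenAI-compatible API. The sketch below assumes the default address from the log output above and that the two downloaded models are exposed under the names matching their files in the models/ directory (ggml-gpt4all-j and bert); adjust the names if your model configuration differs.

# list the models the API currently serves (assumption: names follow the files in ./models)
$ curl http://127.0.0.1:8080/v1/models

# send a test completion request to the LLM
$ curl http://127.0.0.1:8080/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "ggml-gpt4all-j", "prompt": "Say hello", "temperature": 0.7}'

# request an embedding from the bert model downloaded as models/bert
$ curl http://127.0.0.1:8080/v1/embeddings \
    -H "Content-Type: application/json" \
    -d '{"model": "bert", "input": "test sentence"}'

If both requests return JSON responses rather than errors, the deployment is working and the endpoint http://127.0.0.1:8080 can be used as the API base when integrating the models.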