- First, download `llama-2-13b-chat.Q5_K_S.gguf` from https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF and move it to `model/llama-2-13b-chat.Q5_K_S.gguf`
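The download step can be scripted. This is a sketch only: it assumes Hugging Face's usual `resolve/main` file-download URL layout, and the `model_url` helper name is hypothetical, not part of the repo:

```shell
# Hypothetical helper: build the Hugging Face download URL for a quantization tag
model_url() {
  echo "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/resolve/main/llama-2-13b-chat.$1.gguf"
}

# Place the model where the backend expects it
mkdir -p model
# Uncomment to download (large file):
# curl -L "$(model_url Q5_K_S)" -o model/llama-2-13b-chat.Q5_K_S.gguf
```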
- Navigate to the frontend directory: `cd frontend`
- Install Node.js `18.17.1`
- Install pnpm `9.1.2`: `npm install -g pnpm@9.1.2`
- Install the dependencies: `pnpm install`
- Start the application in dev mode: `pnpm dev`
- Create a production build: `pnpm build`
- Check for linter problems with `pnpm lint`, or fix them with `pnpm lint:fix`
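Since the guide pins exact Node.js and pnpm versions, a small pre-flight check can catch mismatches before installing dependencies. The `require_version` helper below is hypothetical, not a script shipped with the repo:

```shell
# Hypothetical pre-flight check against the versions pinned above
require_version() {
  # $1 = tool name, $2 = installed version, $3 = version pinned by this guide
  if [ "$2" = "$3" ]; then
    echo "$1 OK"
  else
    echo "$1 mismatch: have $2, want $3"
  fi
}

# Usage (uncomment once the tools are installed):
# require_version node "$(node -v)" v18.17.1
# require_version pnpm "$(pnpm -v)" 9.1.2
```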
- Run the Docker containers
- After making sure that the database and the frontend are running, run the `LlmServiceApplication`
- To run the backend linter: `mvn spotless:check`; to fix issues: `mvn spotless:apply`
- The backend uses a C++ plugin to run the model; it requires C++11, `g++`, and `cmake`
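The C++ toolchain requirement can be verified up front. The `missing_tools` helper is a hypothetical name for illustration, not a project script:

```shell
# Hypothetical helper: report which of the required build tools are absent from PATH
missing_tools() {
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
  done
}

# The backend's C++ plugin needs a C++11-capable g++ plus cmake:
missing_tools g++ cmake
```

An empty output means everything the plugin needs is on `PATH`.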
- Run `docker compose up -d`
- If something has changed in the frontend and you want the latest changes: `docker compose up -d --no-deps --build build-fe`
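The two compose invocations differ only in whether a single service is rebuilt without restarting its dependencies. A dry-run wrapper makes that explicit; the function name is hypothetical and it only prints the command, so it is safe to run anywhere:

```shell
# Hypothetical dry-run wrapper: print the compose command that rebuilds one
# service (e.g. build-fe) while leaving its dependencies untouched
rebuild_service() {
  echo "docker compose up -d --no-deps --build $1"
}

rebuild_service build-fe
```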