Trying to run from local llama.cpp server #67
Unanswered · DefamationStation asked this question in Questions
Hello,
I am running a llama.cpp server at http://127.0.0.1:8080/v1, and this is my config:
default_model = "Llama-3-8b-Q4_K_S"
system_prompt = ""
message_code_theme = "dracula"
[[models]]
name = "Llama-3-8b-Q4_K_S"
api_base = "http://127.0.0.1:8080/v1"
api_key = "api-key-if-required"
but I am getting an error.
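Before digging into the client config, it can help to confirm that the server's OpenAI-compatible endpoint responds at all. Below is a minimal sketch using the openai Python package (v1 or later); the base URL, model name, and placeholder API key are simply reused from the config above, and it assumes the llama.cpp server exposes the usual /v1 chat completions route.

# Minimal sanity check against the local llama.cpp server,
# reusing the values from the config above.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8080/v1",
    api_key="api-key-if-required",  # only checked if the server was started with an API key
)

response = client.chat.completions.create(
    model="Llama-3-8b-Q4_K_S",  # same model name as in the config
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)

print(response.choices[0].message.content)

If this request also fails, the problem is with the server or the URL rather than with the chat client's config.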
Replies: 1 comment

I found elsewhere that you need to put