I don't understand the two-LLM approach plus Telegram here.
It seems like the bot is both providing the Telegram interface and using it to act as a coding assistant?
I find LM Studio more usable for local setups (desktop/laptop), and I would use the llama.cpp stack directly for a (local) server deployment.
A bug or misconfiguration in the connection to opencode seemed to be the culprit.