
Open Webui | SelectorGroupChat: No reply from team after initial message | Sending message of type GroupChatStart to SelectorGroupChatManager | Connection error - MessageHandlerException #7119

Description
What happened?

Describe the bug
Two issues:

  • `"exception": "Connection error.", "type": "MessageHandlerException"` during the response, without a custom SingleThreadedAgentRuntime()
  • `autogen_core: Sending message of type GroupChatStart to SelectorGroupChatManager` is logged with the custom runtime, but the team is never invoked at all

I was using an existing, working AutoGen 0.7.5 project to build an Open WebUI plugin. I first hit the connection error, then ran into the GroupChatStart issue while testing custom AgentRuntime and SelectorGroupChatManager classes during debugging.
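For the connection error specifically: with LM Studio's OpenAI-compatible server, one common culprit is a `base_url` that does not end in `/v1`, which makes every `/chat/completions` call fail before the team ever runs. A quick, self-contained sanity check (the `check_base_url` helper below is hypothetical, not part of this project):

```python
from urllib.parse import urlparse


def check_base_url(url: str) -> list[str]:
    """Return a list of likely problems with an OpenAI-compatible base_url."""
    problems = []
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        problems.append("scheme should be http or https")
    if not parsed.netloc:
        problems.append("missing host, e.g. localhost:1234")
    if not parsed.path.rstrip("/").endswith("/v1"):
        problems.append("LM Studio's OpenAI-compatible endpoint usually ends in /v1")
    return problems
```

For example, `check_base_url("http://localhost:1234")` flags the missing `/v1` suffix, while `check_base_url("http://localhost:1234/v1")` returns an empty list.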

```python
# Imports used by this excerpt (model_info_defaults and the Make*Agent
# helper classes are defined elsewhere in the project):
import httpx
from typing import Optional

from autogen_agentchat.teams import SelectorGroupChat
from autogen_core.model_context import UnboundedChatCompletionContext
from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor
from autogen_ext.models.openai import OpenAIChatCompletionClient


class MakeModelClient:

    def __init__(
        self,
        model: str = "gpt-oss-20b-uncensored",
        model_info: dict = model_info_defaults,
        llm_url: str = None,
        llm_key: str = None,
        **kwargs,
    ):
        self.model = model
        self.model_info = model_info
        self.kwargs = kwargs
        self.llm_url = llm_url
        self.llm_key = llm_key

    def deploy(self) -> OpenAIChatCompletionClient:
        model_client = OpenAIChatCompletionClient(
            api_key=self.llm_key,
            base_url=self.llm_url,
            model=self.model,
            temperature=0,
            seed=45,
            http_client=httpx.AsyncClient(
                timeout=120,
                verify=False,
                http2=False,
            ),
            model_info=self.model_info,
            **self.kwargs,
        )
        return model_client


class ResearchTeam:

    def __init__(
        self,
        docker: DockerCommandLineCodeExecutor = None,
        model: str = "ibm/granite-4-h-tiny",
        model_research: str = None,
        model_library: str = None,
        model_science: str = None,
        model_report: str = None,
        termination_condition: str = None,
        max_selector_attempts: int = 3,
        allow_repeated_speaker: bool = True,
        selector_prompt: str = None,
        librarian_agent_name: str = None,
        researcher_agent_name: str = None,
        research_scientist_agent_name: str = None,
        code_executor_agent_name: str = None,
        llm_url: str = None,
        llm_key: str = None,
        **kwargs,
    ):
        self.docker = docker
        self.model = model
        self.model_research = model_research
        self.model_library = model_library
        self.model_science = model_science
        self.model_report = model_report

        self.max_selector_attempts = max_selector_attempts
        self.termination_condition = termination_condition
        self.allow_repeated_speaker = allow_repeated_speaker
        self.selector_prompt = selector_prompt
        self.librarian_agent_name = librarian_agent_name
        self.researcher_agent_name = researcher_agent_name
        self.research_scientist_agent_name = research_scientist_agent_name
        self.code_executor_agent_name = code_executor_agent_name

        self.llm_url = llm_url
        self.llm_key = llm_key
        self.kwargs = kwargs

    async def deploy(
        self, history: Optional[UnboundedChatCompletionContext] = None
    ) -> SelectorGroupChat:
        self.model_client = MakeModelClient(
            self.model, llm_url=self.llm_url, llm_key=self.llm_key
        ).deploy()
        self.librarian_client = (
            MakeModelClient(
                self.model_library, llm_url=self.llm_url, llm_key=self.llm_key
            ).deploy()
            if self.model_library
            else self.model_client
        )
        self.researcher_client = (
            MakeModelClient(
                self.model_research, llm_url=self.llm_url, llm_key=self.llm_key
            ).deploy()
            if self.model_research
            else self.model_client
        )
        self.research_scientist_client = (
            MakeModelClient(
                self.model_science, llm_url=self.llm_url, llm_key=self.llm_key
            ).deploy()
            if self.model_science
            else self.model_client
        )
        self.report_writer_client = (
            MakeModelClient(
                self.model_report, llm_url=self.llm_url, llm_key=self.llm_key
            ).deploy()
            if self.model_report
            else self.model_client
        )

        self.orchestrator_agent = MakeOrchestratorAgent(self.model_client).deploy()
        self.librarian_agent = await MakeLibrarianAgent(self.librarian_client).deploy()
        self.researcher_agent = await MakeResearcherAgent(
            self.researcher_client
        ).deploy()
        self.research_scientist_agent = await MakeResearchScientistAgent(
            self.research_scientist_client
        ).deploy()

        team = SelectorGroupChat(
            name="Research_Team",
            max_selector_attempts=self.max_selector_attempts,
            model_client=self.model_client,
            termination_condition=self.termination_condition,
            selector_prompt=self.selector_prompt,
            allow_repeated_speaker=self.allow_repeated_speaker,
            selector_func=None,
            model_context=history,
            participants=[
                self.orchestrator_agent,
                self.research_scientist_agent,
                self.researcher_agent,
            ],
            **self.kwargs,
        )
        return team
```

To Reproduce

... later, in the Pipe / Pipeline classes ...

```python
self.team = await self._team.deploy()
result = await self.team.run(task=msg)
reply = result.messages[-1].content
```
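One thing that may be worth ruling out for the "team never invoked" symptom: as I understand the 0.4+ API, a team drives its own embedded runtime only when no runtime is passed in; if a custom SingleThreadedAgentRuntime is supplied, the caller has to start it (and stop it), otherwise GroupChatStart is sent but no handler ever processes it, which matches the log output below. A sketch only, assuming the `runtime=` parameter and these method names exist in 0.7.5:

```python
# Sketch -- assumes autogen_core's SingleThreadedAgentRuntime API.
runtime = SingleThreadedAgentRuntime()
runtime.start()  # without this, sent messages are queued but never handled
team = SelectorGroupChat(..., runtime=runtime)
result = await team.run(task=msg)
await runtime.stop_when_idle()
```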


Issue with SingleThreadedAgentRuntime() (the team is never invoked):

```
payload={"messages":[{"id":"e24dd84c-cbc5-49d5-abff-97e6a509f781","source":"user","models_usage":null,"metadata":{},"created_at":"2025-11-17T21:35:01.782944Z","content":"What are 5 creative things I could do with my kids' art? I don't want to throw them away, but it's also so much clutter.","role":"user","type":"TextMessage"}],"output_task_messages":true} sender=null receiver=SelectorGroupChatManager_b7bb7017-1708-43fd-8cd9-70c4a2455447/b7bb7017-1708-43fd-8cd9-70c4a2455447 kind=MessageKind.DIRECT delivery_stage=DeliveryStage.SEND type=Message

INFO:autogen_core:Sending message of type GroupChatStart to SelectorGroupChatManager_b7bb7017-1708-43fd-8cd9-70c4a2455447: {'messages': [TextMessage(id='e24dd84c-cbc5-49d5-abff-97e6a509f781', source='user', models_usage=None, metadata={}, created_at=datetime.datetime(2025, 11, 17, 21, 35, 1, 782944, tzinfo=datetime.timezone.utc), content="What are 5 creative things I could do with my kids' art? I don't want to throw them away, but it's also so much clutter.", role='user', type='TextMessage')], 'output_task_messages': True}
INFO:autogen_core:Calling message handler for SelectorGroupChatManager_b7bb7017-1708-43fd-8cd9-70c4a2455447/b7bb7017-1708-43fd-8cd9-70c4a2455447 with message type GroupChatStart sent by Unknown
```


Connection error without SingleThreadedAgentRuntime():

```
INFO:openai._base_client:Retrying request to /chat/completions in 0.936063 seconds
payload={"messages":[{"id":"e24dd84c-cbc5-49d5-abff-97e6a509f781","source":"user","models_usage":null,"metadata":{},"created_at":"2025-11-17T21:35:01.782944Z","content":"What are 5 creative things I could do with my kids' art? I don't want to throw them away, but it's also so much clutter.","role":"user","type":"TextMessage"}],"output_task_messages":true} handling_agent=SelectorGroupChatManager_b7bb7017-1708-43fd-8cd9-70c4a2455447/b7bb7017-1708-43fd-8cd9-70c4a2455447 exception=Connection error. type=MessageHandlerException
```
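To narrow down where the connection drops, turning up the relevant loggers before deploying the team might help. This uses only the standard library, and the logger names are the ones that appear in the output above:

```python
import logging

logging.basicConfig(level=logging.INFO)
# openai/httpx show the outgoing HTTP calls; autogen_core shows message routing.
for name in ("autogen_core", "openai", "httpx"):
    logging.getLogger(name).setLevel(logging.DEBUG)
```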

Expected behavior
The team should be invocable from an Open WebUI Pipe / Pipeline integration and return a string.
Both the Pipe and the Pipeline show the same behavior; either would be a reasonable way to integrate AutoGen.
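Once the team does reply, note that `result.messages[-1].content` is not guaranteed to be a plain string for every message type (multi-modal messages carry a list of parts), so the Pipe may want a small coercion step. A hypothetical helper, not part of the project:

```python
def to_text(content) -> str:
    """Coerce a chat message `content` field to a display string.

    TextMessage carries a str; some message types carry a list of parts.
    """
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        return "\n".join(str(part) for part in content)
    return str(content)
```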


Additional context
Open Webui Pipes docs
https://docs.openwebui.com/features/plugin/functions/pipe/

Open Webui Pipelines docs
https://docs.openwebui.com/features/pipelines/

Which packages was the bug in?

Python AgentChat (autogen-agentchat>=0.4.0)

AutoGen library version.

Python 0.7.5

Other library version.

No response

Model used

ibm/granite-4-h-tiny

Model provider

Other (please specify below)

Other model provider

LMStudio / OpenAI

Python version

3.11

.NET version

None

Operating system

Ubuntu
