NoVNC console fails to connect
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
nova-hyper | New | Undecided | Unassigned |
Bug Description
I have deployed DevStack with two nodes: one node running Keystone, Cinder, Glance, Horizon, Nova, and Neutron, and a second node running only the Nova compute and Neutron agent services. When I deploy an instance, I cannot connect to its console via NoVNC. Looking into it, I found that DevStack created a multi-cell deployment and created nova-novncproxy for cell 1,
from the systemd service file <email address hidden>:
ExecStart = /usr/bin/
However, nova-consoleauth was created connecting to the API MQ, per the systemd file <email address hidden>:
ExecStart = /usr/bin/
I created another nova-consoleauth connected to the cell 1 MQ, and then realized that in that case the rpcapi of nova-consoleauth was not implemented with a routing mechanism to reach the right MQ for the cell, so I changed nova/consoleaut
[root@wltpnj22n
75,76c75
< default_client = rpc.get_
< self.router = rpc.ClientRoute
---
> self.client = rpc.get_
89c88
< if not self.router.
---
> if not self.client.
93c92
< cctxt = self.router.
---
> cctxt = self.client.
97c96
< cctxt = self.router.
---
> cctxt = self.client.
101c100
< cctxt = self.router.
---
> cctxt = self.client.
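The routing change above (replacing a single fixed RPC client with a per-cell router) can be sketched roughly as follows. This is a minimal standalone illustration, not the actual nova code: the class and method names, cell names, and transport URLs here are all made up for the example.

```python
# Illustrative sketch of per-cell RPC client routing (NOT nova's
# real implementation; all names below are hypothetical).

class FakeRPCClient:
    """Stands in for an RPC client bound to one message-queue transport."""
    def __init__(self, transport_url):
        self.transport_url = transport_url

    def prepare(self, server=None):
        # A real client would return a prepared call context here.
        return self


class ClientRouter:
    """Pick the RPC client for the right cell's message queue."""
    def __init__(self, default_client):
        # Fallback client, e.g. one bound to the API-level MQ.
        self.default_client = default_client
        self.clients_by_cell = {}

    def register(self, cell_name, client):
        self.clients_by_cell[cell_name] = client

    def client_for(self, cell_name=None):
        # Route to the cell's MQ when known, else fall back to the API MQ.
        return self.clients_by_cell.get(cell_name, self.default_client)


# Usage: consoleauth calls for cell1 instances would go to cell1's MQ
# instead of the API MQ.
api_client = FakeRPCClient("rabbit://api-host:5672/")
cell1_client = FakeRPCClient("rabbit://cell1-host:5672/")
router = ClientRouter(api_client)
router.register("cell1", cell1_client)
```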
I was wondering whether such routing code should exist (I also checked Rocky) for the multi-cell use case, or whether something is wrong with the DevStack version I used, or whether I am missing something about how console access authorization and console token checking are supposed to work.
I have attached the rpcapi.py file that I changed.
Thinking about it a little more: when we have multiple cells, we should have a nova-novncproxy service instance for each cell, and each instance should have its own endpoint (novncproxy_base_url). I think it might be an issue with how DevStack deploys this use case, so this report should perhaps be filed against the devstack project instead.
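As a rough sketch of what I mean (the hostnames and port are made up for the example; `novncproxy_base_url` is the existing `[vnc]` option in nova.conf), the compute nodes in each cell would point clients at that cell's own proxy instance:

```ini
# nova.conf on cell 1 compute nodes (illustrative values)
[vnc]
novncproxy_base_url = http://cell1-proxy.example.com:6080/vnc_auto.html

# Cell 2 compute nodes would instead use something like:
# novncproxy_base_url = http://cell2-proxy.example.com:6080/vnc_auto.html
```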