How to restart the RPC component
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
fuel-ccp | New | Undecided | Unassigned |
Bug Description
Hi everyone,
The RPC component of Fuel-CCP stopped suddenly.
(Please see below:
https:/
So, I'd like to restart the RPC component (Pod).
I executed the command "kubectl -n ccp delete rpc-1937807526-
(rpc-1937807526
After this, Kubernetes re-created "rpc-1937807526
But the re-created rpc-1937807526- pod did not become ready:
# kubectl -n ccp get pod
NAME READY STATUS RESTARTS AGE
database-
database-
database-
etcd-0 1/1 Running 0 21h
keystone-
memcached-
notifications-
nova-api-
nova-conductor-
nova-consoleaut
nova-novncproxy
rpc-1937807526-
rpc-1937807526-
rpc-1937807526-
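To dig further, one can inspect the stuck pod with standard kubectl commands (<rpc-pod-name> is a placeholder for the full pod name, which is truncated above):
# kubectl -n ccp describe pod <rpc-pod-name>
# kubectl -n ccp get events --sort-by=.metadata.creationTimestamp
The describe output shows the pod's events and container statuses, and the sorted event list surfaces recent readiness-probe failures in the namespace.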
# ccp status
+------
| service | pod | job | ready | links |
+------
| database | 3/3 | 0/0 | ok | http://
| etcd | 1/1 | 0/0 | ok | http://
| | | | | http://
| keystone | 1/1 | 7/7 | ok | http://
| | | | | http://
| memcached | 1/1 | 0/0 | ok | http://
| notifications | 1/1 | 0/0 | ok | http://
| nova-api | 1/1 | 20/20 | ok | http://
| | | | | http://
| nova-conductor | 1/1 | 0/0 | ok | |
| nova-consoleauth | 1/1 | 0/0 | ok | |
| nova-novncproxy | 1/1 | 0/0 | ok | http://
| rpc | 2/3 | 0/0 | wip | http://
+------
#
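As a cross-check on the etcd side, one can verify that the etcd Service actually has live endpoints (this assumes the Service is simply named "etcd" in the ccp namespace, matching the etcd-0 pod above):
# kubectl -n ccp get svc etcd
# kubectl -n ccp get endpoints etcd
If the endpoints list is empty, other pods cannot reach etcd even though the etcd-0 pod itself is Running.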
When I executed the command 'kubectl -n ccp logs rpc-1937807526-
I found the following messages:
# kubectl -n ccp logs rpc-1937807526-
*snip*
[readiness:5956] Ready to return 1
2017-06-08 01:50:03.301 - __main__ - DEBUG - Dependency "etcd/etcd" is not ready yet, retrying
[liveness:5962] Starting liveness probe at 2017-06-08 01:50:06
[liveness:5962] Startup marker missing, probably probe was executed too early
[liveness:5962] Ready to return 0
2017-06-08 01:50:08.318 - __main__ - DEBUG - Dependency "etcd/etcd" is not ready yet, retrying
[readiness:5984] Starting readiness probe at 2017-06-08 01:50:09
[readiness:5984] Startup marker missing, probably probe was executed too early
[readiness:5984] Ready to return 1
2017-06-08 01:50:13.330 - __main__ - DEBUG - Dependency "etcd/etcd" is not ready yet, retrying
[liveness:5990] Starting liveness probe at 2017-06-08 01:50:16
[liveness:5990] Startup marker missing, probably probe was executed too early
[liveness:5990] Ready to return 0
2017-06-08 01:50:18.370 - __main__ - DEBUG - Dependency "etcd/etcd" is not ready yet, retrying
[readiness:6013] Starting readiness probe at 2017-06-08 01:50:19
[readiness:6013] Startup marker missing, probably probe was executed too early
[readiness:6013] Ready to return 1
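Since the logs keep blaming the "etcd/etcd" dependency, a direct health check against etcd itself may help (a sketch; it assumes etcdctl is shipped in the etcd-0 container image; on etcd v3 the equivalent command is "etcdctl endpoint health"):
# kubectl -n ccp exec etcd-0 -- etcdctl cluster-health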
According to the "ccp status" command, the etcd service is ready ("ok").
But the rpc pod's logs keep saying 'Dependency "etcd/etcd" is not ready yet'.
What should I do to make the etcd dependency report "ready"?
Thanks.