How to restart the RPC component

Bug #1696683 reported by suzuki
This bug affects 1 person
Affects: fuel-ccp
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Hi everyone,

The RPC component of Fuel-CCP suddenly stopped.
 (Please see this related bug:
  https://bugs.launchpad.net/fuel-ccp/+bug/1696675)

So I wanted to restart the RPC component (pod).
I executed the command "kubectl -n ccp delete pod rpc-1937807526-082sl"
(rpc-1937807526-082sl is the name of the stopped pod).
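
For reference, the delete-and-recreate sequence can also be followed with standard kubectl; the label selector below (app=rpc) is an assumption, since the actual labels fuel-ccp puts on its pods may differ:

# kubectl -n ccp delete pod rpc-1937807526-082sl
# kubectl -n ccp get pod -l app=rpc -w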

After this, Kubernetes re-created the pod "rpc-1937807526-fgz3j" in place of rpc-1937807526-082sl.
But rpc-1937807526-fgz3j does not become READY (1/1), as shown below:

# kubectl -n ccp get pod
NAME                                READY   STATUS    RESTARTS   AGE
database-1464230903-6rwcq           3/3     Running   0          21h
database-1464230903-fbh1r           3/3     Running   0          21h
database-1464230903-hv9fc           3/3     Running   0          21h
etcd-0                              1/1     Running   0          21h
keystone-3081256561-4mhlm           2/2     Running   0          21h
memcached-142475028-b0d89           1/1     Running   0          21h
notifications-69644774-548zg        1/1     Running   0          21h
nova-api-1792472383-mn3tk           2/2     Running   1          20h
nova-conductor-212872002-cm84w      1/1     Running   0          20h
nova-consoleauth-2484155490-f0b9j   1/1     Running   0          18h
nova-novncproxy-192365449-st3mx     2/2     Running   0          18h
rpc-1937807526-4fd75                1/1     Running   0          21h
rpc-1937807526-fgz3j                0/1     Running   1          5h
rpc-1937807526-z70f3                1/1     Running   0          21h
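
To see why the readiness probe on rpc-1937807526-fgz3j keeps failing, the pod's events can be checked with standard kubectl (this lists the probe failures and their messages):

# kubectl -n ccp describe pod rpc-1937807526-fgz3j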

# ccp status
+------------------+-----+-------+-------+----------------------+
| service          | pod | job   | ready | links                |
+------------------+-----+-------+-------+----------------------+
| database         | 3/3 | 0/0   | ok    | http://0.0.0.0:32215 |
| etcd             | 1/1 | 0/0   | ok    | http://0.0.0.0:30070 |
|                  |     |       |       | http://0.0.0.0:32183 |
| keystone         | 1/1 | 7/7   | ok    | http://0.0.0.0:30372 |
|                  |     |       |       | http://0.0.0.0:30397 |
| memcached        | 1/1 | 0/0   | ok    | http://0.0.0.0:30816 |
| notifications    | 1/1 | 0/0   | ok    | http://0.0.0.0:31065 |
| nova-api         | 1/1 | 20/20 | ok    | http://0.0.0.0:31636 |
|                  |     |       |       | http://0.0.0.0:31466 |
| nova-conductor   | 1/1 | 0/0   | ok    |                      |
| nova-consoleauth | 1/1 | 0/0   | ok    |                      |
| nova-novncproxy  | 1/1 | 0/0   | ok    | http://0.0.0.0:31647 |
| rpc              | 2/3 | 0/0   | wip   | http://0.0.0.0:31684 |
+------------------+-----+-------+-------+----------------------+
#
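
Since rpc-1937807526-fgz3j also shows one restart in the pod listing above, the log of the previous container instance may be worth checking too (standard kubectl, independent of fuel-ccp):

# kubectl -n ccp logs rpc-1937807526-fgz3j --previous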

When I executed the command "kubectl -n ccp logs rpc-1937807526-fgz3j",
I found messages like the following:

# kubectl -n ccp logs rpc-1937807526-fgz3j
 *snip*
[readiness:5956] Ready to return 1
2017-06-08 01:50:03.301 - __main__ - DEBUG - Dependency "etcd/etcd" is not ready yet, retrying
[liveness:5962] Starting liveness probe at 2017-06-08 01:50:06
[liveness:5962] Startup marker missing, probably probe was executed too early
[liveness:5962] Ready to return 0
2017-06-08 01:50:08.318 - __main__ - DEBUG - Dependency "etcd/etcd" is not ready yet, retrying
[readiness:5984] Starting readiness probe at 2017-06-08 01:50:09
[readiness:5984] Startup marker missing, probably probe was executed too early
[readiness:5984] Ready to return 1
2017-06-08 01:50:13.330 - __main__ - DEBUG - Dependency "etcd/etcd" is not ready yet, retrying
[liveness:5990] Starting liveness probe at 2017-06-08 01:50:16
[liveness:5990] Startup marker missing, probably probe was executed too early
[liveness:5990] Ready to return 0
2017-06-08 01:50:18.370 - __main__ - DEBUG - Dependency "etcd/etcd" is not ready yet, retrying
[readiness:6013] Starting readiness probe at 2017-06-08 01:50:19
[readiness:6013] Startup marker missing, probably probe was executed too early
[readiness:6013] Ready to return 1

According to the "ccp status" output, the etcd service is ready ("ok").
But the rpc pod keeps logging 'Dependency "etcd/etcd" is not ready yet'.
What should I do to make this etcd dependency report "ready"?
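
One way to narrow this down might be to confirm that etcd itself is healthy and that the rpc container can resolve the etcd service name. The commands below are only a sketch: the endpoint http://etcd:2379 and the v2 etcdctl subcommand are assumptions about how the etcd service is exposed, and the second command assumes Python is available in the rpc image (the entrypoint messages above come from Python):

# kubectl -n ccp exec etcd-0 -- etcdctl --endpoints http://etcd:2379 cluster-health
# kubectl -n ccp exec rpc-1937807526-fgz3j -- python -c "import socket; print(socket.gethostbyname('etcd'))"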

Thanks.

Tags: rpc