Abnormal request id in logs

Bug #1773102 reported by yuanyue
This bug affects 3 people
Affects                     Status          Importance   Assigned to         Milestone
OpenStack Compute (nova)    Fix Released    Medium       Radoslav Gerganov
  Pike                      Fix Committed   Medium       Radoslav Gerganov
  Queens                    Fix Committed   Medium       Radoslav Gerganov

Bug Description

Description
===========
After VM creation, the request id logged for periodic tasks in nova-compute.log changes to the same value as the request id of the VM creation task.

Steps to reproduce
==================

* nova boot xxx

* check the nova-compute.log on the compute node hosting the VM

* search the request id related to VM creation task

Expected result
===============
The request id related to periodic tasks should be different from the request id related to VM creation task.

Actual result
=============
After the VM creation task has been handled, the request id logged for periodic tasks changes to the same value as the request id of the VM creation task.

Environment
===========
1. OpenStack version
OS:
CentOS

nova version:
openstack-nova-compute-17.0.2-1.el7.noarch

2. Hypervisor
   Libvirt + QEMU

3. Storage type
   LVM

4. Which networking type did you use?
   Neutron with Linuxbridge

Logs & Configs
==============
1. Before nova-compute handles the VM creation task:
2018-05-24 03:08:15.264 27469 DEBUG oslo_service.periodic_task [req-c63d0555-7bf1-42da-abb7-556cc9eede8c 809cb6c22acc445c843db1a806d4e817 68b078adaf13420391fdb0fde1608816 - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-05-24 03:08:15.265 27469 DEBUG nova.compute.manager [req-c63d0555-7bf1-42da-abb7-556cc9eede8c 809cb6c22acc445c843db1a806d4e817 68b078adaf13420391fdb0fde1608816 - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7238
2018-05-24 03:08:18.269 27469 DEBUG oslo_service.periodic_task [req-c63d0555-7bf1-42da-abb7-556cc9eede8c 809cb6c22acc445c843db1a806d4e817 68b078adaf13420391fdb0fde1608816 - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215

2. Beginning to handle the VM creation task:
2018-05-24 03:08:26.244 27469 DEBUG oslo_concurrency.lockutils [req-2d5b3957-9749-40ba-9b94-e8260c7145bf 9de813eb53ba4ac982a37df462783d5d 3ce4f026aed1411baa6e8013b13f9257 - default default] Lock "a0ded3b0-0e60-4d82-a516-588871c4917c" acquired by "nova.compute.manager._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-05-24 03:08:26.312 27469 DEBUG oslo_service.periodic_task [req-2d5b3957-9749-40ba-9b94-e8260c7145bf 9de813eb53ba4ac982a37df462783d5d 3ce4f026aed1411baa6e8013b13f9257 - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-05-24 03:08:26.312 27469 DEBUG nova.compute.manager [req-2d5b3957-9749-40ba-9b94-e8260c7145bf 9de813eb53ba4ac982a37df462783d5d 3ce4f026aed1411baa6e8013b13f9257 - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6572
2018-05-24 03:08:26.312 27469 DEBUG nova.compute.manager [req-2d5b3957-9749-40ba-9b94-e8260c7145bf 9de813eb53ba4ac982a37df462783d5d 3ce4f026aed1411baa6e8013b13f9257 - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6576
2018-05-24 03:08:26.334 27469 DEBUG nova.compute.manager [req-2d5b3957-9749-40ba-9b94-e8260c7145bf 9de813eb53ba4ac982a37df462783d5d 3ce4f026aed1411baa6e8013b13f9257 - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6648

3. After handling the VM creation task (the request id logged for the VM creation task has also changed):
2018-05-24 03:08:40.896 27469 INFO nova.compute.manager [req-18b1870e-239a-4adc-9962-40e63fffcda7 809cb6c22acc445c843db1a806d4e817 68b078adaf13420391fdb0fde1608816 - default default] [instance: a0ded3b0-0e60-4d82-a516-588871c4917c] Took 14.34 seconds to build instance.
2018-05-24 03:08:41.278 27469 DEBUG oslo_concurrency.lockutils [req-18b1870e-239a-4adc-9962-40e63fffcda7 809cb6c22acc445c843db1a806d4e817 68b078adaf13420391fdb0fde1608816 - default default] Lock "a0ded3b0-0e60-4d82-a516-588871c4917c" released by "nova.compute.manager._locked_do_build_and_run_instance" :: held 15.033s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-05-24 03:08:42.267 27469 DEBUG oslo_service.periodic_task [req-18b1870e-239a-4adc-9962-40e63fffcda7 809cb6c22acc445c843db1a806d4e817 68b078adaf13420391fdb0fde1608816 - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-05-24 03:09:04.264 27469 DEBUG oslo_service.periodic_task [req-18b1870e-239a-4adc-9962-40e63fffcda7 809cb6c22acc445c843db1a806d4e817 68b078adaf13420391fdb0fde1608816 - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215

jichenjc (jichenjc) wrote :

The impact looks fairly limited, because the DEBUG oslo_service.periodic_task prefix is good enough for an operator to know they can ignore the req-xxxx.

Changed in nova:
assignee: nobody → jichenjc (jichenjc)
importance: Undecided → Low
yuanyue (yyuanyuee) wrote :

It can be confusing when a log message is generated by the execution of a periodic task but does not carry the DEBUG oslo_service.periodic_task prefix. An example follows; it is not easy to tell whether the log generated by update_available_resource belongs to a periodic task or to the VM creation task.

In older versions such as Ocata, the request id could be used to associate the logs generated by a common session or subtask. But in the latest version (Queens), it is hard to tell what the request id in the logs even refers to.

Example:
2018-05-28 02:11:07.879 27469 DEBUG oslo_concurrency.lockutils [req-4b8a9a90-99f2-4e60-a303-87ef130b9b58 9de813eb53ba4ac982a37df462783d5d 3ce4f026aed1411baa6e8013b13f9257 - default default] Lock "435f1602-7828-40d6-a003-274a52d3594c" acquired by "nova.compute.manager._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-05-28 02:11:08.151 27469 DEBUG nova.compute.manager [req-4b8a9a90-99f2-4e60-a303-87ef130b9b58 9de813eb53ba4ac982a37df462783d5d 3ce4f026aed1411baa6e8013b13f9257 - default default] [instance: 435f1602-7828-40d6-a003-274a52d3594c] Starting instance... _do_build_and_run_instance /usr/lib/python2.7/site-packages/nova/compute/manager.py:1813
2018-05-28 02:11:08.260 27469 DEBUG oslo_service.periodic_task [req-4b8a9a90-99f2-4e60-a303-87ef130b9b58 9de813eb53ba4ac982a37df462783d5d 3ce4f026aed1411baa6e8013b13f9257 - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-05-28 02:11:08.264 27469 DEBUG oslo_service.periodic_task [req-4b8a9a90-99f2-4e60-a303-87ef130b9b58 9de813eb53ba4ac982a37df462783d5d 3ce4f026aed1411baa6e8013b13f9257 - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-05-28 02:11:08.317 27469 DEBUG nova.compute.resource_tracker [req-4b8a9a90-99f2-4e60-a303-87ef130b9b58 9de813eb53ba4ac982a37df462783d5d 3ce4f026aed1411baa6e8013b13f9257 - default default] Auditing locally available compute resources for com (node: com) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:663
2018-05-28 02:11:08.590 27469 DEBUG nova.compute.resource_tracker [req-4b8a9a90-99f2-4e60-a303-87ef130b9b58 9de813eb53ba4ac982a37df462783d5d 3ce4f026aed1411baa6e8013b13f9257 - default default] Hypervisor/Node resource view: name=com free_ram=1617MB free_disk=41GB free_vcpus=4 pci_devices=[...] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:802
2018-05-28 02:11:08.591 27469 DEBUG oslo_concurrency.lockutils [req-4b8a9a90-99f2-4e60-a303-87ef130b9b58 9de813eb53ba4ac982a37df462783d5d 3ce4f026aed1411baa6e8013b13f9257 - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273

yuanyue (yyuanyuee) wrote :

It is especially confusing for periodic tasks that share common function calls with operation tasks like VM creation. For example, the periodic task _heal_instance_info_cache shares the _get_instance_nw_info call with VM creation. It is sometimes hard to tell whether logs generated by _get_instance_nw_info belong to the VM creation task.

Of course there are always alternatives for solving the problem, such as using other identifiers. However, I do not understand why the useful request id functionality was abandoned in the Queens release of OpenStack.

Rikimaru Honjo (honjo-rikimaru-c6) wrote :

I think this is not only a problem for periodic task logs.

I called multiple APIs at the same time.
As a result, their request ids were mixed up in the nova-compute log.
This makes the logs very hard to analyze!

[Example]
- My Environment
-- master branch
-- Cinder LVM driver

There were 4 instances and 4 attached volumes.
I called 4 volume-detach APIs for these volumes at the same time.

$ nova volume-detach 54a7a9d3-71f8-4b5a-bd8a-454a47786aef 0e168698-a54a-4201-b597-9161345b7200 & nova volume-detach 03f75b9f-b3b7-4dbf-9a39-a55a6616f480 7bb7e273-5917-4168-a3db-977660fa4fa3 & nova volume-detach e6374280-d488-46fe-a950-2eedda8f8b9e d9c5bb3e-e5b2-4509-8e68-67e8a6af8d92 & nova volume-detach b5279341-5bb2-4616-9406-39f49fc94a91 ea571852-d40d-49fc-a85c-ba96aa56db25 &

As a result, the request ids for all of these requests had the same value in the nova-compute log!

The following logs are an example of the mixed request ids.
nova-compute ran "blockdev --flushbufs" during the volume detach process,
and /dev/sda, sdb, sdc and sdd were the iSCSI devices for the respective volumes.
But the request ids logged for these blockdev runs all had the same value!

May 29 01:47:58 honjo-devstack-2 nova-compute[16200]: DEBUG oslo_concurrency.processutils [None req-3350a0d7-fb01-49cf-823a-62146aff52df demo admin] CMD "blockdev --flushbufs /dev/sdb" returned: 0 in 0.017s {{(pid=26272) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:409}}
[...]
May 29 01:47:59 honjo-devstack-2 nova-compute[16200]: DEBUG oslo_concurrency.processutils [None req-3350a0d7-fb01-49cf-823a-62146aff52df demo admin] CMD "blockdev --flushbufs /dev/sdd" returned: 0 in 0.023s {{(pid=26272) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:409}}
[...]
May 29 01:47:59 honjo-devstack-2 nova-compute[16200]: DEBUG oslo_concurrency.processutils [None req-3350a0d7-fb01-49cf-823a-62146aff52df demo admin] CMD "blockdev --flushbufs /dev/sda" returned: 0 in 0.025s {{(pid=26272) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:409}}
[...]
May 29 01:48:00 honjo-devstack-2 nova-compute[16200]: DEBUG oslo_concurrency.processutils [None req-3350a0d7-fb01-49cf-823a-62146aff52df demo admin] CMD "blockdev --flushbufs /dev/sdc" returned: 0 in 0.028s {{(pid=26272) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:409}}


Rikimaru Honjo (honjo-rikimaru-c6) wrote :

I think that this bug is similar to the following bug.

Cinder Backup is incorrectly logging request ids
https://bugs.launchpad.net/cinder/+bug/1743461

jichenjc (jichenjc) wrote :

I had a patch here, but I am not sure it can fix the problem:
https://review.openstack.org/#/c/570707/1

Also, I noticed Rikimaru mentioned an additional impact, so I am removing myself as assignee; once I find a better solution I will assign myself again.

Changed in nova:
assignee: jichenjc (jichenjc) → nobody
jichenjc (jichenjc)
Changed in nova:
importance: Low → Undecided
jichenjc (jichenjc) wrote :

I don't know whether comment #4 is valid. Have you tried replacing the & with ; in the command, so the detaches run sequentially?
This request id is actually controlled by oslo_middleware.request_id, which is not nova specific.

nova volume-detach 54a7a9d3-71f8-4b5a-bd8a-454a47786aef 0e168698-a54a-4201-b597-9161345b7200 & nova volume-detach 03f75b9f-b3b7-4dbf-9a39-a55a6616f480 7bb7e273-5917-4168-a3db-977660fa4fa3 & nova volume-detach e6374280-d488-46fe-a950-2eedda8f8b9e d9c5bb3e-e5b2-4509-8e68-67e8a6af8d92 & nova volume-detach b5279341-5bb2-4616-9406-39f49fc94a91 ea571852-d40d-49fc-a85c-ba96aa56db25 &

Ben Nemec (bnemec) wrote :

First, it is not clear to me that these two issues are related. I would suggest opening a separate bug against cinder/oslo.log to track the other problem being discussed here.

For the periodic task case, I believe this is a result of the thread-local context being used by default[1]. This means that if a thread processes a user request, its thread-local context will have that request id. Then, if that same thread processes a periodic task that doesn't specify its own context, oslo.log looks up the thread-local context and you get the same request id as the previous user request.

I think https://review.openstack.org/#/c/570707/1 is on the right track. If you don't want periodic tasks to use the thread-local context then you need to explicitly pass a context with a different request id. Alternatively you could use update_store[2] to change the thread-local context at the start of a periodic task so you don't have to remember to pass the context to the log call every time. That's probably less error-prone.

1: https://github.com/openstack/oslo.log/blob/89bbb3fb79ecfa45ed41afee86900131b3e37dc6/oslo_log/formatters.py#L81
2: https://github.com/openstack/oslo.context/blob/f0ad977c1369640ee9e8c9c4586d6dde47312d19/oslo_context/context.py#L297
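
For illustration, a minimal sketch of the update_store() alternative described above, assuming a hypothetical periodic task function (run_periodic_task is made up; nova_context.get_admin_context() and RequestContext.update_store() are existing nova / oslo.context APIs):

  from nova import context as nova_context

  def run_periodic_task():
      # Build a fresh admin context with its own request_id.
      ctxt = nova_context.get_admin_context()
      # Rebind the thread-local store to this context so that log calls
      # made from this greenthread report the periodic task's request_id
      # instead of one left over from a previous user request.
      ctxt.update_store()
      # ... do the periodic work with ctxt ...
      return ctxt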

melanie witt (melwitt) wrote :

I have seen this behavior too, so marking as valid.

I do wonder if cells v2 cell targeting is somehow causing this -- each time we do a "target_cell" call, a new RequestContext is created by calling "from_dict()" [1], which will overwrite the thread local context. Maybe what we need to do is carry over the request_id from the original RequestContext when we create the new one, i.e.:

  cctxt = RequestContext.from_dict(context.to_dict())
  cctxt.request_id = context.request_id
  ...

[1] https://github.com/openstack/nova/blob/da16690/nova/context.py#L405

Changed in nova:
importance: Undecided → Medium
status: New → Confirmed
melanie witt (melwitt) wrote :

I think I spoke too soon with my suggestion; the oslo.context to_dict/from_dict methods take care of carrying over the request_id:

  https://github.com/openstack/oslo.context/blob/b1ba490/oslo_context/context.py#L402

and besides that, as Ben said, oslo.log will either use the thread-local context's request_id or the one explicitly passed to it in the log method call.

So maybe what we need to do is stop overwriting the thread local context when we create a new context during target_cell? I can give that a try and see what happens.

Doug Hellmann (doug-hellmann) wrote :

It sounds like the periodic tasks that use a RequestContext should construct it with overwrite=False to avoid storing the value in the thread local storage.
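
For example (a minimal sketch using oslo.context directly; periodic_ctxt is just an illustrative name):

  from oslo_context import context

  # overwrite=False keeps this context out of the thread-local store, so
  # creating it cannot clobber the request_id that log calls elsewhere on
  # this thread (e.g. for an in-flight user request) are relying on.
  periodic_ctxt = context.RequestContext(overwrite=False)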

OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/582332

Changed in nova:
assignee: nobody → melanie witt (melwitt)
status: Confirmed → In Progress
melanie witt (melwitt) wrote :

Doug, we already do use overwrite=False for periodic tasks (by way of the get_admin_context() method) _but_ we are not using overwrite=False when we create a new RequestContext during any cell database targeting (we use RequestContext to convey which database connection to use). So what I'm trying out now is passing overwrite=False when constructing the RequestContext in the target_cell method.

melanie witt (melwitt) wrote :

Just to close the loop on this, I was mistaken about cell targeting affecting changes in logged request_ids. When cell targeting occurs, it's true the thread local context is overwritten, but it will be overwritten by another context with the same request_id. So it shouldn't result in any changing of logged request_id.

Changed in nova:
assignee: melanie witt (melwitt) → nobody
status: In Progress → Confirmed
Matt Riedemann (mriedem) wrote :

Bug 1784666 has the proper triage information on this problem; it's an import ordering issue.

Changed in nova:
status: Confirmed → Triaged
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/587754

Changed in nova:
assignee: nobody → John Smith (wang-zengzhi)
status: Triaged → In Progress
John Smith (wang-zengzhi) wrote :

Yes, it's an import ordering issue.

In bug #1510234 we started importing oslo.service in nova/__init__.py before monkey patching threading, to fix "heartbeats stop when time is changed". oslo.service imports oslo.log, which in turn imports oslo.context.
With this import order, when oslo.context imports threading and defines _request_store (where the context is stored) as a threading.local, it gets the non-monkey-patched version, so the store is shared among all greenthreads running in the same Python thread.

I tried to fix this in my patch, but I am not sure whether the approach is suitable.

To review: https://review.openstack.org/587754
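
A small standalone sketch of the failure mode described above (illustrative only, not nova's actual import chain): when the store is created from the unpatched threading module, every greenthread in the same native thread shares it.

  import threading

  # Simulates oslo.context being imported before monkey patching: the
  # request store is a plain threading.local created before the patch.
  _request_store = threading.local()

  import eventlet
  eventlet.monkey_patch()

  def worker(name):
      # All greenthreads run in one native thread, so with a plain
      # threading.local they all read and overwrite the same attribute.
      _request_store.request_id = name
      eventlet.sleep(0)  # yield to the other greenthreads
      print('%s sees %s' % (name, _request_store.request_id))

  threads = [eventlet.spawn(worker, 'req-%d' % i) for i in range(3)]
  for t in threads:
      t.wait()
  # Every worker prints the last request_id written ('req-2'). Create the
  # store after monkey_patch() (or reload the module) and each greenthread
  # sees its own value instead.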

OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/587772

Changed in nova:
assignee: John Smith (wang-zengzhi) → Radoslav Gerganov (rgerganov)
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/587772
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=233ea582f7b58b3188bfa523264e9052eefd00e5
Submitter: Zuul
Branch: master

commit 233ea582f7b58b3188bfa523264e9052eefd00e5
Author: Radoslav Gerganov <email address hidden>
Date: Wed Aug 1 13:54:31 2018 +0300

    Reload oslo_context after calling monkey_patch()

    oslo.context is storing a global thread-local variable which keeps the
    request context for the current thread. If oslo.context is imported
    before calling monkey_patch(), then this thread-local won't be green and
    instead of having one request per green thread, we will have one request
    object which will be overwritten every time when a new context is
    created.

    To workaround the problem, always reload oslo_context.context after
    calling monkey_patch() to make sure it uses green thread locals.

    Change-Id: Id059e5576c3fc78dd893fde15c963e182f1157f6
    Closes-Bug: #1773102

Changed in nova:
status: In Progress → Fix Released
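
The merged change works along these lines; a minimal standalone sketch of the approach from the commit message (the exact placement and reload helper used in nova may differ):

  import oslo_context.context   # may already be pulled in indirectly

  import eventlet
  eventlet.monkey_patch()

  # oslo_context.context created its thread-local request store at import
  # time, before the patch, so that store is shared by every greenthread
  # in a native thread. Reloading the module after monkey_patch() recreates
  # the store on top of eventlet's green thread locals.
  from six.moves import reload_module
  reload_module(oslo_context.context)
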
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/queens)

Fix proposed to branch: stable/queens
Review: https://review.openstack.org/589249

OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/pike)

Fix proposed to branch: stable/pike
Review: https://review.openstack.org/589251

OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/queens)

Reviewed: https://review.openstack.org/589249
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=410eca71833615a95614c6416e6580e3d72e92b9
Submitter: Zuul
Branch: stable/queens

commit 410eca71833615a95614c6416e6580e3d72e92b9
Author: Radoslav Gerganov <email address hidden>
Date: Wed Aug 1 13:54:31 2018 +0300

    Reload oslo_context after calling monkey_patch()

    oslo.context is storing a global thread-local variable which keeps the
    request context for the current thread. If oslo.context is imported
    before calling monkey_patch(), then this thread-local won't be green and
    instead of having one request per green thread, we will have one request
    object which will be overwritten every time when a new context is
    created.

    To workaround the problem, always reload oslo_context.context after
    calling monkey_patch() to make sure it uses green thread locals.

    Change-Id: Id059e5576c3fc78dd893fde15c963e182f1157f6
    Closes-Bug: #1773102
    (cherry picked from commit 233ea582f7b58b3188bfa523264e9052eefd00e5)

OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/pike)

Reviewed: https://review.openstack.org/589251
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=bee3e963ef4c55d51f82cf0b9f4a6fb9f9940bc9
Submitter: Zuul
Branch: stable/pike

commit bee3e963ef4c55d51f82cf0b9f4a6fb9f9940bc9
Author: Radoslav Gerganov <email address hidden>
Date: Wed Aug 1 13:54:31 2018 +0300

    Reload oslo_context after calling monkey_patch()

    oslo.context is storing a global thread-local variable which keeps the
    request context for the current thread. If oslo.context is imported
    before calling monkey_patch(), then this thread-local won't be green and
    instead of having one request per green thread, we will have one request
    object which will be overwritten every time when a new context is
    created.

    To workaround the problem, always reload oslo_context.context after
    calling monkey_patch() to make sure it uses green thread locals.

    Change-Id: Id059e5576c3fc78dd893fde15c963e182f1157f6
    Closes-Bug: #1773102
    (cherry picked from commit 233ea582f7b58b3188bfa523264e9052eefd00e5)

OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova 18.0.0.0rc1

This issue was fixed in the openstack/nova 18.0.0.0rc1 release candidate.

OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (master)

Change abandoned by Zengzhi Wang (<email address hidden>) on branch: master
Review: https://review.openstack.org/587754
Reason: https://review.openstack.org/#/c/587772/
solve the problem.

OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova 16.1.5

This issue was fixed in the openstack/nova 16.1.5 release.

OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/nova 17.0.6

This issue was fixed in the openstack/nova 17.0.6 release.

OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (master)

Reviewed: https://review.opendev.org/582332
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=67c761cb2c32d130bcb4bcb1c97ac65e828f6c80
Submitter: Zuul
Branch: master

commit 67c761cb2c32d130bcb4bcb1c97ac65e828f6c80
Author: melanie witt <email address hidden>
Date: Thu Jul 12 18:27:59 2018 +0000

    Don't overwrite greenthread-local context in host manager

    There are a couple of places in host manager where we create a blank
    RequestContext for internal database operations without passing
    overwrite=False, so they will overwrite the greenthread-local context
    that will be used for logging request_id, replacing them with newly
    generated request_ids.

    This changes the RequestContext creations to get_admin_context calls,
    (which uses overwrite=False) to be more explicit about the admin-ness
    of the internal database operations we're doing in these methods.

    Related-Bug: #1773102

    Change-Id: I752eb677d9ccc5ec65147380efe4067456fa312b
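
In code terms, the change described above amounts to something like this (illustrative, not the actual diff):

  from nova import context as nova_context

  # Before: a bare RequestContext() defaults to overwrite=True, so creating
  # it replaces the greenthread-local context whose request_id is used for
  # logging.
  ctxt = nova_context.RequestContext()

  # After: get_admin_context() passes overwrite=False, so the internal
  # database work gets an admin context without disturbing the request_id
  # of whatever request this greenthread is currently logging for.
  ctxt = nova_context.get_admin_context()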
