Log data:
2014-10-24 08:08:54.344 26791 INFO nova.compute.resource_tracker [-] Compute_service record updated for 76jay:76jay
2014-10-24 08:09:12.425 26791 AUDIT nova.compute.manager [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Starting instance...
2014-10-24 08:09:12.574 26791 AUDIT nova.compute.claims [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Attempting claim: memory 64 MB, disk 1 GB
2014-10-24 08:09:12.575 26791 AUDIT nova.compute.claims [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Total memory: 16031 MB, used: 512.00 MB
2014-10-24 08:09:12.576 26791 AUDIT nova.compute.claims [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] memory limit: 80155.00 MB, free: 79643.00 MB
2014-10-24 08:09:12.576 26791 AUDIT nova.compute.claims [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Total disk: 40 GB, used: 0.00 GB
2014-10-24 08:09:12.577 26791 AUDIT nova.compute.claims [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] disk limit not specified, defaulting to unlimited
2014-10-24 08:09:12.594 26791 AUDIT nova.compute.claims [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Claim successful
2014-10-24 08:09:12.870 26791 INFO nova.scheduler.client.report [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] Compute_service record updated for ('76jay', '76jay')
2014-10-24 08:09:13.031 26791 INFO nova.scheduler.client.report [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] Compute_service record updated for ('76jay', '76jay')
2014-10-24 08:09:17.394 26791 INFO nova.scheduler.client.report [-] Compute_service record updated for ('76jay', '76jay')
2014-10-24 08:09:35.610 26791 INFO ncflex.nova.virt.flex.containers [req-ec22c0fb-99d8-432c-ab27-4c807f51b721 None] Starting unprivileged container
2014-10-24 08:09:55.255 26791 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-10-24 08:09:55.365 26791 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 16031, total allocated virtual ram (MB): 576
2014-10-24 08:09:55.365 26791 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 39
2014-10-24 08:09:55.366 26791 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 1, total allocated vcpus: 0
2014-10-24 08:09:55.366 26791 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2014-10-24 08:09:55.409 26791 INFO nova.scheduler.client.report [-] Compute_service record updated for ('76jay', '76jay')
2014-10-24 08:09:55.410 26791 INFO nova.compute.resource_tracker [-] Compute_service record updated for 76jay:76jay
2014-10-24 08:10:30.901 26791 AUDIT nova.compute.manager [req-637fd204-bf4e-4b49-8a73-698e41ece6f2 None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Get console output
2014-10-24 08:10:39.270 26791 WARNING nova.compute.manager [-] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, current DB power_state: 4, current VM power_state: 4
2014-10-24 08:10:39.631 26791 INFO nova.compute.manager [req-a12cc078-fe01-4714-9df3-4cec5a64554b None] [instance: 92fe1ee2-519f-4335-b939-cec133a3f2ca] Instance is already powered off in the hypervisor when stop is called.
2014-10-24 08:10:57.256 26791 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-10-24 08:10:57.358 26791 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 16031, total allocated virtual ram (MB): 576
2014-10-24 08:10:57.359 26791 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 39
2014-10-24 08:10:57.360 26791 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 1, total allocated vcpus: 0
2014-10-24 08:10:57.360 26791 AUDIT nova.compute.resource_tracker [-] PCI stats: []
2014-10-24 08:10:57.394 26791 INFO nova.scheduler.client.report [-] Compute_service record updated for ('76jay', '76jay')
2014-10-24 08:10:57.395 26791 INFO nova.compute.resource_tracker [-] Compute_service record updated for 76jay:76jay
lxc_container 1414138175.973 ERROR lxc_apparmor - lsm/apparmor.c:mount_feature_enabled:54 - Operation not permitted - Error mounting sysfs
lxc_container 1414138175.973 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:186 - If you really want to start this container, set
lxc_container 1414138175.973 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:187 - lxc.aa_allow_incomplete = 1
lxc_container 1414138175.973 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:188 - in your container configuration file
lxc_container 1414138175.975 ERROR lxc_sync - sync.c:__sync_wait:51 - invalid sequence number 1. expected 4
lxc_container 1414138175.975 ERROR lxc_start - start.c:__lxc_start:1087 - failed to spawn '92fe1ee2-519f-4335-b939-cec133a3f2ca'
lxc_container 1414138175.976 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
lxc_container 1414138175.977 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing hugetlb:92fe1ee2-519f-4335-b939-cec133a3f2ca
lxc_container 1414138175.977 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
lxc_container 1414138175.977 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing perf_event:92fe1ee2-519f-4335-b939-cec133a3f2ca
lxc_container 1414138175.977 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
lxc_container 1414138175.977 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing blkio:92fe1ee2-519f-4335-b939-cec133a3f2ca
lxc_container 1414138175.978 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
lxc_container 1414138175.978 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing freezer:92fe1ee2-519f-4335-b939-cec133a3f2ca
lxc_container 1414138175.978 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
lxc_container 1414138175.978 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing devices:92fe1ee2-519f-4335-b939-cec133a3f2ca
lxc_container 1414138175.979 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
lxc_container 1414138175.979 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing memory:92fe1ee2-519f-4335-b939-cec133a3f2ca
lxc_container 1414138175.979 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
lxc_container 1414138175.979 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing cpuacct:92fe1ee2-519f-4335-b939-cec133a3f2ca
lxc_container 1414138175.979 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:503 - call to cgmanager_remove_sync failed: invalid request
lxc_container 1414138175.979 ERROR lxc_cgmanager - cgmanager.c:cgm_remove_cgroup:505 - Error removing cpu:92fe1ee2-519f-4335-b939-cec133a3f2ca
lxc_c...
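Reading the errors above: the lxc_apparmor failures are the actual cause of the spawn failure (the lxc_cgmanager lines are just cleanup noise for a container that never started). LXC could not probe which AppArmor features the kernel supports ("Operation not permitted - Error mounting sysfs"), and by default it refuses to start a container when that check is incomplete. The log itself names the workaround: acknowledge the incomplete AppArmor support in the container's config. A minimal sketch of that change (the config file path is an assumption; the nova LXC driver may keep per-container configs elsewhere):

```
# Assumed path: /var/lib/lxc/<container-name>/config
# Tell LXC it may start even though AppArmor feature detection was incomplete,
# as suggested by the lsm/apparmor.c messages in the log above:
lxc.aa_allow_incomplete = 1
```

This only suppresses the AppArmor feature check; if the underlying "Operation not permitted" comes from running unprivileged without the needed kernel/userns support, the container may still need that fixed separately.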