[Hyper-V] Dynamic Memory HotAdd memory demand increases very fast to maximum
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| linux-azure (Ubuntu) | Confirmed | High | Unassigned | |
Bug Description
Issue description: Memory demand increases very quickly up to the maximum available memory, call traces show up in /var/log/syslog, and the VM becomes unresponsive while the memory is being consumed. It becomes responsive again right after stress-ng finishes, which suggests the issue is in the Dynamic Memory feature.
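The call traces referenced above can be pulled out of the log with a simple grep; a minimal sketch, assuming the default Ubuntu syslog location:
grep -B 2 -A 5 'Call Trace' /var/log/syslog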
Platform: host independent
Distribution name and release:
- Bionic (4.15.0-10-generic)
- Linux Azure kernel (4.13.0-1012-azure)
Repro rate: 100%
VM configuration:
Kernel: 4.13.0-1012-azure
CPU: 8 cores
RAM: 2048 MB
Enable Dynamic Memory: Yes
Minimum RAM: 512 MB
Maximum RAM: 8192 MB
Memory buffer: 20%
Memory weight: 100%
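Before reproducing, it can be worth confirming in the guest that the Hyper-V balloon driver, hv_balloon, which implements the Dynamic Memory ballooning and hot-add paths, is actually present. A minimal sketch; note that hv_balloon may be built into the kernel rather than loaded as a module, in which case only the dmesg check shows output:
# hv_balloon may be a loadable module or built in; check both ways
lsmod | grep hv_balloon
dmesg | grep -i hv_balloon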
Repro steps:
1. Start the VM with the above configuration.
2. Run stress-ng with the following parameters (16 vm workers writing 256 MB each, roughly 4 GB in total, for 200 seconds):
stress-ng -m 16 --vm-bytes 256M -t 200 --backoff 10000000
3. In less than 60 seconds the entire memory is consumed; a monitoring sketch for watching this is included after these steps.
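To watch the demand ramp while stress-ng runs, a simple loop in a second session is enough. This is an illustrative monitoring sketch, not part of the original repro:
# Run in a second shell while stress-ng executes: print guest-visible
# memory and the latest balloon/hot-add messages every 5 seconds.
while true; do
    free -m
    dmesg | grep -i hv_balloon | tail -n 2
    sleep 5
done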
Additional info:
1. We also tested this on Xenial with the default kernel and the issue does not occur.
2. The call trace messages from syslog can be seen below; they show the OOM killer being invoked from a shmem page-fault allocation path while stress-ng is running.
Mar 14 08:42:17 xenial kernel: [ 257.120261] Call Trace:
Mar 14 08:42:17 xenial kernel: [ 257.120267] dump_stack+
Mar 14 08:42:17 xenial kernel: [ 257.120271] dump_header+
Mar 14 08:42:17 xenial kernel: [ 257.120276] ? security_
Mar 14 08:42:17 xenial kernel: [ 257.120277] oom_kill_
Mar 14 08:42:17 xenial kernel: [ 257.120279] out_of_
Mar 14 08:42:17 xenial kernel: [ 257.120282] __alloc_
Mar 14 08:42:17 xenial kernel: [ 257.120284] __alloc_
Mar 14 08:42:17 xenial kernel: [ 257.120289] alloc_pages_
Mar 14 08:42:17 xenial kernel: [ 257.120292] shmem_alloc_
Mar 14 08:42:17 xenial kernel: [ 257.120295] ? __radix_
Mar 14 08:42:17 xenial kernel: [ 257.120297] shmem_alloc_
Mar 14 08:42:17 xenial kernel: [ 257.120298] ? find_get_
Mar 14 08:42:17 xenial kernel: [ 257.120300] shmem_getpage_
Mar 14 08:42:17 xenial kernel: [ 257.120303] shmem_fault+
Mar 14 08:42:17 xenial kernel: [ 257.120306] ? file_update_
Mar 14 08:42:17 xenial kernel: [ 257.120309] __do_fault+
Mar 14 08:42:17 xenial kernel: [ 257.120311] __handle_
Mar 14 08:42:17 xenial kernel: [ 257.120314] ? set_next_
Mar 14 08:42:17 xenial kernel: [ 257.120316] handle_
Mar 14 08:42:17 xenial kernel: [ 257.120319] __do_page_
Mar 14 08:42:17 xenial kernel: [ 257.120320] do_page_
Mar 14 08:42:17 xenial kernel: [ 257.120323] ? page_fault+
Mar 14 08:42:17 xenial kernel: [ 257.120325] page_fault+
Mar 14 08:42:17 xenial kernel: [ 257.120327] RIP: 0033:0x4493ad
Mar 14 08:42:17 xenial kernel: [ 257.120328] RSP: 002b:00007ffd3f
Mar 14 08:42:17 xenial kernel: [ 257.120329] RAX: 00000000c6a6a9e7 RBX: 00000001a8ae9d07 RCX: 0000000021b83c00
Mar 14 08:42:17 xenial kernel: [ 257.120330] RDX: 00007f2ad911d000 RSI: 00000000271ea9e7 RDI: 00000000271e9660
Mar 14 08:42:17 xenial kernel: [ 257.120331] RBP: 00007f2adab70000 R08: 000000001bda3103 R09: 0000000048373eca
Mar 14 08:42:17 xenial kernel: [ 257.120331] R10: 0000000048373eca R11: 00007f2acab70000 R12: 0000000000000000
Mar 14 08:42:17 xenial kernel: [ 257.120332] R13: 8309310348373eca R14: 56b63c1fe6166568 R15: 00007f2acab70000
Mar 14 08:42:17 xenial kernel: [ 257.120333] Mem-Info:
Mar 14 08:42:17 xenial kernel: [ 257.120337] active_anon:249563 inactive_anon:83607 isolated_anon:65
Mar 14 08:42:17 xenial kernel: [ 257.120337] active_file:54 inactive_file:74 isolated_file:0
Mar 14 08:42:17 xenial kernel: [ 257.120337] unevictable:2561 dirty:0 writeback:68 unstable:0
Mar 14 08:42:17 xenial kernel: [ 257.120337] slab_reclaimabl
Mar 14 08:42:17 xenial kernel: [ 257.120337] mapped:132260 shmem:332303 pagetables:2888 bounce:0
Mar 14 08:42:17 xenial kernel: [ 257.120337] free:12983 free_pcp:493 free_cma:0
Mar 14 08:42:17 xenial kernel: [ 257.120340] Node 0 active_
Mar 14 08:42:17 xenial kernel: [ 257.120341] Node 0 DMA free:7404kB min:392kB low:488kB high:584kB active_anon:5616kB inactive_
Mar 14 08:42:17 xenial kernel: [ 257.120345] lowmem_reserve[]: 0 1754 1754 1754 1754
Mar 14 08:42:17 xenial kernel: [ 257.120347] Node 0 DMA32 free:44528kB min:44660kB low:55824kB high:66988kB active_
Mar 14 08:42:17 xenial kernel: [ 257.120351] lowmem_reserve[]: 0 0 0 0 0
Mar 14 08:42:17 xenial kernel: [ 257.120353] Node 0 DMA: 1*4kB (U) 9*8kB (UME) 10*16kB (UME) 4*32kB (UM) 4*64kB (UM) 5*128kB (UME) 4*256kB (UM) 2*512kB (ME) 2*1024kB (ME) 1*2048kB (E) 0*4096kB = 7404kB
Mar 14 08:42:17 xenial kernel: [ 257.120363] Node 0 DMA32: 12*4kB (UME) 175*8kB (UE) 138*16kB (UE) 91*32kB (UM) 27*64kB (UM) 49*128kB (UM) 19*256kB (UM) 8*512kB (M) 21*1024kB (M) 0*2048kB 0*4096kB = 45032kB
summary:
- [Hyper-V] Dynamic Memory HotAdd memory demand increases very fast to maximum and logs show call trace messages
+ [Hyper-V] Dynamic Memory HotAdd memory demand increases very fast to maximum
Changed in linux (Ubuntu):
status: Incomplete → Confirmed
Changed in linux (Ubuntu):
importance: Undecided → High
affects: linux (Ubuntu) → linux-azure (Ubuntu)
tags: added: kernel-da-key kernel-hyper-v
description: updated
This bug is missing log files that will aid in diagnosing the problem. While running an Ubuntu kernel (not a mainline or third-party kernel) please enter the following command in a terminal window:
apport-collect 1756129
and then change the status of the bug to 'Confirmed'.
If, due to the nature of the issue you have encountered, you are unable to run this command, please add a comment stating that fact and change the bug status to 'Confirmed'.
This change has been made by an automated script, maintained by the Ubuntu Kernel Team.