Message-ID: <87h9k4kzcv.fsf@yhuang-dev.intel.com>
Date: Mon, 30 Nov 2015 10:14:24 +0800
From: "Huang\, Ying" <ying.huang@...ux.intel.com>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: lkp@...org, LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>,
Vitaly Wool <vitalywool@...il.com>,
David Rientjes <rientjes@...gle.com>,
Christoph Lameter <cl@...ux.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [lkp] [mm, page_alloc] d0164adc89: -100.0% fsmark.app_overhead
Mel Gorman <mgorman@...hsingularity.net> writes:
> On Fri, Nov 27, 2015 at 09:14:52AM +0800, Huang, Ying wrote:
>> Hi, Mel,
>>
>> Mel Gorman <mgorman@...hsingularity.net> writes:
>>
>> > On Thu, Nov 26, 2015 at 08:56:12AM +0800, kernel test robot wrote:
>> >> FYI, we noticed the below changes on
>> >>
>> >> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
>> >> commit d0164adc89f6bb374d304ffcc375c6d2652fe67d ("mm, page_alloc:
>> >> distinguish between being unable to sleep, unwilling to sleep and
>> >> avoiding waking kswapd")
>> >>
>> >> Note: the testing machine is a virtual machine with only 1G memory.
>> >>
>> >
>> > I'm not actually seeing any problem here. Is this a positive report or
>> > am I missing something obvious?
>>
>> Sorry, the email subject was generated automatically and I forgot to
>> change it to something meaningful before sending it out. From the test
>> results, we found that the commit increases the OOM probability from
>> 0% to 100% on this machine with its small memory. I have also added
>> the proc-vmstat data to help diagnose it.
>>
>
> There is no reference to OOM possibility in the email that I can see. Can
> you give examples of the OOM messages that shows the problem sites? It was
> suspected that there may be some callers that were accidentally depending
> on access to emergency reserves. If so, either they need to be fixed (if
> the case is extremely rare) or a small reserve will have to be created
> for callers that are not high priority but still cannot reclaim.
>
> Note that I'm travelling a lot over the next two weeks so I'll be slow to
> respond but I will get to it.
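If I understand the change correctly, the distinction it draws is
roughly the following. This is a simplified sketch of the
gfp_to_alloc_flags() check after the commit, paraphrased from
mm/page_alloc.c rather than quoted, so please double-check it against
the real code:

/*
 * Before d0164adc89, any caller that could not sleep (no __GFP_WAIT)
 * was treated as atomic and allowed to dip below the min watermark.
 * After it, only callers that pass __GFP_ATOMIC get ALLOC_HARDER; a
 * !__GFP_DIRECT_RECLAIM caller without __GFP_ATOMIC now fails at the
 * min watermark instead of using the reserves.
 */
static int gfp_to_alloc_flags_sketch(gfp_t gfp_mask)
{
	int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;

	if (gfp_mask & __GFP_HIGH)
		alloc_flags |= ALLOC_HIGH;

	/* was: if (!(gfp_mask & __GFP_WAIT)) alloc_flags |= ALLOC_HARDER; */
	if ((gfp_mask & __GFP_ATOMIC) && !(gfp_mask & __GFP_NOMEMALLOC))
		alloc_flags |= ALLOC_HARDER;

	return alloc_flags;
}

So a caller that used to get the implicit atomic treatment now has to
pass __GFP_ATOMIC explicitly to reach the reserves.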
Here is the kernel log; the full dmesg is attached too. The OOM
occurs during the fsmark test.
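For reference, the failing allocation's mode 0x2200000 decodes, by my
reading of include/linux/gfp.h at this commit (worth double-checking),
as:

	0x2200000
	  = 0x2000000	(___GFP_KSWAPD_RECLAIM)
	  | 0x0200000	(___GFP_NOTRACK)

That is, a GFP_NOWAIT-style allocation: it may wake kswapd, but it has
neither __GFP_DIRECT_RECLAIM nor __GFP_ATOMIC, so after d0164adc89 it
can neither enter direct reclaim nor dip into the atomic reserves.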
Best Regards,
Huang, Ying
[ 31.453514] kworker/u4:0: page allocation failure: order:0, mode:0x2200000
[ 31.463570] CPU: 0 PID: 6 Comm: kworker/u4:0 Not tainted 4.3.0-08056-gd0164ad #1
[ 31.466115] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 31.477146] Workqueue: writeback wb_workfn (flush-253:0)
[ 31.481450] 0000000000000000 ffff880035ac75e8 ffffffff8140a142 0000000002200000
[ 31.492582] ffff880035ac7670 ffffffff8117117b ffff880037586b28 ffff880000000040
[ 31.507631] ffff88003523b270 0000000000000040 ffff880035abc800 ffffffff00000000
[ 31.510568] Call Trace:
[ 31.511828] [<ffffffff8140a142>] dump_stack+0x4b/0x69
[ 31.513391] [<ffffffff8117117b>] warn_alloc_failed+0xdb/0x140
[ 31.523163] [<ffffffff81174ec4>] __alloc_pages_nodemask+0x874/0xa60
[ 31.524949] [<ffffffff811bcb62>] alloc_pages_current+0x92/0x120
[ 31.526659] [<ffffffff811c73e4>] new_slab+0x3d4/0x480
[ 31.536134] [<ffffffff811c7c36>] __slab_alloc+0x376/0x470
[ 31.537541] [<ffffffff814e0ced>] ? alloc_indirect+0x1d/0x50
[ 31.543268] [<ffffffff81338221>] ? xfs_submit_ioend_bio+0x31/0x40
[ 31.545104] [<ffffffff814e0ced>] ? alloc_indirect+0x1d/0x50
[ 31.546982] [<ffffffff811c8e8d>] __kmalloc+0x20d/0x260
[ 31.548334] [<ffffffff814e0ced>] alloc_indirect+0x1d/0x50
[ 31.549805] [<ffffffff814e0fec>] virtqueue_add_sgs+0x2cc/0x3a0
[ 31.555396] [<ffffffff81573a30>] __virtblk_add_req+0xb0/0x1f0
[ 31.556846] [<ffffffff8117a121>] ? pagevec_lookup_tag+0x21/0x30
[ 31.558318] [<ffffffff813e5d72>] ? blk_rq_map_sg+0x1e2/0x4f0
[ 31.563880] [<ffffffff81573c82>] virtio_queue_rq+0x112/0x280
[ 31.565307] [<ffffffff813e9de7>] __blk_mq_run_hw_queue+0x1d7/0x370
[ 31.571005] [<ffffffff813e9bef>] blk_mq_run_hw_queue+0x9f/0xc0
[ 31.572472] [<ffffffff813eb10a>] blk_mq_insert_requests+0xfa/0x1a0
[ 31.573982] [<ffffffff813ebdb3>] blk_mq_flush_plug_list+0x123/0x140
[ 31.583686] [<ffffffff813e1777>] blk_flush_plug_list+0xa7/0x200
[ 31.585138] [<ffffffff813e1c49>] blk_finish_plug+0x29/0x40
[ 31.586542] [<ffffffff81215f85>] wb_writeback+0x185/0x2c0
[ 31.592429] [<ffffffff812166a5>] wb_workfn+0xf5/0x390
[ 31.594037] [<ffffffff81091297>] process_one_work+0x157/0x420
[ 31.599804] [<ffffffff81091ef9>] worker_thread+0x69/0x4a0
[ 31.601484] [<ffffffff81091e90>] ? rescuer_thread+0x380/0x380
[ 31.611368] [<ffffffff8109746f>] kthread+0xef/0x110
[ 31.612953] [<ffffffff81097380>] ? kthread_park+0x60/0x60
[ 31.619418] [<ffffffff818bce8f>] ret_from_fork+0x3f/0x70
[ 31.621221] [<ffffffff81097380>] ? kthread_park+0x60/0x60
[ 31.635226] Mem-Info:
[ 31.636569] active_anon:4942 inactive_anon:1643 isolated_anon:0
[ 31.636569] active_file:23196 inactive_file:110131 isolated_file:251
[ 31.636569] unevictable:92329 dirty:2865 writeback:1925 unstable:0
[ 31.636569] slab_reclaimable:10588 slab_unreclaimable:3390
[ 31.636569] mapped:2848 shmem:1687 pagetables:876 bounce:0
[ 31.636569] free:1932 free_pcp:218 free_cma:0
[ 31.667096] Node 0 DMA free:3948kB min:60kB low:72kB high:88kB active_anon:264kB inactive_anon:128kB active_file:1544kB inactive_file:5296kB unevictable:3136kB isolated(anon):0kB isolated(file):236kB present:15992kB managed:15908kB mlocked:0kB dirty:0kB writeback:0kB mapped:440kB shmem:128kB slab_reclaimable:588kB slab_unreclaimable:304kB kernel_stack:112kB pagetables:80kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:3376 all_unreclaimable? no
[ 31.708140] lowmem_reserve[]: 0 972 972 972
[ 31.710104] Node 0 DMA32 free:3780kB min:3824kB low:4780kB high:5736kB active_anon:19504kB inactive_anon:6444kB active_file:91240kB inactive_file:435228kB unevictable:366180kB isolated(anon):0kB isolated(file):768kB present:1032064kB managed:997532kB mlocked:0kB dirty:11460kB writeback:7700kB mapped:10952kB shmem:6620kB slab_reclaimable:41764kB slab_unreclaimable:13256kB kernel_stack:2752kB pagetables:3424kB unstable:0kB bounce:0kB free_pcp:872kB local_pcp:232kB free_cma:0kB writeback_tmp:0kB pages_scanned:140404 all_unreclaimable? no
[ 31.743737] lowmem_reserve[]: 0 0 0 0
[ 31.745320] Node 0 DMA: 7*4kB (UME) 2*8kB (UM) 2*16kB (ME) 1*32kB (E) 0*64kB 2*128kB (ME) 2*256kB (ME) 2*512kB (UM) 2*1024kB (ME) 0*2048kB 0*4096kB = 3948kB
[ 31.757513] Node 0 DMA32: 1*4kB (U) 0*8kB 4*16kB (UME) 3*32kB (UE) 3*64kB (UM) 1*128kB (U) 1*256kB (U) 0*512kB 3*1024kB (UME) 0*2048kB 0*4096kB = 3812kB
[ 31.766470] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 31.772953] 227608 total pagecache pages
[ 31.774127] 0 pages in swap cache
[ 31.775428] Swap cache stats: add 0, delete 0, find 0/0
[ 31.776785] Free swap = 0kB
[ 31.777799] Total swap = 0kB
[ 31.779569] 262014 pages RAM
[ 31.780584] 0 pages HighMem/MovableOnly
[ 31.781744] 8654 pages reserved
[ 31.790944] 0 pages hwpoisoned
[ 31.792008] SLUB: Unable to allocate memory on node -1 (gfp=0x2080000)
[ 31.793537] cache: kmalloc-128, object size: 128, buffer size: 128, default order: 0, min order: 0
[ 31.796088] node 0: slabs: 27, objs: 864, free: 0
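(For what it's worth: the SLUB failure above reports gfp=0x2080000,
which by the same reading decodes to __GFP_ATOMIC |
__GFP_KSWAPD_RECLAIM, i.e. GFP_ATOMIC with __GFP_HIGH cleared. If I
read drivers/virtio/virtio_ring.c correctly, alloc_indirect() masks
__GFP_HIGH out of the gfp it is passed, which would explain that
combination, though I have not confirmed it is the actual culprit.)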
Attachment: dmesg.xz (application/x-xz, 15176 bytes)