Message-ID: <87k2ovyl4y.fsf@yhuang-dev.intel.com>
Date:	Thu, 03 Dec 2015 16:46:53 +0800
From:	"Huang, Ying" <ying.huang@...ux.intel.com>
To:	Mel Gorman <mgorman@...hsingularity.net>
Cc:	Michal Hocko <mhocko@...nel.org>, lkp@...org,
	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Rik van Riel <riel@...hat.com>,
	Vitaly Wool <vitalywool@...il.com>,
	David Rientjes <rientjes@...gle.com>,
	Christoph Lameter <cl@...ux.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Vlastimil Babka <vbabka@...e.cz>,
	Will Deacon <will.deacon@....com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [lkp] [mm, page_alloc] d0164adc89: -100.0% fsmark.app_overhead

Mel Gorman <mgorman@...hsingularity.net> writes:

> On Wed, Dec 02, 2015 at 03:15:29PM +0100, Michal Hocko wrote:
>> > > I didn't mention this allocation failure because I am not sure it is
>> > > really related.
>> > > 
>> > 
>> > I'm fairly sure it is. The failure is at an allocation site that cannot
>> > sleep but did not specify __GFP_HIGH.
>> 
>> Yeah, but this was the case even before your patch. Since the caller used
>> GFP_ATOMIC, it got __GFP_ATOMIC after your patch, so it still qualified
>> for ALLOC_HARDER. I would agree if this were an explicit GFP_NOWAIT.
>> Unless I am missing something, your patch hasn't changed the behavior for
>> this particular allocation.
>> 
>
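
(For reference, the flag relationships being discussed, sketched from the
4.4-era include/linux/gfp.h and mm/page_alloc.c; this is a simplified
paraphrase, not the literal source:)

    /* GFP_ATOMIC implies __GFP_ATOMIC and a kswapd wakeup, on top of the
     * __GFP_HIGH watermark access: */
    #define GFP_ATOMIC  (__GFP_HIGH | __GFP_ATOMIC | __GFP_KSWAPD_RECLAIM)

    /* gfp_to_alloc_flags(): it is __GFP_ATOMIC, not __GFP_HIGH alone,
     * that grants ALLOC_HARDER treatment: */
    if (gfp_mask & __GFP_ATOMIC) {
            if (!(gfp_mask & __GFP_NOMEMALLOC))
                    alloc_flags |= ALLOC_HARDER;
    }
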
> You're right. I think it's this hunk that is the problem.
>
> @@ -1186,7 +1186,7 @@ static struct request *blk_mq_map_request(struct request_queue *q,
>                 ctx = blk_mq_get_ctx(q);
>                 hctx = q->mq_ops->map_queue(q, ctx->cpu);
>                 blk_mq_set_alloc_data(&alloc_data, q,
> -                               __GFP_WAIT|GFP_ATOMIC, false, ctx, hctx);
> +                               __GFP_WAIT|__GFP_HIGH, false, ctx, hctx);
>                 rq = __blk_mq_alloc_request(&alloc_data, rw);
>                 ctx = alloc_data.ctx;
>                 hctx = alloc_data.hctx;
>
> At this point in the series, this specific path no longer wakes kswapd when
> it should. A series of allocations there could hit the watermarks without
> ever waking kswapd, and then be followed by an atomic allocation that fails;
> it is only that failing allocation that finally wakes kswapd.
>
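
(After d0164adc89, the allocator slowpath only wakes kswapd when the caller's
mask carries __GFP_KSWAPD_RECLAIM; roughly, paraphrasing 4.4's
mm/page_alloc.c:)

    if (gfp_mask & __GFP_KSWAPD_RECLAIM)
            wake_all_kswapds(order, ac);

(The old mask, __GFP_WAIT|GFP_ATOMIC, picked that bit up via GFP_ATOMIC; the
new mask, __GFP_WAIT|__GFP_HIGH, carries neither that bit nor anything else
that would wake kswapd.)
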
> This bug is fixed later by commit 71baba4b92dc ("mm, page_alloc: rename
> __GFP_WAIT to __GFP_RECLAIM"), so it's not a bug in the current kernel.
> However, it breaks bisection and would be caught if each individual commit
> were tested.
>
> Your __GFP_HIGH patch is still fine although not the direct fix for this
> specific problem. Commit 71baba4b92dc is.
>
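
(That rename closes the hole because callers of __GFP_WAIT are converted to
__GFP_RECLAIM, which carries the kswapd bit again; from 4.4's
include/linux/gfp.h, quoted from memory:)

    #define __GFP_RECLAIM \
            ((__force gfp_t)(___GFP_DIRECT_RECLAIM | ___GFP_KSWAPD_RECLAIM))
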
> Ying, do the page allocation failure messages still happen when the whole
> series is applied? I.e., is 4.4-rc3 OK?

The page allocation failures happen with 4.4-rc3 too. dmesg is attached.

[   54.970840] kworker/u4:0: page allocation failure: order:0, mode:0x2204000
[   54.973161] CPU: 1 PID: 6 Comm: kworker/u4:0 Not tainted 4.4.0-rc3 #1
[   54.975179] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[   54.980562] Workqueue: writeback wb_workfn (flush-253:0)
[   54.982289]  0000000000000000 ffff880035ac7098 ffffffff81410ad2 0000000002204000
[   54.985438]  ffff880035ac7120 ffffffff811724cb ffff880037586b50 0000000000000040
[   54.988746]  0000000000000000 0000000000000040 ffffffff00ac70e0 ffffffff00000000
[   54.991669] Call Trace:
[   54.992754]  [<ffffffff81410ad2>] dump_stack+0x4b/0x69
[   54.994433]  [<ffffffff811724cb>] warn_alloc_failed+0xdb/0x140
[   54.996134]  [<ffffffff811762b9>] __alloc_pages_nodemask+0x949/0xb40
[   54.997846]  [<ffffffff811be2c2>] alloc_pages_current+0x92/0x120
[   54.999494]  [<ffffffff811c8bfc>] new_slab+0x3dc/0x480
[   55.001007]  [<ffffffff811c9421>] ___slab_alloc+0x351/0x440
[   55.002588]  [<ffffffff814e7fcd>] ? alloc_indirect+0x1d/0x50
[   55.004268]  [<ffffffff8132270f>] ? xfs_btree_insrec+0x47f/0x700
[   55.005926]  [<ffffffff813206e9>] ? xfs_btree_delrec+0x5a9/0x1380
[   55.007592]  [<ffffffff814e7fcd>] ? alloc_indirect+0x1d/0x50
[   55.009281]  [<ffffffff811c9530>] __slab_alloc+0x20/0x40
[   55.010827]  [<ffffffff811ca03d>] __kmalloc+0x20d/0x260
[   55.012367]  [<ffffffff814e7fcd>] alloc_indirect+0x1d/0x50
[   55.014040]  [<ffffffff814e82cc>] virtqueue_add_sgs+0x2cc/0x3a0
[   55.015683]  [<ffffffff8157b240>] __virtblk_add_req+0xb0/0x1f0
[   55.017300]  [<ffffffff81307944>] ? xfs_alloc_update_counters+0x44/0x50
[   55.019759]  [<ffffffff813ec524>] ? blk_rq_map_sg+0x1f4/0x510
[   55.021371]  [<ffffffff8157b492>] virtio_queue_rq+0x112/0x280
[   55.022978]  [<ffffffff813f0647>] __blk_mq_run_hw_queue+0x1d7/0x370
[   55.024674]  [<ffffffff813f044f>] blk_mq_run_hw_queue+0x9f/0xc0
[   55.026307]  [<ffffffff813f196a>] blk_mq_insert_requests+0xfa/0x1a0
[   55.028003]  [<ffffffff813f26b3>] blk_mq_flush_plug_list+0x123/0x140
[   55.029710]  [<ffffffff813e7c77>] blk_flush_plug_list+0xa7/0x200
[   55.031363]  [<ffffffff813f1bec>] blk_sq_make_request+0x1dc/0x3b0
[   55.033025]  [<ffffffff813e5b0c>] ? generic_make_request_checks+0x24c/0x530
[   55.034829]  [<ffffffff813e6341>] generic_make_request+0xf1/0x1e0
[   55.036508]  [<ffffffff813e6497>] submit_bio+0x67/0x150
[   55.038034]  [<ffffffff8133cd61>] xfs_submit_ioend_bio+0x31/0x40
[   55.039797]  [<ffffffff8133ce51>] xfs_submit_ioend+0xe1/0x120
[   55.041410]  [<ffffffff8133e183>] xfs_vm_writepage+0x2a3/0x630
[   55.043038]  [<ffffffff811771a3>] __writepage+0x13/0x30
[   55.044568]  [<ffffffff811794bc>] write_cache_pages+0x22c/0x4c0
[   55.046214]  [<ffffffff81177190>] ? wb_position_ratio+0x1f0/0x1f0
[   55.047888]  [<ffffffff81179793>] generic_writepages+0x43/0x60
[   55.049520]  [<ffffffff8133d54e>] xfs_vm_writepages+0x4e/0x60
[   55.051130]  [<ffffffff8117a49e>] do_writepages+0x1e/0x30
[   55.052694]  [<ffffffff81216b05>] __writeback_single_inode+0x45/0x2e0
[   55.054410]  [<ffffffff812172e1>] writeback_sb_inodes+0x271/0x4d0
[   55.056075]  [<ffffffff812175c9>] __writeback_inodes_wb+0x89/0xc0
[   55.057749]  [<ffffffff812178ea>] wb_writeback+0x23a/0x2c0
[   55.059312]  [<ffffffff8121806f>] wb_workfn+0x20f/0x390
[   55.060846]  [<ffffffff81091257>] process_one_work+0x157/0x420
[   55.062466]  [<ffffffff81091eb9>] worker_thread+0x69/0x4a0
[   55.064030]  [<ffffffff81091e50>] ? rescuer_thread+0x380/0x380
[   55.065655]  [<ffffffff8109741f>] kthread+0xef/0x110
[   55.067141]  [<ffffffff81097330>] ? kthread_park+0x60/0x60
[   55.068713]  [<ffffffff818c5f8f>] ret_from_fork+0x3f/0x70
[   55.070255]  [<ffffffff81097330>] ? kthread_park+0x60/0x60
[   55.071830] Mem-Info:
[   55.072889] active_anon:5024 inactive_anon:1642 isolated_anon:0
[   55.072889]  active_file:22984 inactive_file:106547 isolated_file:25
[   55.072889]  unevictable:93684 dirty:7732 writeback:13026 unstable:0
[   55.072889]  slab_reclaimable:13082 slab_unreclaimable:3604
[   55.072889]  mapped:2866 shmem:1687 pagetables:876 bounce:0
[   55.072889]  free:1931 free_pcp:45 free_cma:0
[   55.081979] Node 0 DMA free:3948kB min:60kB low:72kB high:88kB active_anon:328kB inactive_anon:124kB active_file:1680kB inactive_file:1704kB unevictable:6936kB isolated(anon):0kB isolated(file):0kB present:15992kB managed:15908kB mlocked:0kB dirty:52kB writeback:76kB mapped:440kB shmem:124kB slab_reclaimable:472kB slab_unreclaimable:232kB kernel_stack:80kB pagetables:84kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[   55.093512] lowmem_reserve[]: 0 972 972 972
[   55.095362] Node 0 DMA32 free:3776kB min:3824kB low:4780kB high:5736kB active_anon:19768kB inactive_anon:6444kB active_file:90256kB inactive_file:424484kB unevictable:367800kB isolated(anon):0kB isolated(file):100kB present:1032064kB managed:997492kB mlocked:0kB dirty:30876kB writeback:52028kB mapped:11024kB shmem:6624kB slab_reclaimable:51856kB slab_unreclaimable:14184kB kernel_stack:2864kB pagetables:3420kB unstable:0kB bounce:0kB free_pcp:180kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:100 all_unreclaimable? no
[   55.107443] lowmem_reserve[]: 0 0 0 0
[   55.109213] Node 0 DMA: 4*4kB (U) 3*8kB (UE) 2*16kB (UM) 1*32kB (E) 0*64kB 2*128kB (UM) 2*256kB (UM) 2*512kB (ME) 2*1024kB (ME) 0*2048kB 0*4096kB = 3944kB
[   55.114800] Node 0 DMA32: 90*4kB (UE) 13*8kB (UE) 10*16kB (UME) 7*32kB (UME) 46*64kB (ME) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3792kB
[   55.120027] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[   55.122668] 224906 total pagecache pages
[   55.123990] 0 pages in swap cache
[   55.125207] Swap cache stats: add 0, delete 0, find 0/0
[   55.126738] Free swap  = 0kB
[   55.127890] Total swap = 0kB
[   55.129039] 262014 pages RAM
[   55.130190] 0 pages HighMem/MovableOnly
[   55.131495] 8664 pages reserved
[   55.132690] 0 pages hwpoisoned
[   55.133868] SLUB: Unable to allocate memory on node -1 (gfp=0x2080000)
[   55.135607]   cache: kmalloc-2048, object size: 2048, buffer size: 2048, default order: 3, min order: 0
[   55.138454]   node 0: slabs: 22, objs: 352, free: 0

Best Regards,
Huang, Ying

Download attachment "dmesg.xz" of type "application/x-xz" (15032 bytes)
