Message-ID: <20171130140804.74lgpkvmvnzx4dlm@wfg-t540p.sh.intel.com>
Date: Thu, 30 Nov 2017 22:08:04 +0800
From: Fengguang Wu <fengguang.wu@...el.com>
To: Michal Hocko <mhocko@...nel.org>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Mel Gorman <mgorman@...hsingularity.net>,
Johannes Weiner <hannes@...xchg.org>,
linux-kernel@...r.kernel.org, lkp@...org,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Subject: Re: dd: page allocation failure: order:0, mode:0x1080020(GFP_ATOMIC), nodemask=(null)
>>> [ 78.848629] dd: page allocation failure: order:0, mode:0x1080020(GFP_ATOMIC), nodemask=(null)
>>> [ 78.857841] dd cpuset=/ mems_allowed=0-1
>>> [ 78.862502] CPU: 0 PID: 6131 Comm: dd Tainted: G O 4.15.0-rc1 #1
>>> [ 78.870437] Call Trace:
>>> [ 78.873610] <IRQ>
>>> [ 78.876342] dump_stack+0x5c/0x7b
>>> [ 78.880414] warn_alloc+0xbe/0x150
>>> [ 78.884550] __alloc_pages_slowpath+0xda7/0xdf0
>>> [ 78.889822] ? xhci_urb_enqueue+0x23d/0x580
>>> [ 78.894713] __alloc_pages_nodemask+0x269/0x280
>>> [ 78.899891] page_frag_alloc+0x11c/0x150
>>> [ 78.904471] __netdev_alloc_skb+0xa0/0x110
>>> [ 78.909277] rx_submit+0x3b/0x2e0
>>> [ 78.913256] rx_complete+0x196/0x2d0
>>> [ 78.917560] __usb_hcd_giveback_urb+0x86/0x100
>>> [ 78.922681] xhci_giveback_urb_in_irq+0x86/0x100
>>> [ 78.928769] ? ip_rcv+0x261/0x390
>>> [ 78.932739] xhci_td_cleanup+0xe7/0x170
>>> [ 78.937308] handle_tx_event+0x297/0x1190
>>> [ 78.941990] xhci_irq+0x300/0xb80
>>> [ 78.945968] ? pciehp_isr+0x46/0x320
>>> [ 78.950870] __handle_irq_event_percpu+0x3a/0x1a0
>>> [ 78.956311] handle_irq_event_percpu+0x20/0x50
>>> [ 78.961466] handle_irq_event+0x3d/0x60
>>> [ 78.965962] handle_edge_irq+0x71/0x190
>>> [ 78.970480] handle_irq+0xa5/0x100
>>> [ 78.974565] do_IRQ+0x41/0xc0
>>> [ 78.978206] ? pagevec_move_tail_fn+0x350/0x350
>>> [ 78.983412] common_interrupt+0x96/0x96
>>
>>Unfortunately we are missing the most important information, the
>>meminfo. We cannot tell much without it. Maybe collecting /proc/vmstat
>>during the test will tell us more.
>
>Attached are the per-second vmstat records in JSON format.
>They feel more readable than the raw dumps.
And here are the meminfo lines.
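
For anyone wanting to reproduce the collection, a minimal per-second
sampler in this spirit (a sketch only, assuming Python 3 on the test
box; the output file name and the combined vmstat+meminfo record layout
are illustrative, not the actual collector used for the attachments)
could look like:

  #!/usr/bin/env python3
  # Hypothetical sketch: sample /proc/vmstat and /proc/meminfo once
  # per second and append one JSON record per line.
  import json
  import time

  def snapshot(path):
      # Both /proc/vmstat ("name value") and /proc/meminfo
      # ("Name:   value kB") reduce to name -> integer here;
      # the "kB" unit suffix in meminfo is dropped.
      fields = {}
      with open(path) as f:
          for line in f:
              if ':' in line:
                  name, value = line.split(':', 1)
              else:
                  name, value = line.split(None, 1)
              fields[name.strip()] = int(value.split()[0])
      return fields

  with open('proc-samples.json', 'w') as out:
      try:
          while True:
              rec = {'time': time.time(),
                     'vmstat': snapshot('/proc/vmstat'),
                     'meminfo': snapshot('/proc/meminfo')}
              out.write(json.dumps(rec) + '\n')
              out.flush()
              time.sleep(1)
      except KeyboardInterrupt:
          pass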
Thanks,
Fengguang
[Attachment: "meminfo.json.gz", application/gzip, 4256 bytes]