Message-ID: <87vaklyqwq.fsf@linux.intel.com>
Date: Thu, 14 Sep 2017 13:19:17 -0700
From: Andi Kleen <ak@...ux.intel.com>
To: Tariq Toukan <tariqt@...lanox.com>
Cc: David Miller <davem@...emloft.net>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Eric Dumazet <eric.dumazet@...il.com>,
Alexei Starovoitov <ast@...com>,
Saeed Mahameed <saeedm@...lanox.com>,
Eran Ben Elisha <eranbe@...lanox.com>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>, linux-mm <linux-mm@...ck.org>
Subject: Re: Page allocator bottleneck

Tariq Toukan <tariqt@...lanox.com> writes:
>
> Congestion in this case is very clear.
> When monitored in perf top:
> 85.58% [kernel] [k] queued_spin_lock_slowpath

Please look at the callers. Spinlock profiles without callers
are usually useless because it's just blaming the messenger.
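For example (system-wide sampling for ~10 seconds is just an
illustration, adjust to your workload), something like:

    perf record -g -a -- sleep 10   # sample all CPUs with call graphs
    perf report                     # expand queued_spin_lock_slowpath to see its callers

or an interactive perf top -g, will show who is actually hammering
the lock.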
Most likely the PCP lists are too small for your extreme allocation
rate, so the allocator falls back to the shared zone pool (and its lock)
too often.

You can play with the vm.percpu_pagelist_fraction setting.
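As a sketch (8 is the documented minimum for that sysctl, which gives
the largest per-cpu lists; pick whatever suits your machine):

    cat /proc/sys/vm/percpu_pagelist_fraction   # 0 means the kernel-computed default
    sysctl -w vm.percpu_pagelist_fraction=8     # allow each per-cpu list up to 1/8 of the zone

Lower fractions mean larger per-cpu lists, so fewer trips back to the
zone lock.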

-Andi