Date:   Fri, 15 Sep 2017 11:23:20 +0100
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Tariq Toukan <tariqt@...lanox.com>
Cc:     David Miller <davem@...emloft.net>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Eric Dumazet <eric.dumazet@...il.com>,
        Alexei Starovoitov <ast@...com>,
        Saeed Mahameed <saeedm@...lanox.com>,
        Eran Ben Elisha <eranbe@...lanox.com>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Michal Hocko <mhocko@...e.com>, linux-mm <linux-mm@...ck.org>
Subject: Re: Page allocator bottleneck

On Thu, Sep 14, 2017 at 07:49:31PM +0300, Tariq Toukan wrote:
> Insights:
> Major degradation between #1 and #2, not getting anywhere close to line rate!
> Degradation is fixed between #2 and #3.
> This is because the page allocator cannot sustain the higher allocation rate.
> In #2, we also see that the addition of rings (cores) reduces BW (!!), as a
> result of increasing congestion over shared resources.
> 

Unfortunately, no surprises there. 

> Congestion in this case is very clear.
> When monitored in perf top:
> 85.58% [kernel] [k] queued_spin_lock_slowpath
> 

While it's not proven, the most likely candidate is the zone lock, and
that should be confirmed with a call-graph profile. If so, then the
suggestion to tune the size of the per-cpu allocator would mitigate
the problem.
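To illustrate why tuning the per-cpu allocator size helps, here is a
minimal, hypothetical sketch (plain Python, not kernel code): allocations
are served from a per-CPU cache, and only cache refills take the shared
zone lock, so a larger refill batch means proportionally fewer trips
through the contended lock. All names here are illustrative.

```python
# Hypothetical model of per-cpu pagesets in front of a locked zone.
# Not kernel code; names are invented for illustration.
import threading

class Zone:
    """Shared page source guarded by a single lock (stand-in for zone->lock)."""
    def __init__(self):
        self.lock = threading.Lock()
        self.lock_acquisitions = 0

    def alloc_batch(self, n):
        with self.lock:
            self.lock_acquisitions += 1
            return [object() for _ in range(n)]

class PerCpuCache:
    """Stand-in for a per-cpu pageset: refills 'batch' pages at a time."""
    def __init__(self, zone, batch):
        self.zone, self.batch, self.pages = zone, batch, []

    def alloc_page(self):
        if not self.pages:                  # cache empty -> hit the zone lock
            self.pages = self.zone.alloc_batch(self.batch)
        return self.pages.pop()

def lock_traffic(batch, allocations=10_000):
    zone = Zone()
    cache = PerCpuCache(zone, batch)
    for _ in range(allocations):
        cache.alloc_page()
    return zone.lock_acquisitions

print(lock_traffic(batch=1))    # 10000: every allocation takes the shared lock
print(lock_traffic(batch=32))   # 313: the lock is taken ~1/32 as often
```

The same arithmetic is why a high allocation rate from many RX rings
hammers the zone lock when the per-cpu caches are too small for the rate.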

> I think that page allocator issues should be discussed separately:
> 1) Rate: Increase the allocation rate on a single core.
> 2) Scalability: Reduce congestion and sync overhead between cores.
> 
> This is clearly the current bottleneck in the network stack receive flow.
> 
> I know about some efforts that were made in the past two years.
> For example the ones from Jesper et al.:
> - Page-pool (not accepted AFAIK).

Indeed not, and it would also need driver conversions.
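For context, the page-pool idea being referenced is a driver-owned recycle
list that keeps freed RX pages out of the page allocator entirely. A
hypothetical sketch of the concept (names invented, not the proposed kernel
API):

```python
# Hypothetical sketch of page-pool recycling; not the kernel API.
class PagePool:
    def __init__(self, alloc_fn, capacity=256):
        self.alloc_fn = alloc_fn        # falls back to the real allocator
        self.capacity = capacity
        self.cache = []
        self.allocator_calls = 0

    def get_page(self):
        if self.cache:
            return self.cache.pop()     # recycled: no allocator round trip
        self.allocator_calls += 1
        return self.alloc_fn()

    def put_page(self, page):
        if len(self.cache) < self.capacity:
            self.cache.append(page)     # keep for reuse instead of freeing

pool = PagePool(alloc_fn=object)
for _ in range(1000):                   # steady-state RX: alloc then free
    page = pool.get_page()
    pool.put_page(page)
print(pool.allocator_calls)             # 1: every later page was recycled
```

In steady state the allocator is hit only to seed the pool, which is why
it sidesteps the zone-lock congestion above, but only for converted drivers.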

> - Page-allocation bulking.

Prototypes exist, but they are pointless without the page pool or the
driver conversions, so this is on the back burner for the moment.
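The win from bulking is easiest to see from the driver's side: refilling
an RX ring page-by-page pays the fixed allocator entry cost once per page,
while a bulk call pays it once per refill. A hypothetical sketch (function
names invented for illustration):

```python
# Hypothetical comparison of per-page vs bulk allocation overhead.
# The counter stands in for fixed per-call allocator entry costs.
def alloc_page(overhead_counter):
    overhead_counter[0] += 1            # one entry into the allocator
    return object()

def alloc_pages_bulk(n, overhead_counter):
    overhead_counter[0] += 1            # one entry covers all n pages
    return [object() for _ in range(n)]

RING_SIZE = 512

per_page = [0]
ring = [alloc_page(per_page) for _ in range(RING_SIZE)]

bulk = [0]
ring = alloc_pages_bulk(RING_SIZE, bulk)

print(per_page[0], bulk[0])             # 512 vs 1 allocator entries
```

This is also why the bulk API only pays off once drivers are converted to
ask for pages in batches rather than one at a time.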

> - Optimize order-0 allocations in Per-Cpu-Pages.
> 

This had a prototype that was reverted because the allocator must be able
to cope with both irq and non-irq contexts. Unfortunately I never found
the time to revisit it, but splitting the handling of the two contexts
there would mitigate the problem. It probably would not be enough to
actually reach line rate on its own, so tuning the per-cpu allocator
sizes would still be needed. I don't know when I'll get the chance to
revisit it; I'm travelling all next week and am mostly occupied with
other work at the moment that is consuming all my concentration.

> I am not an mm expert, but wanted to raise the issue again, to combine the
> efforts and hear from you guys about status and possible directions.

The recent effort to reduce the overhead from stats will help mitigate the
problem. Finishing the page pool and the bulk allocator, and converting
drivers to use them, would be the most likely successful path forward, but
that work is currently stalled as everyone previously involved is too busy.

-- 
Mel Gorman
SUSE Labs
