Date:   Mon, 17 Aug 2020 10:28:49 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     Uladzislau Rezki <urezki@...il.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
        linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Matthew Wilcox <willy@...radead.org>,
        "Theodore Y . Ts'o" <tytso@....edu>,
        Joel Fernandes <joel@...lfernandes.org>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: Re: [RFC-PATCH 1/2] mm: Add __GFP_NO_LOCKS flag

On Mon 17-08-20 00:56:55, Uladzislau Rezki wrote:
[...]
> Michal asked me to provide some data on how many pages we need and how the
> "lockless allocation" behaves in terms of success vs. failure scenarios.
> 
> Please see some results below. The test case is a tight loop of 1,000,000 allocations
> doing kmalloc() and kfree_rcu():

It would be nice to cover some more realistic workloads as well.

> sudo ./test_vmalloc.sh run_test_mask=2048 single_cpu_test=1
> 
> <snip>
>  for (i = 0; i < 1000000; i++) {
>   p = kmalloc(sizeof(*p), GFP_KERNEL);
>   if (!p)
>    return -1;
> 
>   p->array[0] = 'a';
>   kvfree_rcu(p, rcu);
>  }
> <snip>
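
For context: kvfree_rcu(p, rcu) expects the object being freed to embed a
struct rcu_head whose field name is passed as the second argument. The test
structure itself is not shown in the snippet, so the layout below is only an
assumed sketch of what the loop above might be exercising:

<snip>
#include <linux/rcupdate.h>

/* Assumed layout only -- the actual test structure is not shown above. */
struct test_obj {
	char array[8];		/* written via p->array[0] = 'a' */
	struct rcu_head rcu;	/* named in kvfree_rcu(p, rcu) */
};
<snip>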
> 
> wget ftp://vps418301.ovh.net/incoming/1000000_kmalloc_kfree_rcu_proc_percpu_pagelist_fractio_is_0.png

If I understand this correctly, this means that failures happen very
often because pcp pages are not recycled quickly enough.

> wget ftp://vps418301.ovh.net/incoming/1000000_kmalloc_kfree_rcu_proc_percpu_pagelist_fractio_is_8.png

Keeping 1/8 of the memory in pcp lists is quite a lot and likely not a
configuration that is used very often.

Both these numbers just make me think that a dedicated pool of pages
pre-allocated for RCU specifically might be a better solution. I still
haven't read through that branch of the email thread, though, so there
might be some pretty convincing arguments not to do that.

> Also I would like to underline that the kfree_rcu() reclaim logic can be
> improved further by making the drain logic more time-efficient, thus reducing
> the footprint and, as a result, the number of required pages.
> 
> --
> Vlad Rezki

-- 
Michal Hocko
SUSE Labs
