Message-ID: <f0355157-d70a-893b-5b85-b8cb90e03361@linux.alibaba.com>
Date:   Mon, 24 Aug 2020 18:04:25 +0800
From:   xunlei <xlpang@...ux.alibaba.com>
To:     Pekka Enberg <penberg@...il.com>
Cc:     Vlastimil Babka <vbabka@...e.cz>, Christoph Lameter <cl@...ux.com>,
        Wen Yang <wenyang@...ux.alibaba.com>,
        Roman Gushchin <guro@...com>,
        Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
        David Rientjes <rientjes@...gle.com>,
        LKML <linux-kernel@...r.kernel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH v2 0/3] mm/slub: Fix count_partial() problem

On 2020/8/20 10:02 PM, Pekka Enberg wrote:
> On Mon, Aug 10, 2020 at 3:18 PM Xunlei Pang <xlpang@...ux.alibaba.com> wrote:
>>
>> v1->v2:
>> - Improved changelog and variable naming for PATCH 1~2.
>> - PATCH3 adds per-cpu counter to avoid performance regression
>>   in concurrent __slab_free().
>>
>> [Testing]
>> On my 32-cpu 2-socket physical machine:
>> Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
>> perf stat --null --repeat 10 -- hackbench 20 thread 20000
>>
>> == original, not patched
>>       19.211637055 seconds time elapsed                                          ( +-  0.57% )
>>
>> == patched with patch1~2
>>  Performance counter stats for 'hackbench 20 thread 20000' (10 runs):
>>
>>       21.731833146 seconds time elapsed                                          ( +-  0.17% )
>>
>> == patched with patch1~3
>>  Performance counter stats for 'hackbench 20 thread 20000' (10 runs):
>>
>>       19.112106847 seconds time elapsed                                          ( +-  0.64% )
>>
>>
>> Xunlei Pang (3):
>>   mm/slub: Introduce two counters for partial objects
>>   mm/slub: Get rid of count_partial()
>>   mm/slub: Use percpu partial free counter
>>
>>  mm/slab.h |   2 +
>>  mm/slub.c | 124 +++++++++++++++++++++++++++++++++++++++++++-------------------
>>  2 files changed, 89 insertions(+), 37 deletions(-)
> 
> We probably need to wrap the counters under CONFIG_SLUB_DEBUG because
> AFAICT all the code that uses them is also wrapped under it.

The /sys/kernel/slab/***/partial sysfs file also uses it; I can wrap it
under CONFIG_SLUB_DEBUG or CONFIG_SYSFS for backward compatibility.
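
Something along these lines in mm/slab.h would work as a sketch (abridged to
the SLUB fields; the counter names and types below are illustrative, not
necessarily the ones used in this series):

struct kmem_cache_node {
	spinlock_t		list_lock;
	unsigned long		nr_partial;
	struct list_head	partial;
#if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SYSFS)
	/*
	 * Counters for objects on the partial lists, added by this series
	 * (names here are illustrative). Guarded so kernels built without
	 * slub debugging/sysfs support carry no extra fields.
	 */
	atomic_long_t		partial_free_objs;
	atomic_long_t		partial_total_objs;
#endif
#ifdef CONFIG_SLUB_DEBUG
	atomic_long_t		nr_slabs;
	atomic_long_t		total_objects;
	struct list_head	full;
#endif
};

The update sites in mm/slub.c would need the same guard (or empty inline
helpers) so the hot paths compile away cleanly when the counters are not
built in.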

> 
> An alternative approach for this patch would be to somehow make the
> lock in count_partial() more granular, but I don't know how feasible
> that actually is.
> 
> Anyway, I am OK with this approach:
> 
> Reviewed-by: Pekka Enberg <penberg@...nel.org>

Thanks!

> 
> You still need to convince Christoph, though, because he had
> objections over this approach.

Christoph, what do you think? Or do you have a better suggestion for
addressing this issue we see *in production*?
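
(For anyone skimming the thread: the pain point is count_partial() in
mm/slub.c, which at the time of this series looks roughly like the code
below. Reading /proc/slabinfo or the partial sysfs file walks every page
on each node's partial list while holding n->list_lock, so a long partial
list stalls concurrent allocation/free paths that contend on the same
lock. The series replaces that traversal with counters the allocator keeps
up to date as partial slabs change.)

static unsigned long count_partial(struct kmem_cache_node *n,
				   int (*get_count)(struct page *))
{
	unsigned long flags;
	unsigned long x = 0;
	struct page *page;

	/*
	 * Holds the per-node list_lock for the entire walk; the walk is
	 * O(number of partial slabs), which can be very large in production.
	 */
	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry(page, &n->partial, slab_list)
		x += get_count(page);
	spin_unlock_irqrestore(&n->list_lock, flags);
	return x;
}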

> 
> - Pekka
> 
