Message-ID: <9811b473-e09f-c2aa-cdd8-c71c34fe4707@linux.alibaba.com>
Date:   Mon, 24 Aug 2020 17:59:46 +0800
From:   xunlei <xlpang@...ux.alibaba.com>
To:     Pekka Enberg <penberg@...il.com>,
        Christopher Lameter <cl@...ux.com>
Cc:     Vlastimil Babka <vbabka@...e.cz>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Wen Yang <wenyang@...ux.alibaba.com>,
        Yang Shi <yang.shi@...ux.alibaba.com>,
        Roman Gushchin <guro@...com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
        David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH 1/2] mm/slub: Introduce two counters for the partial
 objects

On 2020/8/20 9:58 PM, Pekka Enberg wrote:
> Hi Christopher,
> 
> On Tue, Aug 11, 2020 at 3:52 PM Christopher Lameter <cl@...ux.com> wrote:
>>
>> On Fri, 7 Aug 2020, Pekka Enberg wrote:
>>
>>> Why do you consider this to be a fast path? This is all partial list
>>> accounting when we allocate/deallocate a slab, no? Just like
>>> ___slab_alloc() says, I assumed this to be the slow path... What am I
>>> missing?
>>
>> I thought these were per-object counters? If you just want to count the
>> number of slabs then you do not need the lock at all. We already have a
>> counter for the number of slabs.
> 
> The patch attempts to speed up count_partial(), which holds on to the
> "n->list_lock" (with IRQs off) for the whole duration it takes to walk
> the partial slab list:
> 
>         spin_lock_irqsave(&n->list_lock, flags);
>         list_for_each_entry(page, &n->partial, slab_list)
>                 x += get_count(page);
>         spin_unlock_irqrestore(&n->list_lock, flags);
> 
> It's counting the number of *objects*, but the counters are only
> updated in bulk when we add/remove a slab to/from the partial list.
> The counter updates are therefore *not* in the fast path, AFAICT.
> 
> Xunlei, please correct me if I'm reading your patches wrong.

Yes, it's all in the slow path.
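
To make it concrete, here is a minimal sketch of the idea (the counter
names below are illustrative, not necessarily the exact ones in the
patch): the counters are only touched where a slab enters or leaves the
partial list, which already runs under n->list_lock, so count_partial()
can become a plain read with no list walk and no IRQ-off window:

        /* Sketch only: the counter fields are hypothetical names. */
        struct kmem_cache_node {
                spinlock_t list_lock;
                unsigned long nr_partial;
                struct list_head partial;
                /* bulk counters, maintained under list_lock */
                atomic_long_t partial_free_objs;
                atomic_long_t partial_total_objs;
        };

        static inline void
        __add_partial(struct kmem_cache_node *n, struct page *page, int tail)
        {
                /* slow path: caller already holds n->list_lock */
                n->nr_partial++;
                if (tail == DEACTIVATE_TO_TAIL)
                        list_add_tail(&page->slab_list, &n->partial);
                else
                        list_add(&page->slab_list, &n->partial);
                /* one bulk update per slab, not per object */
                atomic_long_add(page->objects - page->inuse,
                                &n->partial_free_objs);
                atomic_long_add(page->objects, &n->partial_total_objs);
        }

remove_partial() would do the matching subtraction, and count_partial()
then reduces to an atomic_long_read() of the appropriate counter.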

> 
> On Tue, Aug 11, 2020 at 3:52 PM Christopher Lameter <cl@...ux.com> wrote:
>>> No objections to alternative fixes, of course, but wrapping the
>>> counters under CONFIG_SLUB_DEBUG seems like just hiding the actual
>>> issue...
>>
>> CONFIG_SLUB_DEBUG is on by default. It just compiles in the debug code
>> and disables it, so we can enable it with a kernel boot option. This is
>> because we have had numerous issues in the past with "production"
>> kernels that could not be recompiled with debug options. So just running
>> the prod kernel with another boot option lets you find hard-to-debug
>> issues in a full-scale production deployment with potentially
>> proprietary modules etc.
> 
> Yeah, it's been too long since I last looked at the code, and I did not
> realize even count_partial() is wrapped in CONFIG_SLUB_DEBUG. So by all

Besides CONFIG_SLUB_DEBUG, count_partial() is also wrapped in CONFIG_SYSFS.
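
For reference, the guard around the helper in mm/slub.c looks roughly
like this (quoting the v5.8-era source from memory):

        #if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SYSFS)
        static unsigned long count_partial(struct kmem_cache_node *n,
                                        int (*get_count)(struct page *))
        {
                unsigned long flags;
                unsigned long x = 0;
                struct page *page;

                spin_lock_irqsave(&n->list_lock, flags);
                list_for_each_entry(page, &n->partial, slab_list)
                        x += get_count(page);
                spin_unlock_irqrestore(&n->list_lock, flags);
                return x;
        }
        #endif /* CONFIG_SLUB_DEBUG || CONFIG_SYSFS */

so the counters and their updates can live under the same #if without
losing coverage on either configuration.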

> means, let's also wrap the counters with that too.
> 
> - Pekka
> 
