Message-ID: <35a0b75a-f348-d21c-4ff4-fadba0c4db02@huawei.com>
Date:   Tue, 3 Aug 2021 22:24:39 +0800
From:   Kefeng Wang <wangkefeng.wang@...wei.com>
To:     Shakeel Butt <shakeelb@...gle.com>
CC:     Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        "David Rientjes" <rientjes@...gle.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        "Michal Hocko" <mhocko@...e.com>, Roman Gushchin <guro@...com>,
        Wang Hai <wanghai38@...wei.com>,
        Muchun Song <songmuchun@...edance.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] slub: fix unreclaimable slab stat for bulk free


On 2021/7/29 22:03, Shakeel Butt wrote:
> On Wed, Jul 28, 2021 at 11:52 PM Kefeng Wang <wangkefeng.wang@...wei.com> wrote:
>>
>> On 2021/7/28 23:53, Shakeel Butt wrote:
>>> SLUB uses the page allocator for higher-order allocations and updates
>>> the unreclaimable slab stat for such allocations. At the moment, the
>>> bulk free path for SLUB does not share code with the normal free path
>>> for these types of allocations and has missed the stat update. So, fix
>>> the stat update by using common code. The user-visible impact of the
>>> bug is potentially inconsistent unreclaimable slab stats visible
>>> through meminfo and vmstat.
>>>
>>> Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
>>> Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
>>> ---
>>>    mm/slub.c | 22 ++++++++++++----------
>>>    1 file changed, 12 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index 6dad2b6fda6f..03770291aa6b 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -3238,6 +3238,16 @@ struct detached_freelist {
>>>        struct kmem_cache *s;
>>>    };
>>>
>>> +static inline void free_nonslab_page(struct page *page)
>>> +{
>>> +     unsigned int order = compound_order(page);
>>> +
>>> +     VM_BUG_ON_PAGE(!PageCompound(page), page);
>> Could we add WARN_ON here? Otherwise we get nothing when CONFIG_DEBUG_VM
>> is disabled.
> I don't have a strong opinion on this. Please send a patch with
> reasoning if you want WARN_ON_ONCE here.

Ok, we hit a BUG_ON(!PageCompound(page)) in kfree() twice on the 4.4 LTS 
kernel, and we are still debugging it.

It's difficult to analyze because we have no vmcore, and the problem can't 
be reproduced.

A WARN_ON() here would help us notice the issue.

Also, is there any experience or known fix/way to debug this kind of 
issue? Memory corruption?

Any suggestion will be appreciated, thanks.
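[Editorial note: not part of the thread, but one commonly used approach for
this kind of suspected slab corruption is SLUB's built-in debugging. A
sketch of the usual knobs, assuming a kernel built with CONFIG_SLUB_DEBUG;
the cache name below is only an example.]

```shell
# Enable full SLUB debugging (sanity checks, red zones, poisoning,
# alloc/free user tracking) via the kernel command line:
#   slub_debug=FZPU
# or restrict it to the kmalloc caches to reduce the overhead:
#   slub_debug=FZPU,kmalloc-*

# At runtime, ask SLUB to validate a cache's objects; errors go to dmesg:
echo 1 > /sys/kernel/slab/kmalloc-4096/validate
dmesg | tail
```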



