Message-ID: <1e01092b-140d-2bab-aeba-321a74a194ee@linux.com>
Date: Thu, 14 Mar 2024 16:45:04 -0700 (PDT)
From: "Christoph Lameter (Ampere)" <cl@...ux.com>
To: Jianfeng Wang <jianfeng.w.wang@...cle.com>
cc: Vlastimil Babka <vbabka@...e.cz>, 
    Chengming Zhou <chengming.zhou@...ux.dev>, 
    David Rientjes <rientjes@...gle.com>, penberg@...nel.org, 
    iamjoonsoo.kim@....com, akpm@...ux-foundation.org, 
    roman.gushchin@...ux.dev, 42.hyeyoo@...il.com, linux-mm@...ck.org, 
    linux-kernel@...r.kernel.org
Subject: Re: [PATCH] slub: avoid scanning all partial slabs in
 get_slabinfo()

On Wed, 13 Mar 2024, Jianfeng Wang wrote:

> I am not sure that the RCU change will solve the lockup problem. The
> reason is that iterating a super long list of partial slabs is a problem
> in itself: on a non-preemptive kernel, count_partial() can be stuck in
> the loop for a long time, which can cause problems on its own.
>
> Also, even if we check list ownership for each slab, we may spend too
> much time in the loop if no updater shows up, or fail and redo the loop
> many times if several updates happen. The latter would exacerbate the
> lockup issue. So, in the end, reading /proc/slabinfo can take a very
> long time just for a counter that may be changing all the time.
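
For context, the loop in question, count_partial() in mm/slub.c, is
roughly the following; it holds n->list_lock with interrupts disabled
for the entire walk, which is why a huge partial list can stall the CPU:

static unsigned long count_partial(struct kmem_cache_node *n,
				   int (*get_count)(struct slab *))
{
	unsigned long flags;
	unsigned long x = 0;
	struct slab *slab;

	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry(slab, &n->partial, slab_list)
		x += get_count(slab);
	spin_unlock_irqrestore(&n->list_lock, flags);
	return x;
}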

Well, we could also cache the values somehow to avoid the scans, and
invalidate the counter if something significant happens.
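
A rough sketch of that idea; partial_count_cache and partial_count_valid
are hypothetical fields, not in struct kmem_cache_node today:

/* Sketch only: cache the count per node and recompute lazily.
 * Anything that adds/removes a partial slab or changes object counts
 * would clear partial_count_valid. */
static unsigned long count_partial_cached(struct kmem_cache_node *n,
					  int (*get_count)(struct slab *))
{
	unsigned long flags, x;
	struct slab *slab;

	spin_lock_irqsave(&n->list_lock, flags);
	if (!n->partial_count_valid) {
		x = 0;
		list_for_each_entry(slab, &n->partial, slab_list)
			x += get_count(slab);
		n->partial_count_cache = x;
		n->partial_count_valid = true;
	}
	x = n->partial_count_cache;
	spin_unlock_irqrestore(&n->list_lock, flags);
	return x;
}

A real version would need one cached value per get_count() variant, and
invalidation hooks on every allocation/free path, which may cost more
than it saves.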


> Thus, I prefer the "guesstimate" approach, even if the number is
> inaccurate or biased. Let me know if this makes sense.

Come up with a patch and then let's see how well it works.
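
For illustration only, such a guesstimate might scan a bounded prefix of
the list and extrapolate from the average; MAX_SCAN and the scaling here
are made up, not a real patch:

#define MAX_SCAN 10000

static unsigned long count_partial_approx(struct kmem_cache_node *n,
					  int (*get_count)(struct slab *))
{
	unsigned long flags, x = 0, scanned = 0;
	struct slab *slab;

	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry(slab, &n->partial, slab_list) {
		x += get_count(slab);
		if (++scanned >= MAX_SCAN) {
			/* Extrapolate for the unscanned remainder using
			 * the node's total partial slab count. */
			x = x * n->nr_partial / scanned;
			break;
		}
	}
	spin_unlock_irqrestore(&n->list_lock, flags);
	return x;
}

The bias then depends on how representative the head of the list is of
the whole, which is exactly the inaccuracy being traded for bounded time.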

