Message-ID: <2794de3c-673b-45ea-8897-df1ada9c6717@kernel.org>
Date: Fri, 19 Jul 2024 17:07:01 +0200
From: Jesper Dangaard Brouer <hawk@...nel.org>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: tj@...nel.org, cgroups@...r.kernel.org, shakeel.butt@...ux.dev,
 hannes@...xchg.org, lizefan.x@...edance.com, longman@...hat.com,
 kernel-team@...udflare.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V7 1/2] cgroup/rstat: Avoid thundering herd problem by
 kswapd across NUMA nodes



On 17/07/2024 02.30, Yosry Ahmed wrote:
> On Thu, Jul 11, 2024 at 6:28 AM Jesper Dangaard Brouer <hawk@...nel.org> wrote:
>>
>> Avoid lock contention on the global cgroup rstat lock caused by kswapd
>> starting on all NUMA nodes simultaneously. At Cloudflare, we observed
>> massive issues due to kswapd and the specific mem_cgroup_flush_stats()
>> call inlined in shrink_node, which takes the rstat lock.
>>
>> On our 12 NUMA node machines, each with a kswapd kthread per NUMA node,
>> we noted severe lock contention on the rstat lock. This contention
>> causes 12 CPUs to waste cycles spinning every time kswapd runs.
>> Fleet-wide stats (/proc/N/schedstat) for kthreads revealed that we are
>> burning an average of 20,000 CPU cores on kswapd, primarily due to
>> spinning on the rstat lock.
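
(Aside, for anyone wanting to reproduce the measurement: /proc/<pid>/schedstat
exposes three fields, nanoseconds on CPU, nanoseconds waiting on a runqueue,
and a timeslice count. A minimal userspace C reader sketch follows; the pid
is an example value, not anything specific:

    /* schedstat_read.c: sample one kthread's /proc/<pid>/schedstat */
    #include <stdio.h>

    int main(void)
    {
            unsigned long long oncpu_ns, wait_ns, slices;
            FILE *f = fopen("/proc/1234/schedstat", "r"); /* example pid */

            if (!f || fscanf(f, "%llu %llu %llu",
                             &oncpu_ns, &wait_ns, &slices) != 3) {
                    perror("schedstat");
                    return 1;
            }
            printf("on-cpu %.2fs runq-wait %.2fs slices %llu\n",
                   oncpu_ns / 1e9, wait_ns / 1e9, slices);
            fclose(f);
            return 0;
    }

Aggregating these fields across all kswapd pids is the kind of fleet-wide
accounting described above.)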
>>
>> To help reviewers follow the code: __alloc_pages_slowpath calls
>> wake_all_kswapds, causing all kswapdN threads to wake up simultaneously.
>> Each kswapd thread invokes shrink_node (via balance_pgdat), triggering
>> the cgroup rstat flush operation as part of its work. This results in
>> kernel self-induced rstat lock contention from waking all kswapd threads
>> at once. A key detail: balance_pgdat() has a NULL value in
>> target_mem_cgroup, which causes mem_cgroup_flush_stats() to flush with
>> root_mem_cgroup.
>>
>> To avoid this kind of thundering herd problem, the kernel previously
>> had a "stats_flush_ongoing" concept, but this was removed as part of
>> commit 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing"). This
>> patch reintroduces and generalizes the concept to apply to all users of
>> cgroup rstat, not just memcg.
>>
>> If there is an ongoing rstat flush, and the current cgroup is a
>> descendant, then it is unnecessary to do the flush. For callers to
>> still see updated stats, wait for the ongoing flusher to complete
>> before returning, but add a timeout, since the stats are already
>> inaccurate given that updaters keep running.
>>
>> Fixes: 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing")
>> Signed-off-by: Jesper Dangaard Brouer <hawk@...nel.org>
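
To make the scheme concrete for reviewers, here is a toy userspace model
of the ongoing-flusher idea. Pthreads stand in for the kswapd kthreads, a
mutex for cgroup_rstat_lock, and a condvar-plus-flag for the completion.
All names are invented for the demo, every thread flushes the same root
(so there is no descendant check), and the timeout is omitted; this is a
sketch of the idea, not the kernel code:

    /* toy model: 12 "kswapds" wake at once, one flushes, rest wait */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t flush_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  flush_done = PTHREAD_COND_INITIALIZER;
    static bool flush_ongoing;    /* stand-in for the ongoing flusher */

    static bool trylock_flusher(void)
    {
            pthread_mutex_lock(&state_lock);
            if (flush_ongoing) {
                    /* Someone is already flushing for us: wait, don't spin */
                    while (flush_ongoing)
                            pthread_cond_wait(&flush_done, &state_lock);
                    pthread_mutex_unlock(&state_lock);
                    return false;  /* flushed on our behalf, skip the work */
            }
            flush_ongoing = true;  /* we become the ongoing flusher */
            pthread_mutex_unlock(&state_lock);
            pthread_mutex_lock(&flush_lock);
            return true;
    }

    static void unlock_flusher(void)
    {
            pthread_mutex_unlock(&flush_lock);
            pthread_mutex_lock(&state_lock);
            flush_ongoing = false;
            pthread_cond_broadcast(&flush_done);  /* "complete_all()" */
            pthread_mutex_unlock(&state_lock);
    }

    static void *kswapd(void *arg)
    {
            long id = (long)arg;

            if (trylock_flusher()) {
                    printf("thread %ld: doing the flush\n", id);
                    usleep(10000);  /* pretend the flush takes a while */
                    unlock_flusher();
            } else {
                    printf("thread %ld: covered by ongoing flusher\n", id);
            }
            return NULL;
    }

    int main(void)
    {
            pthread_t tid[12];  /* 12 threads, like our 12 NUMA nodes */
            long i;

            for (i = 0; i < 12; i++)
                    pthread_create(&tid[i], NULL, kswapd, (void *)i);
            for (i = 0; i < 12; i++)
                    pthread_join(tid[i], NULL);
            return 0;
    }

Running it (gcc -pthread) shows one thread doing the flush while the other
eleven skip and wait, which is exactly the contention this patch removes.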
> 
> Thanks for working on this, Jesper! I love the data you collected here!
> 
> I think the commit subject and message should be changed to better
> describe the patch. This is a patch that exclusively modifies cgroup
> code, yet the subject is about kswapd. This change affects all users
> of rstat flushing.
> 
> I think a better subject would be:
> "cgroup/rstat: avoid flushing if there is an ongoing overlapping
> flush" or similar.
> 

Took this for V8.
 
https://lore.kernel.org/all/172139415725.3084888.13770938453137383953.stgit@firesoul

> The commit message should first describe the cgroup change, and then
> use kswapd as a brief example/illustration of how the problem
> manifests in practice. You should also include a brief summary of the
> numbers you collected from prod.
> 

Updated the description in V8.

>> ---
>> V6: https://lore.kernel.org/all/172052399087.2357901.4955042377343593447.stgit@firesoul/
>> V5: https://lore.kernel.org/all/171956951930.1897969.8709279863947931285.stgit@firesoul/
>> V4: https://lore.kernel.org/all/171952312320.1810550.13209360603489797077.stgit@firesoul/
>> V3: https://lore.kernel.org/all/171943668946.1638606.1320095353103578332.stgit@firesoul/
>> V2: https://lore.kernel.org/all/171923011608.1500238.3591002573732683639.stgit@firesoul/
>> V1: https://lore.kernel.org/all/171898037079.1222367.13467317484793748519.stgit@firesoul/
>> RFC: https://lore.kernel.org/all/171895533185.1084853.3033751561302228252.stgit@firesoul/
>>
>>   include/linux/cgroup-defs.h |    2 +
>>   kernel/cgroup/rstat.c       |   95 ++++++++++++++++++++++++++++++++++++++-----
>>   2 files changed, 85 insertions(+), 12 deletions(-)
>>
>> diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
[...]
>> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
>> index fb8b49437573..fe2a81a310bb 100644
>> --- a/kernel/cgroup/rstat.c
>> +++ b/kernel/cgroup/rstat.c
[...]
>>   static inline void __cgroup_rstat_unlock(struct cgroup *cgrp, int cpu_in_loop)
>> @@ -299,6 +316,53 @@ static inline void __cgroup_rstat_unlock(struct cgroup *cgrp, int cpu_in_loop)
>>          spin_unlock_irq(&cgroup_rstat_lock);
>>   }
>>
>> +#define MAX_WAIT       msecs_to_jiffies(100)
>> +/* Trylock helper that also checks for an ongoing flusher */
>> +static bool cgroup_rstat_trylock_flusher(struct cgroup *cgrp)
>> +{
>> +       struct cgroup *ongoing;
>> +       bool locked;
>> +
>> +       /* Check if an ongoing flusher is already taking care of this; if
> 
> nit: I think commonly the comment would start on a new line after /*.
> 

We use this comment style in networking code, but I've updated it to
follow this subsystem's convention.


>> +        * we are a descendant, skip the work but wait for the ongoing
>> +        * flusher to complete.
>> +        */
>> +retry:
>> +       ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);
>> +       if (ongoing && cgroup_is_descendant(cgrp, ongoing)) {
>> +               wait_for_completion_interruptible_timeout(
>> +                       &ongoing->flush_done, MAX_WAIT);
>> +               /* TODO: Add tracepoint here */
>> +               return false;
>> +       }
>> +
>> +       locked = __cgroup_rstat_trylock(cgrp, -1);
>> +       if (!locked) {
>> +               /* Contended: Handle loosing race for ongoing flusher */
> 
> nit: losing
>

Thanks for catching this subtle wording issue.


>> +               if (!ongoing && READ_ONCE(cgrp_rstat_ongoing_flusher))
>> +                       goto retry;
>> +
>> +               __cgroup_rstat_lock(cgrp, -1, false);
>> +       }
>> +       /* Obtained lock, record this cgrp as the ongoing flusher */
> 
> Do we want a comment here to explain why there could be an existing
> ongoing flusher (i.e. due to multiple ongoing flushers)? I think it's
> not super obvious.

Extended this in V8.

> 
>> +       ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);
>> +       if (!ongoing) {
>> +               reinit_completion(&cgrp->flush_done);
>> +               WRITE_ONCE(cgrp_rstat_ongoing_flusher, cgrp);
>> +       }
>> +       return true; /* locked */
> 
> Would it be better to explain the return value of the function in the
> comment above it?
> 

Fixed this in V8.

>> +}
>> +
>> +static void cgroup_rstat_unlock_flusher(struct cgroup *cgrp)
>> +{
>> +       /* Detect if we are the ongoing flusher */
> 
> I think this is a bit obvious.
> 

True, removed the comment.

>> +       if (cgrp == READ_ONCE(cgrp_rstat_ongoing_flusher)) {
>> +               WRITE_ONCE(cgrp_rstat_ongoing_flusher, NULL);
>> +               complete_all(&cgrp->flush_done);
>> +       }
>> +       __cgroup_rstat_unlock(cgrp, -1);
>> +}
>> +
[...]
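
For anyone reading along: the hunks elided above pair these two helpers
around the actual flush work. A paraphrased sketch of the intended caller
pattern (not the literal diff; cgroup_rstat_flush_locked() is the existing
flush routine in rstat.c):

    void cgroup_rstat_flush(struct cgroup *cgrp)
    {
            might_sleep();

            /* false: an ongoing flusher covered us, skip the flush */
            if (!cgroup_rstat_trylock_flusher(cgrp))
                    return;

            cgroup_rstat_flush_locked(cgrp);
            cgroup_rstat_unlock_flusher(cgrp);
    }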


Thanks for going through and commenting on the code! :-)

--Jesper
