Message-ID: <CAJD7tkZg4N9k7dUnTSJ06fjPdB9Ei=6JDjHW5UU_J91euyboSw@mail.gmail.com>
Date: Wed, 17 Jul 2024 09:04:07 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Jesper Dangaard Brouer <hawk@...nel.org>
Cc: Shakeel Butt <shakeel.butt@...ux.dev>, tj@...nel.org, cgroups@...r.kernel.org, 
	hannes@...xchg.org, lizefan.x@...edance.com, longman@...hat.com, 
	kernel-team@...udflare.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V4 2/2] cgroup/rstat: Avoid thundering herd problem by kswapd across NUMA nodes

On Wed, Jul 17, 2024 at 12:46 AM Jesper Dangaard Brouer <hawk@...nel.org> wrote:
>
>
>
> On 16/07/2024 23.54, Yosry Ahmed wrote:
> > On Mon, Jul 8, 2024 at 8:26 AM Jesper Dangaard Brouer <hawk@...nel.org> wrote:
> >>
> >>
> >> On 28/06/2024 11.39, Jesper Dangaard Brouer wrote:
> >>>
> >>>
> >>> On 28/06/2024 01.34, Shakeel Butt wrote:
> >>>> On Thu, Jun 27, 2024 at 11:18:56PM GMT, Jesper Dangaard Brouer wrote:
> >>>>> Avoid lock contention on the global cgroup rstat lock caused by kswapd
> >>>>> starting on all NUMA nodes simultaneously. At Cloudflare, we observed
> >>>>> massive issues due to kswapd and the specific mem_cgroup_flush_stats()
> >>>>> call inlined in shrink_node, which takes the rstat lock.
> >>>>>
> >> [...]
> >>>>>    static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu);
> >>>>> @@ -312,6 +315,45 @@ static inline void __cgroup_rstat_unlock(struct cgroup *cgrp, int cpu_in_loop)
> >>>>>        spin_unlock_irq(&cgroup_rstat_lock);
> >>>>>    }
> >>>>> +#define MAX_WAIT    msecs_to_jiffies(100)
> >>>>> +/* Trylock helper that also checks for an ongoing flusher */
> >>>>> +static bool cgroup_rstat_trylock_flusher(struct cgroup *cgrp)
> >>>>> +{
> >>>>> +    bool locked = __cgroup_rstat_trylock(cgrp, -1);
> >>>>> +    if (!locked) {
> >>>>> +        struct cgroup *cgrp_ongoing;
> >>>>> +
> >>>>> +        /* Lock is contended, let's check if an ongoing flusher is already
> >>>>> +         * taking care of this, if we are a descendant.
> >>>>> +         */
> >>>>> +        cgrp_ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);
> >>>>> +        if (cgrp_ongoing && cgroup_is_descendant(cgrp, cgrp_ongoing)) {
> >>>>
> >>>> I wonder if READ_ONCE() and cgroup_is_descendant() needs to happen
> >>>> within in rcu section. On a preemptable kernel, let's say we got
> >>>> preempted in between them, the flusher was unrelated and got freed
> >>>> before we get the CPU. In that case we are accessing freed memory.
> >>>>
> >>>
> >>> I have to think about this some more.
> >>>
> >>
> >> I don't think this is necessary. We are now waiting (for completion) and
> >> not skipping the flush, because as part of the take down path
> >> cgroup_rstat_exit() is called, which will call cgroup_rstat_flush().
> >>
> >>
> >>    void cgroup_rstat_exit(struct cgroup *cgrp)
> >>    {
> >>          int cpu;
> >>          cgroup_rstat_flush(cgrp);
> >>
> >>
> >
> > Sorry for the late response, I was traveling for a bit. I will take a
> > look at your most recent version shortly. But I do have a comment
> > here.
> >
> > I don't see how this addresses Shakeel's concern. IIUC, if the cgroup
> > was freed after READ_ONCE() (and cgroup_rstat_flush() was called),
> > then cgroup_is_descendant() will access freed memory. We are not
> > holding the lock here so we are not preventing cgroup_rstat_flush()
> > from being called for the freed cgroup, right?
>
> If we go back to only allowing the root cgroup to be the ongoing flusher, then
> we could do a cgroup_rstat_flush(root) in cgroup_rstat_exit() to be sure
> nothing is left waiting on the completion scheme. Right?

I am still not sure I understand how this helps.

We still need to call cgroup_is_descendant() because in cgroup v1 we
may have multiple root cgroups, right?

So it is still possible that the cgroup is freed after READ_ONCE() and
cgroup_is_descendant() accesses freed memory. Unless of course we have
other guarantees that the root cgroups will not go away.

Since at this point we are not holding the rstat lock, or actually
waiting for the ongoing flush (yet), I don't see how any
cgroup_rstat_flush() calls in the cgroup exit paths will help.
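
To make the window concrete, this is roughly the interleaving I have in
mind (code as in the patch, annotations mine, purely illustrative):

    /* flush side, in cgroup_rstat_trylock_flusher() */
    cgrp_ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);

    /* <-- preempted here; the ongoing flusher finishes, the unrelated
     *     cgroup it pointed to is released and eventually kfree()d
     *     from css_free_rwork_fn()
     */

    if (cgrp_ongoing && cgroup_is_descendant(cgrp, cgrp_ongoing)) {
            /* cgroup_is_descendant() above already walked freed memory */
            ...
    }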

I actually think RCU may not help either for non-root cgroups, because
we call cgroup_rstat_flush() in cgroup_rstat_exit(), which is called
*after* the RCU grace period, and the cgroup is freed right away after
that. We may need to replace kfree(cgrp) with kfree_rcu(cgrp) in
css_free_rwork_fn().
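
In case it helps, the shape I would expect is something like the below
(just an untested sketch, the helper name is made up; it is only safe if
css_free_rwork_fn() actually defers the free, e.g. with kfree_rcu()):

    /* Safe only if the cgroup's memory is freed after an RCU grace period */
    static bool cgroup_rstat_ongoing_covers(struct cgroup *cgrp)
    {
            struct cgroup *ongoing;
            bool covered = false;

            rcu_read_lock();
            ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);
            /* ongoing cannot be freed while we are in the RCU read section */
            if (ongoing && cgroup_is_descendant(cgrp, ongoing))
                    covered = true;
            rcu_read_unlock();

            return covered;
    }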

>
> IMHO the code is getting too complicated with sub-cgroups as ongoing
> flushers, which also requires having 'completion' queues per cgroup.
> We should go back to only doing this for the root cgroup.

Because of multiple root cgroups in cgroup v1, we may still need that
anyway, right?

Please let me know if I am missing something.
