Message-ID: <CAJD7tkaFhG39LHUNuKmxj2LEvojavOnpnREXz2vvuLrbBAHyEA@mail.gmail.com>
Date:   Wed, 29 Mar 2023 21:26:24 -0700
From:   Yosry Ahmed <yosryahmed@...gle.com>
To:     Hugh Dickins <hughd@...gle.com>
Cc:     Tejun Heo <tj@...nel.org>, Shakeel Butt <shakeelb@...gle.com>,
        Josef Bacik <josef@...icpanda.com>,
        Jens Axboe <axboe@...nel.dk>,
        Zefan Li <lizefan.x@...edance.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Muchun Song <muchun.song@...ux.dev>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vasily Averin <vasily.averin@...ux.dev>,
        cgroups@...r.kernel.org, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        bpf@...r.kernel.org
Subject: Re: [RFC PATCH 1/7] cgroup: rstat: only disable interrupts for the
 percpu lock

Thanks for a great discussion, Tejun and Hugh.

On Wed, Mar 29, 2023 at 1:38 PM Hugh Dickins <hughd@...gle.com> wrote:
>
> On Wed, 29 Mar 2023, Tejun Heo wrote:
>
> > Hello, Hugh. How have you been?
> >
> > On Wed, Mar 29, 2023 at 12:22:24PM -0700, Hugh Dickins wrote:
> > > Hi Tejun,
> > > Butting in here, I'm fascinated.  This is certainly not my area; I know
> > > nothing about rstat, but this is the first time I've ever heard someone
> > > argue for more disabling of interrupts rather than less.
> > >
> > > An interrupt coming in while holding a contended resource can certainly
> > > add to latencies, that I accept of course.  But until now, I thought it
> > > was agreed best practice to disable irqs only regretfully, when strictly
> > > necessary.
> > >
> > > If that has changed, I for one want to know about it.  How should we
> > > now judge which spinlocks should disable interrupts and which should not?
> > > Page table locks are currently my main interest - should those be changed?
> >
> > For rstat, it's a simple case because the global lock here wraps around
> > per-cpu locks, which have to be irq-safe, so the only difference between
> > making the global lock irq-unsafe and keeping it irq-safe but releasing
> > it between CPUs is:
> >
> >  Global lock held: G
> >  IRQ disabled: I
> >  Percpu lock held: P
> >
> > 1. IRQ unsafe
> >
> >  GGGGGGGGGGGGGGG~~GGGGG
> >  IIII IIII IIII ~~ IIII
> >  PPPP PPPP PPPP ~~ PPPP
> >
> > 2. IRQ safe, released between CPUs
> >
> >  GGGG GGGG GGGG ~~ GGGG
> >  IIII IIII IIII ~~ IIII
> >  PPPP PPPP PPPP ~~ PPPP
> >
> > #2 seems like the obvious thing to do here given how the lock is used and
> > each P section may take a bit of time.
>
> Many thanks for the detailed response.  I'll leave it to the rstat folks,
> to agree or disagree with your analysis there.

Thanks for the analysis, Tejun, it does indeed make sense. I perf'd
releasing and reacquiring the lock at each CPU boundary, and the
overhead seems to be minimal. It would be higher under contention, but
all memcg flushers should be held back by the memcg code, and flushers
outside memcg are infrequent (reading blkcg and cpu base stats from
user space, and when a cgroup is being removed).
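
To make that concrete, what I measured is roughly the shape below. This
is only a minimal sketch of option #2 with made-up names (demo_lock,
demo_pcpu_lock, and demo_flush_cpu() are illustrative, not the actual
rstat symbols, and the per-cpu raw spinlocks are assumed to be
initialized elsewhere, as the real ones are at boot):

static DEFINE_SPINLOCK(demo_lock);
static DEFINE_PER_CPU(raw_spinlock_t, demo_pcpu_lock);

static void demo_flush(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		/* G and I begin together: irq-safe global lock */
		spin_lock_irq(&demo_lock);
		/* P: irqs are already off, a plain raw lock suffices */
		raw_spin_lock(per_cpu_ptr(&demo_pcpu_lock, cpu));

		demo_flush_cpu(cpu);	/* hypothetical per-cpu work */

		raw_spin_unlock(per_cpu_ptr(&demo_pcpu_lock, cpu));
		/* G and I end; irqs come back on between CPUs */
		spin_unlock_irq(&demo_lock);
	}
}

Each irq-off window is bounded by a single per-cpu flush, at the cost
of one extra lock/unlock pair per CPU, which is the overhead I
measured.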

I realized that after v2 of this patch series [1], we would only end
up with two atomic flushing contexts: mem_cgroup_wb_stats() and
mem_cgroup_usage(). The latter already disables irqs for other
reasons, so anything we do within the rstat core code doesn't really
help; it needs to be addressed separately. So only the call site in
mem_cgroup_wb_stats() would benefit from not having irqs disabled
throughout the flush.
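
To spell out the mem_cgroup_usage() point: if the caller itself already
runs with irqs disabled, nothing the rstat core does internally can
shorten the irq-off window. A rough sketch of the caller shape
(illustrative only, not the actual mm code; demo_flush_atomic() is a
hypothetical flush entry point that uses irqsave/irqrestore variants
internally and therefore preserves the caller's irq state):

static void demo_atomic_reader(void)
{
	unsigned long flags;

	local_irq_save(flags);	/* caller's own irq-off region */

	/*
	 * Irqs stay off for the entire flush here no matter what the
	 * flush core does, so any fix has to happen in this caller,
	 * not in rstat.
	 */
	demo_flush_atomic();

	local_irq_restore(flags);
}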

I will hold off on sending a patch until I observe this call site
causing us pain and/or until other atomic call sites emerge (or we
have to revert one of the ones we made non-atomic), so that we don't
hurt other flushers unnecessarily. Does this make sense to you?

[1] https://lore.kernel.org/linux-mm/20230328221644.803272-1-yosryahmed@google.com/

>
> >
> > So, in the rstat case, the choice is, at least to me, obvious, but even
> > for more generic cases where the bulk of actual work isn't done with irqs
> > disabled, I don't think the picture is as simple as "use the least
> > protected variant possible" anymore, because the underlying hardware has
> > changed.
> >
> > For an SMP kernel running on a UP system, "the least protected variant"
> > is the obvious choice to make, because you don't lose anything by holding
> > a spinlock longer than necessary. However, as you increase the number of
> > CPUs, a tradeoff arises between local irq servicing latency and global
> > lock contention.
> >
> > Imagine, say, a 128-CPU system with a few cores servicing relatively
> > high-frequency interrupts. Let's say there's a mildly hot lock: usually
> > it shows up in the system profile, but only just. Now suppose something
> > happens and the irq rate on those cores goes up to the point where,
> > whenever the lock is held on one of those CPUs, irqs are likely to
> > intervene and lengthen how long the lock is held, sometimes
> > significantly. Because the lock is now held for much longer on average,
> > it becomes a lot hotter, as more CPUs stall on it; depending on luck or
> > the lack thereof, these stalls can span many CPUs on the system for
> > quite a while. This is actually something we saw in production.
> >
> > So, in general, there's a trade-off between local irq service latency
> > and induced global lock contention when using irq-unsafe locks. With
> > more and more CPUs, the balance keeps shifting. It still very much
> > depends on the specifics of a given lock, but yeah, I think it's
> > something we need to be a lot more careful about now.
>
> And this looks a very plausible argument to me: I'll let it sink in.
>
> But I hadn't heard that the RT folks were clamouring for more irq disabling:
> perhaps they partition their machines with more care, and are not devotees
> of high CPU counts.
>
> What I hope is that others will chime in one way or the other -
> it does sound as if a reappraisal of the balances is overdue.
>
> Thanks,
> Hugh (disabling interrupts for as long as he can)
