Message-ID: <CAJD7tkZ45VDjYyorgZ38unRkMeoy44OcCpPq_kdnMWEam3vssA@mail.gmail.com>
Date:   Tue, 29 Aug 2023 09:04:37 -0700
From:   Yosry Ahmed <yosryahmed@...gle.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Waiman Long <longman@...hat.com>,
        Shakeel Butt <shakeelb@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Muchun Song <muchun.song@...ux.dev>,
        Ivan Babrou <ivan@...udflare.com>, Tejun Heo <tj@...nel.org>,
        linux-mm@...ck.org, cgroups@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] mm: memcg: use non-unified stats flushing for
 userspace reads

On Tue, Aug 29, 2023 at 8:17 AM Michal Hocko <mhocko@...e.com> wrote:
>
> On Tue 29-08-23 11:05:28, Waiman Long wrote:
> > On 8/29/23 03:27, Michal Hocko wrote:
> > > On Mon 28-08-23 13:27:23, Waiman Long wrote:
> > > > On 8/28/23 13:07, Yosry Ahmed wrote:
> > > > > > Here I agree with you. Let's go with the approach which is easy to
> > > > > > undo for now. Though I prefer the new explicit interface for flushing,
> > > > > > that step would be very hard to undo. Let's reevaluate if the proposed
> > > > > > approach shows negative impact on production traffic and I think
> > > > > > Cloudflare folks can give us the results soon.
> > > > > Do you prefer we also switch to using a mutex (with preemption
> > > > > disabled) to avoid the scenario Michal described where flushers give
> > > > > up the lock and sleep resulting in an unbounded wait time in the worst
> > > > > case?
> > > > Locking with mutex with preemption disabled is an oxymoron.
> > > I believe Yosry wanted to disable preemption _after_ the lock is taken
> > > to reduce the time spent while it is held. The idea to use the mutex is
> > > to reduce spinning and more importantly to get rid of lock dropping
> > > part. It is not really clear (but unlikely) we can drop it while
> > > preserving the spinlock as the thing scales with O(#cgroups x #cpus)
> > > in the worst case.
> >
> > As I have said later in my email, I am not against disabling preemption
> > selectively on some parts of the lock critical section where preemption is
> > undesirable. However, I am against disabling preemption for the whole
> > duration of the code where the mutex lock is held as it defeats the purpose
> > of using mutex in the first place.
>
> I certainly agree this is an antipattern.

So I guess the verdict is to avoid using a mutex here for now. I sent
a v2 which includes an additional small patch suggested by Michal
Koutny, plus an updated changelog for this patch documenting this
discussion and the possible alternatives if things go wrong with
this approach:

https://lore.kernel.org/lkml/20230828233319.340712-1-yosryahmed@google.com/
