Message-ID: <5906501e-4dff-4c66-7ab3-e9193d312270@redhat.com>
Date:   Tue, 29 Aug 2023 11:05:28 -0400
From:   Waiman Long <longman@...hat.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Yosry Ahmed <yosryahmed@...gle.com>,
        Shakeel Butt <shakeelb@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Muchun Song <muchun.song@...ux.dev>,
        Ivan Babrou <ivan@...udflare.com>, Tejun Heo <tj@...nel.org>,
        linux-mm@...ck.org, cgroups@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] mm: memcg: use non-unified stats flushing for
 userspace reads

On 8/29/23 03:27, Michal Hocko wrote:
> On Mon 28-08-23 13:27:23, Waiman Long wrote:
>> On 8/28/23 13:07, Yosry Ahmed wrote:
>>>> Here I agree with you. Let's go with the approach which is easy to
>>>> undo for now. Though I prefer the new explicit interface for flushing,
>>>> that step would be very hard to undo. Let's reevaluate if the proposed
>>>> approach shows negative impact on production traffic and I think
>>>> Cloudflare folks can give us the results soon.
>>> Do you prefer we also switch to using a mutex (with preemption
>>> disabled) to avoid the scenario Michal described where flushers give
>>> up the lock and sleep resulting in an unbounded wait time in the worst
>>> case?
>> Locking with a mutex with preemption disabled is an oxymoron.
> I believe Yosry wanted to disable preemption _after_ the lock is taken
> to reduce the time spent while it is held. The idea of using the mutex
> is to reduce spinning and, more importantly, to get rid of the lock
> dropping part. It is not really clear (but unlikely) that we can drop
> it while preserving the spinlock, as the thing scales with
> O(#cgroups x #cpus) in the worst case.
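
If I understand the proposal correctly, it is roughly the following (a
sketch only, with made-up names, not the actual memcg flushing code):

	mutex_lock(&stats_flush_mutex);
	preempt_disable();		/* off for the entire flush */
	flush_all_cgroup_stats();	/* O(#cgroups * #cpus) worst case */
	preempt_enable();
	mutex_unlock(&stats_flush_mutex);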

As I said later in my email, I am not against disabling preemption
selectively in some parts of the lock critical section where preemption
is undesirable. However, I am against disabling preemption for the whole
duration of the code where the mutex is held, as that defeats the
purpose of using a mutex in the first place.
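
Schematically, what I would consider acceptable is more like this (again
just a sketch; the iterator and helper names are hypothetical):

	mutex_lock(&stats_flush_mutex);
	for_each_flush_target(cg) {
		prepare_flush(cg);		/* preemptible work */
		preempt_disable();
		flush_percpu_counters(cg);	/* short section that must
						 * not be preempted */
		preempt_enable();
	}
	mutex_unlock(&stats_flush_mutex);

That keeps the lock sleepable for the bulk of the work while still
protecting the short sections where preemption is undesirable.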

Cheers,
Longman
