Message-ID: <ZP8SDdjut9VEVpps@dhcp22.suse.cz>
Date: Mon, 11 Sep 2023 15:11:41 +0200
From: Michal Hocko <mhocko@...e.com>
To: Wei Xu <weixugc@...gle.com>
Cc: Yosry Ahmed <yosryahmed@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>,
Ivan Babrou <ivan@...udflare.com>, Tejun Heo <tj@...nel.org>,
Michal Koutný <mkoutny@...e.com>,
Waiman Long <longman@...hat.com>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
Greg Thelen <gthelen@...gle.com>
Subject: Re: [PATCH v4 4/4] mm: memcg: use non-unified stats flushing for
userspace reads

On Thu 07-09-23 17:52:12, Wei Xu wrote:
[...]
> I tested this patch on a machine with 384 CPUs using a microbenchmark
> that spawns 10K threads, each reading its memory.stat every 100
> milliseconds.

This is a rather extreme case but I wouldn't call it utterly insane
though.
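
For reference, a reader of the shape described above (many threads, each
polling a memory.stat file every 100ms and timing the read) boils down to
something like the sketch below. The cgroup path, thread count and the
5ms reporting threshold are illustrative assumptions, not the actual
harness behind the numbers in this thread:

/*
 * Sketch of a memory.stat reader microbenchmark: each thread re-reads
 * a cgroup's memory.stat every 100ms and reports reads slower than 5ms.
 * In the test described above each thread reads its *own* cgroup's
 * memory.stat; here all threads share one illustrative path.
 *
 * Build: gcc -O2 -pthread memstat_bench.c -o memstat_bench
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define NR_THREADS      64                      /* scale towards 10k on a large machine */
#define INTERVAL_NS     (100 * 1000 * 1000L)    /* 100ms poll interval */
#define STAT_PATH       "/sys/fs/cgroup/test/memory.stat"

static void *reader(void *arg)
{
        char buf[8192];
        struct timespec t0, t1, period = { 0, INTERVAL_NS };

        for (;;) {
                int fd = open(STAT_PATH, O_RDONLY);

                if (fd < 0) {
                        perror("open");
                        return NULL;
                }
                clock_gettime(CLOCK_MONOTONIC, &t0);
                while (read(fd, buf, sizeof(buf)) > 0)
                        ;                       /* reading is what triggers the flush path */
                clock_gettime(CLOCK_MONOTONIC, &t1);
                close(fd);

                long us = (t1.tv_sec - t0.tv_sec) * 1000000L +
                          (t1.tv_nsec - t0.tv_nsec) / 1000;
                if (us > 5000)
                        printf("slow memory.stat read: %ld us\n", us);

                nanosleep(&period, NULL);
        }
        return NULL;
}

int main(void)
{
        pthread_t tids[NR_THREADS];

        for (int i = 0; i < NR_THREADS; i++)
                pthread_create(&tids[i], NULL, reader, NULL);
        for (int i = 0; i < NR_THREADS; i++)
                pthread_join(tids[i], NULL);
        return 0;
}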

> Most of memory.stat reads take 5ms-10ms in kernel, with
> ~5% reads even exceeding 1 second.

Just curious, what would the numbers look like if the mutex were removed
and those threads were contending on the existing spinlock, both with
the lock dropping in place and with it removed? Would you be willing to
give it a shot?
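
For illustration only, the two regimes above (readers contending on one
spinlock, with the lock-drop/yield point either present or removed) can
be mocked up in userspace with something like the toy below. It is not
the actual rstat/memcg locking code, only a sketch of the contention
pattern, with made-up thread counts and work sizes:

/*
 * Toy analogue: N threads each perform a long "flush" under a single
 * spinlock.  With DROP_LOCK=1 the lock is periodically released and the
 * thread yields so waiters can make progress; with DROP_LOCK=0 the lock
 * is held across the whole flush.
 *
 * Build: gcc -O2 -pthread spin_bench.c -o spin_bench
 */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <time.h>

#define NR_THREADS      16
#define DROP_LOCK       1               /* 0: hold the lock across the whole flush */
#define WORK_ITEMS      (1L << 22)
#define DROP_EVERY      (1L << 14)

static pthread_spinlock_t lock;
static volatile unsigned long counter;  /* stand-in for stat aggregation work */

static void *flusher(void *arg)
{
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        pthread_spin_lock(&lock);
        for (long i = 0; i < WORK_ITEMS; i++) {
                counter++;
                if (DROP_LOCK && (i % DROP_EVERY == 0)) {
                        pthread_spin_unlock(&lock);     /* let waiters in */
                        sched_yield();
                        pthread_spin_lock(&lock);
                }
        }
        pthread_spin_unlock(&lock);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("flush took %ld ms\n",
               (t1.tv_sec - t0.tv_sec) * 1000 +
               (t1.tv_nsec - t0.tv_nsec) / 1000000);
        return NULL;
}

int main(void)
{
        pthread_t tids[NR_THREADS];

        pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
        for (int i = 0; i < NR_THREADS; i++)
                pthread_create(&tids[i], NULL, flusher, NULL);
        for (int i = 0; i < NR_THREADS; i++)
                pthread_join(tids[i], NULL);
        return 0;
}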
--
Michal Hocko
SUSE Labs