Message-ID: <74c53382-5c31-41e9-94a2-0a7f88c0d2a5@kernel.org>
Date: Sat, 20 Jul 2024 17:05:53 +0200
From: Jesper Dangaard Brouer <hawk@...nel.org>
To: Yosry Ahmed <yosryahmed@...gle.com>, Shakeel Butt <shakeel.butt@...ux.dev>
Cc: tj@...nel.org, cgroups@...r.kernel.org, hannes@...xchg.org,
lizefan.x@...edance.com, longman@...hat.com, kernel-team@...udflare.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V7 1/2] cgroup/rstat: Avoid thundering herd problem by
kswapd across NUMA nodes
On 20/07/2024 06.52, Yosry Ahmed wrote:
> On Fri, Jul 19, 2024 at 9:52 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>>
>> On Fri, Jul 19, 2024 at 3:48 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
>>>
>>> On Fri, Jul 19, 2024 at 09:54:41AM GMT, Jesper Dangaard Brouer wrote:
>>>>
>>>>
>>>> On 19/07/2024 02.40, Shakeel Butt wrote:
>>>>> Hi Jesper,
>>>>>
>>>>> On Wed, Jul 17, 2024 at 06:36:28PM GMT, Jesper Dangaard Brouer wrote:
>>>>>>
>>>>> [...]
>>>>>>
>>>>>>
>>>>>> Looking at the production numbers for the time the lock is held for level 0:
>>>>>>
>>>>>> @locked_time_level[0]:
>>>>>> [4M, 8M) 623 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
>>>>>> [8M, 16M) 860 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>>>>>> [16M, 32M) 295 |@@@@@@@@@@@@@@@@@ |
>>>>>> [32M, 64M) 275 |@@@@@@@@@@@@@@@@ |
>>>>>>
>>>>>
>>>>> Is it possible to get the above histogram for other levels as well?
>>>>
>>>> Data from other levels available in [1]:
>>>> [1]
>>>> https://lore.kernel.org/all/8c123882-a5c5-409a-938b-cb5aec9b9ab5@kernel.org/
>>>>
>>>> IMHO the data shows we will get most out of skipping level-0 root-cgroup
>>>> flushes.
>>>>
>>>
>>> Thanks a lot for the data. Are all or most of these locked_time_level[0]
>>> from kswapds? This just motivates me to strongly push the ratelimited
>>> flush patch of mine (which would be orthogonal to your patch series).
>>
There are also other processes flushing level 0.
I extended the bpftrace one-liner to also capture the process 'comm' name.
(I collapsed the many 'kworker' entries to a single one below, e.g. pattern
'kworker/u392:19'.)
grep 'level\[' out01.bpf_oneliner_locked_time | awk -F/ '{print $1}' |
sort | uniq
@locked_time_level[0, cadvisor]:
@locked_time_level[0, consul]:
@locked_time_level[0, kswapd0]:
@locked_time_level[0, kswapd10]:
@locked_time_level[0, kswapd11]:
@locked_time_level[0, kswapd1]:
@locked_time_level[0, kswapd2]:
@locked_time_level[0, kswapd3]:
@locked_time_level[0, kswapd4]:
@locked_time_level[0, kswapd5]:
@locked_time_level[0, kswapd6]:
@locked_time_level[0, kswapd7]:
@locked_time_level[0, kswapd8]:
@locked_time_level[0, kswapd9]:
@locked_time_level[0, kworker
@locked_time_level[0, lassen]:
@locked_time_level[0, thunderclap-san]:
@locked_time_level[0, xdpd]:
@locked_time_level[1, cadvisor]:
@locked_time_level[2, cadvisor]:
@locked_time_level[2, kworker
@locked_time_level[2, memory-saturati]:
@locked_time_level[2, systemd]:
@locked_time_level[2, thread-saturati]:
@locked_time_level[3, cadvisor]:
@locked_time_level[3, cat]:
@locked_time_level[3, kworker
@locked_time_level[3, memory-saturati]:
@locked_time_level[3, systemd]:
@locked_time_level[3, thread-saturati]:
@locked_time_level[4, cadvisor]:
>> Jesper and I were discussing a better ratelimiting approach, whether
>> it's measuring the time since the last flush, or only skipping if we
>> have a lot of flushes in a specific time frame (using __ratelimit()).
>> I believe this would be better than the current memcg ratelimiting
>> approach, and we can remove the latter.
>>
>> WDYT?
>
> Forgot to link this:
> https://lore.kernel.org/lkml/CAJD7tkZ5nxoa7aCpAix1bYOoYiLVfn+aNkq7jmRAZqsxruHYLw@mail.gmail.com/
>
I agree that ratelimiting is orthogonal to this patch, and that we
really need to address it in a follow-up patchset.
The proposed mem_cgroup_flush_stats_ratelimited patch[1] helps, but is
limited to the memcg area. I'm proposing a more generic solution in [2]
that helps all users of rstat.
It is time based: we observe how long a root flush takes (the service
time), and then limit how soon afterwards another flusher can run
(bounding the arrival rate). From practical queueing theory we know
that the arrival rate should stay below the service rate, otherwise a
queue builds up.
--Jesper
[1] "memcg: use ratelimited stats flush in the reclaim"
    https://lore.kernel.org/all/20240615081257.3945587-1-shakeel.butt@linux.dev/
[2] "cgroup/rstat: introduce ratelimited rstat flushing"
    https://lore.kernel.org/all/171328990014.3930751.10674097155895405137.stgit@firesoul/