Message-ID: <YnjmPAToTR0C5o8x@dhcp22.suse.cz>
Date: Mon, 9 May 2022 12:00:28 +0200
From: Michal Hocko <mhocko@...e.com>
To: CGEL <cgel.zte@...il.com>
Cc: akpm@...ux-foundation.org, hannes@...xchg.org, willy@...radead.org,
shy828301@...il.com, roman.gushchin@...ux.dev, shakeelb@...gle.com,
linmiaohe@...wei.com, william.kucharski@...cle.com,
peterx@...hat.com, hughd@...gle.com, vbabka@...e.cz,
songmuchun@...edance.com, surenb@...gle.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
cgroups@...r.kernel.org, Yang Yang <yang.yang29@....com.cn>
Subject: Re: [PATCH] mm/memcg: support control THP behaviour in cgroup

On Sat 07-05-22 02:05:25, CGEL wrote:
[...]
> If there are many containers to run on one host, and some of them have high
> performance requirements, the administrator could turn on THP for them:
> # docker run -it --thp-enabled=always
> Then all the processes in those containers will always use THP.
> While other containers turn THP off with:
> # docker run -it --thp-enabled=never

I do not know. The THP config space is already too confusing and complex,
and this just adds on top. E.g. is the behavior of the knob
hierarchical? What is the policy if the parent memcg says madvise while
the child says always? How does the per-application configuration align
with all that (e.g. the memcg policy is madvise but the application says
never via prctl while still using madvise for some regions, e.g. via a
library)?
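
To make that layering concrete, here is a minimal userspace sketch (not
part of the patch under discussion, just the existing prctl(2) and
madvise(2) interfaces): a process opts out of THP globally while one of
its libraries still asks for THP on a buffer it manages itself; any
per-memcg policy would sit on top of both:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/prctl.h>

#define BUF_SZ	(4UL << 20)	/* large enough to be THP-backed */

int main(void)
{
	/* Process-wide opt-out: "never give this process THP". */
	if (prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0))
		perror("prctl(PR_SET_THP_DISABLE)");

	/*
	 * A library the application links against may still request THP
	 * for a region it allocates itself.
	 */
	void *buf = mmap(NULL, BUF_SZ, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}
	if (madvise(buf, BUF_SZ, MADV_HUGEPAGE))
		perror("madvise(MADV_HUGEPAGE)");

	memset(buf, 0, BUF_SZ);	/* fault the region in */
	return 0;
}

As far as I can tell the prctl() opt-out wins over the madvise() hint
today; the open question is where a per-memcg always/never would rank
relative to both.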

> By doing this we could promote important containers' performance with a
> smaller THP footprint.

Do we really want to provide something like THP-based QoS? To me it
sounds like a bad idea, and if the justification is "it might be useful"
then I would say no. So you really need to come up with a very good use
case to promote this further.
--
Michal Hocko
SUSE Labs