Message-ID: <20171208124333.GV20234@dhcp22.suse.cz>
Date: Fri, 8 Dec 2017 13:43:33 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Daniel Jordan <daniel.m.jordan@...cle.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
aaron.lu@...el.com, akpm@...ux-foundation.org,
dave.hansen@...ux.intel.com, mgorman@...hsingularity.net,
mike.kravetz@...cle.com, pasha.tatashin@...cle.com,
steven.sistare@...cle.com, tim.c.chen@...el.com
Subject: Re: [RFC PATCH v3 1/7] ktask: add documentation
On Wed 06-12-17 15:32:48, Daniel Jordan wrote:
> On 12/06/2017 09:35 AM, Michal Hocko wrote:
[...]
> > There is also no mention of other
> > characteristics (e.g. power management), resource isolation etc. So
> > let me ask again. How do you control that the parallelized operation
> > doesn't run outside of the limit imposed on the calling context?
>
> The current code doesn't do this, and the answer is the same for the rest of
> your questions.
I really believe this should be addressed before this can be considered
for merging. While what you have might be sufficient for early boot
initialization stuff, I am not sure the amount of code is really
justified by that usecase alone. Any runtime-enabled parallelized work
really has to care about the rest of the system. The last thing you
want is to overload an already highly utilized system just because of
some optimization. And I do not see how you can achieve that with a
limit on the number of parallelization threads.
> For resource isolation, I'll experiment with moving ktask threads into and
> out of the cgroup of the calling thread.
>
> Do any resources not covered by cgroup come to mind? I'm trying to think if
> I've left anything out.
This is mostly about CPU, so dealing with the cpu cgroup controller
should do the job.
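
Just to illustrate what I mean: vhost already moves its worker threads
into the cgroups of the owning task via cgroup_attach_task_all(), and a
ktask helper thread could follow the same pattern so that its CPU time
is accounted and throttled against the caller's limits. A rough sketch
below - the ktask_* names are made up for illustration, this is not the
API from your patches:

/*
 * Sketch only: the ktask_* names are illustrative, not the RFC's API.
 * The point is that a helper thread can be charged to the caller's
 * cgroups the same way vhost does for its workers, so the cpu
 * controller can throttle it.
 */
#include <linux/cgroup.h>
#include <linux/kthread.h>
#include <linux/err.h>
#include <linux/sched.h>

static int ktask_worker_fn(void *data)
{
	/* ... process one chunk of the parallelized work ... */
	return 0;
}

static struct task_struct *ktask_spawn_in_callers_cgroup(void *data)
{
	struct task_struct *worker;
	int ret;

	worker = kthread_create(ktask_worker_fn, data, "ktask/%d",
				task_pid_nr(current));
	if (IS_ERR(worker))
		return worker;

	/*
	 * Move the helper into all cgroups of the calling task so that
	 * its CPU time is accounted and throttled against the caller's
	 * limits rather than against the root cgroup.
	 */
	ret = cgroup_attach_task_all(current, worker);
	if (ret) {
		kthread_stop(worker);
		return ERR_PTR(ret);
	}

	wake_up_process(worker);
	return worker;
}

Whether attaching to the caller's cgroups alone is enough for workers
shared between many callers is a separate question, but for the CPU
accounting part it should cover the basics.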
[...]
> Anyway, I think scalability bottlenecks should be weighed with the rest of
> this. It seems wrong that the kernel should always assume that one thread
> is enough to free all of a process's memory or evict all the pages of a file
> system no matter how much work there is to do.
Well, this will always be a double-edged sword. Sure, if you have spare
cycles (whatever that means) then using them is really nice. But the
last thing you want is to turn an optimization into a utilization
nightmare where a few processes dominate the whole machine even though
they could normally be easily contained inside a single execution
context.
Your work targets larger machines, and I understand that you are mainly
focused on a single large workload running on such a machine, but there
are many other machines running many smaller workloads which would like
to be independent. Not everything is a large DB running on large HW.
--
Michal Hocko
SUSE Labs