Message-ID: <6af3aff5-8747-5276-f20a-9853321e445a@oracle.com>
Date: Wed, 6 Dec 2017 09:21:33 -0500
From: Daniel Jordan <daniel.m.jordan@...cle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
aaron.lu@...el.com, dave.hansen@...ux.intel.com,
mgorman@...hsingularity.net, mhocko@...nel.org,
mike.kravetz@...cle.com, pasha.tatashin@...cle.com,
steven.sistare@...cle.com, tim.c.chen@...el.com
Subject: Re: [RFC PATCH v3 2/7] ktask: multithread CPU-intensive kernel work
Thanks for looking at this, Andrew. Responses below.
On 12/05/2017 05:21 PM, Andrew Morton wrote:
> On Tue, 5 Dec 2017 14:52:15 -0500 Daniel Jordan <daniel.m.jordan@...cle.com> wrote:
>
>> ktask is a generic framework for parallelizing CPU-intensive work in the
>> kernel. The intended use is for big machines that can use their CPU power to
>> speed up large tasks that can't otherwise be multithreaded in userland. The
>> API is generic enough to add concurrency to many different kinds of tasks--for
>> example, zeroing a range of pages or evicting a list of inodes--and aims to
>> save its clients the trouble of splitting up the work, choosing the number of
>> threads to use, maintaining an efficient concurrency level, starting these
>> threads, and load balancing the work between them.
>>
>> The Documentation patch earlier in this series has more background.
>>
>> Introduces the ktask API; consumers appear in subsequent patches.
>>
>> Based on work by Pavel Tatashin, Steve Sistare, and Jonathan Adams.
>>
>> ...
>>
>> --- a/init/Kconfig
>> +++ b/init/Kconfig
>> @@ -319,6 +319,18 @@ config AUDIT_TREE
>> depends on AUDITSYSCALL
>> select FSNOTIFY
>>
>> +config KTASK
>> + bool "Multithread cpu-intensive kernel tasks"
>> + depends on SMP
>> + depends on NR_CPUS > 16
>
> Why this?
Good question. I picked 16 to represent a big machine, but as with most
cutoffs it's somewhat arbitrary.
> It would make sense to relax (or eliminate) this at least for the
> development/test period, so more people actually run and test the new
> code.
Ok, that makes sense. I'll remove it for now.
Since many (most?) distributions ship with a high NR_CPUS, it may be better to
decide at runtime whether to enable the framework, based on the number of
online CPUs and the amount of memory. A static branch might do it.
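To sketch what I have in mind (purely illustrative, not part of the patch;
the key name, threshold, and initcall below are made up):

#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/jump_label.h>

/* Hypothetical gate; a real heuristic would also look at memory size. */
static DEFINE_STATIC_KEY_FALSE(ktask_enabled);

static int __init ktask_check_machine_size(void)
{
	/* Arbitrary stand-in for "big machine". */
	if (num_online_cpus() >= 16)
		static_branch_enable(&ktask_enabled);
	return 0;
}
early_initcall(ktask_check_machine_size);

/*
 * Callers would then test static_branch_likely(&ktask_enabled) and fall
 * back to single-threaded behavior when it's false.
 */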
>
>> + default n
>> + help
>> + Parallelize expensive kernel tasks such as zeroing huge pages. This
>> + feature is designed for big machines that can take advantage of their
>> + cpu count to speed up large kernel tasks.
>> +
>> + If unsure, say 'N'.
>> +
>> source "kernel/irq/Kconfig"
>> source "kernel/time/Kconfig"
>>
>>
>> ...
>>
>> +/*
>> + * Initialize internal limits on work items queued. Work items submitted to
>> + * cmwq capped at 80% of online cpus both system-wide and per-node to maintain
>> + * an efficient level of parallelization at these respective levels.
>> + */
>> +bool ktask_rlim_init(void)
>
> Why not static __init?
I forgot both. I added them, thanks.
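For reference, the 80% caps mentioned in the comment above amount to roughly
the following (a rough sketch only; the constant names and the per-node array
here are illustrative, not necessarily what the patch ends up using):

#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/nodemask.h>
#include <linux/slab.h>
#include <linux/topology.h>

#define KTASK_CPUFRAC_NUMER	4	/* 4/5 == 80% of cpus */
#define KTASK_CPUFRAC_DENOM	5

static size_t ktask_rlim_max;		/* system-wide cap on queued work items */
static size_t *ktask_rlim_node_max;	/* per-node caps */

static bool __init ktask_rlim_init_sketch(void)
{
	int node;

	ktask_rlim_node_max = kcalloc(num_possible_nodes(),
				      sizeof(*ktask_rlim_node_max), GFP_KERNEL);
	if (!ktask_rlim_node_max)
		return false;

	/* System-wide: 80% of all online cpus. */
	ktask_rlim_max = mult_frac(num_online_cpus(),
				   KTASK_CPUFRAC_NUMER, KTASK_CPUFRAC_DENOM);

	/* Per node: 80% of that node's cpus. */
	for_each_node(node) {
		unsigned int ncpus = cpumask_weight(cpumask_of_node(node));

		ktask_rlim_node_max[node] = mult_frac(ncpus,
						      KTASK_CPUFRAC_NUMER,
						      KTASK_CPUFRAC_DENOM);
	}
	return true;
}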
>
>> +{
>> + int node;
>> + unsigned nr_node_cpus;
>> +
>> + spin_lock_init(&ktask_rlim_lock);
>
> This can be done at compile time. Unless there's a real reason for
> ktask_rlim_init to be non-static, non-__init, in which case I'm
> worried: reinitializing a static spinlock is weird.
You're right, I should have used DEFINE_SPINLOCK. This is fixed.
The patch at the bottom covers these changes and gets rid of a mismerge
in this patch.
Daniel
diff --git a/init/Kconfig b/init/Kconfig
index 2a7b120de4d4..28c234791819 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -322,15 +322,12 @@ config AUDIT_TREE
config KTASK
bool "Multithread cpu-intensive kernel tasks"
depends on SMP
- depends on NR_CPUS > 16
- default n
+ default y
help
Parallelize expensive kernel tasks such as zeroing huge pages. This
feature is designed for big machines that can take advantage of their
cpu count to speed up large kernel tasks.
- If unsure, say 'N'.
-
source "kernel/irq/Kconfig"
source "kernel/time/Kconfig"
diff --git a/kernel/ktask.c b/kernel/ktask.c
index 7b075075b56b..4db38fe59bdb 100644
--- a/kernel/ktask.c
+++ b/kernel/ktask.c
@@ -29,7 +29,7 @@
#include <linux/workqueue.h>
/* Resource limits on the amount of workqueue items queued through ktask. */
-spinlock_t ktask_rlim_lock;
+static DEFINE_SPINLOCK(ktask_rlim_lock);
/* Work items queued on all nodes (includes NUMA_NO_NODE) */
size_t ktask_rlim_cur;
size_t ktask_rlim_max;
@@ -382,14 +382,6 @@ int ktask_run_numa(struct ktask_node *nodes, size_t nr_nodes,
return KTASK_RETURN_SUCCESS;
mutex_init(&kt.kt_mutex);
-
- kt.kt_nthreads = ktask_nthreads(kt.kt_total_size,
- ctl->kc_min_chunk_size,
- ctl->kc_max_threads);
-
- kt.kt_chunk_size = ktask_chunk_size(kt.kt_total_size,
- ctl->kc_min_chunk_size,
- kt.kt_nthreads);
-
init_completion(&kt.kt_ktask_done);
kt.kt_nthreads = ktask_prepare_threads(nodes, nr_nodes, &kt,
&to_queue);
@@ -449,13 +441,11 @@ EXPORT_SYMBOL_GPL(ktask_run);
* cmwq capped at 80% of online cpus both system-wide and per-node to maintain
* an efficient level of parallelization at these respective levels.
*/
-bool ktask_rlim_init(void)
+static bool __init ktask_rlim_init(void)
{
int node;
unsigned nr_node_cpus;
- spin_lock_init(&ktask_rlim_lock);
-
ktask_rlim_node_cur = kcalloc(num_possible_nodes(),
sizeof(size_t),
GFP_KERNEL);