Message-ID: <ZHnI93EMp+Aq8UAA@dhcp22.suse.cz>
Date:   Fri, 2 Jun 2023 12:48:23 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     Marcelo Tosatti <mtosatti@...hat.com>
Cc:     Christoph Lameter <cl@...ux.com>,
        Aaron Tomlin <atomlin@...mlin.com>,
        Frederic Weisbecker <frederic@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Vlastimil Babka <vbabka@...e.cz>, Tejun Heo <tj@...nel.org>,
        Lai Jiangshan <jiangshanlai@...il.com>
Subject: Re: [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper

You should be CCing WQ maintainers on changes like this one (now added).

On Tue 30-05-23 11:52:37, Marcelo Tosatti wrote:
> Add a schedule_on_each_cpumask function, equivalent to
> schedule_on_each_cpu but accepting a cpumask to operate on.

IMHO it is preferable to add a new function along with its user so that
the use case is clearer.
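
E.g. a completely made-up caller, just to illustrate what I mean (the
names here are invented, not from this series):

	/* hypothetical per-CPU callback */
	static void refresh_stats_workfn(struct work_struct *work)
	{
		/* ... do the per-CPU part of the refresh ... */
	}

	/* hypothetical user running the callback on a set of CPUs */
	static int refresh_stats(cpumask_t *cpumask)
	{
		return schedule_on_each_cpumask(refresh_stats_workfn, cpumask);
	}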
 
> Signed-off-by: Marcelo Tosatti <mtosatti@...hat.com>
> 
> ---
> 
> Index: linux-vmstat-remote/kernel/workqueue.c
> ===================================================================
> --- linux-vmstat-remote.orig/kernel/workqueue.c
> +++ linux-vmstat-remote/kernel/workqueue.c
> @@ -3455,6 +3455,56 @@ int schedule_on_each_cpu(work_func_t fun
>  	return 0;
>  }
>  
> +
> +/**
> + * schedule_on_each_cpumask - execute a function synchronously on each
> + * online CPU in @cpumask
> + *
> + * @func: the function to call
> + * @cpumask: the CPUs to call the function on
> + *
> + * schedule_on_each_cpumask() executes @func on each CPU in @cpumask
> + * that is online, using the system workqueue, and blocks until all such
> + * CPUs have completed. schedule_on_each_cpumask() is very slow.
> + *
> + * Return:
> + * 0 on success, -errno on failure.
> + */
> +int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask)
> +{
> +	int cpu;
> +	struct work_struct __percpu *works;
> +	cpumask_var_t effmask;
> +
> +	works = alloc_percpu(struct work_struct);
> +	if (!works)
> +		return -ENOMEM;
> +
> +	if (!alloc_cpumask_var(&effmask, GFP_KERNEL)) {
> +		free_percpu(works);
> +		return -ENOMEM;
> +	}
> +
> +	cpumask_and(effmask, cpumask, cpu_online_mask);
> +
> +	cpus_read_lock();
> +
> +	for_each_cpu(cpu, effmask) {

Is the cpu_online_mask dance really necessary? Why can't you simply do
for_each_online_cpu here? flush_work() on an unqueued work item should
just return, no?
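
I.e. something like this completely untested sketch (it assumes, as
above, that flush_work() on an initialized but never-queued item just
returns):

	cpus_read_lock();

	for_each_cpu(cpu, cpumask) {
		struct work_struct *work = per_cpu_ptr(works, cpu);

		/* initialize all entries, but only queue on online CPUs */
		INIT_WORK(work, func);
		if (cpu_online(cpu))
			schedule_work_on(cpu, work);
	}

	/* flush_work() on the never-queued entries returns immediately */
	for_each_cpu(cpu, cpumask)
		flush_work(per_cpu_ptr(works, cpu));

	cpus_read_unlock();

That would also get rid of the effmask allocation entirely.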

Also there is no synchronization with CPU hotplug at the point where
effmask is computed: the cpumask_and() above runs before
cpus_read_lock() is taken, so cpu_online_mask can change under your
feet. This construct seems unsafe to me.
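
At a minimum the cpumask_and() would have to move inside the
cpus_read_lock() section, so that the online mask cannot change between
computing effmask and queueing the work items. A minimal reordering
sketch (untested):

	cpus_read_lock();

	/* hotplug is now excluded by cpus_read_lock() */
	cpumask_and(effmask, cpumask, cpu_online_mask);

	for_each_cpu(cpu, effmask) {
		struct work_struct *work = per_cpu_ptr(works, cpu);

		INIT_WORK(work, func);
		schedule_work_on(cpu, work);
	}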

> +		struct work_struct *work = per_cpu_ptr(works, cpu);
> +
> +		INIT_WORK(work, func);
> +		schedule_work_on(cpu, work);
> +	}
> +
> +	for_each_cpu(cpu, effmask)
> +		flush_work(per_cpu_ptr(works, cpu));
> +
> +	cpus_read_unlock();
> +	free_percpu(works);
> +	free_cpumask_var(effmask);
> +	return 0;
> +}
> +
>  /**
>   * execute_in_process_context - reliably execute the routine with user context
>   * @fn:		the function to execute
> Index: linux-vmstat-remote/include/linux/workqueue.h
> ===================================================================
> --- linux-vmstat-remote.orig/include/linux/workqueue.h
> +++ linux-vmstat-remote/include/linux/workqueue.h
> @@ -450,6 +450,7 @@ extern void __flush_workqueue(struct wor
>  extern void drain_workqueue(struct workqueue_struct *wq);
>  
>  extern int schedule_on_each_cpu(work_func_t func);
> +extern int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask);
>  
>  int execute_in_process_context(work_func_t fn, struct execute_work *);
>  
> 

-- 
Michal Hocko
SUSE Labs
