Date:	Mon, 8 Mar 2010 20:01:42 +0100
From:	Oleg Nesterov <oleg@...hat.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	linux-kernel@...r.kernel.org, rusty@...tcorp.com.au,
	sivanich@....com, heiko.carstens@...ibm.com,
	torvalds@...ux-foundation.org, mingo@...e.hu, peterz@...radead.org,
	dipankar@...ibm.com, josh@...edesktop.org,
	paulmck@...ux.vnet.ibm.com, akpm@...ux-foundation.org
Subject: Re: [PATCH 1/4] cpuhog: implement cpuhog

On 03/09, Tejun Heo wrote:
>
> Implement a simplistic per-cpu maximum priority cpu hogging mechanism
> named cpuhog.  A callback can be scheduled to run on one or multiple
> cpus with maximum priority, monopolizing those cpus.  This is primarily
> to replace and unify RT workqueue usage in stop_machine and the
> scheduler migration_thread, which currently serves multiple purposes.
>
> Four functions are provided - hog_one_cpu(), hog_one_cpu_nowait(),
> hog_cpus() and try_hog_cpus().
>
> This is to allow clean sharing of resources among stop_cpu and all the
> migration thread users.  One cpuhog thread per cpu is created which is
> currently named "hog/CPU".  This will eventually replace the migration
> thread and take on its name.
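
(Judging by the description above, I'd assume the interface looks
roughly like this - the exact signatures are my guess, not copied
from the patch:

	typedef int (*cpuhog_fn_t)(void *arg);

	/* run fn(arg) on one cpu at max priority, wait, return fn's result */
	int hog_one_cpu(unsigned int cpu, cpuhog_fn_t fn, void *arg);

	/* same, but don't wait for completion */
	void hog_one_cpu_nowait(unsigned int cpu, cpuhog_fn_t fn, void *arg);

	/* run fn(arg) on every cpu in cpumask simultaneously */
	int hog_cpus(const struct cpumask *cpumask, cpuhog_fn_t fn, void *arg);

	/* like hog_cpus(), but fail instead of waiting if the hogs are busy */
	int try_hog_cpus(const struct cpumask *cpumask, cpuhog_fn_t fn, void *arg);
)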

Heh. There is no way I can ack (or even review) the changes in sched.c,
but personally I like this idea.

And I think cpuhog can have more users. Say, wait_task_context_switch()
could perhaps use hog_one_cpu() to force a context switch instead of
looping.
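
Something like this, say - just to illustrate the idea, completely
untested and ignoring migration races:

	static int nop_hog(void *unused)
	{
		/* nothing to do, the preemption itself is the point */
		return 0;
	}

	void wait_task_context_switch(struct task_struct *p)
	{
		/*
		 * The max-priority hog thread preempts whatever runs on
		 * p's cpu, so if p was running it must have been switched
		 * out by the time hog_one_cpu() returns.
		 */
		hog_one_cpu(task_cpu(p), nop_hog, NULL);
	}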

A simple question:

> +struct cpuhog_done {
> +	atomic_t		nr_todo;	/* nr left to execute */
> +	bool			executed;	/* actually executed? */
> +	int			ret;		/* collected return value */
> +	struct completion	completion;	/* fired if nr_todo reaches 0 */
> +};
> +
> +static void cpuhog_signal_done(struct cpuhog_done *done, bool executed)
> +{
> +	if (done) {
> +		if (executed)
> +			done->executed = true;
> +		if (atomic_dec_and_test(&done->nr_todo))
> +			complete(&done->completion);
> +	}
> +}
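
(I assume __hog_cpus() initializes this with nr_todo = number of CPUs
in the mask, say - the helper's name is my guess:

	static void cpuhog_init_done(struct cpuhog_done *done, unsigned int nr_todo)
	{
		memset(done, 0, sizeof(*done));
		atomic_set(&done->nr_todo, nr_todo);
		init_completion(&done->completion);
	}
)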

So, ->executed becomes true if at least one cpuhog_thread() calls ->fn(),

> +int __hog_cpus(const struct cpumask *cpumask, cpuhog_fn_t fn, void *arg)
> +{
> ...
> +
> +	wait_for_completion(&done.completion);
> +	return done.executed ? done.ret : -ENOENT;
> +}

Is this really right?

I mean, perhaps it makes more sense if ->executed were set only if _all_
CPUs in the cpumask "ack" this call?
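
IOW, something like this - only a sketch of what I mean, the new field
name is made up:

	struct cpuhog_done {
		atomic_t		nr_todo;	/* nr left to execute */
		atomic_t		nr_executed;	/* count the acks instead of a flag */
		int			ret;		/* collected return value */
		struct completion	completion;	/* fired if nr_todo reaches 0 */
	};

	static void cpuhog_signal_done(struct cpuhog_done *done, bool executed)
	{
		if (done) {
			if (executed)
				atomic_inc(&done->nr_executed);
			if (atomic_dec_and_test(&done->nr_todo))
				complete(&done->completion);
		}
	}

and then __hog_cpus() could check

	atomic_read(&done.nr_executed) == cpumask_weight(cpumask)

before returning done.ret.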


I guess this currently doesn't matter: stop_machine() uses cpu_online_mask,
and we can't race with hotplug.

Oleg.

