Date:	Mon, 7 Jan 2013 07:04:22 -0800
From:	Tejun Heo <tj@...nel.org>
To:	Viresh Kumar <viresh.kumar@...aro.org>
Cc:	rostedt@...dmis.org, Vincent Guittot <vincent.guittot@...aro.org>,
	pjt@...gle.com, paul.mckenney@...aro.org, tglx@...utronix.de,
	suresh.b.siddha@...el.com, venki@...gle.com, mingo@...hat.com,
	peterz@...radead.org, Arvind.Chauhan@....com,
	linaro-dev@...ts.linaro.org, patches@...aro.org,
	pdsw-power-team@....com, linux-kernel@...r.kernel.org,
	linux-rt-users@...r.kernel.org
Subject: Re: [PATCH V2 Resend 3/4] workqueue: Schedule work on non-idle cpu
 instead of current one

Hello, Viresh.

On Mon, Jan 07, 2013 at 03:28:33PM +0530, Viresh Kumar wrote:
> First, the root cause of this patchset.
> 
> Some of us at Linaro are working on future ARM cores: big.LITTLE
> systems. These have a few very powerful, power-hungry cores (big,
> currently A15s) and a few very efficient ones (LITTLE, currently A7s).
> 
> The ultimate goal is to save as much power as possible without
> compromising much on performance. For that, we want most work to run
> on the LITTLE cores and only performance-demanding work on the big
> cores. There are multiple efforts going on in this direction. We
> thought the big cores (the A15s) shouldn't be used for running small
> tasks like timers/workqueues, and this patch is an attempt toward
> that goal.

I see.  My understanding of big.LITTLE is very superficial, so there's
a good chance I'm utterly wrong, but to me it seems more of a stop-gap
than something with staying power: the complexity doesn't seem to have
any inherent reason other than the failure to make the big cores power
efficient enough while idle or under minor load.

Maybe this isn't specific to big.LITTLE, and heterogeneous cores will
be the way of the future, with big.LITTLE merely at the forefront of
the trend.  Even if that's not the case, it certainly doesn't mean we
can't accommodate big.LITTLE, but, at this point, it does make me a
bit more reluctant to make wholesale changes specifically for it.

> On top of that, we can do some load balancing of work items across
> the CPUs that are already awake, so the work gets done more quickly.
> Also, we shouldn't start using an idle cpu just for processing work :)

The latter part, not using an idle cpu just for processing work,
applies to homogeneous systems too, but, as I wrote earlier, work
items don't spontaneously happen on an idle core.  Something,
e.g. a timer, needs to kick it.  So, by definition, a CPU already
isn't idle when a work item starts execution on it.  What am I
missing here?
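
(For illustration, the kind of self-rearming work item this thread
keeps coming back to looks roughly like the following; all the names
here are made up:)

	/* Illustrative only: a delayed work item that rearms itself from
	 * its own handler.  schedule_delayed_work() queues on the local
	 * CPU by default, so the work keeps waking whichever CPU it last
	 * ran on. */
	static void poll_fn(struct work_struct *work);
	static DECLARE_DELAYED_WORK(poll_work, poll_fn);

	static void poll_fn(struct work_struct *work)
	{
		/* ... periodic housekeeping ... */
		schedule_delayed_work(&poll_work, HZ);	/* rearm in 1s */
	}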

> I have another idea that we can try:
> 
> queue_work_on_any_cpu().
>
> With this we wouldn't break any existing code, and we can try to
> migrate old users to the new infrastructure (at least the ones which
> rearm their works from their work handlers). What do you say?

Yeah, this could be a better solution, I think.  Plus, it's not like
finding the optimal cpu is free.
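
For concreteness, a minimal sketch of what such a helper might look
like (nothing here is existing kernel API, and the CPU-selection
policy is only a placeholder):

	/* Hypothetical: queue @work on some non-idle online CPU, falling
	 * back to the local CPU if everything is idle.  Picking the best
	 * CPU is deliberately left naive, and CPU hotplug is ignored. */
	static bool queue_work_on_any_cpu(struct workqueue_struct *wq,
					  struct work_struct *work)
	{
		int cpu;

		for_each_online_cpu(cpu) {
			if (!idle_cpu(cpu))
				return queue_work_on(cpu, wq, work);
		}
		/* Everyone is idle; don't wake anyone, queue locally. */
		return queue_work(wq, work);
	}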

> To take care of the cache locality issue, we can pass an argument to
> this routine that provides either:
> - the mask of cpus to schedule this work on, or
> - the sched level (SD_LEVEL) of the cpus to run it on.

Let's start simple for now.  If we really need it, we can always add
more later.
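
If the cpumask form ever does become necessary, it could start as a
thin variant of the sketch above (again hypothetical, name and all):

	/* Hypothetical: like queue_work_on_any_cpu() above, but
	 * constrained to @mask, e.g. the LITTLE cluster. */
	static bool queue_work_on_any_cpu_mask(struct workqueue_struct *wq,
					       struct work_struct *work,
					       const struct cpumask *mask)
	{
		int cpu;

		for_each_cpu_and(cpu, mask, cpu_online_mask) {
			if (!idle_cpu(cpu))
				return queue_work_on(cpu, wq, work);
		}
		return queue_work(wq, work);	/* fall back to local CPU */
	}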

Thanks.

-- 
tejun