Message-ID: <20130322150519.GJ25978@kernel.dk>
Date:	Fri, 22 Mar 2013 09:05:19 -0600
From:	Jens Axboe <axboe@...nel.dk>
To:	Viresh Kumar <viresh.kumar@...aro.org>
Cc:	pjt@...gle.com, paul.mckenney@...aro.org, tglx@...utronix.de,
	tj@...nel.org, suresh.b.siddha@...el.com, venki@...gle.com,
	mingo@...hat.com, peterz@...radead.org, rostedt@...dmis.org,
	linaro-kernel@...ts.linaro.org, robin.randhawa@....com,
	Steve.Bannister@....com, Liviu.Dudau@....com,
	charles.garcia-tobin@....com, Arvind.Chauhan@....com,
	linux-rt-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V3 6/7] block: queue work on any cpu

On Mon, Mar 18 2013, Viresh Kumar wrote:
> The block layer uses workqueues for multiple purposes. There is no real
> dependency on running this work on the cpu that queued it.
> 
> On an idle system, it is observed that an idle cpu wakes up many times just
> to service this work. It would be better if we could schedule it on a cpu
> which isn't idle, to save power.
> 
> By an idle cpu (from the scheduler's perspective) we mean:
> - Current task is idle task
> - nr_running == 0
> - wake_list is empty
> 
> This patch replaces schedule_work() and queue_[delayed_]work() with
> queue_[delayed_]work_on_any_cpu() siblings.
> 
> These routines look for the closest (via scheduling domains) non-idle cpu,
> where non-idle is from the scheduler's perspective. If the current cpu is not
> idle, or if all cpus are idle, the work is scheduled on the local cpu.
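
As a rough illustration of the call-site change being described, a minimal
sketch only: queue_work_on_any_cpu() is the helper this series proposes and
its signature is assumed to mirror queue_work(); my_wq and my_work are
stand-in names rather than block layer symbols. (The "idle from the
scheduler's perspective" test above is essentially what the scheduler's
idle_cpu() helper checks.)

/*
 * Illustrative sketch of the conversion described in the changelog.
 * queue_work_on_any_cpu() is assumed to take the same arguments as
 * queue_work(); my_wq and my_work are placeholder names.
 */
#include <linux/workqueue.h>

static struct workqueue_struct *my_wq;
static struct work_struct my_work;

static void kick_my_work(void)
{
	/*
	 * Before this series the call site would simply be
	 *
	 *	queue_work(my_wq, &my_work);
	 *
	 * which always runs the work on the cpu that queued it.
	 */

	/*
	 * With this series, the workqueue picks the closest non-idle cpu
	 * by walking the scheduling domains.  If the current cpu is not
	 * idle, or all cpus are idle, the work stays on the local cpu.
	 */
	queue_work_on_any_cpu(my_wq, &my_work);
}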

What are the performance implications of this? From that perspective,
scheduling on a local core is the better choice. Hopefully it's already
running on the local socket of the device; keeping it on that node would
be preferable.

For the delayed functions, the timer is typically added on the current
CPU. It's hard to know what the state of idleness of CPUs would be when
the delay has expired. How are you handling this?
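
For reference, the mainline delayed-work path at issue looks roughly like the
following (a simplified sketch, not verbatim kernel code): the timer is armed
with add_timer() on the submitting cpu, and the work is only queued from the
timer callback once the delay has expired, so any "pick a non-idle cpu"
decision made at submission time can be stale by then.

/*
 * Simplified sketch (not verbatim kernel code) of the delayed-work path.
 */
static void queue_delayed_work_sketch(struct workqueue_struct *wq,
				      struct delayed_work *dwork,
				      unsigned long delay)
{
	dwork->wq = wq;
	dwork->timer.expires = jiffies + delay;
	add_timer(&dwork->timer);	/* armed on the current (local) cpu */
}

/* Timer callback: only here, after the delay, is the work dispatched. */
static void delayed_work_timer_fn_sketch(unsigned long data)
{
	struct delayed_work *dwork = (struct delayed_work *)data;

	__queue_work(WORK_CPU_UNBOUND, dwork->wq, &dwork->work);
}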

I can see the win from a power consumption point of view, but it quickly
gets a little more complicated than that.

-- 
Jens Axboe
