Message-Id: <cover.1348568074.git.viresh.kumar@linaro.org>
Date:	Tue, 25 Sep 2012 16:06:05 +0530
From:	Viresh Kumar <viresh.kumar@...aro.org>
To:	linux-kernel@...r.kernel.org
Cc:	pjt@...gle.com, paul.mckenney@...aro.org, tglx@...utronix.de,
	tj@...nel.org, suresh.b.siddha@...el.com, venki@...gle.com,
	mingo@...hat.com, peterz@...radead.org, robin.randhawa@....com,
	Steve.Bannister@....com, Arvind.Chauhan@....com,
	amit.kucheria@...aro.org, vincent.guittot@...aro.org,
	linaro-dev@...ts.linaro.org, patches@...aro.org,
	Viresh Kumar <viresh.kumar@...aro.org>
Subject: [PATCH 0/3] Create sched_select_cpu() and use it in workqueues

In order to save power, it would be useful to schedule work on non-idle
CPUs instead of waking up an idle one.

To achieve this, we need the scheduler to guide kernel frameworks (such as
timers and workqueues) toward the CPU that is preferred for running such
work.

This patchset implements that concept.

The first patch adds the sched_select_cpu() routine, which returns a
preferred non-idle CPU. It takes as argument the maximum sched-domain level
up to which we may search for a CPU; the accepted options are SD_SIBLING,
SD_MC, SD_BOOK, SD_CPU and SD_NUMA.
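
For illustration, a caller could use it as below. This is only a sketch:
the exact prototype of sched_select_cpu() and the spelling of the SD_*
level constants are assumptions based on the description above; the real
definitions live in patch 1.

/*
 * Sketch: queue work on a preferably non-idle CPU, searching sched
 * domains no wider than the multi-core (MC) level.
 */
static void queue_on_nonidle_cpu(struct workqueue_struct *wq,
				 struct work_struct *work)
{
	int cpu = sched_select_cpu(SD_MC);	/* preferred non-idle CPU */

	queue_work_on(cpu, wq, work);
}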

The second and third patches adapt the workqueue framework to use it.
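
Conceptually, the workqueue change looks like the following (a sketch of
the idea, not the literal diff from patch 3):

	/* Before: queue work on the local CPU. */
	cpu = raw_smp_processor_id();

	/* After: ask the scheduler for a preferably non-idle CPU. */
	cpu = sched_select_cpu(SD_NUMA);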

This concept was discussed earlier, at last year's LPC:
http://summit.linuxplumbersconf.org/lpc-2012/meeting/90/lpc2012-sched-timer-workqueue/

Figures:
--------

Test case 1:
------------
- Performed on TC2 with ubuntu-devel.
- Boot TC2 and run:
  $ trace-cmd record -e workqueue_execute_start

This traces only the points where a work item actually runs.

Run this for 150 seconds.

Results:
--------
Domain 0: CPUs 0-1
Domain 1: CPUs 2-4


Base kernel (without my modifications):
---------------------------------------

CPU             No. of works run by CPU
-----           -----------------------
CPU0:           7
CPU1:           445
CPU2:           444
CPU3:           315
CPU4:           226


With my modifications:
----------------------

CPU             No. of works run by CPU
-----           -----------------------
CPU0:           31
CPU2:           797
CPU3:           274
CPU4:           86


Test case 2:
------------
I have created a small module which does the following (see the sketch
after this list):
- Creates one work item for each CPU (queued with queue_work_on(), so it
  must run on that CPU).
- Each of these work items then queues "n" works with queue_work(). These
  works are tracked within the module, and the results are printed at the
  end.
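
A minimal sketch of such a module follows. This is a reconstruction for
illustration, not the original test module: the use of system_wq, the
value of NWORKS, and all names here are assumptions.

/* Hypothetical reconstruction of the test module described above. */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/slab.h>
#include <linux/smp.h>
#include <linux/cpu.h>
#include <linux/atomic.h>

#define NWORKS	100	/* plays the role of "n" */

static atomic_t runs_per_cpu[NR_CPUS];
static struct work_struct seed_works[NR_CPUS];

/* Record which CPU this work actually ran on. */
static void counted_work_fn(struct work_struct *work)
{
	atomic_inc(&runs_per_cpu[raw_smp_processor_id()]);
	kfree(work);
}

/* Runs pinned to one CPU; queues "n" unpinned works from there. */
static void seed_work_fn(struct work_struct *work)
{
	int i;

	for (i = 0; i < NWORKS; i++) {
		struct work_struct *w = kmalloc(sizeof(*w), GFP_KERNEL);

		if (!w)
			break;
		INIT_WORK(w, counted_work_fn);
		queue_work(system_wq, w);	/* CPU choice left to the kernel */
	}
}

static int __init wq_test_init(void)
{
	int cpu;

	/* One seed work per CPU, pinned with queue_work_on(). */
	for_each_online_cpu(cpu) {
		INIT_WORK(&seed_works[cpu], seed_work_fn);
		queue_work_on(cpu, system_wq, &seed_works[cpu]);
	}
	return 0;
}

static void __exit wq_test_exit(void)
{
	int cpu;

	flush_workqueue(system_wq);
	for_each_possible_cpu(cpu)
		pr_info("CPU%d ran %d works\n", cpu,
			atomic_read(&runs_per_cpu[cpu]));
}

module_init(wq_test_init);
module_exit(wq_test_exit);
MODULE_LICENSE("GPL");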

This gave similar results, with n ranging from 10 to 1000.

Viresh Kumar (3):
  sched: Create sched_select_cpu() to give preferred CPU for power
    saving
  workqueue: create __flush_delayed_work to avoid duplicating code
  workqueue: Schedule work on non-idle cpu instead of current one

 arch/arm/Kconfig      | 11 +++++++
 include/linux/sched.h | 11 +++++++
 kernel/sched/core.c   | 88 +++++++++++++++++++++++++++++++++++++++------------
 kernel/workqueue.c    | 36 ++++++++++++++-------
 4 files changed, 115 insertions(+), 31 deletions(-)

-- 
1.7.12.rc2.18.g61b472e


