Message-Id: <1405293352-9305-1-git-send-email-yuyang.du@intel.com>
Date:	Mon, 14 Jul 2014 07:15:50 +0800
From:	Yuyang Du <yuyang.du@...el.com>
To:	mingo@...hat.com, peterz@...radead.org,
	linux-kernel@...r.kernel.org
Cc:	pjt@...gle.com, bsegall@...gle.com, arjan.van.de.ven@...el.com,
	len.brown@...el.com, rafael.j.wysocki@...el.com,
	alan.cox@...el.com, mark.gross@...el.com, fengguang.wu@...el.com,
	Yuyang Du <yuyang.du@...el.com>
Subject: [PATCH 0/2 v2] sched: Rewrite per entity runnable load average tracking 

This patchset is really imbalanced in size.

The 1/2 patch is not simply a resend; it is separate for two reasons: 1) this rewrite
does not include the rq's runnable load_avg, and 2) more importantly, I want to
reduce the size of the 2/2 patch, and this is the only way I know how.

The 2/2 patch is very big. Sorry for that, but since this patch is a rewrite, I can
only do it in an all-or-nothing manner. Splitting it would not leave each smaller
piece able to compile or function correctly.

I'd like to thank PeterZ and Ben for their help in fixing the issues and improving
the quality of this version, and Fengguang and his 0Day for finding compile errors
in different configurations.

v2 changes:

- Batch update tg->load_avg, making sure it is up to date before update_cfs_shares
- Remove a migrating task's load from the old CPU's cfs_rq, using atomic operations
  (see the sketch after this list)
- Re-track the load_avg of a group's entities (if any), since we need it in the
  task_h_load calculation, and do it along with the group's own cfs_rq update
- Fix the 32-bit overflow of cfs_rq's load_avg; it is now 64-bit, which should be safe
- Change effective_load to use runnable load_avg consistently in place of load.weight
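
As an illustration of the migration bullet above, here is a minimal, self-contained
userspace sketch of the idea (it is not the code in patch 2/2; every name in it,
toy_cfs_rq, removed_load, fold_removed_load, is made up for the example): a remote
CPU that cannot take the old cfs_rq's lock accumulates the departing task's load
into an atomic counter, and the owning CPU subtracts it the next time it updates
the average under its own lock.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct toy_cfs_rq {
	uint64_t load_avg;          /* 64-bit sum, see the overflow note below */
	atomic_ulong removed_load;  /* written lock-free by remote CPUs */
};

/* Source side: a task migrates away; we may not hold the old rq's lock. */
static void remove_task_load(struct toy_cfs_rq *cfs_rq, unsigned long task_load)
{
	atomic_fetch_add(&cfs_rq->removed_load, task_load);
}

/* Owning CPU: fold the accumulated removals in during a regular update. */
static void fold_removed_load(struct toy_cfs_rq *cfs_rq)
{
	unsigned long removed = atomic_exchange(&cfs_rq->removed_load, 0);

	cfs_rq->load_avg -= removed < cfs_rq->load_avg ? removed : cfs_rq->load_avg;
}

int main(void)
{
	/* three always-runnable nice-0 entities: 3 * 1024 * 47742 */
	struct toy_cfs_rq rq = { .load_avg = 3ULL * 1024 * 47742, .removed_load = 0 };

	remove_task_load(&rq, 1024UL * 47742);  /* one of them migrates off */
	fold_removed_load(&rq);
	printf("load_avg after folding removals: %llu\n",
	       (unsigned long long)rq.load_avg);
	return 0;
}

The 64-bit bullet is plain arithmetic: with NICE_0_LOAD = 1024 and the decayed
geometric sum saturating around 47742, one always-runnable nice-0 entity
contributes roughly 48.9 million to the sum, so on the order of 90 such entities
already exceed 2^32; hence cfs_rq's load_avg is now 64-bit.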

Yuyang Du (2):
  sched: Remove update_rq_runnable_avg
  sched: Rewrite per entity runnable load average tracking

 include/linux/sched.h |   13 +-
 kernel/sched/debug.c  |   32 +--
 kernel/sched/fair.c   |  644 ++++++++++++++++++++-----------------------------
 kernel/sched/proc.c   |    2 +-
 kernel/sched/sched.h  |   34 +--
 5 files changed, 287 insertions(+), 438 deletions(-)

-- 
1.7.9.5

