Message-ID: <20150818234214.GC24261@byungchulpark-X58A-UD3R>
Date:	Wed, 19 Aug 2015 08:42:14 +0900
From:	Byungchul Park <byungchul.park@....com>
To:	"T. Zhou" <t.s.zhou@...mail.com>
Cc:	mingo@...nel.org, peterz@...radead.org,
	linux-kernel@...r.kernel.org, yuyang.du@...el.com
Subject: Re: [PATCH v2 1/3] sched: sync a se with its cfs_rq when attaching
 and detaching

On Wed, Aug 19, 2015 at 12:32:43AM +0800, T. Zhou wrote:
> Hi,
> 
> On Mon, Aug 17, 2015 at 04:45:50PM +0900, byungchul.park@....com wrote:
> > From: Byungchul Park <byungchul.park@....com>
> > 
> > The current code gets a cfs_rq's average loads wrong when moving a
> > task from one cfs_rq to another. I tested with "echo pid > cgroup"
> > and found that, e.g., cfs_rq->avg.load_avg grew larger and larger
> > every time I moved the task from one cgroup to another. We have to
> > sync a se's average loads with both the *prev* cfs_rq and the next
> > cfs_rq when changing its group.
> > 
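To see the inflation concretely, here is a minimal userspace model of it.
The names mirror the kernel's sched_entity/cfs_rq/sched_avg, but the
structs and arithmetic are simplified assumptions (real PELT also decays
these averages over time), not the actual scheduler code:

#include <stdio.h>

/* Simplified stand-ins for the kernel's structures. */
struct sched_avg    { unsigned long load_avg; };
struct cfs_rq       { struct sched_avg avg; };
struct sched_entity { struct sched_avg avg; };

/* Buggy move: attach the se to the next cfs_rq without detaching it
 * from the prev one, so prev keeps a stale contribution forever. */
static void move_group_buggy(struct sched_entity *se,
			     struct cfs_rq *prev, struct cfs_rq *next)
{
	(void)prev;
	next->avg.load_avg += se->avg.load_avg;
}

/* Fixed move: sync the se with *both* cfs_rqs, as the patch argues. */
static void move_group_fixed(struct sched_entity *se,
			     struct cfs_rq *prev, struct cfs_rq *next)
{
	prev->avg.load_avg -= se->avg.load_avg;	/* detach from prev */
	next->avg.load_avg += se->avg.load_avg;	/* attach to next  */
}

int main(void)
{
	struct sched_entity se = { { 100 } };
	struct cfs_rq a = { { 100 } }, b = { { 0 } };	/* se starts on a */
	int i;

	for (i = 0; i < 3; i++) {	/* "echo pid > cgroup", repeatedly */
		move_group_buggy(&se, &a, &b);
		move_group_buggy(&se, &b, &a);
	}
	printf("buggy: a=%lu b=%lu\n", a.avg.load_avg, b.avg.load_avg);

	a.avg.load_avg = 100;
	b.avg.load_avg = 0;
	for (i = 0; i < 3; i++) {
		move_group_fixed(&se, &a, &b);
		move_group_fixed(&se, &b, &a);
	}
	printf("fixed: a=%lu b=%lu\n", a.avg.load_avg, b.avg.load_avg);
	return 0;
}

With the buggy move, bouncing the single 100-weight task between the two
groups three times prints a=400 b=300; the fixed move, which syncs with
both the prev and the next cfs_rq, keeps a=100 b=0.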
> 
> Some simple thoughts of mine on the above; they may be nothing, or
> wrong, so feel free to ignore them.
> 
> If a load-balance migration happens just before the cgroup change, the
> prev cfs_rq and the next cfs_rq will be on different CPUs. migrate_task_rq_fair()

hello,

the two operations, migration and cgroup change, are protected by a
lock, so this can never happen. :)
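
A minimal userspace sketch of that serialization argument, with a
pthread mutex standing in for the kernel's per-runqueue locking
(task_rq_lock()/rq->lock); the task struct and loop counts below are
illustrative assumptions only:

#include <pthread.h>
#include <stdio.h>

/* One lock per task, taken by both operations. */
struct task {
	pthread_mutex_t lock;
	int cpu;	/* written by "migration" */
	int cgroup;	/* written by "cgroup move" */
};

static struct task t = { PTHREAD_MUTEX_INITIALIZER, 0, 0 };

static void *migrator(void *arg)
{
	int i;

	for (i = 0; i < 100000; i++) {
		pthread_mutex_lock(&t.lock);
		t.cpu ^= 1;	/* the whole detach/attach runs here */
		pthread_mutex_unlock(&t.lock);
	}
	return arg;
}

static void *group_mover(void *arg)
{
	int i;

	for (i = 0; i < 100000; i++) {
		pthread_mutex_lock(&t.lock);
		/* Under the lock, t.cpu is stable: the prev and next
		 * cfs_rq are resolved against a CPU that cannot change
		 * beneath us, so the cross-CPU interleaving described
		 * above cannot occur. */
		t.cgroup ^= 1;
		pthread_mutex_unlock(&t.lock);
	}
	return arg;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, migrator, NULL);
	pthread_create(&b, NULL, group_mover, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("cpu=%d cgroup=%d\n", t.cpu, t.cgroup);
	return 0;
}

Because both writers take the same per-task lock, neither operation can
observe the other mid-update; that is the property being relied on here.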

thanks,
byungchul

> and update_cfs_rq_load_avg() will sync and remove the se's load
> average from the prev cfs_rq; whether or not the task is queued, that
> part is handled correctly. dequeue_task() decays the se and the prev
> cfs_rq before calling task_move_group_fair(). After the new cfs_rq is
> set in task_move_group_fair(): if the task is queued, the se's load
> average is not added to the next cfs_rq (one could set
> last_update_time to 0, as migration does, to get it added); if it is
> !queued, the se's load average also needs to be added to the next
> cfs_rq.
> 
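The parenthetical trick works because the enqueue path treats
last_update_time == 0 as "this se is not attached to any cfs_rq yet"
and then adds its contribution to the new cfs_rq, which is what the
migration path relies on. A rough, simplified sketch of that convention
(the field layout and clock handling here are assumptions, not the
kernel source):

#include <stdio.h>

struct sched_avg    { unsigned long long last_update_time; unsigned long load_avg; };
struct cfs_rq       { struct sched_avg avg; unsigned long long clock; };
struct sched_entity { struct sched_avg avg; };

static void enqueue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	int migrated = !se->avg.last_update_time;	/* 0 => not attached */

	if (migrated)
		cfs_rq->avg.load_avg += se->avg.load_avg; /* attach to next */
	se->avg.last_update_time = cfs_rq->clock;	/* owned by this cfs_rq */
}

int main(void)
{
	struct cfs_rq next = { { 0, 0 }, 1000 };
	/* Cleared last_update_time, as the suggested trick would do. */
	struct sched_entity se = { { 0, 100 } };

	enqueue_entity_load_avg(&next, &se);
	printf("next load_avg=%lu, se last_update_time=%llu\n",
	       next.avg.load_avg, se.avg.last_update_time);
	return 0;
}

So clearing se->avg.last_update_time before enqueueing on the next
cfs_rq would make the queued case attach its load, mirroring what the
migration path already gets for free.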
> If no load-balance migration happens during the cgroup change, the
> prev cfs_rq and the next cfs_rq may be on the same CPU (not sure). In
> that case we need to remove the se's load average ourselves, and also
> add it to the next cfs_rq.
> 
> thanks,
> -- 
> Tao
