Message-ID: <xm26k37nye7d.fsf@sword-of-the-dawn.mtv.corp.google.com>
Date:	Tue, 08 Jul 2014 10:04:22 -0700
From:	bsegall@...gle.com
To:	Yuyang Du <yuyang.du@...el.com>
Cc:	Peter Zijlstra <peterz@...radead.org>, mingo@...hat.com,
	linux-kernel@...r.kernel.org, rafael.j.wysocki@...el.com,
	arjan.van.de.ven@...el.com, len.brown@...el.com,
	alan.cox@...el.com, mark.gross@...el.com, pjt@...gle.com,
	fengguang.wu@...el.com
Subject: Re: [PATCH 2/2] sched: Rewrite per entity runnable load average tracking

Yuyang Du <yuyang.du@...el.com> writes:

> Thanks, Ben,
>
> On Mon, Jul 07, 2014 at 03:25:07PM -0700, bsegall@...gle.com wrote:
>> Also something to note is that cfs_rq->load_avg just takes samples of
>> load.weight every 1us, which seems unfortunate. We thought this was ok
>> for p->se.load.weight, because it isn't really likely for userspace to
>> be calling nice(2) all the time, but wake/sleeps are more frequent,
>> particularly on newer cpus. Still, it might not be /that/ bad.
>
> The sampling of cfs_rq->load.weight should be equivalent to the current
> code in that, at the end of the day, cfs_rq->load.weight worth of runnable
> load would contribute to runnable_load_avg/blocked_load_avg in both the
> current code and the rewrite.

Yes, but the point is that it looks at load.weight when delta/1024 > 0
and assumes that it has had that load.weight the entire time, when this
might not actually be true. The largest error I can think of is that if
you have a very high-weight task that tends to run for very short
periods of time, you could end up losing its entire contribution to
load. This may be acceptable; I'm not certain.
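
To make the failure mode concrete, here is a toy model of the sampling
(this is not the actual __update_load_avg(); the names, types, and the
omitted decay step are simplified for illustration):

	#include <stdint.h>

	struct toy_avg {
		uint64_t last_update_time;	/* ns */
		uint64_t load_sum;		/* weight integrated over ~us */
	};

	static void toy_update(struct toy_avg *sa, uint64_t now,
			       unsigned long weight)
	{
		uint64_t delta = (now - sa->last_update_time) >> 10; /* ~us */

		/*
		 * A dequeue after a sub-1024ns run bails out here without
		 * advancing the clock; the next update that does pass this
		 * check samples a load.weight that no longer includes the
		 * task, so the short run is never credited.
		 */
		if (!delta)
			return;

		sa->last_update_time = now;
		/* The whole window is credited at the current weight. */
		sa->load_sum += (uint64_t)weight * delta;
	}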

>  
>> Also, as a nitpick/annoyance, this does a lot of
>>  if (entity_is_task(se)) __update_load_avg(... se ...)
>>  __update_load_avg(... cfs_rq_of(se) ...)
>> which is just a waste of the avg struct on every group se, and all it
>> buys you is the ability to not have a separate rq->avg struct (removed
>> by patch 1) instead of rq->cfs.avg.
>
> I actually struggled with this issue. As we only need a sched_avg for the
> task (not for every entity) and a sched_avg for the cfs_rq, I planned to
> move the entity avg into the task. Good?

Yeah, that can probably be done perfectly fine.
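
Roughly, the proposed move would look like this (a structural sketch
only; field placement is assumed, not taken from the patch, and the
real structs carry many more fields):

	#include <stdint.h>

	struct sched_avg {
		uint64_t last_update_time;
		unsigned long load_avg;
	};

	/* Before: every sched_entity embeds an avg, which group
	 * entities never use. */
	struct sched_entity_old {
		struct sched_avg avg;
		/* ... */
	};

	/* After: only the two real users carry one. */
	struct task_struct_new {
		struct sched_avg avg;	/* per-task average */
		/* ... */
	};

	struct cfs_rq_new {
		struct sched_avg avg;	/* per-runqueue average */
		/* ... */
	};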

>
> So what is left are the migrate_task_rq_fair() not-holding-the-lock issue
> and the cfs_rq->avg.load_avg overflow issue. I need some time to study them.
>
> Overall, I think none of these issues is originally caused by the
> combination/split of runnable and blocked load. It is just a matter of how
> synchronized we want to be (this rewrite is the most synchronized) and of
> which workarounds I need to borrow from the current code.

If you can manage to find a way to deal with the migration issue without
it, that would be great; I'm just pretty sure that's why we did the split.
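
For context, the shape of the current code's workaround is roughly the
following (a sketch, not the actual fair.c code, which uses atomic64_t
and different names): the migrating side publishes the leaving task's
contribution to an atomic, and a later update under the rq lock folds
it in.

	#include <stdatomic.h>

	struct toy_cfs_rq {
		long blocked_load_avg;		/* written under the rq lock */
		atomic_long removed_load;	/* may be written without it */
	};

	/* Called from migrate_task_rq_fair() without the old rq's lock. */
	static void toy_migrate_out(struct toy_cfs_rq *cfs_rq, long contrib)
	{
		atomic_fetch_add(&cfs_rq->removed_load, contrib);
	}

	/* Called with the rq lock held, from the regular update path. */
	static void toy_sync_removed(struct toy_cfs_rq *cfs_rq)
	{
		long removed = atomic_exchange(&cfs_rq->removed_load, 0);

		cfs_rq->blocked_load_avg -= removed;
	}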

>
> Yuyang
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
