Message-ID: <CAKfTPtBrh1thRMtA2Y5S17M_8d2BtBsCyCdWOAyFkr8-9jeKVA@mail.gmail.com>
Date:	Thu, 2 Jun 2016 09:29:53 +0200
From:	Vincent Guittot <vincent.guittot@...aro.org>
To:	Yuyang Du <yuyang.du@...el.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Mike Galbraith <umgwanakikbuti@...il.com>,
	Benjamin Segall <bsegall@...gle.com>,
	Paul Turner <pjt@...gle.com>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Dietmar Eggemann <dietmar.eggemann@....com>
Subject: Re: [PATCH v3 2/5] sched/fair: Skip detach and attach new group task

On 1 June 2016 at 21:21, Yuyang Du <yuyang.du@...el.com> wrote:
> On Wed, Jun 01, 2016 at 02:20:09PM +0200, Vincent Guittot wrote:
>> On 1 June 2016 at 05:41, Yuyang Du <yuyang.du@...el.com> wrote:
>> > Vincent reported that the first task to a new task group's cfs_rq will
>> > be attached in attach_task_cfs_rq() and once more when it is enqueued
>> > (see https://lkml.org/lkml/2016/5/25/388).
>> >
>> > Actually, it is much worse. The load is currently attached mostly twice
>> > every time when we switch to fair class or change task groups. These two
>> > scenarios are concerned, which we describe in the following,
>> > respectively:
>>
>> AFAICT, and according to tests that I have done around these 2 use
>> cases, the task is attached only once during a switch to fair and a
>> sched_move_task. Have you faced such a situation during tests? What is
>> the sequence that generates this issue?
>>
>> >
>> > 1) Switch to fair class:
>> >
>> > The sched class change is done like this:
>> >
>> >         if (queued)
>> >           enqueue_task();
>> >         check_class_changed()
>> >           switched_from()
>> >           switched_to()
>> >
>> > If the task is on_rq, it should have already been enqueued, which
>> > MAY have attached the load to the cfs_rq; if so, we shouldn't attach
>>
>> No, it can't. The only way to attach the task during enqueue is if
>> last_update_time has been reset, which is not the case during
>> switched_to_fair.
>
> My response to your above two comments:
>
> As I said, there can be four possibilities going through the above sequences:
>
> (1) on_rq, (2) !on_rq, (a) was fair class (representing last_update_time != 0),
> (b) never was fair class (representing last_update_time == 0, but may not be
> limited to this)
>
> Crossing them, we have (1)(a), (1)(b), (2)(a), and (2)(b).
>
> Some will attach twice, which are (1)(b) and (2)(b); the others will attach
> once, which are (1)(a) and (2)(a). The difficult part is that they can be
> attached at different places.

OK for (1)(b), but not for (2)(b), and it's far from "attached mostly
twice every time".

The root cause is that last_update_time is initialized to 0, which
has a special meaning for the load_avg. We had better initialize
it to something different, as is done for the cfs_rq.
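
To make that special meaning concrete, here is a rough sketch of the
enqueue-time decision (simplified; the names only approximate the real
fair.c helpers, this is not the actual kernel code):

/*
 * Rough sketch, not the actual fair.c code: the enqueue path only
 * attaches the load when last_update_time is still 0, i.e. the entity
 * is considered "never attached / just migrated".
 */
static void enqueue_entity_load_avg_sketch(struct cfs_rq *cfs_rq,
					   struct sched_entity *se)
{
	if (!se->avg.last_update_time)
		attach_entity_load_avg(cfs_rq, se);	/* attach here */
	else
		update_load_avg(se, 0);			/* just update/decay */
}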

>
> So, the simplest solution is to reset the task's last_update_time to 0 when
> the task is switched from fair. Then let task enqueue do the load attachment,
> only once at this place under all circumstances.

IMHO, the better solution is to initialize last_update_time to
something different from 0, which has a special meaning.
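
Something along these lines at init time, for instance (a sketch of the
idea only, not a tested patch; names approximate the kernel's):

/*
 * Give the entity a real timestamp when it is created, so that 0 keeps
 * meaning "migrated / to be attached at enqueue" and nothing else.
 */
static void init_entity_last_update_time_sketch(struct sched_entity *se)
{
	struct cfs_rq *cfs_rq = cfs_rq_of(se);

	se->avg.last_update_time = cfs_rq_clock_task(cfs_rq);
}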

>
>> > 2) Change between fair task groups:
>> >
>> > The task groups are changed like this:
>> >
>> >         if (queued)
>> >           dequeue_task()
>> >         task_move_group()
>> >         if (queued)
>> >           enqueue_task()
>> >
>> > Unlike the switch to fair class, if the task is on_rq, it will be enqueued
>> > after we move task groups, so the simplest solution is to reset the
>> > task's last_update_time when we do task_move_group(), and then let
>> > enqueue_task() do the load attachment.
>>
>> Same for this sequence, the task is explicitly attached only once
>> during the task_move_group but never during the enqueue.
>
> Your patch said it can be attached twice, :)

My patch says the 1st task that is attached to a cfs_rq will be
attached twice, not "The load is currently attached mostly twice
every time when we switch to fair class or change task groups."  You
say that it happens mostly every time, and I disagree with that.

For a change between fair task groups, I still don't see how it can
be attached mostly twice every time.
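
For reference, the group-change path roughly does the following
(heavily simplified sketch, not the exact kernel code; the explicit
detach/attach happens once, in the middle):

static void change_task_group_sketch(struct rq *rq, struct task_struct *p)
{
	bool queued = task_on_rq_queued(p);

	if (queued)
		dequeue_task(rq, p, DEQUEUE_SAVE);

	detach_task_cfs_rq(p);		/* detach load from the old cfs_rq */
	set_task_rq(p, task_cpu(p));	/* switch to the new task group */
	attach_task_cfs_rq(p);		/* attach load to the new cfs_rq */

	if (queued)
		enqueue_task(rq, p, ENQUEUE_RESTORE);
}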

>
>> So you want to delay the attach until the enqueue?
>
> Yes, whether it is a delay or not, the key is to only attach at enqueue();
> this is the simplest solution.
>
>> But what happens
>> if the task was not enqueued when it was moved between groups?
>> The load_avg of the task stays frozen during that period because its
>> last_update_time has been reset.
>
> That is the !on_rq case. By "frozen", you mean it won't be decayed, right?
> Yes, this is the downside. But if the task will never be enqueued,

That's a big downside IMO

> that legacy load does not mean anything in this case.
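
To be clear about why it stays frozen: the decay is driven by the delta
against last_update_time, and a 0 value is treated as "not attached
yet", so nothing updates the signal until the next enqueue. A
simplified model only, not the real __update_load_avg():

static void update_entity_load_sketch(struct sched_entity *se, u64 now)
{
	u64 delta;

	if (!se->avg.last_update_time)
		return;			/* frozen until re-attached */

	delta = now - se->avg.last_update_time;
	se->avg.load_avg = decay_load(se->avg.load_avg, delta);
	se->avg.last_update_time = now;
}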
