Message-ID: <20161108200229.GA24076@ARMvm>
Date:   Tue, 8 Nov 2016 20:02:29 +0000
From:   Juri Lelli <juri.lelli@....com>
To:     Luca Abeni <luca.abeni@...tn.it>
Cc:     linux-kernel@...r.kernel.org,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Claudio Scordino <claudio@...dence.eu.com>,
        Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [RFC v3 1/6] Track the active utilisation

On 08/11/16 20:09, Luca Abeni wrote:
> Hi again,
> 
> On Tue, 8 Nov 2016 18:53:09 +0000
> Juri Lelli <juri.lelli@....com> wrote:
> [...]
> > > > Also, AFAIU, do_exit() works on current and the TASK_DEAD case is
> > > > handled in finish_task_switch(), so I don't think we are taking
> > > > care of the "task is dying" condition.
> > > Ok, so I am missing something... The state is set to TASK_DEAD, and
> > > then schedule() is called... So, __schedule() sees the dying task as
> > > "prev" and invokes deactivate_task() with the DEQUEUE_SLEEP flag...
> > > After that, finish_task_switch() calls task_dead_dl(). Is this
> > > wrong? If not, why aren't we taking care of the "task is dying"
> > > condition?
> > > 
> > 
> > No, I think you are right. But, semantically this cleanup goes in
> > task_dead_dl(), IMHO.
> Just to be sure I understand correctly: you suggest to add a check for
> "state == TASK_DEAD" (skipping the cleanup if the condition is true) in
> dequeue_task_dl(), and to add a sub_running_bw() in task_dead_dl()...
> Is this understanding correct?

This is uglier, I know. It would probably make sense, though, if we then needed
it in the next patch. But you are saying the contrary (we don't actually need
it), so in that case we might just want to add a comment here explaining why we
handle the "task is dying" case together with "task is going to sleep" (so that
I don't forget? :).
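For the record, the restructuring being discussed would look roughly like the
sketch below. This is a simplified userspace model, not the real kernel code:
the names sub_running_bw(), dequeue_task_dl(), task_dead_dl() and the
DEQUEUE_SLEEP/TASK_DEAD flags come from the patch set and mainline, but the
data structures and arithmetic are purely illustrative.

```c
#include <assert.h>

/* Toy model of the sched_deadline active-utilisation bookkeeping.
 * Real code lives in kernel/sched/deadline.c. */

#define TASK_RUNNING  0
#define TASK_DEAD     1
#define DEQUEUE_SLEEP 0x01

struct dl_rq { unsigned long running_bw; };
struct task  { unsigned long dl_bw; int state; };

static void add_running_bw(struct dl_rq *rq, struct task *p)
{
	rq->running_bw += p->dl_bw;
}

static void sub_running_bw(struct dl_rq *rq, struct task *p)
{
	rq->running_bw -= p->dl_bw;
}

/* The suggestion above: skip the bandwidth release here when the
 * task is dying, so that the cleanup stays in task_dead_dl(). */
static void dequeue_task_dl(struct dl_rq *rq, struct task *p, int flags)
{
	if ((flags & DEQUEUE_SLEEP) && p->state != TASK_DEAD)
		sub_running_bw(rq, p);	/* task really going to sleep */
}

static void task_dead_dl(struct dl_rq *rq, struct task *p)
{
	sub_running_bw(rq, p);		/* task is dying: release bw here */
}
```

The alternative (the v3 code as posted) drops the TASK_DEAD check and lets the
DEQUEUE_SLEEP path in dequeue_task_dl() cover both cases, which is what the
comment would then need to explain.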

> 
> > It's most probably moot if it complicates
> > things, but it might be helpful to differentiate the case between a
> > task that is actually going to sleep (and for which we want to
> > activate the timer) and a task that is dying (and for which we want
> > to release bw immediately).
> I suspect the two cases should be handled in the same way :)
> 
> > So, it actually matters for next patch,
> > not here. But, maybe we want to do things clean from start?
> You mean, because patch 2/6 adds
> +       if (hrtimer_active(&p->dl.inactive_timer)) {
> +               raw_spin_lock_irq(&task_rq(p)->lock);
> +               sub_running_bw(&p->dl, dl_rq_of_se(&p->dl));
> +               raw_spin_unlock_irq(&task_rq(p)->lock);
> +       }
> in task_dead_dl()? I suspect this hunk is actually unneeded (worse, it
> is wrong :). I am trying to remember why it is there, but I cannot find
> any reason... In the next days, I'll run some tests to check if that
> hunk is actually needed. If yes, then I'll modify patch 1/6 as you
> suggest; if it is not needed, I'll remove it from patch 2/6 and I'll
> not do this change to patch 1/6... Is this ok?
> 

I guess yes, if we don't need to differentiate. Maybe just add a comment, as I
suggested above?

Thanks,

- Juri
