Date:   Tue, 8 Nov 2016 20:09:56 +0100
From:   Luca Abeni <luca.abeni@...tn.it>
To:     Juri Lelli <juri.lelli@....com>
Cc:     linux-kernel@...r.kernel.org,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Claudio Scordino <claudio@...dence.eu.com>,
        Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [RFC v3 1/6] Track the active utilisation

Hi again,

On Tue, 8 Nov 2016 18:53:09 +0000
Juri Lelli <juri.lelli@....com> wrote:
[...]
> > > Also, AFAIU, do_exit() works on current and the TASK_DEAD case is
> > > handled in finish_task_switch(), so I don't think we are taking
> > > care of the "task is dying" condition.
> > Ok, so I am missing something... The state is set to TASK_DEAD, and
> > then schedule() is called... So, __schedule() sees the dying task as
> > "prev" and invokes deactivate_task() with the DEQUEUE_SLEEP flag...
> > After that, finish_task_switch() calls task_dead_dl(). Is this
> > wrong? If not, why aren't we taking care of the "task is dying"
> > condition?
> > 
> 
> No, I think you are right. But, semantically this cleanup goes in
> task_dead_dl(), IMHO.
Just to be sure I understand correctly: you suggest adding a check for
"state == TASK_DEAD" (skipping the cleanup if the condition is true) in
dequeue_task_dl(), and adding a sub_running_bw() call in task_dead_dl()...
Is this understanding correct?
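If so, just to check I got the idea, the change would look something
like this (an untested sketch on top of patch 1/6, with the rest of
task_dead_dl() trimmed):

static void dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags)
{
	update_curr_dl(rq);
	__dequeue_task_dl(rq, p, flags);

	/*
	 * A dying task is dequeued with DEQUEUE_SLEEP like any other
	 * blocking task, so skip the bw release here and leave it to
	 * task_dead_dl().
	 */
	if ((flags & DEQUEUE_SLEEP) && (p->state != TASK_DEAD))
		sub_running_bw(&p->dl, &rq->dl);
}

static void task_dead_dl(struct task_struct *p)
{
	...
	/* Release the bw that dequeue_task_dl() skipped above. */
	sub_running_bw(&p->dl, dl_rq_of_se(&p->dl));
	...
}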

> It's most probably moot if it complicates
> things, but it might be helpful to differentiate between a
> task that is actually going to sleep (and for which we want to
> activate the timer) and a task that is dying (and for which we want
> to release bw immediately).
I suspect the two cases should be handled in the same way :)

> So, it actually matters for the next patch,
> not here. But, maybe we want to do things clean from the start?
You mean, because patch 2/6 adds
+       if (hrtimer_active(&p->dl.inactive_timer)) {
+               raw_spin_lock_irq(&task_rq(p)->lock);
+               sub_running_bw(&p->dl, dl_rq_of_se(&p->dl));
+               raw_spin_unlock_irq(&task_rq(p)->lock);
+       }
in task_dead_dl()? I suspect this hunk is actually unneeded (worse, it
is wrong :). I am trying to remember why it is there, but I cannot find
any reason... In the coming days I'll run some tests to check whether
the hunk is actually needed. If it is, I'll modify patch 1/6 as you
suggest; if it is not, I'll remove it from patch 2/6 and leave patch
1/6 as it is... Is this ok?
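
The test I have in mind is roughly the following (just a sketch; the
parameters are arbitrary, and it needs root to set SCHED_DEADLINE):
start a few deadline busy-loops, let them run for a while, kill them
from outside, and check that the active utilisation goes back to zero:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include <sys/wait.h>

/* sched_setattr() has no glibc wrapper, so define what we need. */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
};

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

#define NTASKS 4

int main(void)
{
	struct sched_attr attr;
	pid_t pids[NTASKS];
	int i;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy   = SCHED_DEADLINE;
	attr.sched_runtime  =  10 * 1000 * 1000;	/*  10ms */
	attr.sched_deadline = 100 * 1000 * 1000;	/* 100ms */
	attr.sched_period   = 100 * 1000 * 1000;

	for (i = 0; i < NTASKS; i++) {
		pids[i] = fork();
		if (pids[i] == 0) {
			if (syscall(__NR_sched_setattr, 0, &attr, 0) < 0) {
				perror("sched_setattr");
				exit(1);
			}
			for (;;)	/* burn the whole reservation */
				;
		}
	}

	sleep(2);			/* let the tasks run for a while */

	for (i = 0; i < NTASKS; i++)	/* kill them "from another shell" */
		kill(pids[i], SIGKILL);
	for (i = 0; i < NTASKS; i++)
		waitpid(pids[i], NULL, 0);

	/* At this point the active utilisation should be back to 0. */
	return 0;
}

Comparing the accounting before and after the kills (with whatever
debug instrumentation is at hand) should show whether it deviates, as
you suggested.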



			Thanks,
				Luca


> 
> > 
> > > Peter, does what I'm saying make any sense? :)
> > > 
> > > I still have to set up things here to test these patches (sorry,
> > > I was travelling), but could you try to create some tasks and
> > > then kill them from another shell to see if the accounting
> > > deviates or not? Or did you already do this test?
> > I think this is one of the tests I tried... 
> > I have to check if I changed this code after the test (but I do not
> > think I did). Anyway, tomorrow I'll write a script for automating
> > this test, and I'll leave it running for some hours.
> > 
> 
> OK, thanks. As I said, I think you actually handle the case already,
> but I'll try to set up testing soon as well.
> 
> Thanks,
> 
> - Juri
