Date:   Tue, 8 Nov 2016 18:53:09 +0000
From:   Juri Lelli <juri.lelli@....com>
To:     Luca Abeni <luca.abeni@...tn.it>
Cc:     linux-kernel@...r.kernel.org,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Claudio Scordino <claudio@...dence.eu.com>,
        Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [RFC v3 1/6] Track the active utilisation

On 08/11/16 19:17, Luca Abeni wrote:
> Hi Juri,
> 
> On Tue, 8 Nov 2016 17:56:35 +0000
> Juri Lelli <juri.lelli@....com> wrote:
> [...]
> > > > >  static void switched_to_dl(struct rq *rq, struct task_struct *p)
> > > > >  {
> > > > > +	add_running_bw(&p->dl, &rq->dl);
> > > > >  
> > > > >  	/* If p is not queued we will update its parameters at next wakeup. */
> > > > >  	if (!task_on_rq_queued(p))
> > > > 
> > > > Don't we also need to remove bw in task_dead_dl()?
> > > I think task_dead_dl() is invoked after dequeue_task_dl(), which
> > > takes care of this... Or am I wrong? (I think I explicitly tested
> > > this, and modifications to task_dead_dl() turned out to be
> > > unnecessary.)
> > > 
> > 
> > Mmm. You explicitly check for TASK_ON_RQ_MIGRATING or DEQUEUE_SLEEP
> > (which, btw, could actually be put together in a single "or"
> > condition), so I don't think either of those turns out to be true
> > when the task dies.
> I might be very wrong here, but I think do_exit() just does something
> like
> 	tsk->state = TASK_DEAD;
> and then invokes schedule(), and __schedule() does
>         if (!preempt && prev->state) {
>                 if (unlikely(signal_pending_state(prev->state, prev))) {
>                         prev->state = TASK_RUNNING;
>                 } else {
>                         deactivate_task(rq, prev, DEQUEUE_SLEEP);
> 			[...]
> so dequeue_task_dl() will see DEQUEUE_SLEEP... Or am I misunderstanding
> what you are saying?
> 
> > Also, AFAIU, do_exit() works on current and the TASK_DEAD case is
> > handled in finish_task_switch(), so I don't think we are taking care
> > of the "task is dying" condition.
> Ok, so I am missing something... The state is set to TASK_DEAD, and
> then schedule() is called... So, __schedule() sees the dying task as
> "prev" and invokes deactivate_task() with the DEQUEUE_SLEEP flag...
> After that, finish_task_switch() calls task_dead_dl(). Is this wrong?
> If not, why aren't we taking care of the "task is dying" condition?
> 

No, I think you are right. But, semantically, this cleanup belongs in
task_dead_dl(), IMHO. It's probably moot if it complicates things, but
it might be helpful to differentiate between a task that is actually
going to sleep (for which we want to activate the timer) and a task
that is dying (for which we want to release the bandwidth immediately).
So it actually matters for the next patch, not here. But maybe we want
to do things cleanly from the start?
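
Roughly, something like this is what I have in mind (just a sketch; I'm
assuming a sub_running_bw() counterpart to the add_running_bw() helper
introduced here, and the actual helper in your series may well look or
be named differently):

static void task_dead_dl(struct task_struct *p)
{
	/*
	 * Sketch only: the task is dying, so give its bandwidth back
	 * right here instead of relying on the DEQUEUE_SLEEP path.
	 * sub_running_bw() is assumed to mirror add_running_bw().
	 */
	sub_running_bw(&p->dl, dl_rq_of_se(&p->dl));

	/* ... existing task_dead_dl() cleanup ... */
}

That would make the "dying" case explicit at the call site; the
DEQUEUE_SLEEP path would then have to skip the release for a task whose
state is TASK_DEAD, so we don't release the bandwidth twice.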

> 
> > Peter, does what I'm saying make any sense? :)
> > 
> > I still have to set things up here to test these patches (sorry, I
> > was travelling), but could you try to create some tasks and then kill
> > them from another shell to see whether the accounting deviates or
> > not? Or did you already do this test?
> I think this is one of the tests I tried... 
> I have to check if I changed this code after the test (but I do not
> think I did). Anyway, tomorrow I'll write a script for automating this
> test, and I'll leave it running for some hours.
> 

OK, thanks. As I said, I think you actually handle this case already,
but I'll try to set up testing soon as well.
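
FWIW, the kind of test I'd run is roughly the following (just a sketch,
needs root; the deadline parameters are arbitrary and error handling is
minimal): spawn a few SCHED_DEADLINE tasks, kill them after a while,
and then check that the tracked active utilisation goes back to zero.

#define _GNU_SOURCE
#include <stdlib.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <linux/types.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE	6
#endif

/* sched_setattr() has no glibc wrapper, so go through syscall(). */
struct sched_attr {
	__u32 size;
	__u32 sched_policy;
	__u64 sched_flags;
	__s32 sched_nice;
	__u32 sched_priority;
	__u64 sched_runtime;
	__u64 sched_deadline;
	__u64 sched_period;
};

static int sched_setattr(pid_t pid, const struct sched_attr *attr,
			 unsigned int flags)
{
	return syscall(SYS_sched_setattr, pid, attr, flags);
}

#define NR_TASKS	4

int main(void)
{
	struct sched_attr attr;
	pid_t pids[NR_TASKS];
	int i;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy   = SCHED_DEADLINE;
	attr.sched_runtime  =  10 * 1000 * 1000;	/*  10 ms */
	attr.sched_deadline = 100 * 1000 * 1000;	/* 100 ms */
	attr.sched_period   = 100 * 1000 * 1000;	/* 100 ms */

	for (i = 0; i < NR_TASKS; i++) {
		pids[i] = fork();
		if (pids[i] == 0) {
			if (sched_setattr(0, &attr, 0))
				exit(1);
			for (;;)
				;	/* burn runtime until killed */
		}
	}

	sleep(2);	/* let the tasks run for a while */

	for (i = 0; i < NR_TASKS; i++)
		kill(pids[i], SIGKILL);
	for (i = 0; i < NR_TASKS; i++)
		waitpid(pids[i], NULL, 0);

	/* Here the per-rq active utilisation should be back to zero. */
	return 0;
}

Running something like this in a loop while watching the tracked
utilisation should make any accounting leak show up fairly quickly.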

Thanks,

- Juri
