Message-ID: <20170111221646.4da97555@sweethome>
Date:   Wed, 11 Jan 2017 22:16:46 +0100
From:   luca abeni <luca.abeni@...tannapisa.it>
To:     Juri Lelli <juri.lelli@....com>
Cc:     Daniel Bristot de Oliveira <bristot@...hat.com>,
        linux-kernel@...r.kernel.org,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Claudio Scordino <claudio@...dence.eu.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Tommaso Cucinotta <tommaso.cucinotta@...up.it>
Subject: Re: [RFC v4 0/6] CPU reclaiming for SCHED_DEADLINE

On Wed, 11 Jan 2017 15:06:47 +0000
Juri Lelli <juri.lelli@....com> wrote:

> On 11/01/17 13:39, Luca Abeni wrote:
> > Hi Juri,
> > (I reply from my new email address)
> > 
> > On Wed, 11 Jan 2017 12:19:51 +0000
> > Juri Lelli <juri.lelli@....com> wrote:
> > [...]  
> > > > > For example, with my taskset, with a hypothetical perfect
> > > > > balance of the whole runqueue, one possible scenario is:
> > > > >
> > > > >    CPU    0    1     2     3
> > > > > # TASKS   3    3     3     2
> > > > >
> > > > > In this case, CPUs 0, 1 and 2 are at 100% local utilization.
> > > > > Thus, the current tasks on these CPUs will have their runtime
> > > > > decreased by GRUB. Meanwhile, the lucky tasks on CPU 3 would
> > > > > use additional time that they "globally" do not have - because
> > > > > the system, globally, has a load higher than the 66.6...% seen
> > > > > on the local runqueue. Actually, part of the time taken away
> > > > > from the tasks on [0-2] is being used by the tasks on 3, until
> > > > > the next migration of any task, which will change which tasks
> > > > > are the lucky ones... but without any guarantee that every task
> > > > > will be a lucky one on every activation, causing the problem.
> > > > >
> > > > > Does it make sense?    
> > > > 
> > > > Yes; but my impression is that gEDF will migrate tasks so that
> > > > the distribution of the reclaimed CPU bandwidth is almost
> > > > uniform... Instead, you saw huge differences in the
> > > > utilisations (and I do not think that "compressing" the
> > > > utilisations from 100% to 95% can decrease the utilisation of a
> > > > task from 33% to 25% / 26%...) :)
> > > 
> > > I tried to replicate Daniel's experiment, but I don't see such a
> > > skewed allocation. They get a reasonably uniform bandwidth and the
> > > trace looks fairly good as well (all processes get to run on the
> > > different processors at some time).  
> > 
> > With some effort, I replicated the issue noticed by Daniel... I
> > think it also depends on the CPU speed (and on good or bad luck :),
> > but the "unfair" CPU allocation can actually happen.  
> 
> Yeah, actual allocation in general varies. I guess the question is: do
> we care? We currently don't load balance considering utilizations,
> only dynamic deadlines matter.

Right... But the problem is that with the version of GRUB I proposed
this unfairness can result in some tasks receiving less CPU time than
the guaranteed amount (because some other tasks receive much more). I
think there are at least two possible ways to fix this (without
changing the migration strategy), and I am working on them...
(hopefully, I'll post something next week)
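
To put rough numbers on the scenario quoted above - this is only an
illustrative sketch, assuming ~1/3 utilization per task and 11 tasks in
total (as the "3 tasks fill a CPU" figure implies) and a simplified
"charge delta * running_bw" reclaiming rule, not the actual kernel patch:

/*
 * Illustration only: why the "3 3 3 2" placement is unfair under
 * purely per-runqueue reclaiming.  Assumes 11 tasks of ~1/3
 * utilization each and a simplified "charge delta * running_bw" rule.
 */
#include <stdio.h>
#include <stdint.h>

#define BW_SHIFT 20                     /* same Q20 fixed point as to_ratio() */
#define BW_UNIT  (1ULL << BW_SHIFT)

int main(void)
{
        uint64_t task_bw = BW_UNIT / 3;         /* ~33% per task */
        uint64_t running_bw[4] = {              /* local active utilization */
                3 * task_bw, 3 * task_bw, 3 * task_bw, 2 * task_bw
        };
        uint64_t delta = 1000000;               /* 1 ms of real execution time */

        for (int cpu = 0; cpu < 4; cpu++) {
                uint64_t charged = (delta * running_bw[cpu]) >> BW_SHIFT;

                printf("CPU %d: local bw %.2f, 1 ms of execution charged as %.3f ms\n",
                       cpu, (double)running_bw[cpu] / BW_UNIT,
                       (double)charged / (double)delta);
        }
        /*
         * CPUs 0-2 are locally full, so their tasks pay (almost) full
         * price; CPU 3's two tasks are charged only ~0.67 of their real
         * runtime and can each expand towards ~50%, even though the
         * global load (11/3 =~ 92% per CPU) leaves no real spare capacity.
         */
        return 0;
}

The exact figures in Daniel's runs differ (the bandwidth cap and the
migrations also play a role), but this is where the asymmetry comes from.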


> > > I was expecting that the task could consume 0.5 worth of bandwidth
> > > with the given global limit. Is the current behaviour intended?
> > > 
> > > If we want to change this behaviour, maybe something like the
> > > following might work?
> > > 
> > >  delta_exec = (delta * to_ratio((1ULL << 20) - rq->dl.non_deadline_bw,
> > >                                 rq->dl.running_bw)) >> 20
> > My current patch does
> > 	(delta * rq->dl.running_bw * rq->dl.deadline_bw_inv) >> 20 >> 8;
> > where rq->dl.deadline_bw_inv has been set to
> > 	to_ratio(global_rt_runtime(), global_rt_period()) >> 12;
> > 
> > This seems to work fine, and should introduce less overhead than
> > to_ratio().
> >   
> 
> Sure, we don't want to do divisions if we can avoid them. Why the
> intermediate right shifts, though?

I wrote it like this to remember that the ">> 20" comes from how
"to_ratio()" computes the utilization, and the additional ">> 8"
comes from the fact that deadline_bw_inv is shifted left by 8 to
avoid losing precision (I used 8 instead of 20 so that the
computation can - hopefully - be performed in 32 bits... Of course
I can revise this if needed).

If needed I can change the ">> 20 >> 8" into ">> 28", or remove the
">> 12" from the deadline_bw_inv computation (so that we can use
">> 40" or ">> 20 >> 20" in grub_reclaim()).

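For readers following the fixed-point discussion, here is a minimal,
self-contained sketch of the arithmetic described above (variable names
mirror the thread; the actual patch may differ):

/*
 * Sketch only, not the actual patch.  to_ratio(period, runtime) is
 * runtime/period in Q20 fixed point, so calling it with the arguments
 * swapped, to_ratio(global_rt_runtime(), global_rt_period()), yields
 * the *inverse* of the maximum allowed deadline bandwidth.
 */
#include <stdio.h>
#include <stdint.h>

#define BW_SHIFT 20

static uint64_t to_ratio(uint64_t period, uint64_t runtime)
{
        return (runtime << BW_SHIFT) / period;
}

int main(void)
{
        uint64_t rt_runtime = 950000, rt_period = 1000000;      /* 95% cap */

        /*
         * Set-up time: inverse of the maximum bandwidth, dropped from
         * Q20 to Q8 (the ">> 12"), i.e. kept "shifted left by 8" so the
         * product below stays small.
         */
        uint64_t deadline_bw_inv = to_ratio(rt_runtime, rt_period) >> 12;

        uint64_t running_bw = to_ratio(3, 2);   /* e.g. Uact = 2/3, in Q20 */
        uint64_t delta = 1000000;               /* measured runtime, in ns */

        /*
         * grub_reclaim()-style scaling: delta * Uact / Umax.
         * running_bw is Q20 and deadline_bw_inv is Q8, hence the
         * ">> 20 >> 8" (equivalently ">> 28").
         */
        uint64_t delta_exec =
                (delta * running_bw * deadline_bw_inv) >> 20 >> 8;

        printf("delta = %llu ns charged as delta_exec = %llu ns\n",
               (unsigned long long)delta, (unsigned long long)delta_exec);
        return 0;
}

The to_ratio()-based expression quoted from Juri above computes a
comparable scaling factor, but needs a division on every invocation,
which is the overhead the precomputed deadline_bw_inv avoids.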

			Thanks,
				Luca
