Message-ID: <20171222130118.GO18612@localhost.localdomain>
Date:   Fri, 22 Dec 2017 14:01:18 +0100
From:   Juri Lelli <juri.lelli@...hat.com>
To:     Patrick Bellasi <patrick.bellasi@....com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
        Ingo Molnar <mingo@...hat.com>,
        "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Todd Kjos <tkjos@...roid.com>,
        Joel Fernandes <joelaf@...gle.com>
Subject: Re: [PATCH v3 0/6] cpufreq: schedutil: fixes for flags updates

On 22/12/17 12:50, Patrick Bellasi wrote:
> On 22-Dec 13:43, Juri Lelli wrote:
> > On 22/12/17 12:38, Patrick Bellasi wrote:
> > > On 22-Dec 13:19, Peter Zijlstra wrote:
> > > > On Fri, Dec 22, 2017 at 12:07:37PM +0000, Patrick Bellasi wrote:
> > > > > > I was thinking that since dl is a 'global' scheduler the reservation
> > > > > > would be too, and thus the freq selection just needs to observe a
> > > > > > single CPU;
> > > > > 
> > > > > AFAIU global is only the admission control (which is something worth a
> > > > > thread by itself...) while the dl_se->dl_bw values are aggregated into
> > > > > the dl_rq->running_bw, which ultimately represents the DL bandwidth
> > > > > required on just one CPU.
> > > > 
> > > > Oh urgh yes, forgot that... then the dl freq stuff isn't strictly
> > > > correct, I think. But yes, that's another thread.
> > > 
> > > Mmm... maybe I don't get your point... I was referring to the global
> > > admission control of DL. If you have for example 3 60% DL tasks on a
> > > 2-CPU system, AFAIU the CBS will admit the tasks into the system (since
> > > the overall utilization is 180 < 200 * 0.95) although that workload is
> > > not necessarily schedulable (for example, if the tasks wake up at the
> > > same time, one of them will miss its deadline).
> > > 
> > > But, yeah... maybe I'm completely wrong or, in any case, it's for a
> > > different thread...
> > > 
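For reference, a minimal user-space sketch of that admission test
(hypothetical mock-up, not the kernel code: the real check lives around
__dl_overflow(), and to_ratio()/BW_SHIFT below just mirror the kernel's
fixed-point convention; dl_admit() is made up for illustration):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BW_SHIFT 20                  /* bandwidths are fixed point, <<20 */

    /* runtime/period as a fixed-point ratio, like the kernel's to_ratio() */
    static uint64_t to_ratio(uint64_t period, uint64_t runtime)
    {
        return (runtime << BW_SHIFT) / period;
    }

    /* Global AC: admit if total_bw + new_bw <= nr_cpus * per-CPU max_bw. */
    static bool dl_admit(uint64_t total_bw, uint64_t new_bw,
                         int cpus, uint64_t max_bw)
    {
        return total_bw + new_bw <= (uint64_t)cpus * max_bw;
    }

    int main(void)
    {
        uint64_t bw = to_ratio(10000000, 6000000);  /* 60%: 6ms / 10ms */
        uint64_t max_bw = to_ratio(100, 95);        /* 95% per CPU     */
        uint64_t total = 0;
        int admitted = 0;

        for (int i = 0; i < 3; i++)
            if (dl_admit(total, bw, 2, max_bw)) {
                total += bw;
                admitted++;
            }

        /* Prints 3: 180% <= 190%, yet a synchronous wakeup of all three
         * tasks on 2 CPUs makes one of them miss its deadline. */
        printf("%d\n", admitted);
        return 0;
    }
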
> > > > > > but I suppose there's nothing stopping anybody from splitting a clock
> > > > > > domain down the middle scheduling-wise. So yes, good point.
> > > > > 
> > > > > That makes sense... moreover, using the global utilization, we would
> > > > > end up asking for capacities which cannot be provided by a single CPU.
> > > > 
> > > > Yes, but that _should_ not be a problem if you clock them all high
> > > > enough. But this gets to be complicated real fast I think.
> > > 
> > > IMO the current solution with Juri's patches is working as expected:
> > > we know how many DL tasks are runnable on a CPU and we properly
> > > account for their utilization.
> > > 
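(Right, and for completeness: the per-CPU accounting itself is just an
add/sub of the task's bandwidth. Hypothetical condensed form, loosely
after add_running_bw()/sub_running_bw() in kernel/sched/deadline.c; the
mock struct is made up:)

    #include <stdint.h>

    struct dl_rq_mock { uint64_t running_bw; };  /* stand-in for dl_rq */

    /* a DL task became runnable on this rq: its dl_bw joins running_bw */
    static void add_running_bw(uint64_t dl_bw, struct dl_rq_mock *dl_rq)
    {
        dl_rq->running_bw += dl_bw;
    }

    /* it blocked or migrated away: its dl_bw leaves running_bw */
    static void sub_running_bw(uint64_t dl_bw, struct dl_rq_mock *dl_rq)
    {
        dl_rq->running_bw -= dl_bw;
    }

    int main(void)
    {
        struct dl_rq_mock rq = { 0 };
        add_running_bw(629145, &rq);  /* one 60% task wakes up here     */
        add_running_bw(629145, &rq);  /* a second one: rq now at ~120%  */
        sub_running_bw(629145, &rq);  /* one blocks: back to ~60%       */
        return (int)(rq.running_bw >> 20);  /* exits 0 */
    }
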
> > > The only "issue/limitation" is (possibly) the case described above.
> > > Dunno if we can enqueue two 60% DL tasks on the same CPU... in that
> > > case we would ask for 120% utilization?
> > 
> > In general it depends on the other parameters, deadline and period.
> 
> Right, but what about the case deadline==period, with 60% utilization?
> AFAIU, 3 DL tasks with the same parameters as above will be accepted on
> a 2-CPU system, right?
> 
> And thus, in that case, we can end up with a 120% utilization request
> from DL for a single CPU... but, considering it's lunch o'clock,
> I'm likely missing something...

Nope. CBS on SMP only gives you bounded tardiness (at least with the
admission control the kernel implements). Some deadlines might be missed.
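
FWIW, a condensed, hypothetical sketch of how the per-CPU running_bw
then turns into a frequency request (names loosely after
cpu_util_dl()/get_next_freq(), ignoring schedutil's headroom margin): a
120% request simply saturates at the max frequency while the CPU stays
overloaded.

    #include <stdint.h>
    #include <stdio.h>

    #define BW_SHIFT             20
    #define SCHED_CAPACITY_SHIFT 10
    #define SCHED_CAPACITY_SCALE (1 << SCHED_CAPACITY_SHIFT)

    struct dl_rq_mock { uint64_t running_bw; };  /* stand-in for dl_rq */

    /* DL contribution to CPU utilization, rescaled to capacity units */
    static uint64_t cpu_util_dl(const struct dl_rq_mock *dl_rq)
    {
        return (dl_rq->running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;
    }

    /* freq = max_freq * util / max_capacity, clamped at max_freq */
    static uint64_t next_freq(uint64_t max_freq, uint64_t util)
    {
        uint64_t freq = max_freq * util / SCHED_CAPACITY_SCALE;
        return freq > max_freq ? max_freq : freq;
    }

    int main(void)
    {
        /* two 60% tasks runnable here: running_bw ~= 1.2 << BW_SHIFT */
        struct dl_rq_mock rq = { .running_bw = 2 * ((6ULL << BW_SHIFT) / 10) };
        uint64_t util = cpu_util_dl(&rq);

        /* util ~= 1228 > 1024, so the request clamps to max_freq */
        printf("util=%llu freq=%llu\n", (unsigned long long)util,
               (unsigned long long)next_freq(1000000, util));
        return 0;
    }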
