Date:   Wed, 20 Jul 2022 09:29:12 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Qais Yousef <qais.yousef@....com>
Cc:     Ingo Molnar <mingo@...nel.org>,
        "Peter Zijlstra (Intel)" <peterz@...radead.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        linux-kernel@...r.kernel.org, Xuewen Yan <xuewen.yan94@...il.com>,
        Wei Wang <wvw@...gle.com>,
        Jonathan JMChen <Jonathan.JMChen@...iatek.com>,
        Hank <han.lin@...iatek.com>, Lukasz Luba <lukasz.luba@....com>
Subject: Re: [PATCH 1/7] sched/uclamp: Fix relationship between uclamp and
 migration margin

On Fri, 15 Jul 2022 at 12:37, Qais Yousef <qais.yousef@....com> wrote:
>
> On 07/13/22 14:39, Vincent Guittot wrote:
>
> [...]
>
> > > > That's why I have mentioned that I have thermal pressure and irq in
> > > > mind. I'm speaking about performance level but not about bandwidth and
> > > > time sharing.
> > >
> > > irq pressure has no impact on the cpu's ability to get any OPP, no? It purely
> > > reduces the bandwidth availability for CFS tasks AFAIU. So the task's ability
> > > to achieve a performance level has no correlation with irq pressure IMO. Unless
> > > I missed something.
> >
> > The way irq is accounted in pelt might impact the result. TBH, I
> > haven't looked in detail at what the impact would be
>
> I can't see how irq can impact what performance level we can achieve on any
> CPU. It should just impact bandwidth?

It impacts the cpu and task utilization: task utilization is expressed in the
range of time not used by IRQ, so it can be lower than you expect when you
compare it with uclamp and decide what to do.
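
To illustrate, a rough sketch only (not the exact kernel code, names and
numbers are made up): utilization ends up scaled by the fraction of time
left once irq is removed, something like:

	/*
	 * Illustrative only: how irq time shrinks the utilization seen by
	 * CFS. Values are in capacity units (0..1024).
	 */
	static inline unsigned long scale_by_irq(unsigned long util,
						 unsigned long irq,
						 unsigned long max)
	{
		return util * (max - irq) / max;
	}

	/* e.g. util = 512, irq = 102, max = 1024 -> ~461 */

So a task that would read as 512 without irq pressure reads closer to 460
with ~10% irq time, and any comparison against uclamp_min sees that lower
value.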

>
> [...]
>
> > > > more concerned by the thermal pressure as I mentioned previously. As
> > > > an example the thermal pressure reflects the impact on the performance
> > > > while task is running.
> > >
> > > Like we discussed on that RT email thread. If you have a 1024 task, tiny
> > > thermal pressure will make it look like it won't fit anywhere.
> >
> > maybe another big core without pressure. Otherwise if the task can
>
> Isn't thermal pressure per perf domain?

From a scheduler PoV, we don't have any rule on this.

>
> > accept a lower compute capacity, why not set uclamp_min to a lower
> > value like 900?
>
> Well if the system has lost its top 10% and you're still running as fast as
> the system can possibly do, what better can you do?
>
> I can't see how comparing uclamp with thermal pressure will help.
>
> In feec() we pick the highest spare capacity CPU. So if the bigs were split
> into 1 per perf domain and truly one of them can become severely throttled
> while the other isn't as you're trying to say, then this distribution will pick
> the highest spare capacity one.

The cpu with the highest spare capacity might not be the one with the highest
performance.
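
As an illustration (numbers made up):

	big0: capacity_orig = 1024, thermal pressure = 300, util = 100
	      -> capacity_of ~= 724, spare capacity ~= 624
	big1: capacity_orig = 1024, no pressure, util = 500
	      -> capacity_of = 1024, spare capacity = 524

feec() would pick big0 because it has more spare capacity, even though big1
is the one that can deliver the higher performance level.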

>
> fits_capacity() just says this CPU is a candidate that we can consider.
>
> [...]
>
> > > > TaskA usually runs 4 ms every 8ms but wants to ensure a running time
> > > > around 5ms. Task A asks for a uclamp_min of 768.
> > > > the medium cpu capacity_orig is 800 but it runs at half its max freq because
> > > > of thermal mitigation, so your task will run for more than 8ms
> > >
> > > If thermal pressure is 50%, then capacity_of() is 400. A 50% task will have
> > > util_avg of 512, which is much larger than 0.8 * 400. So this is dealt with
> > > already in this code, no?
> >
> > Maybe my example is not perfect, but apply a mitigation of 20% and you
> > fall into that case
>
>         capacity_orig_of(medium) = 800
>         capacity_of(medium) = 800 * 0.8 - sum_of_(irq, rt) pressure :: <= 640
>
>         migration_margin * capacity_of(medium) = 0.8 * 640 = 512 === p->util_avg
>
> So this task will still struggle to run on the medium even under 20% pressure.

You are nitpicking; 19.75% would be ok.
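
To spell it out (assuming no irq/rt pressure on top of the mitigation):

	capacity_of(medium) = 800 * (1 - 0.1975) ~= 642
	fits_capacity(512, 642): 512 * 1.25 = 640 < 642 -> fits

So the exact percentage is not the interesting part; the point is that a
mitigation just below 20% leaves the task fitting on the medium.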

>
> I can see your point for sure that we could have scenarios where we should pick
> a bigger CPU. But my counterpoint is that if there's meaningful thermal
> pressure we are screwed already and uclamp can't save the day.

uclamp can save the day by triggering the search for another cpu with lower pressure.

>
> I'll repeat my question: how would you encode the relationship?
>
> Consider these scenarios:
>
>
>         capacity_orig_of(little) = 400
>         capacity_orig_of(medium) = 800
>         capacity_orig_of(big) = 1024
>
>         p0->util_avg = 300
>         p0->uclamp_min = 800
>
>         p1->util_avg = 300
>         p1->uclamp_min = 1024
>
>
> When there's 10% thermal pressure on all CPUs.
>
> Does p1 still fit on the big? Fit here means the big is a viable candidate from
> the uclamp point of view.

I agree that this one is tricky because if all cpus are throttled there is no
fitting cpu, but otherwise it's worth looking for the big cpu with the lowest
throttling.
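
To put rough numbers on it: with 10% pressure everywhere, even the big only
offers 1024 - 102 = 922 of mitigated capacity, which is below p1's uclamp_min
of 1024, so strictly speaking nothing fits p1 and the best we can do is the
big cpu with the lowest throttling.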

>
> How would you define the relationship so that p0 will not fit the medium, but
> p1 still fits the big?

I would compare uclamp_min with capacity_orig() - thermal pressure to decide
whether we should look for another cpu.
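
A minimal sketch of that comparison (illustrative only; the helper name is
made up and the inputs are passed explicitly, so no kernel internals are
assumed):

	/*
	 * Illustrative sketch only: can a cpu honour a task's uclamp_min
	 * once thermal mitigation is taken into account? All values are in
	 * capacity units (0..1024).
	 */
	static inline int uclamp_min_fits(unsigned long uclamp_min,
					  unsigned long capacity_orig,
					  unsigned long thermal_pressure)
	{
		return uclamp_min <= capacity_orig - thermal_pressure;
	}

Applied to your scenario with 10% pressure, p0 on the medium gives
800 <= 800 - 80, which is false, so we would look for a bigger cpu; for p1 we
hit the tricky case above where nothing strictly fits and we fall back to the
least throttled big.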

>
> What happens when thermal pressure is 1%? Should p0 still fit on the medium
> then? As Lukasz highlighted in other email threads, the decay of the thermal
> pressure signal has a very long tail.
>
>
> Thanks!
>
> --
> Qais Yousef
