Date:   Thu, 19 Sep 2019 16:32:53 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Qais Yousef <qais.yousef@....com>
Cc:     Jing-Ting Wu <jing-ting.wu@...iatek.com>,
        Valentin Schneider <valentin.schneider@....com>,
        Peter Zijlstra <peterz@...radead.org>,
        Matthias Brugger <matthias.bgg@...il.com>,
        wsd_upstream@...iatek.com,
        linux-kernel <linux-kernel@...r.kernel.org>,
        LAK <linux-arm-kernel@...ts.infradead.org>,
        linux-mediatek@...ts.infradead.org
Subject: Re: [PATCH 1/1] sched/rt: avoid contend with CFS task

On Thu, 19 Sep 2019 at 16:23, Qais Yousef <qais.yousef@....com> wrote:
>
> On 09/19/19 14:27, Vincent Guittot wrote:
> > > > > But for the performance requirement, I think it is better to differentiate between an idle CPU and a CPU that has a CFS task.
> > > > >
> > > > > For example, we use rt-app to evaluate runnable time in a non-patched environment.
> > > > > There are (NR_CPUS-1) heavy CFS tasks and 1 RT task. When a CFS task is running, the RT task wakes up and chooses the same CPU.
> > > > > The CFS task will be preempted and stay runnable until it is migrated to another CPU by load balance.
> > > > > But load balance is not triggered immediately; it is only triggered when a timer tick hits and some condition is satisfied (e.g. jiffies reaching rq->next_balance).
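(For reference, the tick-side trigger being described looks roughly like
the sketch below, modeled on trigger_load_balance() in
kernel/sched/fair.c around that era; exact details vary by kernel
version.)

/* Called from scheduler_tick(); queues the periodic balance as a softirq */
void trigger_load_balance(struct rq *rq)
{
	/* nothing to do while attached to a NULL domain */
	if (unlikely(on_null_domain(rq)))
		return;

	/* periodic balance only fires once jiffies reaches rq->next_balance */
	if (time_after_eq(jiffies, rq->next_balance))
		raise_softirq(SCHED_SOFTIRQ);

	/* may also kick an idle CPU to run the nohz idle balance */
	nohz_balancer_kick(rq);
}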
> > > >
> > > > Yes, you will have to wait for the next tick, which will trigger an idle
> > > > load balance because you have an idle CPU and 2 runnable tasks (1 RT +
> > > > 1 CFS) on the same CPU. But you should not wait for more than 1 tick.
> > > >
> > > > The current load_balance doesn't correctly handle the situation of 1
> > > > CFS and 1 RT task on the same CPU while 1 CPU is idle. There is a
> > > > rework of load_balance under review on the mailing list that fixes
> > > > this problem, and your CFS task should migrate to the idle CPU faster
> > > > than it does now.
> > > >
> > >
> > > Periodic load balance should be triggered when the current jiffies
> > > value reaches rq->next_balance, but rq->next_balance is often not
> > > exactly the same as the next tick.
> > > If cpu_busy, interval = sd->balance_interval * sd->busy_factor, and
> >
> > But if there is an idle CPU in the system, the next idle load balance
> > should happen shortly, because the busy_factor is not applied for this
> > CPU, which is not busy.
> > In this case, the next_balance interval is sd_weight, which is probably
> > 4ms at the cluster level and 8ms at the system level in your case. This
> > means between 1 and 2 ticks.
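The interval computation in question is roughly the following (a sketch
of get_sd_balance_interval() in kernel/sched/fair.c; the clamping
details have changed across versions):

static inline unsigned long
get_sd_balance_interval(struct sched_domain *sd, int cpu_busy)
{
	unsigned long interval = sd->balance_interval;

	/* busy_factor stretches the interval only when this CPU is busy */
	if (cpu_busy)
		interval *= sd->busy_factor;

	/* balance_interval is in ms; convert to jiffies and clamp */
	interval = msecs_to_jiffies(interval);
	interval = clamp(interval, 1UL, max_load_balance_interval);

	return interval;
}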
>
> But if the CFS task we're preempting is latency sensitive, this 1 or 2
> tick delay is too late a recovery.
>
> So while it's good that we recover, a preventative approach would be
> useful too. Just saying :-) I'm still not sure if this is the best
> longer-term approach.

Like using an RT task?
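i.e. making the latency-sensitive task itself RT, along these lines
(illustrative userspace sketch; sched_setscheduler(2) needs
CAP_SYS_NICE, and the priority value here is arbitrary):

#include <sched.h>
#include <stdio.h>

int main(void)
{
	struct sched_param sp = { .sched_priority = 10 };

	/* promote the calling task to SCHED_FIFO so CFS tasks can't delay it */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1)
		perror("sched_setscheduler");

	/* ... latency-sensitive work runs here at RT priority ... */
	return 0;
}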

>
> --
> Qais Yousef
