Date:   Wed, 2 Nov 2022 17:10:08 -0700
From:   Josh Don <joshdon@...gle.com>
To:     Michal Koutný <mkoutny@...e.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Valentin Schneider <vschneid@...hat.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] sched: async unthrottling for cfs bandwidth

Hi Michal,

Thanks for taking a look.

On Wed, Nov 2, 2022 at 9:59 AM Michal Koutný <mkoutny@...e.com> wrote:
>
> Hello.
>
> On Wed, Oct 26, 2022 at 03:44:49PM -0700, Josh Don <joshdon@...gle.com> wrote:
> > To fix this, we can instead unthrottle cfs_rq's asynchronously via a
> > CSD. Each cpu is responsible for unthrottling itself, thus sharding the
> > total work more fairly across the system, and avoiding hard lockups.
>
> FIFO behavior of the cfs_b->throttled_cfs_rq list is quite important to
> ensure fairness of throttling (historically, when the FIFO order wasn't
> honored, it caused some cfs_rq starvation issues).
>
> Despite its name, distribute_cfs_runtime() doesn't distribute the
> runtime; the time is pulled inside assign_cfs_rq_runtime() (but that
> already happens on the target cpu).
> Currently, it's all synchronized under cfs_b->lock, but with your change
> the throttled cfs_rq's would be dispersed among cpus that'd run
> concurrently (assign_cfs_rq_runtime() still takes cfs_b->lock, but not
> necessarily in the unthrottling order).

I don't think my patch meaningfully regresses this; the prior state
was already potentially unfair in a similar way.

Without my patch, distribute_cfs_runtime() will unthrottle the
cfs_rq's, and as you point out, it doesn't actually give them any real
quota; it lets assign_cfs_rq_runtime() take care of that. But this
happens asynchronously on those cpus. If they are idle, they wait for
an IPI from the resched_curr() in unthrottle_cfs_rq(); otherwise, they
simply wait until potentially the next rescheduling point. So we are
currently far from ever being guaranteed that the order in which the
cpus pull actual quota via assign_cfs_rq_runtime() matches the order
they were unthrottled from the list.
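
To make that concrete, the pre-patch flow is shaped roughly like the
sketch below (simplified, not the exact fair.c code; the cfs_b->runtime
accounting is elided, so the nominal grant here is illustrative only):

	/*
	 * Sketch of the current distribute path: it only unthrottles;
	 * each cpu pulls its real quota later, for itself, via
	 * assign_cfs_rq_runtime() at its next scheduling point.
	 */
	static void distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
	{
		struct cfs_rq *cfs_rq;

		rcu_read_lock();
		list_for_each_entry_rcu(cfs_rq, &cfs_b->throttled_cfs_rq,
					throttled_list) {
			struct rq *rq = rq_of(cfs_rq);
			struct rq_flags rf;

			rq_lock_irqsave(rq, &rf);
			if (cfs_rq_throttled(cfs_rq)) {
				/* Nominal grant so the cfs_rq can run. */
				cfs_rq->runtime_remaining = 1;
				/* resched_curr()/IPI happens in here. */
				unthrottle_cfs_rq(cfs_rq);
			}
			rq_unlock_irqrestore(rq, &rf);
		}
		rcu_read_unlock();
	}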

> > +static inline void __unthrottle_cfs_rq_async(struct cfs_rq *cfs_rq)
> > [...]
> > +     if (rq == this_rq()) {
> > +             unthrottle_cfs_rq(cfs_rq);
> > +             return;
> > +     }
>
> It was pointed out to me that generic_exec_single() does something
> similar.
> Wouldn't the bandwidth control code flow be simpler relying on that?

We already hold the rq lock, so we couldn't rely on the
generic_exec_single() special case (it invokes the function directly
when the target cpu is the local one), since that would double-lock
the rq.
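
For reference, the CSD callback side of the patch is shaped roughly
like this (a sketch; it runs on the target cpu and takes that cpu's rq
lock itself, which is exactly what would double-lock if
generic_exec_single() invoked it synchronously on the local cpu):

	static void __cfsb_csd_unthrottle(void *arg)
	{
		struct rq *rq = arg;
		struct rq_flags rf;
		struct cfs_rq *cursor, *tmp;

		rq_lock(rq, &rf);

		list_for_each_entry_safe(cursor, tmp, &rq->cfsb_csd_list,
					 throttled_csd_list) {
			list_del_init(&cursor->throttled_csd_list);
			if (cfs_rq_throttled(cursor))
				unthrottle_cfs_rq(cursor);
		}

		rq_unlock(rq, &rf);
	}

Hence the rq == this_rq() fast path in __unthrottle_cfs_rq_async()
calls unthrottle_cfs_rq() directly instead.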

> Also, can a particular cfs_rq be on both the rq->cfsb_csd_list and
> cfs_b->throttled_cfs_rq lists at any moment?
> I wonder if having a single list_head node in cfs_rq would be feasible
> (and hence enforce this constraint in the data layout).

That's an interesting idea: this could be rewritten so that
distribute_cfs_runtime() pulls the entity off the throttled list and
moves it directly onto the throttled_csd_list; we never actually need
an entity to be on both lists at the same time (see the sketch below).
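
A rough sketch of the idea (hypothetical, not the patch as posted;
locking between cfs_b->lock and the rq lock is elided, and
list_move_tail() reuses the single node):

	list_for_each_entry_safe(cfs_rq, tmp, &cfs_b->throttled_cfs_rq,
				 throttled_list) {
		struct rq *rq = rq_of(cfs_rq);

		/* Single list_head: off the bandwidth-wide list, onto
		 * the target rq's CSD list, so the cfs_rq is never on
		 * both lists at once. */
		list_move_tail(&cfs_rq->throttled_list, &rq->cfsb_csd_list);
		smp_call_function_single_async(cpu_of(rq), &rq->cfsb_csd);
	}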

I'll wait to see if Peter has any comments, but that change could be
made in a v3 of this patch.

Best,
Josh
