Message-ID: <CAPtwhKrswHQ1Ue2YO2hJi7h-Dsk6eGPiQ2UmLCq1AxGxMoHr2w@mail.gmail.com>
Date:   Thu, 3 Oct 2019 19:05:56 -0700
From:   Xuewei Zhang <xueweiz@...gle.com>
To:     Phil Auld <pauld@...hat.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Anton Blanchard <anton@...abs.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        linux-kernel@...r.kernel.org, stable@...r.kernel.org,
        trivial@...nel.org, Neel Natu <neelnatu@...gle.com>,
        Hao Luo <haoluo@...gle.com>
Subject: Re: [PATCH] sched/fair: scale quota and period without losing
 quota/period ratio precision

+cc neelnatu@...gle.com and haoluo@...gle.com, who helped a lot with this
issue. Sorry I forgot to include them when sending out the patch.

On Thu, Oct 3, 2019 at 5:55 PM Phil Auld <pauld@...hat.com> wrote:
>
> Hi,
>
> On Thu, Oct 03, 2019 at 05:12:43PM -0700 Xuewei Zhang wrote:
> > quota/period ratio is used to ensure a child task group won't get more
> > bandwidth than the parent task group, and is calculated as:
> > normalized_cfs_quota() = [(quota_us << 20) / period_us]
> >
> > If the quota/period ratio is changed during this scaling due to
> > precision loss, it will cause an inconsistency between parent and child
> > task groups. See the example below:
> > A userspace container manager (kubelet) does three operations:
> > 1) Create a parent cgroup, set quota to 1,000us and period to 10,000us.
> > 2) Create a few children cgroups.
> > 3) Set quota to 1,000us and period to 10,000us on a child cgroup.
> >
> > These operations are expected to succeed. However, if the 147/128
> > scaling happens before step 3), the quota and period of the parent
> > cgroup will be changed:
> > new_quota: 1148437ns, 1148us
> > new_period: 11484375ns, 11484us
> >
> > And when step 3) comes in, the ratio of the child cgroup will be 104857,
> > which is larger than the parent cgroup's ratio (104821), so the
> > operation will fail.
> >
> > Scaling them by a factor of 2 will fix the problem.
>
> I have no issues with the concept. We went around a bit about the actual
> numbers and made it an approximation.
>
> >
> > Fixes: 2e8e19226398 ("sched/fair: Limit sched_cfs_period_timer() loop to avoid hard lockup")
> > Signed-off-by: Xuewei Zhang <xueweiz@...gle.com>
> > ---
> >  kernel/sched/fair.c | 36 ++++++++++++++++++++++--------------
> >  1 file changed, 22 insertions(+), 14 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 83ab35e2374f..b3d3d0a231cd 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -4926,20 +4926,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
> >               if (++count > 3) {
> >                       u64 new, old = ktime_to_ns(cfs_b->period);
> >
> > -                     new = (old * 147) / 128; /* ~115% */
> > -                     new = min(new, max_cfs_quota_period);
> > -
> > -                     cfs_b->period = ns_to_ktime(new);
> > -
> > -                     /* since max is 1s, this is limited to 1e9^2, which fits in u64 */
> > -                     cfs_b->quota *= new;
> > -                     cfs_b->quota = div64_u64(cfs_b->quota, old);
> > -
> > -                     pr_warn_ratelimited(
> > -     "cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us %lld, cfs_quota_us = %lld)\n",
> > -                             smp_processor_id(),
> > -                             div_u64(new, NSEC_PER_USEC),
> > -                             div_u64(cfs_b->quota, NSEC_PER_USEC));
> > +                     /*
> > +                      * Grow period by a factor of 2 to avoid losing precision.
> > +                      * Precision loss in the quota/period ratio can cause __cfs_schedulable
> > +                      * to fail.
> > +                      */
> > +                     new = old * 2;
> > +                     if (new < max_cfs_quota_period) {
>
> I don't like this part as much. There may be a value between
> max_cfs_quota_period/2 and max_cfs_quota_period that would get us out of
> the loop. Possibly in practice it won't matter but here you trigger the
> warning and take no action to keep it from continuing.
>
> Also, if you are actually hitting this then you might want to just start at
> a higher but proportional quota and period.

I'd like to do what you suggested. A quick idea would be to scale period to
max_cfs_quota_period, and scale quota proportionally. However, the naive
implementation won't work in this edge case:
original:
quota: 500,000us  period: 570,000us
after scaling:
quota: 877,192us  period: 1,000,000us
original ratio: 919803
new ratio: 919802
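
For reference, both the commit message numbers and this edge case can be
double-checked with a quick userspace sketch of the normalized_cfs_quota()
formula (just an illustration, not the kernel code):

#include <stdio.h>
#include <stdint.h>

/* normalized_cfs_quota() = (quota_us << 20) / period_us */
static uint64_t normalized(uint64_t quota_us, uint64_t period_us)
{
        return (quota_us << 20) / period_us;
}

int main(void)
{
        /* Commit message example: the child keeps quota=1000us,
         * period=10000us; the parent was scaled by 147/128 to
         * quota=1148us, period=11484us. */
        printf("child ratio:  %llu\n",
               (unsigned long long)normalized(1000, 10000));    /* 104857 */
        printf("parent ratio: %llu\n",
               (unsigned long long)normalized(1148, 11484));    /* 104821 */

        /* Scaling by 2 doubles both terms, so the ratio is exact: */
        printf("parent by 2:  %llu\n",
               (unsigned long long)normalized(2000, 20000));    /* 104857 */

        /* Edge case above: proportional scaling to 1s loses one. */
        printf("original:     %llu\n",
               (unsigned long long)normalized(500000, 570000)); /* 919803 */
        printf("scaled:       %llu\n",
               (unsigned long long)normalized(877192, 1000000));/* 919802 */
        return 0;
}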

To do this right, the code would have to keep an eye on the precision loss,
and sometimes increase quota by 1us to cancel it out.
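
Roughly, that compensation could look like the sketch below (untested, in
microsecond terms; scale_to_max() and max_period_us are made-up names for
illustration, not kernel code):

#include <stdint.h>

/* Scale period up to the cap and quota proportionally, then add back
 * up to a few us of quota so the normalized quota/period ratio never
 * drops below the original. */
static void scale_to_max(uint64_t *quota_us, uint64_t *period_us,
                         uint64_t max_period_us)
{
        uint64_t old_ratio = (*quota_us << 20) / *period_us;
        uint64_t new_quota = (*quota_us * max_period_us) / *period_us;

        /* The floor divisions above can shave the ratio, e.g.
         * 919803 -> 919802 in the edge case. Each extra us of quota
         * raises the ratio by (1 << 20) / max_period_us >= 1 for any
         * cap of 1s or less, so this loop exits after a few steps. */
        while ((new_quota << 20) / max_period_us < old_ratio)
                new_quota++;

        *quota_us = new_quota;
        *period_us = max_period_us;
}

With the edge case numbers, this bumps quota from 877,192us to 877,193us,
which restores the 919803 ratio.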

Also, I think this case is not that important, because if we are hitting
it, that suggests the period is already >0.5s. And if we are still hitting
timeouts with a 0.5s period, scaling it to 1s probably won't help much.
When this happens, I'd imagine the parent cgroup would have a LOT of child
cgroups. It might make sense for the userspace to create the parent cgroup
with a 1s period.

If you think automatically scaling 0.5s+ to 1s is still important, I'm
happy to stash this patch, and send in another one that handles the
0.5+s -> 1s scaling the right way. :) Thanks!

Best regards,
Xuewei

>
>
> Cheers,
> Phil
>
> > +                             cfs_b->period = ns_to_ktime(new);
> > +                             cfs_b->quota *= 2;
> > +
> > +                             pr_warn_ratelimited(
> > +     "cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us = %lld, cfs_quota_us = %lld)\n",
> > +                                     smp_processor_id(),
> > +                                     div_u64(new, NSEC_PER_USEC),
> > +                                     div_u64(cfs_b->quota, NSEC_PER_USEC));
> > +                     } else {
> > +                             pr_warn_ratelimited(
> > +     "cfs_period_timer[cpu%d]: period too short, but cannot scale up without losing precision (cfs_period_us = %lld, cfs_quota_us = %lld)\n",
> > +                                     smp_processor_id(),
> > +                                     div_u64(old, NSEC_PER_USEC),
> > +                                     div_u64(cfs_b->quota, NSEC_PER_USEC));
> > +                     }
> >
> >                       /* reset count so we don't come right back in here */
> >                       count = 0;
> > --
> > 2.23.0.581.g78d2f28ef7-goog
> >
>
> --
