Message-ID: <xm26k1qupsy0.fsf@bsegall-linux.svl.corp.google.com>
Date: Tue, 19 Jun 2018 10:49:43 -0700
From: bsegall@...gle.com
To: Xunlei Pang <xlpang@...ux.alibaba.com>
Cc: bsegall@...gle.com, Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] sched/fair: Advance global expiration when period timer is restarted
Xunlei Pang <xlpang@...ux.alibaba.com> writes:
> On 6/19/18 2:58 AM, bsegall@...gle.com wrote:
>> Xunlei Pang <xlpang@...ux.alibaba.com> writes:
>>
>>> I noticed the group frequently got throttled even though it
>>> consumed little cpu, which caused response-time jitters in some
>>> of our business containers that enable cpu quota.
>>>
>>> It's very easy to reproduce:
>>> mkdir /sys/fs/cgroup/cpu/test
>>> cd /sys/fs/cgroup/cpu/test
>>> echo 100000 > cpu.cfs_quota_us
>>> echo $$ > tasks
>>> then repeat:
>>> cat cpu.stat |grep nr_throttled // nr_throttled will increase
>>>
>>> After some analysis, we found that cfs_rq::runtime_remaining will
>>> be cleared by expire_cfs_rq_runtime() due to two equal but stale
>>> "cfs_{b|q}->runtime_expires" after period timer is re-armed.
>>
>> If this is after the first patch, then this is no longer what should
>> happen; instead it would incorrectly /keep/ the old local cfs_rq
>> runtime, and not __refill the global runtime until the next period.
>
> Exactly, I can improve the changelog.
>
>>
>>
>>>
>>> The global expiration should be advanced accordingly when the
>>> bandwidth period timer is restarted.
>>>
>>> Signed-off-by: Xunlei Pang <xlpang@...ux.alibaba.com>
>>> ---
>>> kernel/sched/fair.c | 15 ++++++++++-----
>>> 1 file changed, 10 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index 9f384264e832..bb006e671609 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -5204,13 +5204,18 @@ static void init_cfs_rq_runtime(struct cfs_rq *cfs_rq)
>>>
>>> void start_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
>>> {
>>> + u64 overrun;
>>> +
>>> lockdep_assert_held(&cfs_b->lock);
>>>
>>> - if (!cfs_b->period_active) {
>>> - cfs_b->period_active = 1;
>>> - hrtimer_forward_now(&cfs_b->period_timer, cfs_b->period);
>>> - hrtimer_start_expires(&cfs_b->period_timer, HRTIMER_MODE_ABS_PINNED);
>>> - }
>>> + if (cfs_b->period_active)
>>> + return;
>>> +
>>> + cfs_b->period_active = 1;
>>> + overrun = hrtimer_forward_now(&cfs_b->period_timer, cfs_b->period);
>>> + cfs_b->runtime_expires += (overrun + 1) * ktime_to_ns(cfs_b->period);
>>
>> I think we actually want if (overrun) __refill_cfs_bandwidth_runtime(),
>> much like tg_set_cfs_bandwidth
>
> It's a little different, e.g. periods like below:
>
>          Pm
>          |
>          v
> |-----|-----|
> Pn-1  Pn    Pn+1
>
> Assuming we start_cfs_bandwidth() at some moment Pm between "Pn" and
> "Pn+1", cfs_b->runtime_expires we want here should be "Pn+1", while
> __refill_cfs_bandwidth_runtime() forces it to be "Pm+period"(hrtimer
> expires at "Pn+1").
>
> Regards,
> Xunlei
>
Hmm, yeah, it would wind up out of sync with the hrtimer unless we
restarted the period boundaries.
Now that I look at it even more closely, I don't see how you're getting
problems like that, besides wasting tiny amounts of cpu time in expire:
we only disable the period timer when there has been a full period with
no activity, so we should have a full cfs_b->runtime, and a
runtime_expires from the period that was spent entirely idle (so it
should always be in the past). But with a working extend-local-deadline
check like after patch 1 it shouldn't /matter/ what the actual value of
runtime_expires is, we'll just waste tiny amounts of cpu time
incrementing runtime_expires every account_cfs_rq_runtime call rather
than aborting early.
Actually, I think arguably the real issue may be in patch 1:
>> @@ -4637,8 +4640,10 @@ static int assign_cfs_rq_runtime(struct cfs_rq *cfs_rq)
>> * spread between our sched_clock and the one on which runtime was
>> * issued.
>> */
>> - if ((s64)(expires - cfs_rq->runtime_expires) > 0)
>> + if ((s64)(expires - cfs_rq->runtime_expires) > 0) {
>> cfs_rq->runtime_expires = expires;
>> + cfs_rq->expires_seq = expires_seq;
>> + }
>>
>> return cfs_rq->runtime_remaining > 0;
>> }
We don't want to clobber a tick-extended runtime_expires with the
original one, but we /do/ want to update if expires_seq changes, no
matter what (rather than skipping both updates when we've tick-extended
past the /next/ period's runtime_expires).
I think this should actually be
if (cfs_rq->expires_seq != expires_seq) {
	cfs_rq->runtime_expires = expires;
	cfs_rq->expires_seq = expires_seq;
}
Can you easily check and see if this fixes the issue without this second
patch? That would clear up how the actual problem is happening, and I
think it's a change we'd want anyways.
That said, we should also just update runtime_expires like this patch
does regardless, and as mentioned we should have a full cfs_b->runtime
so we don't need a __refill like I was originally thinking.