Message-ID: <Y9GJk65CxNjwQXoK@u40bc5e070a0153.ant.amazon.com>
Date: Wed, 25 Jan 2023 20:57:07 +0100
From: Roman Kagan <rkagan@...zon.de>
To: Zhang Qiao <zhangqiao22@...wei.com>
CC: Peter Zijlstra <peterz@...radead.org>,
Waiman Long <longman@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
"Vincent Guittot" <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
"Daniel Bristot de Oliveira" <bristot@...hat.com>,
lkml <linux-kernel@...r.kernel.org>
Subject: Re: [bug-report] possible s64 overflow in max_vruntime()

On Fri, Dec 23, 2022 at 09:57:33PM +0800, Zhang Qiao wrote:
> > On 2022/12/22 20:45, Peter Zijlstra wrote:
> > On Wed, Dec 21, 2022 at 11:19:31PM +0800, Zhang Qiao wrote:
> >> hi folks,
> >>
> >> I found a problem with s64 overflow in max_vruntime().
> >>
> >> I created a task group GROUPA (path: /system.slice/xxx/yyy/CGROUPA) and ran a task from this
> >> group on each cpu; each of these tasks is a busy loop at 100% cpu usage.
> >>
> >> When net devices are unregistered, flush_all_backlogs() queues work on system_highpri_wq
> >> and wakes up a high-priority kworker thread on each cpu. However, the kworker thread had
> >> been waiting on the runqueue and was never scheduled.
> >>
> >> After parsing the vmcore, I found the kworker's vruntime to be 0x918fdb05287da7c3 and
> >> cfs_rq->min_vruntime to be 0x124b17fd59db8d02.
> >>
> >> Why is the difference between cfs_rq->min_vruntime and the kworker's vruntime so large?
> >> 1) The system_highpri_wq kworker slept for a very long time (about 300 days).
> >> 2) cfs_rq->curr is an ancestor entity of GROUPA and cfs_rq->curr->load.weight is 2494, so
> >> while the tasks belonging to GROUPA run, their vruntime advances about 420 times faster
> >> than wall-clock time, and cfs_rq->min_vruntime grows just as rapidly.
> >> 3) When the kworker thread is woken up, its vruntime is set to the maximum of its old
> >> vruntime and cfs_rq->min_vruntime. But in max_vruntime() there is an s64 overflow issue,
> >> as follows:
> >>
> >> ---------
> >>
> >> static inline u64 max_vruntime(u64 max_vruntime, u64 vruntime)
> >> {
> >> 	/*
> >> 	 * vruntime     = 0x124b17fd59db8d02 (from cfs_rq->min_vruntime)
> >> 	 * max_vruntime = 0x918fdb05287da7c3 (the kworker's old vruntime)
> >> 	 * (u64)(vruntime - max_vruntime) = 9276074894177461567 > S64_MAX,
> >> 	 * so the s64 cast overflows and delta ends up negative.
> >> 	 */
> >> 	s64 delta = (s64)(vruntime - max_vruntime);
> >> 	if (delta > 0)
> >> 		max_vruntime = vruntime;
> >>
> >> 	return max_vruntime;
> >> }
> >>
> >> ----------
> >>
> >> max_vruntime() will therefore return the kworker's old vruntime. That is incorrect; the
> >> right result would be the value derived from cfs_rq->min_vruntime. The returned value is
> >> far greater than cfs_rq->min_vruntime and causes the kworker thread to starve.
> >>
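To make the wraparound concrete: a minimal userspace sketch (my own, not kernel code) that replays the arithmetic with the two values recovered from the vmcore. The u64 difference is 9276074894177461567, which is larger than S64_MAX (9223372036854775807), so the cast makes delta negative and the stale vruntime is kept:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t se_vruntime  = 0x918fdb05287da7c3ULL;	/* kworker's stale se->vruntime */
	uint64_t min_vruntime = 0x124b17fd59db8d02ULL;	/* derived from cfs_rq->min_vruntime */

	/* Same arithmetic as max_vruntime(se_vruntime, min_vruntime). */
	int64_t delta = (int64_t)(min_vruntime - se_vruntime);

	/* Prints "delta = -9170669179532090049", i.e. the stale value wins. */
	printf("delta = %lld -> %s\n", (long long)delta,
	       delta > 0 ? "min_vruntime wins" : "stale se->vruntime wins");
	return 0;
}
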
> >> Does anyone have a good suggestion for solving this problem, or a bugfix patch?
> >
> > I don't understand what you think the problem is. Signed overflow is
> > perfectly fine and works as designed here.
>
> hi, Peter and Waiman,
>
> This problem occurs in a production environment that deploys some dpdk services. When it
> occurs, the system becomes unavailable (for example, many network-related commands get
> stuck), so I think it is a real problem.
>
> Most network commands (such as "ip") require rtnl_mutex, but the owner of rtnl_mutex is
> waiting in flush_all_backlogs() for the system_highpri_wq kworker, until that highpri
> kworker finishes flushing the network packets.
>
> However, this highpri kworker had been sleeping for so long that the difference between its
> vruntime and cfs_rq->min_vruntime became huge. On wakeup it keeps its old vruntime because
> of the s64 overflow in max_vruntime(), and with that incorrect vruntime it may never be
> scheduled.
>
> Is it necessary to deal with this problem in the kernel?
> If so, one way to fix it: when a task has been sleeping long enough, set its vruntime to
> cfs_rq->min_vruntime on wakeup, avoiding the s64 overflow in max_vruntime(), as follows:
>
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e16e9f0124b0..89df8d7bae66 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4336,10 +4336,14 @@ static void check_spread(struct cfs_rq *cfs_rq, struct sched_entity *se)
> #endif
> }
>
> +/* When a task sleeps for over 200 days, its vruntime will be reset to cfs_rq->min_vruntime. */
> +#define WAKEUP_REINIT_THRESHOLD_NS (200LL * 24 * 3600 * NSEC_PER_SEC)
I wonder where these 200 days come from.
E.g. in our setup we've observed the problem on a 448-cpu system, with
all the cpus being occupied by tasks in a single cpu cgroup (and
therefore contributing to its weight), when the other task (a kworker)
slept for around 209 days. IOW, presumably adding a few more cpus or
just running the whole cgroup at an elevated nice level would make the
difference accumulate faster.
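A quick back-of-the-envelope check (my own arithmetic; it assumes NICE_0_LOAD = 1048576, i.e. scale_load(1024) on 64-bit kernels, and the weights below are only illustrative): vruntime on such a cfs_rq advances roughly NICE_0_LOAD / curr->load.weight times faster than wall-clock time, so the s64 difference of ~2^63 ns is reached after S64_MAX / factor nanoseconds of sleep:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const double nice_0_load = 1048576.0;		/* scale_load(1024) on 64-bit */
	const double s64_max_ns  = 9223372036854775807.0;
	const double ns_per_day  = 24.0 * 3600 * 1e9;

	/* 2494 is the curr->load.weight from the original report; 2048 is
	 * a hypothetical round value for a wide cgroup like ours. */
	const uint64_t weights[] = { 2494, 2048 };

	for (int i = 0; i < 2; i++) {
		double factor = nice_0_load / (double)weights[i];
		printf("weight %4llu: factor ~%3.0f -> overflow after ~%.0f days\n",
		       (unsigned long long)weights[i], factor,
		       s64_max_ns / factor / ns_per_day);
	}
	return 0;
}

With weight 2494 this gives a factor of ~420 and overflow after ~254 days; with weight 2048 it gives ~512 and ~209 days, which matches what we observed. So a 200-day constant only barely covers these configurations, and a wider or more de-weighted cgroup would cross it sooner.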
Thanks,
Roman.
> +
> static void
> place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
> {
> u64 vruntime = cfs_rq->min_vruntime;
> + struct rq *rq = rq_of(cfs_rq);
>
> /*
> * The 'current' period is already promised to the current tasks,
> @@ -4364,8 +4368,11 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
> vruntime -= thresh;
> }
>
> - /* ensure we never gain time by being placed backwards. */
> - se->vruntime = max_vruntime(se->vruntime, vruntime);
> + if (unlikely(!initial && (s64)(rq_clock_task(rq) - se->exec_start) > WAKEUP_REINIT_THRESHOLD_NS))
> + se->vruntime = vruntime;
> + else
> + /* ensure we never gain time by being placed backwards. */
> + se->vruntime = max_vruntime(se->vruntime, vruntime);
> }
>
> static void check_enqueue_throttle(struct cfs_rq *cfs_rq);
>
>
>
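One more note on the guard in the patch above (a standalone sketch of my own; the clock values are illustrative): rq_clock_task() advances at wall-clock rate and is not scaled by weight, so the (s64)(clock - exec_start) comparison itself stays far from s64 overflow (that would take ~292 years), and the reinit path fires as soon as the sleep exceeds the threshold:

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC			1000000000LL
#define WAKEUP_REINIT_THRESHOLD_NS	(200LL * 24 * 3600 * NSEC_PER_SEC)

int main(void)
{
	/* Illustrative clocks: a task that last ran 209 days ago. */
	uint64_t exec_start = 123456789ULL;
	uint64_t now = exec_start + 209ULL * 24 * 3600 * NSEC_PER_SEC;

	/* Same idiom as the patched place_entity(). */
	if ((int64_t)(now - exec_start) > WAKEUP_REINIT_THRESHOLD_NS)
		printf("slept > 200 days: reset vruntime to cfs_rq->min_vruntime\n");
	else
		printf("short sleep: keep the max_vruntime() placement\n");
	return 0;
}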
Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879