Date:   Wed, 21 Dec 2022 11:10:01 -0500
From:   Waiman Long <longman@...hat.com>
To:     Zhang Qiao <zhangqiao22@...wei.com>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>
Cc:     lkml <linux-kernel@...r.kernel.org>
Subject: Re: [bug-report] possible s64 overflow in max_vruntime()

On 12/21/22 10:19, Zhang Qiao wrote:
> hi folks,
>
>      I found a problem with s64 overflow in max_vruntime().
>
>      I created a task group GROUPA (path: /system.slice/xxx/yyy/CGROUPA) and ran a task from this
> group on each cpu; these tasks are while loops burning 100% cpu.
>
>      When net devices are unregistered, flush_all_backlogs() queues work on system_highpri_wq and
> wakes up a high-priority kworker thread on each cpu. However, the kworker thread has been
> waiting on the queue and has not been scheduled.
>
>      After parsing the vmcore, I found that the kworker's vruntime is 0x918fdb05287da7c3 and
> cfs_rq->min_vruntime is 0x124b17fd59db8d02.
>
>      Why is the difference between cfs_rq->min_vruntime and the kworker's vruntime so large?
>      1) The kworker on system_highpri_wq slept for a very long time (about 300 days).
This is an interesting problem. That means if the kworker has been 
sleeping even longer, like 600 days, it may overflow u64 as well. My 
suggestion is to cap the sleep time dependency of the vruntime 
computation to a max value that cannot overflow s64 when combined with a 
max load.weight. IOW, if the tasks are sleeping long enough, they are 
all treated the same.
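
Roughly what I have in mind, modeled as a small userspace sketch (the
place_waking_vruntime() helper, the MAX_SLEEP_CREDIT_NS name and the 60s
cap are made up for illustration; the real wakeup path in place_entity()
is more involved):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;
typedef int64_t s64;

/* unchanged helper from kernel/sched/fair.c */
static inline u64 max_vruntime(u64 max_vruntime, u64 vruntime)
{
	s64 delta = (s64)(vruntime - max_vruntime);

	if (delta > 0)
		max_vruntime = vruntime;

	return max_vruntime;
}

#define NSEC_PER_SEC		1000000000ULL

/*
 * Made-up cap for illustration only: sleeps longer than this are all
 * treated the same.  A real value would be chosen so that the cap,
 * combined with the maximum vruntime scaling factor, can never push
 * the vruntime difference past S64_MAX.
 */
#define MAX_SLEEP_CREDIT_NS	(60ULL * NSEC_PER_SEC)

/* hypothetical stand-in for the wakeup part of place_entity() */
static u64 place_waking_vruntime(u64 se_vruntime, u64 cfs_min_vruntime,
				 u64 sleep_time_ns)
{
	/*
	 * Past the cap, the stale vruntime is ignored entirely; the s64
	 * trick in max_vruntime() cannot be trusted once the gap may
	 * exceed S64_MAX.
	 */
	if (sleep_time_ns > MAX_SLEEP_CREDIT_NS)
		return cfs_min_vruntime;

	return max_vruntime(se_vruntime, cfs_min_vruntime);
}

int main(void)
{
	u64 kworker  = 0x918fdb05287da7c3ULL;	/* from the vmcore */
	u64 min_vrun = 0x124b17fd59db8d02ULL;	/* cfs_rq->min_vruntime */
	u64 sleep_ns = 300ULL * 24 * 3600 * NSEC_PER_SEC;	/* ~300 days */

	printf("current max_vruntime(): %#llx\n",
	       (unsigned long long)max_vruntime(kworker, min_vrun));
	printf("with sleep-time cap:    %#llx\n",
	       (unsigned long long)place_waking_vruntime(kworker, min_vrun,
							 sleep_ns));
	return 0;
}

With such a cap, the 300-day sleeper simply restarts from
cfs_rq->min_vruntime instead of feeding a possibly wrapped difference
into the signed comparison.
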
>      2) cfs_rq->curr is an ancestor of GROUPA and cfs_rq->curr->load.weight is 2494, so when
> the task belonging to GROUPA runs for a long time, its vruntime advances about 420 times
> faster than its execution time, and cfs_rq->min_vruntime also grows rapidly.
>      3) When the kworker thread is woken up, its vruntime is set to the maximum of the kworker's
> old vruntime and cfs_rq->min_vruntime. But in max_vruntime() there is an s64 overflow issue,
> as follows:
>
> ---------
>
> static inline u64 max_vruntime(u64 max_vruntime, u64 vruntime)
> {
> 	/*
> 	 * vruntime = cfs_rq->min_vruntime = 0x124b17fd59db8d02
> 	 * max_vruntime = kworker's vruntime = 0x918fdb05287da7c3
> 	 * (u64)(vruntime - max_vruntime) = 9276074894177461567 > S64_MAX,
> 	 * so the cast to s64 overflows and delta ends up negative
> 	 */
> 	s64 delta = (s64)(vruntime - max_vruntime);
> 	if (delta > 0)
> 		max_vruntime = vruntime;
>
> 	return max_vruntime;
> }
>
> ----------
>
> max_vruntime() will return the kworker's old vruntime, which is incorrect; the correct result
> should be cfs_rq->min_vruntime. This incorrect result is greater than cfs_rq->min_vruntime and
> will cause the kworker thread to be starved.
>
>      Does anyone have a good suggestion for solving this problem, or a bugfix patch?
>
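
For anyone who wants to see the wrap-around itself, the two vmcore values
can be plugged into a few lines of userspace C (a standalone sketch, not
kernel code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t kworker_vruntime = 0x918fdb05287da7c3ULL;	/* from the vmcore */
	uint64_t min_vruntime     = 0x124b17fd59db8d02ULL;	/* cfs_rq->min_vruntime */

	/* the subtraction max_vruntime() performs, wrapping around 2^64 */
	uint64_t diff  = min_vruntime - kworker_vruntime;
	int64_t  delta = (int64_t)diff;

	printf("diff as u64: %llu\n", (unsigned long long)diff);
	printf("diff as s64: %lld\n", (long long)delta);
	return 0;
}

The unsigned difference is larger than S64_MAX, so the cast flips delta
negative, the "delta > 0" branch is not taken, and the kworker keeps its
stale vruntime, exactly as described above.
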
Cheers,
Longman
