Date:   Wed, 12 Aug 2020 11:51:54 +0800
From:   "丁天琛" <>
To:     "'Mel Gorman'" <>
Cc:     "'Ingo Molnar'" <>,
        "'Peter Zijlstra'" <>,
        "'Juri Lelli'" <>,
        "'Vincent Guittot'" <>,
        "'Dietmar Eggemann'" <>,
        "'Steven Rostedt'" <>,
        "'Ben Segall'" <>,
        "'linux-kernel'" <>,
        "'??????'" <>
Subject: RE: [RFC PATCH] sched/numa: fix bug in update_task_scan_period

OK, thanks for your advice; I'll use a label instead.
Regarding migration failures: if new failures still occur after the array is
cleared (meaning the node is still overloaded), the scanning period will be
doubled, just as without this patch. However, if the failures do not recur,
the scanning period should be adjusted according to the rules that follow
(i.e., the ps and lr ratios). I believe this matches the original design
intent, right?
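For reference, a minimal standalone sketch of the out-label approach being
discussed (this is an illustrative model, not the actual kernel code; the
function name, the global array, and NUMA_PERIOD_MAX are simplifications of
task_struct::numa_faults_locality and update_task_scan_period):

```c
#include <string.h>

/* Hypothetical model: index 2 mimics numa_faults_locality[2],
 * which counts failed NUMA page migrations in the current window. */
#define NUMA_PERIOD_MAX 1000
static unsigned long numa_faults_locality[3];

static unsigned int update_scan_period(unsigned int period)
{
	if (numa_faults_locality[2] > 0) {
		/* Migration failures in this window: back off scanning. */
		period = period * 2 > NUMA_PERIOD_MAX ?
			 NUMA_PERIOD_MAX : period * 2;
		goto out;
	}
	/* ... in the real code, adjust the period here from the
	 * private/shared (ps) and local/remote (lr) fault ratios ... */
out:
	/* The fix: clear the window stats on every exit path, so one
	 * burst of failures cannot pin the period at its maximum. */
	memset(numa_faults_locality, 0, sizeof(numa_faults_locality));
	return period;
}
```

The point of the label is that the early "double the period" path previously
returned before the stats were cleared; routing it through `out:` makes the
clear unconditional, so the next window starts from fresh counts.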

> -----Original Message-----
> From: Mel Gorman <>
> Sent: Tuesday, August 11, 2020 7:02 PM
> To: Tianchen Ding <>
> Cc: Ingo Molnar <>; Peter Zijlstra
> <>; Juri Lelli <>; Vincent
> <>; Dietmar Eggemann
> <>; Steven Rostedt <>;
> Ben Segall <>; linux-kernel <>; ?????? <>
> Subject: Re: [RFC PATCH] sched/numa: fix bug in update_task_scan_period
> On Tue, Aug 11, 2020 at 04:30:31PM +0800, 丁天琛 wrote:
> > When p->numa_faults_locality[2] > 0, numa_scan_period is doubled, but
> > this array will never be cleared, which causes scanning period always
> > reaching its max value. This patch clears numa_faults_locality after
> > numa_scan_period being doubled to fix this bug.
> >
> An out label at the end of the function that clears numa_faults_locality
> would also work, with a comment explaining why. That aside, what is the
> user-visible impact of the patch? If there are no useful faults or
> migrations, it makes sense that scanning is very slow until the situation
> changes. The corner case is that a migration failure might keep the scan
> rate slower than it should be, but the flip side is that fixing it might
> increase the scan rate and still incur migration failures, which
> introduces overhead with no gain.
> --
> Mel Gorman
> SUSE Labs
