Message-ID: <20200706133434.GA3483883@tassilo.jf.intel.com>
Date: Mon, 6 Jul 2020 06:34:34 -0700
From: Andi Kleen <andi.kleen@...el.com>
To: Feng Tang <feng.tang@...el.com>
Cc: Qian Cai <cai@....pw>, Andrew Morton <akpm@...ux-foundation.org>,
Dennis Zhou <dennis@...nel.org>, Tejun Heo <tj@...nel.org>,
Christoph Lameter <cl@...ux.com>,
kernel test robot <rong.a.chen@...el.com>,
Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>,
Matthew Wilcox <willy@...radead.org>,
Mel Gorman <mgorman@...e.de>,
Kees Cook <keescook@...omium.org>,
Luis Chamberlain <mcgrof@...nel.org>,
Iurii Zaikin <yzaikin@...gle.com>, tim.c.chen@...el.com,
dave.hansen@...el.com, ying.huang@...el.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, lkp@...ts.01.org
Subject: Re: [mm] 4e2c82a409: ltp.overcommit_memory01.fail
> ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
> - if (ret == 0 && write)
> + if (ret == 0 && write) {
> + if (sysctl_overcommit_memory == OVERCOMMIT_NEVER)
> + schedule_on_each_cpu(sync_overcommit_as);
schedule_on_each_cpu() is not atomic with respect to the mode switch, so the
problem could still happen in that window.
I think it may be OK if it eventually resolves, but it certainly needs
a comment explaining that. Can you do some stress testing, toggling the
policy all the time on some CPUs while running the test on other CPUs,
and see if the test fails?
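A rough sketch of such a stress run could look like the following (the LTP
invocation and CPU pinning are just an example, adjust to your setup; needs
root to write the sysctl):

```shell
#!/bin/sh
# Toggle the overcommit policy in a tight loop, pinned to CPU 0,
# while the LTP overcommit case runs on other CPUs.
taskset -c 0 sh -c 'while :; do
    echo 2 > /proc/sys/vm/overcommit_memory   # OVERCOMMIT_NEVER
    echo 0 > /proc/sys/vm/overcommit_memory   # OVERCOMMIT_GUESS
done' &
TOGGLER=$!

# Example LTP run pinned away from the toggling CPU (path assumed).
taskset -c 1-3 ./runltp -s overcommit_memory01

kill "$TOGGLER"
```

If the window matters in practice, the test should fail intermittently while
the toggler is running.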
The other alternative would be to define some intermediate state for the
sysctl variable and only switch to OVERCOMMIT_NEVER once schedule_on_each_cpu()
has returned. But that's more complexity.
-Andi