Message-ID: <20180905134038.GE14951@dhcp22.suse.cz>
Date: Wed, 5 Sep 2018 15:40:38 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Cc: David Rientjes <rientjes@...gle.com>, Tejun Heo <tj@...nel.org>,
Roman Gushchin <guro@...com>,
Johannes Weiner <hannes@...xchg.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm,page_alloc: PF_WQ_WORKER threads must sleep at
should_reclaim_retry().
On Wed 05-09-18 22:20:58, Tetsuo Handa wrote:
> On 2018/08/24 9:31, Tetsuo Handa wrote:
> > For now, I don't think we need to add af5679fbc669f31f to the list for
> > CVE-2016-10723, because af5679fbc669f31f might cause premature selection
> > of the next OOM victim (especially with CONFIG_PREEMPT=y kernels) due to
> >
> >   __alloc_pages_may_oom():                      oom_reap_task():
> >
> >     mutex_trylock(&oom_lock) succeeds.
> >     get_page_from_freelist() fails.
> >     Preempted to other process.
> >                                                   oom_reap_task_mm() succeeds.
> >                                                   Sets MMF_OOM_SKIP.
> >     Returned from preemption.
> >     Finds that MMF_OOM_SKIP was already set.
> >     Selects next OOM victim and kills it.
> >     mutex_unlock(&oom_lock) is called.
> >
> > a race window like the one described as
> >
> > Tetsuo was arguing that at least MMF_OOM_SKIP should be set under the lock
> > to prevent from races when the page allocator didn't manage to get the
> > freed (reaped) memory in __alloc_pages_may_oom but it sees the flag later
> > on and move on to another victim. Although this is possible in principle
> > let's wait for it to actually happen in real life before we make the
> > locking more complex again.
> >
> > in that commit.
> >
>
> Yes, that race window is real. We can needlessly select the next OOM victim.
> I think that af5679fbc669f31f was too optimistic.
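
To make the interleaving in the diagram above concrete, here is a minimal
userspace analogue (a sketch of mine under obvious simplifications, not
kernel code): a pthread mutex stands in for oom_lock, an atomic flag for
MMF_OOM_SKIP, and the sleeps only force the ordering from the diagram.
All names in it are made up for the illustration.

/*
 * Userspace analogue of the race above (illustration only, not kernel
 * code): a pthread mutex stands in for oom_lock, an atomic flag for
 * MMF_OOM_SKIP.  The sleeps merely force the interleaving from the
 * diagram.
 *
 * Build: gcc -O2 -pthread race-sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t oom_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_bool mmf_oom_skip;

static void *reaper(void *arg)          /* plays oom_reap_task() */
{
        (void)arg;
        usleep(10 * 1000);              /* run while the allocator is "preempted" */
        /* oom_reap_task_mm() succeeds and sets MMF_OOM_SKIP. */
        atomic_store(&mmf_oom_skip, true);
        printf("reaper: victim reaped, MMF_OOM_SKIP set\n");
        return NULL;
}

static void *allocator(void *arg)       /* plays __alloc_pages_may_oom() */
{
        (void)arg;
        if (pthread_mutex_trylock(&oom_lock))
                return NULL;            /* somebody else is handling the OOM */
        printf("allocator: took oom_lock, get_page_from_freelist() fails\n");
        usleep(50 * 1000);              /* "Preempted to other process." */
        /* Returned from preemption; the victim now looks already handled. */
        if (atomic_load(&mmf_oom_skip))
                printf("allocator: MMF_OOM_SKIP already set -> selects next OOM victim\n");
        pthread_mutex_unlock(&oom_lock);
        return NULL;
}

int main(void)
{
        pthread_t a, r;

        pthread_create(&a, NULL, allocator, NULL);
        pthread_create(&r, NULL, reaper, NULL);
        pthread_join(a, NULL);
        pthread_join(r, NULL);
        return 0;
}

With the sleeps as written, the allocator thread reliably reports the stale
flag and "selects" another victim, which mirrors the premature next-victim
selection described above.
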
The changelog said:
"Although this is possible in principle let's wait for it to actually
happen in real life before we make the locking more complex again."
So what is the real-life workload that hits it? The log you have pasted
below doesn't tell much.
> [ 278.147280] Out of memory: Kill process 9943 (a.out) score 919 or sacrifice child
> [ 278.148927] Killed process 9943 (a.out) total-vm:4267252kB, anon-rss:3430056kB, file-rss:0kB, shmem-rss:0kB
> [ 278.151586] vmtoolsd invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
[...]
> [ 278.331527] Out of memory: Kill process 8790 (firewalld) score 5 or sacrifice child
> [ 278.333267] Killed process 8790 (firewalld) total-vm:358012kB, anon-rss:21928kB, file-rss:0kB, shmem-rss:0kB
> [ 278.336430] oom_reaper: reaped process 8790 (firewalld), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
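
For reference, the locking alternative mentioned in the quoted changelog
("MMF_OOM_SKIP should be set under the lock") would look roughly like the
following in the userspace analogue earlier in this mail; it reuses the
declarations of that sketch and is again only an illustration, not actual
kernel code. Once the reaper has to take oom_lock before setting the flag,
it can no longer slip the flag in while the allocator sits preempted with
the lock held, so the allocator does not observe a stale MMF_OOM_SKIP in
the window between its failed allocation attempt and its flag check.

/*
 * Sketch of the "set MMF_OOM_SKIP under oom_lock" variant from the
 * quoted changelog, reusing the declarations of the sketch above
 * (illustration only, not kernel code).  The reaper now blocks on
 * oom_lock, so the flag cannot appear in the middle of the allocator's
 * trylock -> failed allocation -> flag check window.
 */
static void *reaper_locked(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&oom_lock);  /* serialize against the allocator */
        atomic_store(&mmf_oom_skip, true);
        pthread_mutex_unlock(&oom_lock);
        printf("reaper: MMF_OOM_SKIP set under oom_lock\n");
        return NULL;
}
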
--
Michal Hocko
SUSE Labs