Message-Id: <20180626235014.GS3593@linux.vnet.ibm.com>
Date: Tue, 26 Jun 2018 16:50:14 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Cc: Michal Hocko <mhocko@...nel.org>,
David Rientjes <rientjes@...gle.com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm,oom: Bring OOM notifier callbacks to outside of OOM
killer.
On Wed, Jun 27, 2018 at 05:10:48AM +0900, Tetsuo Handa wrote:
> On 2018/06/27 2:03, Paul E. McKenney wrote:
> > There are a lot of ways it could be made concurrency safe. If you need
> > me to do this, please do let me know.
> >
> > That said, the way it is now written, if you invoke rcu_oom_notify()
> > twice in a row, the second invocation will wait until the memory from
> > the first invocation is freed. What do you want me to do if you invoke
> > me concurrently?
> >
> > 1. One invocation "wins", waits for the earlier callbacks to
> > complete, then encourages any subsequent callbacks to be
> > processed more quickly. The other invocations return
> > immediately without doing anything.
> >
> > 2. The invocations serialize, with each invocation waiting for
> > 		the callbacks from the previous invocation (in mutex_lock() order
> > or some such), and then starting a new round.
> >
> > 3. Something else?
> >
> > Thanx, Paul
>
> As far as I can see,
>
> - atomic_set(&oom_callback_count, 1);
> + atomic_inc(&oom_callback_count);
>
> should be sufficient.
I don't see how that helps. For example, suppose that two tasks
invoked rcu_oom_notify() at about the same time. Then they could
both see oom_callback_count equal to zero, both atomically increment
oom_callback_count, then both do the IPI invoking rcu_oom_notify_cpu()
on each online CPU.
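Roughly, the interleaving I am worried about looks like this (pseudocode
sketch; it assumes each caller first waits for oom_callback_count to drain
to zero, as described above, before bumping it):

	Task A                                  Task B
	------                                  ------
	wait for oom_callback_count == 0
	                                        wait for oom_callback_count == 0
	atomic_inc(&oom_callback_count);
	                                        atomic_inc(&oom_callback_count);
	IPI rcu_oom_notify_cpu() on each CPU
	                                        IPI rcu_oom_notify_cpu() on each CPU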
So far, so good. But rcu_oom_notify_cpu() enqueues a per-CPU RCU
callback, and enqueuing the same callback twice in quick succession
would fatally tangle RCU's callback lists.
What am I missing here?
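For completeness, a rough sketch of option 2 above (serializing callers
with a mutex) might look something like the following.  The mutex and the
wait-queue names are illustrative here, not necessarily what is in
tree_plugin.h:

	/* Illustrative names: the mutex and wait queue are assumptions. */
	static DEFINE_MUTEX(oom_notify_mutex);
	static DECLARE_WAIT_QUEUE_HEAD(oom_callback_wq);

	static int rcu_oom_notify(struct notifier_block *self,
				  unsigned long notused, void *nfreed)
	{
		int cpu;

		/* Option 2: serialize concurrent callers. */
		mutex_lock(&oom_notify_mutex);

		/* Wait for callbacks from any earlier round to drain. */
		wait_event(oom_callback_wq,
			   atomic_read(&oom_callback_count) == 0);

		/* Hold the count above zero until every CPU has been visited. */
		atomic_set(&oom_callback_count, 1);

		/* Enqueue one expediting callback per online CPU. */
		for_each_online_cpu(cpu)
			smp_call_function_single(cpu, rcu_oom_notify_cpu,
						 NULL, 1);

		/* Drop the initial reference taken above. */
		atomic_dec(&oom_callback_count);

		mutex_unlock(&oom_notify_mutex);
		return NOTIFY_OK;
	}

With the mutex held across both the wait and the IPIs, a second caller
cannot reach the IPIs until the first round's callbacks have been invoked,
so no rcu_head can be enqueued twice.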
Thanx, Paul