Message-ID: <20200225021131.GC253171@google.com>
Date: Mon, 24 Feb 2020 21:11:31 -0500
From: Joel Fernandes <joel@...lfernandes.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Uladzislau Rezki <urezki@...il.com>,
"Theodore Y. Ts'o" <tytso@....edu>,
Ext4 Developers List <linux-ext4@...r.kernel.org>,
Suraj Jitindar Singh <surajjs@...zon.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH RFC] ext4: fix potential race between online resizing and
write operations
On Sat, Feb 22, 2020 at 05:10:18PM -0800, Paul E. McKenney wrote:
[...]
> > I was thinking a 2 fold approach (just thinking out loud..):
> >
> > If kfree_call_rcu() is called in atomic context or within any RCU reader,
> > then allocate an rcu_head wrapper from the atomic memory pool with
> > GFP_ATOMIC and queue that.
> >
> > Otherwise, put an rcu_head on the stack of kfree_call_rcu() and call
> > synchronize_rcu() inline with it.
> >
> > Use preemptible() and task_struct's rcu_read_lock_nesting to differentiate
> > between the 2 cases.
> >
> > Thoughts?
>
> How much are we really losing by having an rcu_head in the structure,
> either separately or unioned over other fields?
It does seem like a convenient API at first glance. It also seems there are a
number of use cases now (ext4, the vfree path Vlad mentioned, and the other
users he listed, etc.).
> > > > Also, there is one more open question: what to do if GFP_ATOMIC
> > > > fails under a low-memory condition. Probably we can make use of
> > > > the "mempool interface", which allows us to have min_nr guaranteed
> > > > pre-allocated pages.
> > >
> > > But we really do still need to handle the case where everything runs out,
> > > even the pre-allocated pages.
> >
> > If *everything* runs out, you are pretty much going to OOM sooner or later
> > anyway :D. But I see what you mean; the 'tradeoff' is that RCU can free
> > head-less objects where possible.
>
> Would you rather pay an rcu_head tax (in cases where it cannot share
> with other fields), or would you rather have states where you could free
> a lot of memory if only there was some memory to allocate to track the
> memory to be freed?
Depending on the use case, we could use the right API.
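
On the mempool point quoted above, something like the below could back the
GFP_ATOMIC wrapper allocation with a small guaranteed reserve. Again only a
sketch with made-up names, reusing the kfree_wrapper struct from the sketch
earlier in this mail; the RCU callback would then return the wrapper via
mempool_free() instead of kfree():

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/mempool.h>
#include <linux/slab.h>

/* Reserve of pre-allocated wrappers for the atomic path (name made up). */
static mempool_t *kfree_wrapper_pool;

static int __init kfree_wrapper_pool_init(void)
{
        /* Guarantee min_nr = 16 pre-allocated wrappers (number picked arbitrarily). */
        kfree_wrapper_pool = mempool_create_kmalloc_pool(16,
                                                sizeof(struct kfree_wrapper));
        return kfree_wrapper_pool ? 0 : -ENOMEM;
}

static struct kfree_wrapper *kfree_wrapper_alloc_atomic(void)
{
        /* Falls back to the reserve when a fresh GFP_ATOMIC allocation fails. */
        return mempool_alloc(kfree_wrapper_pool, GFP_ATOMIC);
}

static void kfree_wrapper_release(struct kfree_wrapper *w)
{
        /* Refills the reserve first if it has dropped below min_nr. */
        mempool_free(w, kfree_wrapper_pool);
}

Note that mempool_alloc() can still return NULL with GFP_ATOMIC once the
reserve itself is exhausted, so this only shrinks the "everything runs out"
window rather than closing it.
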
> But yes, as you suggested above, there could be an API similar to the
> userspace RCU library's API that usually just queues the callback but
> sometimes sleeps for a full grace period. If enough people want this,
> it should not be hard to set up.
Sounds good!
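
Something like the below is roughly how I would picture the kernel side of
that, reusing the kfree_wrapper / kfree_wrapper_cb bits from the first sketch
(hypothetical name, sleepable callers only, not a real API):

/*
 * Usually queue a callback; if no memory is available to track the
 * object, fall back to sleeping for a full grace period inline.
 */
static void kfree_rcu_queue_or_sleep(void *ptr)
{
        struct kfree_wrapper *w;

        might_sleep();

        w = kmalloc(sizeof(*w), GFP_KERNEL | __GFP_NOWARN);
        if (w) {
                w->ptr = ptr;
                call_rcu(&w->rh, kfree_wrapper_cb);
                return;
        }

        /* No memory to track the memory to be freed: wait inline instead. */
        synchronize_rcu();
        kfree(ptr);
}
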
thanks,
- Joel