Message-ID: <20200930164822.GX2277@dhcp22.suse.cz>
Date: Wed, 30 Sep 2020 18:48:22 +0200
From: Michal Hocko <mhocko@...e.com>
To: Joel Fernandes <joel@...lfernandes.org>
Cc: Uladzislau Rezki <urezki@...il.com>,
Mel Gorman <mgorman@...hsingularity.net>,
"Paul E. McKenney" <paulmck@...nel.org>,
LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
Thomas Gleixner <tglx@...utronix.de>,
"Theodore Y . Ts'o" <tytso@....edu>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>,
Mel Gorman <mgorman@...e.de>
Subject: Re: [RFC-PATCH 2/4] mm: Add __rcu_alloc_page_lockless() func.
On Wed 30-09-20 11:25:17, Joel Fernandes wrote:
> On Fri, Sep 25, 2020 at 05:47:41PM +0200, Michal Hocko wrote:
> > On Fri 25-09-20 17:31:29, Uladzislau Rezki wrote:
> > > > > > >
> > > > > > > All good points!
> > > > > > >
> > > > > > > On the other hand, duplicating a portion of the allocator functionality
> > > > > > > within RCU increases the amount of reserved memory, and needlessly most
> > > > > > > of the time.
> > > > > > >
> > > > > >
> > > > > > But it's very similar to what mempools are for.
> > > > > >
> > > > > As for dynamic caching or mempools: they require extra logic on top of
> > > > > RCU to move things forward, and that might not be an efficient way. As
> > > > > a side effect, maintaining the bulk arrays in a separate worker thread
> > > > > will introduce other drawbacks:
> > > >
> > > > This is true, but it is also true that it is RCU that requires this
> > > > special logic, and we can expect that we might need to fine-tune it
> > > > depending on the RCU usage. We definitely do not want to tune the
> > > > generic page allocator for a very specific use case, do we?
> > > >
> > > I look at it in the scope of the GFP_ATOMIC/GFP_NOWAIT issues, i.e. the
> > > inability to provide memory for contexts which are not allowed to
> > > sleep, and RCU is one of them. Both flags used to provide that
> > > ability before, but not anymore.
> > >
> > > Do you agree with it?
> >
> > Yes, this sucks. But it is something that we likely just have to live
> > with. We have to explicitly _document_ that really atomic contexts in RT
> > cannot use the allocator. From the past discussions we have had, this is
> > likely the most reasonable way forward, because we do not really want to
> > encourage anybody to do something like that, and there should be ways
> > around it. The same is, btw., true also for !RT. The allocator is not
> > NMI safe, and while we should be able to make it compatible, I am not
> > convinced we really want to.
> >
> > Would something like this be helpful wrt documentation?
> >
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index 67a0774e080b..9fcd47606493 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -238,7 +238,9 @@ struct vm_area_struct;
> > * %__GFP_FOO flags as necessary.
> > *
> > * %GFP_ATOMIC users can not sleep and need the allocation to succeed. A lower
> > - * watermark is applied to allow access to "atomic reserves"
> > + * watermark is applied to allow access to "atomic reserves".
> > + * The current implementation doesn't support NMI and other non-preemptible
> > + * contexts (e.g. under raw_spin_lock).
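
For illustration only, here is a minimal hypothetical sketch (the lock and
function names are made up; this is not code from this series) of the kind
of caller the note above rules out:

#include <linux/gfp.h>
#include <linux/spinlock.h>

/* Hypothetical lock, for illustration only. */
static DEFINE_RAW_SPINLOCK(my_raw_lock);

static void *broken_on_rt(void)
{
        void *p;

        raw_spin_lock(&my_raw_lock);
        /*
         * Invalid on PREEMPT_RT: the page allocator takes zone->lock,
         * a spinlock_t, which is a sleeping lock there, so even a
         * GFP_ATOMIC allocation may sleep under a raw_spinlock_t.
         */
        p = (void *)__get_free_page(GFP_ATOMIC);
        raw_spin_unlock(&my_raw_lock);

        return p;
}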
>
> I think documenting is useful.
>
> Could it be more explicit about what the issue is? Something like:
>
> * Even with GFP_ATOMIC, calls to the allocator can sleep on PREEMPT_RT
> systems. Therefore, the current low-level allocator implementation does not
> support being called from special contexts that are atomic on RT - such as
> NMI and raw_spin_lock. Due to these constraints, and considering that
> calling code usually has no control over the PREEMPT_RT configuration,
> callers of the allocator should avoid calling it from these contexts even
> on non-RT systems.
I do not mind documenting RT-specific behavior, but as mentioned in another
reply, this should likely go via the RT tree for now. There is likely more
to clarify about atomicity for PREEMPT_RT.
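
For completeness, the usual way around the restriction is to move the
allocation out of the atomic section. A minimal hypothetical sketch (again,
the lock and function names are made up for illustration):

#include <linux/gfp.h>
#include <linux/spinlock.h>

/* Hypothetical lock, for illustration only. */
static DEFINE_RAW_SPINLOCK(my_raw_lock);

static int safe_pattern(void)
{
        /* Allocate while still in a context that is allowed to sleep. */
        unsigned long page = __get_free_page(GFP_KERNEL);

        if (!page)
                return -ENOMEM;

        raw_spin_lock(&my_raw_lock);
        /* ... consume the pre-allocated page under the raw lock ... */
        raw_spin_unlock(&my_raw_lock);

        free_page(page);
        return 0;
}

Pre-allocating like this is essentially what the dynamic caching discussed
above does on a larger scale: it keeps the allocation itself out of the
context that cannot sleep.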
--
Michal Hocko
SUSE Labs