Date:   Fri, 24 Jun 2022 09:17:47 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     "Paul E. McKenney" <paulmck@...nel.org>
Cc:     Uladzislau Rezki <urezki@...il.com>, rcu@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        Frederic Weisbecker <frederic@...nel.org>,
        Neeraj Upadhyay <quic_neeraju@...cinc.com>,
        Josh Triplett <josh@...htriplett.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        Joel Fernandes <joel@...lfernandes.org>
Subject: Re: [RFC PATCH] rcu: back off on allocation failure in
 fill_page_cache_func

On Thu 23-06-22 20:39:50, Paul E. McKenney wrote:
> On Wed, Jun 22, 2022 at 08:54:22PM +0200, Uladzislau Rezki wrote:
> > On Wed, Jun 22, 2022 at 01:47:11PM +0200, Michal Hocko wrote:
> > > From: Michal Hocko <mhocko@...e.com>
> > > 
> > > fill_page_cache_func allocates a couple of pages to store
> > > kvfree_rcu_bulk_data. This is a lightweight (__GFP_NORETRY) allocation
> > > which can fail under memory pressure. The function will, however,
> > > keep retrying even when the previous attempt has failed.
> > > 
> > > While this is not really incorrect, there is one thing to consider. This
> > > allocation is invoked from the WQ context, which means that if the
> > > memory reclaim gets stuck it can hog the worker for quite some time.
> > > WQ concurrency is only triggered when the worker context sleeps, and that
> > > is not guaranteed for __GFP_NORETRY allocation attempts (see
> > > should_reclaim_retry).
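> > > 
> > > To illustrate, a heavily trimmed sketch of the relevant page
> > > allocator slowpath (paraphrased, not verbatim mm/page_alloc.c):
> > > 
> > > retry:
> > > 	page = __alloc_pages_direct_reclaim(...);
> > > 	if (page)
> > > 		return page;
> > > 
> > > 	/* Do not loop if specifically requested */
> > > 	if (gfp_mask & __GFP_NORETRY)
> > > 		goto nopage;
> > > 
> > > 	if (should_reclaim_retry(...)) {
> > > 		/*
> > > 		 * should_reclaim_retry() does a short sleep instead of
> > > 		 * cond_resched() for PF_WQ_WORKER tasks so that the WQ
> > > 		 * concurrency control can notice a stuck worker. A
> > > 		 * __GFP_NORETRY request never gets here.
> > > 		 */
> > > 		goto retry;
> > > 	}
> > > nopage:
> > > 	return NULL;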
> > > 
> > > We have seen WQ lockups:
> > > kernel: BUG: workqueue lockup - pool cpus=93 node=1 flags=0x1 nice=0 stuck for 32s!
> > > [...]
> > > kernel: pool 74: cpus=37 node=0 flags=0x1 nice=0 hung=32s workers=2 manager: 2146
> > > kernel:   pwq 498: cpus=249 node=1 flags=0x1 nice=0 active=4/256 refcnt=5
> > > kernel:     in-flight: 1917:fill_page_cache_func
> > > kernel:     pending: dbs_work_handler, free_work, kfree_rcu_monitor
> > > 
> > > Originally, we thought that several retries with direct reclaim being
> > > stuck were the underlying reason, but we couldn't confirm that and
> > > have seen similar lockups detected even without any heavy memory
> > > pressure, so there is likely something else/more going on. On the other
> > > hand, failing the allocation shouldn't have a big impact, and from the
> > > code it is not really obvious why retrying is desirable, so back off
> > > after the allocation failure.
> > > 
> > > Cc: Uladzislau Rezki (Sony) <urezki@...il.com>
> > > Cc: "Paul E. McKenney" <paulmck@...nel.org>
> > > Cc: Frederic Weisbecker <frederic@...nel.org>
> > > Cc: Neeraj Upadhyay <quic_neeraju@...cinc.com>
> > > Cc: Josh Triplett <josh@...htriplett.org>
> > > Cc: Steven Rostedt <rostedt@...dmis.org>
> > > Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
> > > Cc: Lai Jiangshan <jiangshanlai@...il.com>
> > > Cc: Joel Fernandes <joel@...lfernandes.org>
> > > Signed-off-by: Michal Hocko <mhocko@...e.com>
> > > ---
> > > 
> > > Hi,
> > > I am sending this as an RFC because I couldn't prove that the WQ
> > > concurrency issue resulting from the allocation retry is really a
> > > problem. On the other hand, I couldn't see a good reason to retry after a
> > > previous failure. While kswapd running in the background could have
> > > released some memory, this is not really guaranteed and mostly
> > > wishful thinking.
> > > 
> > > I do not understand the code well enough, so I could easily be missing
> > > something. If the patch is the wrong thing to do, then it would be really
> > > nice to add a comment explaining why the retry is desirable and a good
> > > thing to do.
> > > 
> > > The retry loop is bound by rcu_min_cached_objs, which is quite small
> > > by default but configurable, so it can get large in some setups (see
> > > the simplified loop below).
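> > > 
> > > For reference, the resulting shape of the loop (simplified; the full
> > > change is in the diff below):
> > > 
> > > 	for (i = 0; i < rcu_min_cached_objs; i++) {
> > > 		bnode = (struct kvfree_rcu_bulk_data *)
> > > 			__get_free_page(GFP_KERNEL | __GFP_NORETRY |
> > > 					__GFP_NOMEMALLOC | __GFP_NOWARN);
> > > 
> > > 		/* Back off after the first failure instead of retrying. */
> > > 		if (!bnode)
> > > 			break;
> > > 
> > > 		raw_spin_lock_irqsave(&krcp->lock, flags);
> > > 		pushed = put_cached_bnode(krcp, bnode);
> > > 		raw_spin_unlock_irqrestore(&krcp->lock, flags);
> > > 
> > > 		/* Cache already full; drop the extra page. */
> > > 		if (!pushed) {
> > > 			free_page((unsigned long) bnode);
> > > 			break;
> > > 		}
> > > 	}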
> > > 
> > > Thanks
> > > 
> > >  kernel/rcu/tree.c | 17 +++++++++--------
> > >  1 file changed, 9 insertions(+), 8 deletions(-)
> > > 
> > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > index c25ba442044a..54a3a19c4c0b 100644
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
> > > @@ -3508,15 +3508,16 @@ static void fill_page_cache_func(struct work_struct *work)
> > >  		bnode = (struct kvfree_rcu_bulk_data *)
> > >  			__get_free_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
> > >  
> > > -		if (bnode) {
> > > -			raw_spin_lock_irqsave(&krcp->lock, flags);
> > > -			pushed = put_cached_bnode(krcp, bnode);
> > > -			raw_spin_unlock_irqrestore(&krcp->lock, flags);
> > > +		if (!bnode)
> > > +			break;
> > >  
> > > -			if (!pushed) {
> > > -				free_page((unsigned long) bnode);
> > > -				break;
> > > -			}
> > > +		raw_spin_lock_irqsave(&krcp->lock, flags);
> > > +		pushed = put_cached_bnode(krcp, bnode);
> > > +		raw_spin_unlock_irqrestore(&krcp->lock, flags);
> > > +
> > > +		if (!pushed) {
> > > +			free_page((unsigned long) bnode);
> > > +			break;
> > >  		}
> > >  	}
> > >  
> > > -- 
> > > 2.30.2
> > >
> > OK. You would like to break the loop once an allocation does not succeed.
> > To me it also makes sense, I mean there is no reason to repeat it several
> > times, which can lead to workqueue hogging.
> > 
> > Reviewed-by: Uladzislau Rezki (Sony) <urezki@...il.com>

Thanks for the review!

> Queued for testing and further review, thank you both!
> 
> My normal process would send this not into the upcoming merge window
> (v5.20?) but instead into the next one (v6.0?).  Please let me know if
> this is more urgent than that, and I will see what I can do.

No, it is not really urgent. As I've said, it turned out that there are
other contributors to the WQ stalls, so this is mostly a cosmetic change.

Thanks!
-- 
Michal Hocko
SUSE Labs
