Message-ID: <20161130170557.GK18432@dhcp22.suse.cz>
Date: Wed, 30 Nov 2016 18:05:57 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Donald Buczek <buczek@...gen.mpg.de>,
Paul Menzel <pmenzel@...gen.mpg.de>, dvteam@...gen.mpg.de,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Josh Triplett <josh@...htriplett.org>
Subject: Re: INFO: rcu_sched detected stalls on CPUs/tasks with `kswapd` and
`mem_cgroup_shrink_node`
On Wed 30-11-16 17:38:20, Peter Zijlstra wrote:
> On Wed, Nov 30, 2016 at 06:29:55AM -0800, Paul E. McKenney wrote:
> > We can, and you are correct that cond_resched() does not unconditionally
> > supply RCU quiescent states, and never has. Last time I tried to add
> > cond_resched_rcu_qs() semantics to cond_resched(), I got told "no",
> > but perhaps it is time to try again.
>
> Well, you got told: "ARRGH my benchmark goes all regress", or something
> along those lines. Didn't we recently dig out those commits for some
> reason or other?
>
> Finding out what benchmark that was and running it against this patch
> would make sense.
>
> Also, I seem to have missed, why are we going through this again?
Well, the reason I brought this up is that having basically two APIs
for cond_resched is more than confusing. Basically all longer
in-kernel loops do cond_resched(), but it seems that this will not
help silence the RCU stall detector in rare cases where nothing really
wants to schedule. I am really not sure whether we want to sprinkle
cond_resched_rcu_qs at random places just to silence the RCU
detector...
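
To illustrate the distinction being discussed, here is a minimal
sketch. It is not code from this thread or from any proposed patch; it
assumes the ~4.9-era cond_resched()/cond_resched_rcu_qs() interfaces,
and scan_many_pages() is a made-up reclaim-style loop used only as an
example:

/*
 * Minimal sketch, assuming the ~4.9-era APIs. scan_many_pages() is a
 * hypothetical long-running in-kernel loop, not code from the thread.
 */
#include <linux/sched.h>	/* cond_resched() */
#include <linux/rcupdate.h>	/* cond_resched_rcu_qs() */

static void scan_many_pages(unsigned long nr_pages)
{
	unsigned long i;

	for (i = 0; i < nr_pages; i++) {
		/* ... per-page work ... */

		/*
		 * Yields the CPU only if some other task wants to run.
		 * If nothing else is runnable, no RCU quiescent state
		 * is reported and rcu_sched can eventually report a
		 * stall despite the loop "playing nice".
		 */
		cond_resched();

		/*
		 * The variant below would additionally report an RCU
		 * quiescent state even when no reschedule happens,
		 * which is what silences the stall detector:
		 *
		 *	cond_resched_rcu_qs();
		 */
	}
}
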
--
Michal Hocko
SUSE Labs