Message-ID: <20161124101525.GB20668@dhcp22.suse.cz>
Date: Thu, 24 Nov 2016 11:15:26 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Donald Buczek <buczek@...gen.mpg.de>
Cc: dvteam@...gen.mpg.de, Paul Menzel <pmenzel@...gen.mpg.de>,
linux-xfs@...r.kernel.org, linux-kernel@...r.kernel.org,
Josh Triplett <josh@...htriplett.org>
Subject: Re: INFO: rcu_sched detected stalls on CPUs/tasks with `kswapd` and `mem_cgroup_shrink_node`
On Mon 21-11-16 16:35:53, Donald Buczek wrote:
[...]
> Hello,
>
> thanks a lot for looking into this!
>
> Let me add some information from the reporting site:
>
> * We've tried the patch from Paul E. McKenney (the one posted Wed, 16 Nov
> 2016), and it doesn't silence the rcu stall warnings.
>
> * Log file from a boot with the patch applied (grep kernel
> /var/log/messages) is here:
> http://owww.molgen.mpg.de/~buczek/321322/2016-11-21_syslog.txt
>
> * This system is a backup server and walks over thousands of files,
> sometimes with multiple parallel rsync processes.
>
> * No rcu_* warnings on that machine with 4.7.2, but with 4.8.4, 4.8.6,
> 4.8.8, and now 4.9.0-rc5 + Paul's patch.
I assume you haven't tried a plain Linus 4.8 kernel without any further
stable patches? Just to be sure we are not talking about some later
regression which found its way into the stable tree.
> * When the backups are actually happening, there might be relevant memory
> pressure from the inode cache and the rsync processes. We saw the
> oom-killer kick in on another machine with the same hardware and a similar
> (a bit higher) workload. That other machine has also shown a lot of rcu
> stall warnings since 4.8.4.
>
> * We also see "rcu_sched detected stalls" on some other machines since we
> switched to 4.8, but not as frequently as on the two backup servers.
> Usually "shrink_node" and "kswapd" are at the top of the stack, often with
> "xfs_reclaim_inodes" variants on top of that.
I would be interested to see some reclaim tracepoints enabled. Could you
try that out? At least mm_shrink_slab_{start,end} and
mm_vmscan_lru_shrink_inactive would be useful. This should tell us more
about how the reclaim behaved.
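
For reference, a minimal sketch of one way to capture those events
(assuming tracefs is mounted at /sys/kernel/debug/tracing, the tracepoints
sit in the events/vmscan/ group, and the capture window and output path
are only placeholders to adjust):

#!/usr/bin/env python3
# Enable the requested vmscan reclaim tracepoints, record for a while,
# then save the trace buffer so it can be attached to the thread.
import time

TRACEFS = "/sys/kernel/debug/tracing"
EVENTS = [
    "vmscan/mm_shrink_slab_start",
    "vmscan/mm_shrink_slab_end",
    "vmscan/mm_vmscan_lru_shrink_inactive",
]

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

# Turn on each tracepoint, then start recording.
for event in EVENTS:
    write("%s/events/%s/enable" % (TRACEFS, event), "1")
write(TRACEFS + "/tracing_on", "1")

time.sleep(600)                      # capture ~10 minutes of a backup run

write(TRACEFS + "/tracing_on", "0")

# Save the collected trace to a regular file for posting.
with open(TRACEFS + "/trace") as src:
    with open("/tmp/reclaim-trace.txt", "w") as dst:
        dst.write(src.read())

# Disable the tracepoints again once the data has been captured.
for event in EVENTS:
    write("%s/events/%s/enable" % (TRACEFS, event), "0")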
--
Michal Hocko
SUSE Labs