Message-ID: <20200721141539.GA3696@lca.pw>
Date: Tue, 21 Jul 2020 10:15:39 -0400
From: Qian Cai <cai@....pw>
To: Michal Hocko <mhocko@...nel.org>
Cc: linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Tim Chen <tim.c.chen@...ux.intel.com>
Subject: Re: [RFC PATCH] mm: silence soft lockups from unlock_page
On Tue, Jul 21, 2020 at 03:38:35PM +0200, Michal Hocko wrote:
> On Tue 21-07-20 09:23:44, Qian Cai wrote:
> > On Tue, Jul 21, 2020 at 02:17:52PM +0200, Michal Hocko wrote:
> > > On Tue 21-07-20 07:44:07, Qian Cai wrote:
> > > >
> > > >
> > > > > On Jul 21, 2020, at 7:25 AM, Michal Hocko <mhocko@...nel.org> wrote:
> > > > >
> > > > > Are these really important? I believe I can dig them out from the bug
> > > > > report, but I didn't really consider them important enough.
> > > >
> > > > Please dig them out. We have also been running those things on
> > > > “large” powerpc and never saw such soft-lockups. Those
> > > > details may give us some clues about the actual problem.
> > >
> > > I strongly suspect this is not really relevant, but just FYI, this is a
> > > 16-node, 11.9TB system with 1536 CPUs.
> >
> > Okay, we are now talking about the HPC special case. Just brainstorming some
> > ideas here.
> >
> >
> > 1) What about increasing the soft-lockup threshold early at boot and restoring
> > it afterwards? As far as I can tell, those soft-lockups come in just a few
> > bursts and then cure themselves once booting is done.
>
> Is this really a better option than silencing the soft lockups from the code
> itself? What if the same access pattern happens later on?
It is better because it does not require a code change, no? Did your customers
see similar soft-lockups after booting was done?