Message-ID: <YK/zHMPSZSKrmXC6@casper.infradead.org>
Date: Thu, 27 May 2021 20:29:32 +0100
From: Matthew Wilcox <willy@...radead.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Andi Kleen <ak@...ux.intel.com>, Feng Tang <feng.tang@...el.com>,
kernel test robot <oliver.sang@...el.com>,
John Stultz <john.stultz@...aro.org>,
Thomas Gleixner <tglx@...utronix.de>,
Stephen Boyd <sboyd@...nel.org>,
Jonathan Corbet <corbet@....net>,
Mark Rutland <Mark.Rutland@....com>,
Marc Zyngier <maz@...nel.org>,
Xing Zhengjun <zhengjun.xing@...ux.intel.com>,
Chris Mason <clm@...com>, LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
lkp@...ts.01.org, lkp@...el.com, ying.huang@...el.com,
zhengjun.xing@...el.com
Subject: Re: [clocksource] 8901ecc231: stress-ng.lockbus.ops_per_sec -9.5% regression
On Thu, May 27, 2021 at 12:19:23PM -0700, Paul E. McKenney wrote:
> On Thu, May 27, 2021 at 12:01:23PM -0700, Andi Kleen wrote:
> >
> > > Nevertheless, it is quite possible that real-world use will result in
> > > some situation requiring that high-stress workloads run on hardware
> > > not designed to accommodate them, and also requiring that the kernel
> > > refrain from marking clocksources unstable.
> > > Therefore, provide an out-of-tree patch that reacts to this situation
> >
> > out-of-tree means it will not be submitted?
> >
> > I think it would make sense upstream, but perhaps guarded with some option.
>
> The reason I do not intend to immediately upstream this patch is that
> it increases the probability that a real clocksource read-latency issue
> will be ignored, for example, during hardware bringup. Furthermore,
> the only known need for it comes from hardware that is, in the words
> of the stress-ng man page, "poorly designed". And the timing of this
> email thread leads me to believe that such hardware is not easy to obtain.
I think you're placing a little too much weight on the documentation
here.  It seems that a continuous stream of locked operations executed
in userspace on a single CPU can cause this problem to occur.  If that
holds all the way out to the point where one guest in a hypervisor can
cause problems for the hypervisor itself, I think cloud providers
everywhere are going to want this patch?
> My thought is therefore to keep this patch out of tree for now.
> If it becomes clear that long-latency clocksource reads really are
> a significant issue in their own right (as opposed to merely being a
> symptom of a hardware or firmware bug), then this patch is available to
> immediately respond to that issue.
>
> And there would then be strong evidence in favor of me biting the bullet,
> adding the complexity and the additional option (with your Suggested-by),
> and getting that upstream and into -stable.
>
> Seem reasonable?
>
> Thanx, Paul
>