Message-ID: <alpine.DEB.2.20.1709051739140.1900@nanos>
Date: Tue, 5 Sep 2017 17:42:32 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Don Zickus <dzickus@...hat.com>
cc: Peter Zijlstra <peterz@...radead.org>,
Ulrich Obergfell <uobergfe@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Borislav Petkov <bp@...en8.de>,
Sebastian Siewior <bigeasy@...utronix.de>,
Nicholas Piggin <npiggin@...il.com>,
Chris Metcalf <cmetcalf@...lanox.com>
Subject: Re: [patch 11/29] lockup_detector: Remove park_in_progress hackery
On Tue, 5 Sep 2017, Don Zickus wrote:
> On Mon, Sep 04, 2017 at 02:10:50PM +0200, Peter Zijlstra wrote:
> > On Mon, Sep 04, 2017 at 01:09:06PM +0200, Ulrich Obergfell wrote:
> >
> > > - A thread hogs CPU N (soft lockup) so that watchdog/N is unable to run.
> > > - A user re-configures 'watchdog_thresh' on the fly. The reconfiguration
> > > requires parking/unparking of all watchdog threads.
> >
> > This is where you fail, it's silly to require parking for
> > reconfiguration.
>
> Hi Peter,
>
> Ok, please elaborate. Unless I am misunderstanding, that is what Thomas
> requested we do years ago when he implemented the parking/unparking scheme,
> and it is what his current patch set is doing now.
>
> The point of parking, I believe, was to avoid the overhead of tearing down
> a thread and restarting it when the code needed to update various lockup
> detector settings.
>
> So if we can't depend on parking for reconfiguration, then what are the
> other options (besides tearing down threads)?

Yes, the park/unpark is what I still use, as this was the simplest way to
keep everything in sync.

I pondered doing on-the-fly reconfiguration as well, but that would add
more code and would not solve the general problem with park/unpark. So I
went for a single mechanism which just works, even if it is suboptimal
CPU-cycle-wise. OTOH, that reconfiguration is not something which happens
every 5ms, so we can just go for the stupid but simple mechanism.
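
For illustration, the park/unpark based reconfiguration boils down to
something like the sketch below: park all per-CPU watchdog threads, update
the settings, unpark the threads again. The bookkeeping and helper names
here (softlockup_watchdog, watchdog_cpumask, watchdog_update_thresh()) are
simplified stand-ins rather than a verbatim copy of kernel/watchdog.c; only
kthread_park()/kthread_unpark() are the actual kernel primitives.

#include <linux/kthread.h>
#include <linux/percpu.h>
#include <linux/cpumask.h>

/* Hypothetical stand-ins for the real per-CPU thread bookkeeping */
static DEFINE_PER_CPU(struct task_struct *, softlockup_watchdog);
static struct cpumask watchdog_cpumask;
static int watchdog_thresh = 10;

#define for_each_watchdog_cpu(cpu) \
	for_each_cpu_and((cpu), cpu_online_mask, &watchdog_cpumask)

static int watchdog_park_threads(void)
{
	int cpu, ret = 0;

	for_each_watchdog_cpu(cpu) {
		/* kthread_park() waits until the thread reaches its parked state */
		ret = kthread_park(per_cpu(softlockup_watchdog, cpu));
		if (ret)
			break;
	}
	return ret;
}

static void watchdog_unpark_threads(void)
{
	int cpu;

	for_each_watchdog_cpu(cpu)
		kthread_unpark(per_cpu(softlockup_watchdog, cpu));
}

/*
 * Reconfigure watchdog_thresh without tearing the threads down:
 * quiesce everything, update the setting, resume the threads.
 * The threads pick up the new value when they continue running.
 */
static int watchdog_update_thresh(int new_thresh)
{
	int ret;

	ret = watchdog_park_threads();
	if (ret)
		return ret;

	watchdog_thresh = new_thresh;

	watchdog_unpark_threads();
	return 0;
}

The point of the pattern is that parking only stops the threads at a
well-defined point and keeps their task structs around, which is much
cheaper and simpler than destroying and recreating the per-CPU threads
on every sysctl write.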
Thanks,
tglx