Date:   Wed, 10 Jan 2018 15:21:58 +0100
From:   Juri Lelli <juri.lelli@...hat.com>
To:     "Rafael J. Wysocki" <rafael@...nel.org>
Cc:     Leonard Crestez <leonard.crestez@....com>,
        Patrick Bellasi <patrick.bellasi@....com>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Linux PM <linux-pm@...r.kernel.org>,
        Anson Huang <anson.huang@....com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Vincent Guittot <vincent.guittot@...aro.org>
Subject: Re: [BUG] schedutil governor produces regular max freq spikes
 because of lockup detector watchdog threads

On 10/01/18 13:35, Rafael J. Wysocki wrote:
> On Wed, Jan 10, 2018 at 11:54 AM, Juri Lelli <juri.lelli@...hat.com> wrote:
> > On 09/01/18 16:50, Rafael J. Wysocki wrote:
> >> On Tue, Jan 9, 2018 at 3:43 PM, Leonard Crestez <leonard.crestez@....com> wrote:
> >
> > [...]
> >
> >> > Every 4 seconds (really it's /proc/sys/kernel/watchdog_thresh * 2 / 5
> >> > and watchdog_thresh defaults to 10). There is a per-cpu hrtimer which
> >> > wakes the per-cpu thread in order to check that tasks can still
> >> > execute, this works very well against bugs like infinite loops in
> >> > softirq mode. The timers are synchronized initially but can get
> >> > staggered (for example by hotplug).
> >> >
> >> > My guess is that it's only marked RT so that it executes ahead of other
> >> > threads and the watchdog doesn't trigger simply when there are lots of
> >> > userspace tasks.
> >>
> >> I think so too.
> >>
> >> I see a couple of more-or-less hackish ways to avoid the issue, but
> >> nothing particularly attractive ATM.
> >>
> >> I wouldn't change the general behavior with respect to RT tasks
> >> because of this, though, as we would quickly find a case in which that
> >> would turn out to be not desirable.
> >
> > I agree we cannot generalize to all RT tasks, but what Patrick proposed
> > (clamping utilization of certain known tasks) might help here:
> >
> > lkml.kernel.org/r/20170824180857.32103-1-patrick.bellasi@....com
> >
> > Maybe with a per-task interface instead of using cgroups?
> 
> The problem here is that this is a kernel thing and user space should
> not be expected to have to do anything about fixing this IMO.

Not sure. If we had such an interface, it should be usable from both
kernel and userspace. In that case the kernel might be able to do the
"right" thing on its own. Also, RT userspace is usually already responsible
for configuring system priorities, so setting this as well should be easy.

> > The other option would be to relax DL tasks affinity constraints, so
> > that a case like this might be handled. Daniel and Tommaso proposed
> > possible approaches, this might be a driving use case. Not sure how we
> > would come up with a proper runtime for the watchdog, though.
> 
> That is a problem.
> 
> Basically, it needs to run as soon as possible, but it will be running
> for a very short time, every time.

Does it really need to run "as soon as possible", or is it "at least
once every watchdog period"? In the latter case DL might still fit, with
a very short runtime (to be defined).

> Overall, using a thread for that seems wasteful ...

Not sure I'm following you here; aren't we using a thread already?

Thanks,

- Juri
