Message-ID: <20070424072408.GA27769@elte.hu>
Date: Tue, 24 Apr 2007 09:24:08 +0200
From: Ingo Molnar <mingo@...e.hu>
To: David Lang <david.lang@...italinsight.com>
Cc: Gene Heskett <gene.heskett@...il.com>,
Peter Williams <pwil3058@...pond.net.au>,
Arjan van de Ven <arjan@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Nick Piggin <npiggin@...e.de>,
Juliusz Chroboczek <jch@....jussieu.fr>,
Con Kolivas <kernel@...ivas.org>, ck list <ck@....kolivas.org>,
Bill Davidsen <davidsen@....com>, Willy Tarreau <w@....eu>,
William Lee Irwin III <wli@...omorphy.com>,
linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Galbraith <efault@....de>,
Thomas Gleixner <tglx@...utronix.de>, caglar@...dus.org.tr
Subject: Re: [REPORT] cfs-v4 vs sd-0.44
* David Lang <david.lang@...italinsight.com> wrote:
> > (Btw., to protect against such mishaps in the future i have changed
> > the SysRq-N [SysRq-Nice] implementation in my tree to not only
> > change real-time tasks to SCHED_OTHER, but to also renice negative
> > nice levels back to 0 - this will show up in -v6. That way you'd
> > only have had to hit SysRq-N to get the system out of the wedge.)
>
> if you are trying to unwedge a system, it may be a good idea to renice
> all tasks to 0; it could be that a task at +19 is holding a lock that
> something else is waiting for.
Yeah, that's possible too, but a +19 task still gets a small but
guaranteed share of the CPU, so eventually it ought to release the
lock. It's still a possibility, but i think i'll wait for a specific
incident to happen first, and then react to that incident :-)
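For illustration only, here is a minimal sketch (not the actual -v6
patch) of the behaviour described above: on SysRq-N, walk all tasks,
push real-time tasks back to SCHED_NORMAL and renice negatively niced
tasks back to 0. It uses the usual kernel helpers (for_each_process(),
rt_task(), sched_setscheduler(), task_nice(), set_user_nice()); the
function name sysrq_normalize_tasks() is made up for this sketch:

#include <linux/sched.h>

/* hypothetical SysRq-N helper, not the real -v6 code */
static void sysrq_normalize_tasks(void)
{
	struct task_struct *p;

	read_lock(&tasklist_lock);
	for_each_process(p) {
		if (rt_task(p)) {
			struct sched_param param = { .sched_priority = 0 };

			/* drop real-time policy back to the normal class */
			sched_setscheduler(p, SCHED_NORMAL, &param);
		}
		/* pull negative nice levels back to the default 0 */
		if (task_nice(p) < 0)
			set_user_nice(p, 0);
	}
	read_unlock(&tasklist_lock);
}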
Ingo