Message-ID: <8738ix5uyk.fsf@tassilo.jf.intel.com>
Date: Tue, 04 Mar 2014 16:51:15 -0800
From: Andi Kleen <andi@...stfloor.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Khalid Aziz <khalid.aziz@...cle.com>,
One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...nel.org>,
peterz@...radead.org, akpm@...ux-foundation.org,
viro@...iv.linux.org.uk, oleg@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: [RFC] [PATCH] Pre-emption control for userspace
Thomas Gleixner <tglx@...utronix.de> writes:
> On Tue, 4 Mar 2014, Khalid Aziz wrote:
>> be in the right control group. Besides they want to use a common mechanism
>> across multiple OSs and pre-emption delay is already in use on other OSs. Good
>> idea though.
>
> Well, just because preemption delay is a mechanism exposed by some
> other OS does not make it a good idea.
>
> In fact its a horrible idea.
>
> What you are creating is a crystal ball based form of time bound
> priority ceiling with the worst user space interface i've ever seen.
So how would you solve the user space lock holder preemption
problem then?
It's a real problem, affecting lots of workloads.
Just saying everything is crap without suggesting anything
constructive is not really getting us anywhere.
The workarounds I've seen for it are generally far worse
than this. Often people do all kinds of fragile tunings
to address it, which then break on the next kernel
update that makes even a minor scheduler change.
futex doesn't solve the problem at all.
The real time scheduler is also a really poor fit for these
workloads and needs a lot of hacks to scale.
The thread swap proposal from plumbers had some potential,
but it's likely very intrusive everywhere and seems
to have died too.
Anything else?
-Andi
--
ak@...ux.intel.com -- Speaking for myself only