Message-ID: <20140306224146.GG22728@two.firstfloor.org>
Date: Thu, 6 Mar 2014 23:41:46 +0100
From: Andi Kleen <andi@...stfloor.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Kevin Easton <kevin@...rana.org>, Andi Kleen <andi@...stfloor.org>,
Thomas Gleixner <tglx@...utronix.de>,
Khalid Aziz <khalid.aziz@...cle.com>,
One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...nel.org>,
akpm@...ux-foundation.org, viro@...iv.linux.org.uk,
oleg@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC] [PATCH] Pre-emption control for userspace

On Thu, Mar 06, 2014 at 02:59:46PM +0100, Peter Zijlstra wrote:
> On Thu, Mar 06, 2014 at 11:13:33PM +1100, Kevin Easton wrote:
> > On Tue, Mar 04, 2014 at 04:51:15PM -0800, Andi Kleen wrote:
> > > Anything else?
> >
> > If it was possible to make the time remaining in the current timeslice
> > available to userspace through the vdso, the thread could do something like:
>
> Assuming we can do per-cpu values in the VDSO; this would mean hitting
> that cacheline on every context switch and wakeup. That's a complete
> non-starter performance wise.
If you worry about fetching it, you can always prefetch it early.
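
As a rough sketch (slice_left_ns and the per-cpu vDSO field it would
point at are hypothetical, nothing like this exists today):

#include <stdint.h>

/* Hypothetical pointer into a vDSO data page exporting the remaining
 * timeslice of the current CPU, in nanoseconds. */
static volatile const uint64_t *slice_left_ns;

static inline void prefetch_slice_left(void)
{
	/* Touch the per-cpu cacheline well before the lock path needs
	 * it, so the later read does not stall. */
	__builtin_prefetch((const void *)slice_left_ns, 0, 1);
}
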
> > if (sys_timeleft() < CRITICAL_SECTION_SIZE)
> > yield();
> > lock();
> >
> > to avoid running out of timeslice in the middle of the critical section.
>
> Can still happen: the effective slice of a single runnable task is
> infinite. The moment another task gets woken this gets reduced to a finite
> amount, and we then keep reducing the slice until there are about 8 runnable
> tasks (assuming you've not poked at any sysctls).
I guess it could be some predicted value, similar to how the menu
governor works.
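
Rough sketch of the idea (all names made up; a simple EWMA stands in
for whatever predictor would actually be used, loosely like the menu
governor's interval prediction):

#include <stdint.h>
#include <sched.h>

#define CRITICAL_SECTION_NS	50000ULL	/* assumed worst-case hold time */

/* Predicted time left in the current slice, in nanoseconds. */
static uint64_t predicted_left_ns = 1000000;

/* Fold in a new observation of how much slice was actually left. */
static void update_prediction(uint64_t observed_ns)
{
	/* Simple EWMA, 1/8 weight for the new sample. */
	predicted_left_ns = (7 * predicted_left_ns + observed_ns) / 8;
}

static void lock_with_prediction(void (*lock)(void))
{
	/* Yield first if we probably cannot finish the critical section
	 * before being preempted. */
	if (predicted_left_ns < CRITICAL_SECTION_NS)
		sched_yield();
	lock();
}
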
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.