Date:	Thu, 6 Mar 2014 23:41:46 +0100
From:	Andi Kleen <>
To:	Peter Zijlstra <>
Cc:	Kevin Easton <>, Andi Kleen <>,
	Thomas Gleixner <>,
	Khalid Aziz <>,
	One Thousand Gnomes <>,
	"H. Peter Anvin" <>, Ingo Molnar <>,,,,
Subject: Re: [RFC] [PATCH] Pre-emption control for userspace

On Thu, Mar 06, 2014 at 02:59:46PM +0100, Peter Zijlstra wrote:
> On Thu, Mar 06, 2014 at 11:13:33PM +1100, Kevin Easton wrote:
> > On Tue, Mar 04, 2014 at 04:51:15PM -0800, Andi Kleen wrote:
> > > Anything else?
> > 
> > If it were possible to make the time remaining in the current timeslice
> > available to userspace through the vdso, the thread could do something like:
> Assuming we can do per-cpu values in the VDSO, this would mean hitting
> that cacheline on every context switch and wakeup. That's a complete
> non-starter performance-wise.

If you worry about fetching it, you can always prefetch it early.
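Something like the sketch below is what I mean; the vdso_timeleft per-cpu
slot is made up for illustration and is not something from the patch:

	#include <linux/percpu.h>
	#include <linux/prefetch.h>

	/* Hypothetical per-cpu value the vDSO would expose read-only. */
	DECLARE_PER_CPU(u64, vdso_timeleft);

	/* Early in the context-switch path: warm the cacheline for write
	 * so the eventual store of the remaining slice doesn't stall the
	 * switch itself. */
	static inline void prefetch_vdso_timeleft(void)
	{
		prefetchw(this_cpu_ptr(&vdso_timeleft));
	}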

> > if (sys_timeleft() < CRITICAL_SECTION_SIZE)
> >     yield();
> > lock();
> > 
> > to avoid running out of timeslice in the middle of the critical section.
> Can still happen; the effective slice of a single runnable task is
> infinite. The moment another task gets woken this gets reduced to a
> finite amount, and we then keep reducing the slice until there are about
> 8 runnable tasks (assuming you've not poked at any sysctls).
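
(The ~8 is sched_nr_latency; roughly, paraphrasing the fair.c period
calculation from memory, not verbatim: up to that many runnable tasks the
period stays at sysctl_sched_latency so each slice keeps shrinking, beyond
it the period grows instead and the slice bottoms out at the minimum
granularity.)

	/* Paraphrase of the CFS period calculation, for illustration only. */
	static u64 __sched_period(unsigned long nr_running)
	{
		if (unlikely(nr_running > sched_nr_latency))
			return nr_running * sysctl_sched_min_granularity;
		return sysctl_sched_latency;
	}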

I guess it could be some predicted value, similar to how the cpuidle menu
governor works.
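
Either way, the userspace side would look roughly like Kevin's pseudocode.
A minimal sketch, where vdso_time_left() is a hypothetical call returning
the remaining slice in nanoseconds (nothing with that name exists today):

	#include <pthread.h>
	#include <sched.h>

	#define WORST_CASE_HOLD_NS	50000	/* assumed critical-section bound */

	extern long vdso_time_left(void);	/* hypothetical vDSO entry point */

	static void locked_update(pthread_mutex_t *lock)
	{
		/* If the slice is about to run out, give it back now and
		 * come back with a fresh one, instead of risking preemption
		 * while holding the lock. */
		if (vdso_time_left() < WORST_CASE_HOLD_NS)
			sched_yield();

		pthread_mutex_lock(lock);
		/* ... short critical section ... */
		pthread_mutex_unlock(lock);
	}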


-- 
Speaking for myself only.
