Date: Mon, 03 Mar 2014 13:51:33 -0800
From: Davidlohr Bueso <davidlohr@...com>
To: Khalid Aziz <khalid.aziz@...cle.com>
Cc: tglx@...utronix.de, mingo@...hat.com, hpa@...or.com,
peterz@...radead.org, akpm@...ux-foundation.org,
andi.kleen@...el.com, rob@...dley.net, viro@...iv.linux.org.uk,
oleg@...hat.com, venki@...gle.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC] [PATCH] Pre-emption control for userspace
On Mon, 2014-03-03 at 11:07 -0700, Khalid Aziz wrote:
> I am working on a feature that has been requested by database folks that
> helps with performance. Some of the oft-executed database code uses
> mutexes to lock other threads out of a critical section. They often see
> a situation where a thread grabs the mutex, runs out of its timeslice,
> and gets switched out, which then causes another thread to run which
> tries to grab the same mutex, spins for a while, and finally gives up.
This strikes me more as a feature for a real-time kernel. It is
definitely an interesting concept, but I wonder about its potential for
abuse. Also, what about just using a voluntary preemption model instead?
I'd think that systems where this is really a problem would opt for that.
> This can happen with multiple threads until the original lock owner gets
> the CPU again and can complete executing its critical section. This
> queueing and subsequent CPU cycle wastage can be avoided if the locking
> thread could request to be granted an additional timeslice if its
> current timeslice runs out before it gives up the lock. Other operating
> systems have implemented this functionality, and it is used by databases
> as well as JVMs. This functionality has been shown to improve
> performance by 3%-5%.
Could you elaborate more on those performance numbers? What
benchmark/workload?
Thanks,
Davidlohr
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/