Message-ID: <1337094383.27694.62.camel@twins>
Date: Tue, 15 May 2012 17:06:23 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
RT <linux-rt-users@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Clark Williams <williams@...hat.com>
Subject: Re: [RFC][PATCH RT] rwsem_rt: Another (more sane) approach to multi
 reader rt locks

On Tue, 2012-05-15 at 10:03 -0400, Steven Rostedt wrote:
>
> where readers may nest (the same task may grab the same rwsem for
> read multiple times), but only one task may hold the rwsem at any
> given time (for read or write).
Humm, that sounds iffy; rwsem isn't a recursive read lock, only rwlock_t
is.
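
For reference, the difference looks like this (illustrative fragment
only; my_rwlock and my_sem are made-up names):

	static DEFINE_RWLOCK(my_rwlock);
	static DECLARE_RWSEM(my_sem);

	read_lock(&my_rwlock);
	read_lock(&my_rwlock);    /* OK: rwlock_t read side is recursive */
	read_unlock(&my_rwlock);
	read_unlock(&my_rwlock);

	down_read(&my_sem);
	down_read(&my_sem);       /* can deadlock: a writer queued between
	                           * the two acquisitions blocks the second
	                           * down_read() */
	up_read(&my_sem);
	up_read(&my_sem);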
> The idea here is to have an rwsem create a rt_mutex for each CPU.
> Actually, it creates a rwsem for each CPU that can only be acquired by
> one task at a time. This allows for readers on separate CPUs to take
> only the per cpu lock. When a writer needs to take a lock, it must
> grab all CPU locks before continuing.
So you've turned it into a global/local lock, a brlock or whatever that
thing was called.
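
I.e. the usual pattern is roughly this (rough sketch, all names made
up; a real version also has to remember which CPU lock a reader took,
since the task can migrate before unlocking):

	struct multi_rwsem {
		struct rt_mutex lock[NR_CPUS];  /* one lock per CPU */
	};

	static void sketch_down_read(struct multi_rwsem *s)
	{
		/* cheap: readers only touch their own CPU's lock */
		rt_mutex_lock(&s->lock[raw_smp_processor_id()]);
	}

	static void sketch_down_write(struct multi_rwsem *s)
	{
		int i;

		/* expensive: writers sweep every CPU's lock */
		for_each_possible_cpu(i)
			rt_mutex_lock(&s->lock[i]);
	}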
>
> Also, I don't use per_cpu sections for the locks, which means we have
> cache line collisions, but a normal (mainline) rwsem has that as well.
>
Why not?
> Thoughts?
Ideally someone would try and get rid of mmap_sem itself.. but that's a
tough nut.
> void rt_down_write(struct rw_semaphore *rwsem)
> {
> -        rwsem_acquire(&rwsem->dep_map, 0, 0, _RET_IP_);
> -        rt_mutex_lock(&rwsem->lock);
> +        int i;
> +        initialize_rwsem(rwsem);
> +        for_each_possible_cpu(i) {
> +                rwsem_acquire(&rwsem->lock[i].dep_map, 0, 0, _RET_IP_);
> +                rt_mutex_lock(&rwsem->lock[i].lock);
> +        }
> }
> EXPORT_SYMBOL(rt_down_write);
>
That'll make lockdep explode; you'll want to make the whole set a
single lock and not treat it as nr_cpus locks.
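
I.e. annotate the set once instead of once per CPU, roughly (sketch
against the patch above, reusing the rwsem-wide dep_map the old code
had):

	void rt_down_write(struct rw_semaphore *rwsem)
	{
		int i;

		initialize_rwsem(rwsem);
		/* one lockdep annotation for the whole set of locks */
		rwsem_acquire(&rwsem->dep_map, 0, 0, _RET_IP_);
		for_each_possible_cpu(i)
			rt_mutex_lock(&rwsem->lock[i].lock);
	}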