Message-ID: <BANLkTinqX_=TLU3TuKJFFTJBBxm1scZ3Ew@mail.gmail.com>
Date: Mon, 23 May 2011 09:37:35 +0300
From: Lucian Adrian Grijincu <lucian.grijincu@...il.com>
To: "Eric W. Biederman" <ebiederm@...ssion.com>
Cc: linux-kernel <linux-kernel@...r.kernel.org>,
netdev@...r.kernel.org, Alexey Dobriyan <adobriyan@...il.com>,
Octavian Purdila <tavi@...pub.ro>,
"David S . Miller" <davem@...emloft.net>
Subject: Re: [v3 00/39] faster tree-based sysctl implementation
On Mon, May 23, 2011 at 7:27 AM, Eric W. Biederman
<ebiederm@...ssion.com> wrote:
> This patchset looks like it is deserving of some close scrutiny, and
> not just the high level design overview I have given the previous
> patches. This is going to be a busy week for me so I probably won't
> get through all of the patches for a while.

I have one more question. The current implementation uses a single
sysctl_lock to synchronize all changes to the data structures.
In my algorithm I changed a few places to use a per-header
read-write lock. Even though the code is organized to handle a
per-header rwlock, the implementation uses a single global rwlock.
In v2 I got rid of the rwlock and replaced the regular subdirs/files
lists with RCU-protected lists, which is why I did not bother giving
each header its own rwlock.
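
For reference, the v2 read side looked roughly like this (a sketch,
not the exact code; find_file() and the ctl_dir/files/ctl_entry
names are illustrative and may not match the posted patches):

#include <linux/rculist.h>
#include <linux/string.h>

/*
 * Sketch of the v2 lockless lookup: walk the RCU-protected "files"
 * list of a directory under rcu_read_lock() alone.  A real version
 * would also take a reference on the header (bump its use count)
 * before dropping the RCU read lock.
 */
static struct ctl_table_header *find_file(struct ctl_dir *dir,
					  const char *name)
{
	struct ctl_table_header *head;

	rcu_read_lock();
	list_for_each_entry_rcu(head, &dir->files, ctl_entry) {
		if (!strcmp(head->ctl_table->procname, name)) {
			rcu_read_unlock();
			return head;
		}
	}
	rcu_read_unlock();
	return NULL;
}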
I have no idea how to use RCU with an rbtree, though, so v3 is back
to locking. Should I now give each header its own lock to reduce
contention?
I'm asking because I don't know why there is only a single global
sysctl spinlock when multiple locks could have been used, each
protecting its own domain of values.
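
For context, today every update, no matter which subtree it touches,
goes through that one lock, roughly like this (sketch only;
modify_tree() is a made-up name standing in for the
register/unregister paths):

#include <linux/spinlock.h>

struct ctl_table_header;

static DEFINE_SPINLOCK(sysctl_lock);	/* the one global lock */

/*
 * Every modification anywhere in the sysctl tree takes the same
 * global spinlock, so unrelated subtrees contend with each other.
 */
static void modify_tree(struct ctl_table_header *head)
{
	spin_lock(&sysctl_lock);
	/* link/unlink head under its parent directory */
	spin_unlock(&sysctl_lock);
}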
If you'd like to keep locking as simple as possible (to avoid the
potential problems brought on by too many locks), or if contention
is generally low enough, then a global lock is better. If not, I'll
change the code to support per-header rwlocks (at the cost of a
larger ctl_table_header structure).
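
Concretely, the per-header variant would be something like this
(sketch; only the new field and the locking pattern are shown, the
rest of ctl_table_header is elided):

#include <linux/spinlock.h>

struct ctl_table_header {
	/* ... existing fields ... */
	rwlock_t lock;	/* per-header, set up with rwlock_init() */
};

/* Readers of one subtree no longer contend with writers elsewhere. */
static void lookup_example(struct ctl_table_header *head)
{
	read_lock(&head->lock);
	/* walk this header's rbtree of subdirs/files */
	read_unlock(&head->lock);
}

static void modify_example(struct ctl_table_header *head)
{
	write_lock(&head->lock);
	/* insert/remove entries in this header's rbtree */
	write_unlock(&head->lock);
}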
--
.
..: Lucian