Message-ID: <87wpxcdzbr.fsf@x220.int.ebiederm.org>
Date: Mon, 03 Aug 2015 13:03:20 -0500
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Waiman Long <waiman.long@...com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Nicolas Dichtel <nicolas.dichtel@...nd.com>,
Al Viro <viro@...iv.linux.org.uk>,
Alexey Dobriyan <adobriyan@...il.com>,
linux-kernel@...r.kernel.org, Scott J Norton <scott.norton@...com>,
Douglas Hatch <doug.hatch@...com>
Subject: Re: [PATCH] proc: change proc_subdir_lock to a rwlock
Waiman Long <waiman.long@...com> writes:
> On 07/30/2015 10:16 PM, Waiman Long wrote:
>> On 07/29/2015 06:21 PM, Eric W. Biederman wrote:
>>> Two quick questions.
>>>
>>> - What motivates this work? Are you seeing lots of
>>> parallel reads on proc?
>>
>> The micro-benchmark that I used was artificial, but it was used to reproduce
>> an exit hang that I saw in a real application. In fact, allowing only one
>> task at a time to do a lookup seems too limiting to me.
>>> - Why not rcu? Additions and removals of proc generic
>>> files are very rare. Conversion to rcu for reads should
>>> perform better and not take much more work.
>>
>> RCU is harder to verify for correctness, whereas a rwlock is easier to use and
>> understand. If this were really a performance-critical path where every extra
>> bit of performance counted, I would certainly consider RCU to be the right
>> choice. However, in this particular case, I don't think using RCU will give
>> any noticeable performance gain compared with a rwlock.
>
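[For reference, the rwlock conversion under discussion follows the usual
read_lock()/write_lock() pattern: lookups take the lock shared so they can run
in parallel, while create/remove still take it exclusive. A minimal sketch, with
illustrative names (example_subdir_lock, example_subdir_find,
example_subdir_insert) rather than the actual fs/proc/generic.c code:

/*
 * Minimal sketch of the rwlock pattern being discussed; the lock and the
 * example_subdir_*() helpers are illustrative, not the actual
 * fs/proc/generic.c code.
 */
#include <linux/spinlock.h>

struct proc_dir_entry;
struct proc_dir_entry *example_subdir_find(struct proc_dir_entry *dir,
					    const char *name, unsigned int len);
void example_subdir_insert(struct proc_dir_entry *dir,
			   struct proc_dir_entry *de);

static DEFINE_RWLOCK(example_subdir_lock);

/* Lookups take the lock shared, so concurrent readers no longer serialize. */
struct proc_dir_entry *example_lookup(struct proc_dir_entry *dir,
				      const char *name, unsigned int len)
{
	struct proc_dir_entry *de;

	read_lock(&example_subdir_lock);
	de = example_subdir_find(dir, name, len);	/* rbtree walk */
	read_unlock(&example_subdir_lock);
	return de;
}

/* Creation and removal still take the lock exclusive. */
void example_register(struct proc_dir_entry *dir, struct proc_dir_entry *de)
{
	write_lock(&example_subdir_lock);
	example_subdir_insert(dir, de);			/* rbtree insert */
	write_unlock(&example_subdir_lock);
}
]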
> One more thing: RCU is typically used with linked lists. It is not easy to use
> RCU with an rbtree, and doing so may require major changes to the code.
>
> Another alternative is to use seqlock + RCU, but it would still need more code
> changes than a rwlock.
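[For completeness, the seqlock + RCU alternative mentioned above would use the
usual read-side retry loop; again a minimal sketch with illustrative names, not
the actual proc code:

/*
 * Minimal sketch of the seqlock read-side retry pattern; names are
 * illustrative, not taken from fs/proc.  Note that the rbtree nodes would
 * still have to stay valid while a racing reader walks the tree (e.g. by
 * freeing them under RCU), which is part of the extra work mentioned above.
 */
#include <linux/seqlock.h>

struct proc_dir_entry;
struct proc_dir_entry *example_subdir_find(struct proc_dir_entry *dir,
					    const char *name, unsigned int len);
void example_subdir_insert(struct proc_dir_entry *dir,
			   struct proc_dir_entry *de);

static DEFINE_SEQLOCK(example_seqlock);

/* Lock-free readers retry if an update raced with the tree walk. */
struct proc_dir_entry *example_lookup(struct proc_dir_entry *dir,
				      const char *name, unsigned int len)
{
	struct proc_dir_entry *de;
	unsigned int seq;

	do {
		seq = read_seqbegin(&example_seqlock);
		de = example_subdir_find(dir, name, len);
	} while (read_seqretry(&example_seqlock, seq));

	return de;
}

/* Updaters serialize against each other and bump the sequence count. */
void example_register(struct proc_dir_entry *dir, struct proc_dir_entry *de)
{
	write_seqlock(&example_seqlock);
	example_subdir_insert(dir, de);
	write_sequnlock(&example_seqlock);
}
]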
I had forgotten we had switched the proc directories to rbtrees. So on that
note:
Acked-by: "Eric W. Biederman" <ebiederm@...ssion.com>
Eric