[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <m1skkeu4ka.fsf@fess.ebiederm.org>
Date: Sat, 11 Apr 2009 16:57:25 -0700
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Al Viro <viro@...IV.linux.org.uk>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-pci@...r.kernel.org,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
Hugh Dickins <hugh@...itas.com>, Tejun Heo <tj@...nel.org>,
Alexey Dobriyan <adobriyan@...il.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
Greg Kroah-Hartman <gregkh@...e.de>
Subject: Re: [RFC][PATCH 0/9] File descriptor hot-unplug support
Al Viro <viro@...IV.linux.org.uk> writes:
> On Sat, Apr 11, 2009 at 09:49:36AM -0700, Eric W. Biederman wrote:
>
>> The fact that in the common case only one task ever accesses a struct
>> file leaves a lot of room for optimization.
>
> I'm not at all sure that it's a good assumption; even leaving aside e.g.
> several tasks sharing stdout/stderr, a bunch of datagrams coming out of
> several threads over the same socket is quite possible.
Maybe not. However those cases are already more expensive today.
Somewhere along the way we are already going to get cache line ping
pongs if there is real contention, and we are going to see the cost of
atomic operations. In which case the extra ref counting I am doing is
a little more expensive. And when I say a little more expensive I
mean 10-20ns per read/write more expensive.
At the same time if the common case really is applications not sharing
file descriptors (which seems sane) my current optimization easily
keeps the cost to practically nothing.
Using the srcu locking would also keep the cost down in the noise
because it guarantees non-shared cachelines and no expensive atomic
operations. srcu has the downside of requiring per cpu memory which
seems wrong to me somehow. However there are hybrid models like what
is used in mnt_want_write that are possible to limit the total amount
of per cpu memory while still getting the advantages.
Beyond that for correctness it looks like a pay me now or pay me later
situation. Do we track when we are in the methods for an object
generically where we can do the work once, and then concentrate on
enhancements. Or do we bog ourselves down using inferior
implementations that are replicated in varying ways from subsystem to
subsystem, and spend our time fighting the bugs in the subsystems?
I have the refcount/locking abstraction wrapped and have only to
perform the most basic of optimizations. So if we need to do something
more it should be easy.
Is performance your only concern with my patches?
Eric
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists