Date:	Wed, 03 Jun 2009 18:42:07 -0700
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	Davide Libenzi <davidel@...ilserver.org>
Cc:	Al Viro <viro@...IV.linux.org.uk>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-pci@...r.kernel.org, linux-mm@...ck.org,
	linux-fsdevel@...r.kernel.org, Hugh Dickins <hugh@...itas.com>,
	Tejun Heo <tj@...nel.org>,
	Alexey Dobriyan <adobriyan@...il.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Greg Kroah-Hartman <gregkh@...e.de>,
	Nick Piggin <npiggin@...e.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Christoph Hellwig <hch@...radead.org>,
	"Eric W. Biederman" <ebiederm@...stanetworks.com>
Subject: Re: [PATCH 18/23] vfs: Teach epoll to use file_hotplug_lock

Davide Libenzi <davidel@...ilserver.org> writes:

> On Wed, 3 Jun 2009, Eric W. Biederman wrote:
>
>> What code are you talking about?
>> 
>> The open path adds a few memory writes and an smp_wmb().  No atomics and no
>> spin lock/unlocks.
>> 
>> Are you complaining because I retain the file_list?
>
> Sorry, did I overlook the patch? Weren't a couple of atomic ops and a spin 
> lock/unlock pair present in __dentry_open() (and the same sort of thing on 
> the release path)?

You might be remembering v1.  In v2 I have operations like file_hotplug_read_trylock
that implement a lock but use an RCU-like algorithm, so there are no atomic
operations and none of their associated pipeline stalls.  That gave a
reasonable performance improvement over my previous version.
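
To make the shape of that concrete, here is a minimal sketch of what I mean.
This is illustrative only, not the actual v2 patch: the f_hotplug_dead field
and the helper bodies are invented for the example.  The point is that the
read side does plain loads plus rcu_read_lock(), never an atomic
read-modify-write:

#include <linux/fs.h>
#include <linux/rcupdate.h>

/* Reader side: no atomics, just RCU plus an ordinary load. */
static inline bool file_hotplug_read_trylock(struct file *file)
{
	rcu_read_lock();
	if (unlikely(file->f_hotplug_dead)) {	/* hypothetical flag */
		rcu_read_unlock();
		return false;	/* device already unplugged; caller backs out */
	}
	return true;	/* safe to dereference file->f_op until unlock */
}

static inline void file_hotplug_read_unlock(struct file *file)
{
	rcu_read_unlock();
}

/* Writer side, run once at hot-unplug: flip the flag, then wait for
 * every in-flight reader to leave its RCU read-side section. */
static void file_hotplug_block(struct file *file)
{
	file->f_hotplug_dead = true;
	smp_wmb();		/* publish the flag before waiting */
	synchronize_rcu();	/* after this, no reader can enter f_op */
}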

> And that's only like 5% of the code touched by the new special handling of 
> the file operations structure (basically, every f_op access ends up being 
> wrapped by two atomic ops and other extra code).

Yes, every file in the syscall path gets a single extra wrapping, which is how
we know that someone is using it.
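
As a hypothetical example of what such a wrapped call site looks like
(do_some_read is an invention for illustration, built on the sketch above,
not code from the patch):

static ssize_t do_some_read(struct file *file, char __user *buf,
			    size_t len, loff_t *pos)
{
	ssize_t ret = -EIO;

	/* Guard the f_op dereference: if the trylock fails, the
	 * backing device is gone and we fail the syscall cleanly. */
	if (!file_hotplug_read_trylock(file))
		return ret;
	if (file->f_op->read)
		ret = file->f_op->read(file, buf, len, pos);
	file_hotplug_read_unlock(file);
	return ret;
}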

> The question that I'd like to reiterate is: is this stuff really needed?
> Anyway, my complaint ends here and I'll let others evaluate whether merging 
> this patchset is worth the cost.

Sure.  My apologies for not answering that question earlier.

My perspective is that every subsystem that supports hotplug hardware ends
up rolling its own version of something like this, and each version has a
different set of bugs.

So one generic version is definitely worth implementing.

Similarly, there is a case for a generic revoke facility in the kernel.
Alan, at least, has made the case that there are certain security problems
that cannot be solved in userspace without revoke.

From an implementation point of view, doing the generic implementation at
the vfs level has significant benefits.

The extra locking appears reasonable from a code maintenance and
comprehensibility point of view.  It is a real pain to find all of the entry
points into the vfs and to get other code to use the right vfs helpers it
should always have been using, but I am volunteering to do that work.

The practical question I see is whether the performance overheads of my
primitives are low enough that I do not cause performance regressions on
anyone's fast path.  As far as I have been able to measure, the overhead is
low enough, because I have avoided the use of atomics and kept the code
fairly small with predictable branches.  That is why I pressed you to be
certain I understood where you are coming from.

Eric
