Message-ID: <b6c5339f0804180651m2a291755ke093bda3bea1d2b4@mail.gmail.com>
Date:	Fri, 18 Apr 2008 09:51:03 -0400
From:	"Bob Copeland" <me@...copeland.com>
To:	"Szabolcs Szakacsits" <szaka@...s-3g.org>
Cc:	"Miklos Szeredi" <miklos@...redi.hu>, hch@...radead.org,
	akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 0/7] OMFS filesystem version 3

On Fri, Apr 18, 2008 at 6:30 AM, Szabolcs Szakacsits <szaka@...s-3g.org> wrote:
>  On Sun, 13 Apr 2008, Bob Copeland wrote:
>
>  > I don't have hard numbers, but anecdotally my FUSE version is quite
>  > a bit less performant.  That's no criticism of FUSE - I just haven't
>  > put the time into optimizing and adding various caches.
>
>  Thankfully you need none; the caching is already provided by FUSE and the
>  kernel. The point is exactly that you keep kernel-level performance while
>  the rest moves to user space with typically negligible overhead, which is
>  usually well compensated by faster delivery of new features and bug fixes.

Correct me if I'm wrong, but one place where caches seem necessary is for
lookup.  My file system already has an inode number; my understanding
is that the kernel inode cache and dcache are caching the FUSE inode by
filename and its hashed inode number.

In FUSE, on open, I'm passed a filename which I then have to resolve into an
inode # via my own lookup.  With an in-kernel driver, the VFS does that
path_lookup as part of sys_open, and since I get to put private data into
the struct inode, I'll generally already have the block # and various other
info in the dcache by the time open is called.
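To make the duplicated work concrete, here is a minimal sketch of the kind of path-to-inode resolution a FUSE filesystem has to do itself on every open, work the VFS has already done via path_lookup before an in-kernel driver's open runs.  All names (`dirent_map`, `resolve_path`, the table contents) are hypothetical illustrations, not the actual omfs_fuse code:

```c
#include <string.h>

/* Hypothetical on-disk directory table; layout is illustrative only. */
struct dirent_map {
    const char *path;
    unsigned long ino;   /* on-disk inode number */
    unsigned long block; /* starting block -- the sort of private data an
                          * in-kernel driver would stash in struct inode */
};

static const struct dirent_map table[] = {
    { "/",        1, 100 },
    { "/foo",     7, 200 },
    { "/foo/bar", 9, 300 },
};

/* In FUSE, open() receives only a path string, so the filesystem must do
 * this resolution itself; in the kernel, the dcache typically answers it. */
static long resolve_path(const char *path)
{
    size_t i;

    for (i = 0; i < sizeof(table) / sizeof(table[0]); i++)
        if (strcmp(table[i].path, path) == 0)
            return (long)table[i].ino;
    return -1; /* would be -ENOENT in a real implementation */
}
```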

Also, if you stuff inode data into the private fh field in fuse_file_info,
you need to be sure that any subsequent lookups always return the same
inode structure; otherwise a thread doing ftruncate and one doing truncate
can end up updating two different in-memory copies of the same inode.  So I
created an internal dcache to solve those two problems.
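The second problem above is essentially about canonicalizing in-memory inodes.  A rough sketch of such an internal cache, assuming a hypothetical `my_inode` structure (the names are illustrative, not the real omfs_fuse types): every lookup for a given inode number returns the same pointer, so a truncate() resolved by path and an ftruncate() using the inode stashed in fuse_file_info->fh mutate the same object.

```c
#include <stdlib.h>

/* Hypothetical in-memory inode. */
struct my_inode {
    unsigned long ino;      /* on-disk inode number */
    unsigned long size;     /* current file size */
    struct my_inode *next;  /* hash-chain link */
};

#define NBUCKETS 64

struct icache {
    struct my_inode *buckets[NBUCKETS];
};

/* Return the single in-memory inode for this inode number, creating it on
 * first use.  Repeated calls with the same ino yield the same pointer. */
struct my_inode *icache_get(struct icache *c, unsigned long ino)
{
    struct my_inode *i;
    unsigned b = ino % NBUCKETS;

    for (i = c->buckets[b]; i; i = i->next)
        if (i->ino == ino)
            return i;

    i = calloc(1, sizeof(*i));
    if (!i)
        return NULL;
    i->ino = ino;
    i->next = c->buckets[b];
    c->buckets[b] = i;
    return i;
}
```

A real implementation would also need locking around the chains and a way to evict entries on unlink, but the invariant it enforces is the one that matters here.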

>  I noticed that the OMFS kernel driver supports only the USB interface,
>  while the FUSE one supports both the network and USB. Isn't it possible
>  that you compared performance using USB with the kernel driver vs. the
>  much slower, higher-latency network with FUSE?

Nope, that's not possible, sorry.  Both require use of USB.  lkarmafs and
omfs_fuse aren't the same thing.

>  If you did use the USB interface with FUSE then what exactly do you mean by
>  "quite a bit less performance" in numbers and workloads? What you did, how
>  long it took using what CPU?

Like I said, it was anecdotal (copying 20 gigs of X in both cases).  I'm
sure a good portion of it is my fault, such as doing unnecessary mallocs and
copies in omfs_fuse.  I have put exactly zero effort into making it fast so far.

BTW, I hardly intended to start a huge VFS vs FUSE debate.  I think FUSE
is great.  I'm not sure it's the right fit for this, is all.

-- 
Bob Copeland %% www.bobcopeland.com
