Date:	Fri, 16 Feb 2007 00:57:16 +0100
From:	Andi Kleen <andi@...stfloor.org>
To:	Juan Piernas Canovas <piernas@...ec.um.es>
Cc:	Andi Kleen <andi@...stfloor.org>,
	Jan Engelhardt <jengelh@...ux01.gwdg.de>,
	sfaibish <sfaibish@....com>,
	kernel list <linux-kernel@...r.kernel.org>
Subject: Re: [ANNOUNCE] DualFS: File System with Meta-data and Data Separation

> >Also, many storage subsystems have some internal parallelism
> >in writing (e.g. a RAID can write to different disks in parallel for
> >a single partition), so I'm not sure your distinction is that useful.
> >
> But we are talking about a different case. What I have said is that if you 
> use two devices, one for the 'regular' file system and another one for the 
> log, DualFS is better in that case because it can use the log for reads. 
> Other journaling file systems cannot do that.

Shadow-paging-based systems typically can, but we have no widely used
one on Linux (reiser4 would probably be the closest).
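
To make that distinction concrete, here is a toy Python sketch (purely
illustrative; the device and layout names below are my own assumptions,
not real kernel code) of where a metadata read ends up in the two designs:

# Toy model of the read path. A conventional journaling FS treats its
# log as write-only during normal operation, so metadata reads go back
# to the main device; a DualFS-style layout keeps the authoritative
# metadata in the log, so metadata reads can be served from the second
# device, in parallel with data reads on the first one.
def read_device(layout, block_type):
    if layout == "journaling":        # e.g. ext3 with an external journal
        return "data_device"          # the journal is never read back
    if layout == "dualfs":            # hypothetical label for a DualFS-style layout
        return "log_device" if block_type == "metadata" else "data_device"
    raise ValueError(layout)

print(read_device("journaling", "metadata"))   # data_device
print(read_device("dualfs", "metadata"))       # log_device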

> >If you stripe two disks with a standard fs versus using one of them
> >as a metadata volume and the other as a data volume with DualFS, I would
> >expect the striped variant to usually be faster, because it will give
> >parallelism not only to data versus metadata, but also to all data
> >versus other data.
> >
> If you have a RAID system, both the data and meta-data devices of DualFS 
> can be striped, and you get the same result. No problem for DualFS :)

Sure, but then you need four disks. And if your workload happens 
to be much more data intensive than metadata intensive, the 
striped spindles assigned to metadata only will be more idle
than the ones doing data.

Striping everything from the same pool has the potential
to adapt itself to any workload mix better.

I can see that you win for some specific workloads, but because of
that imbalance it is hard to see how you can win over a wide range
of workloads.
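
As a rough illustration of that point, here is a small Python sketch of
disk utilization under a data-heavy mix; the 90/10 split, the per-disk
capacity, and the four-disk setup are assumptions picked only to make
the numbers concrete:

# Rough utilization model for 4 identical disks under a data-heavy workload.
# Assumptions (illustrative only): 100 units of I/O demand, split
# 90% data / 10% metadata, and each disk can absorb at most 30 units.
demand_data, demand_meta = 90.0, 10.0
per_disk_capacity = 30.0

# Dedicated layout: 2 disks striped for data, 2 striped for metadata.
data_util = min(1.0, demand_data / (2 * per_disk_capacity))   # data spindles saturate
meta_util = min(1.0, demand_meta / (2 * per_disk_capacity))   # metadata spindles mostly idle

# Pooled layout: all 4 disks striped, data and metadata share them.
pool_util = min(1.0, (demand_data + demand_meta) / (4 * per_disk_capacity))

print(f"dedicated: data disks {data_util:.0%}, metadata disks {meta_util:.0%}")
print(f"pooled:    all disks  {pool_util:.0%}")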

> 
> >Also I would expect your design to be slow for metadata-read-intensive
> >workloads. E.g. have you tried to boot a root partition with DualFS?
> >That's a very important IO benchmark for desktop Linux systems.
> >
> I do not think so. The performance of DualFS is superb in meta-data
> read-intensive workloads. And it is also better than that of other
> file systems when reading a directory tree with several copies of the
> Linux kernel source code (I showed those results on Tuesday at the
> LSF07 workshop).

PDFs available? 

Is that with an LFS-style cleaner running in between, or without?

I would be interested in an "install distro with installer; boot afterwards
from it" type of benchmark. Do you have something like this? 
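
For reference, the kind of metadata-read-heavy measurement discussed
above (walking a tree of kernel source copies) could be approximated
with a sketch like the one below; the path and the timing setup are
assumptions, not whatever harness produced the LSF07 numbers:

# Metadata-read microbenchmark sketch: walk a directory tree and stat
# every entry, which stresses directory and inode reads rather than
# file data. Run it against a freshly mounted (cold-cache) file system
# for meaningful numbers; the path below is an assumption.
import os, time

ROOT = "/mnt/test/linux-src-copies"   # hypothetical tree of kernel source copies

start = time.monotonic()
files = 0
for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        os.lstat(os.path.join(dirpath, name))   # inode read, no data read
        files += 1
elapsed = time.monotonic() - start
print(f"stat'ed {files} files in {elapsed:.2f}s ({files / elapsed:.0f} files/s)")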

-Andi
