Date:	Fri, 23 Feb 2007 13:26:45 +0000
From:	Jörn Engel <joern@...ybastard.org>
To:	Juan Piernas Canovas <piernas@...ec.um.es>
Cc:	Sorin Faibish <sfaibish@....com>,
	kernel list <linux-kernel@...r.kernel.org>
Subject: Re: [ANNOUNCE] DualFS: File System with Meta-data and Data Separation

On Thu, 22 February 2007 20:57:12 +0100, Juan Piernas Canovas wrote:
> 
> I do not agree with this picture, because it does not show that all the 
> indirect blocks which point to a direct block are along with it in the 
> same segment. That figure should look like:
> 
> Segment 1: [some data] [ DA D1' D2' ] [more data]
> Segment 2: [some data] [ D0 D1' D2' ] [more data]
> Segment 3: [some data] [ DB D1  D2  ] [more data]
> 
> where D0, DA, and DB are datablocks, D1 and D2 indirect blocks which 
> point to the datablocks, and D1' and D2' obsolete copies of those 
> indirect blocks. By using this figure, it is clear that if you need to 
> move D0 to clean the segment 2, you will need only one free segment at 
> most, and not more. You will get:
> 
> Segment 1: [some data] [ DA D1' D2' ] [more data]
> Segment 2: [                free                ]
> Segment 3: [some data] [ DB D1' D2' ] [more data]
> ......
> Segment n: [ D0 D1 D2 ] [         empty         ]
> 
> That is, D0 needs in the new segment the same space that it needs in the 
> previous one.
> 
> The differences are subtle but important.

Ah, now I see.  Yes, that is deadlock-free.  If you account not the
bytes of used space but the number of used segments, and count each
partially used segment the same as a 100% used segment, there is no
deadlock.
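A minimal sketch of that accounting (illustrative only, not DualFS
code; names and sizes are made up): because every datablock is written
together with fresh copies of the indirect blocks on its path, moving
it needs exactly the space its old copy plus the stale indirect copies
occupied, so cleaning any one segment never needs more than one free
segment.

```python
SEGMENT_CAPACITY = 64  # blocks per segment (illustrative value)

def space_in_segment(datablock):
    """A datablock occupies its segment together with copies (fresh or
    stale) of every indirect block on its path to the root."""
    return 1 + datablock["depth"]

def clean_segment(segment):
    """Rewrite the live datablocks of `segment` into one free segment.
    Each datablock moves with fresh indirect copies, costing exactly
    what its old copy plus its stale indirect copies cost, so the live
    data of one segment always fits into one free segment."""
    needed = sum(space_in_segment(b) for b in segment
                 if not b["obsolete_data"])
    freed = sum(space_in_segment(b) for b in segment)
    assert needed <= freed <= SEGMENT_CAPACITY
    return needed

# Segment 2 from the figure: D0 is live at depth 2; the stale copies
# D1' and D2' are part of D0's footprint, not separate live entries.
seg2 = [{"name": "D0", "depth": 2, "obsolete_data": False}]
assert clean_segment(seg2) == 3   # D0 + fresh D1 + fresh D2
```

Counting whole segments, cleaning frees one segment and consumes at
most one, so the cleaner always makes progress.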

Some people may consider this to be cheating, however.  It will cause
more than 50% wasted space.  All obsolete copies are garbage, after all.
With a maximum tree height of N, you can have up to (N-1) / N of your
filesystem occupied by garbage.
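The bound follows from a back-of-the-envelope model (my framing, not
from the original mail): in the worst case each live datablock shares
its segment with N-1 stale indirect-block copies, so only 1/N of the
space is live.

```python
def worst_case_garbage(tree_height):
    """Worst-case fraction of space holding obsolete indirect copies:
    each live datablock dragging tree_height - 1 stale blocks along,
    giving (N-1)/N garbage for a tree of height N."""
    return (tree_height - 1) / tree_height

# A height-4 metadata tree can be up to 75% garbage;
# even height 2 can waste half the space.
assert worst_case_garbage(4) == 0.75
assert worst_case_garbage(2) == 0.5
```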

It also means that "df" will have unexpected output.  You cannot
estimate how much data can fit into the filesystem, as that depends on
how much garbage you will accumulate in the segments.  Admittedly this
is not a problem for DualFS, as the uncertainty only exists for
metadata, so "df" for DualFS still makes sense.
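A sketch of why the estimate is a range rather than a number
(illustrative; the tree height is an assumed parameter): the same
amount of free space holds anywhere from 1/N live data, under
worst-case garbage accumulation, up to all of it.

```python
def usable_bytes_range(free_bytes, tree_height):
    """What reported free space on the metadata device can actually
    hold: from free_bytes / tree_height (worst-case garbage, only 1/N
    of each segment live) up to free_bytes (no stale copies at all)."""
    return free_bytes // tree_height, free_bytes

lo, hi = usable_bytes_range(1 << 30, 4)   # 1 GiB free, height-4 tree
assert (lo, hi) == ((1 << 30) // 4, 1 << 30)
```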

Another downside is that with large amounts of garbage between
otherwise useful data, your disk cache hit rate goes down and read
performance suffers.  But that may be a fair tradeoff, and it will only
show up in large metadata reads in the uncached (per Linux) case.
Seems fair.

Quite interesting, actually.  The costs of your design are disk space,
depending on the amount and depth of your metadata, and metadata read
performance.  Disk space is cheap and metadata reads tend to be slow for
most filesystems, in comparison to data reads.  You gain faster metadata
writes and lose the journal overhead.  I like the idea.

Jörn

-- 
All art is but imitation of nature.
-- Lucius Annaeus Seneca
