Date:	Sat, 17 Feb 2007 18:36:46 +0000
From:	Jörn Engel <joern@...ybastard.org>
To:	Bill Davidsen <davidsen@....com>
Cc:	Juan Piernas Canovas <piernas@...ec.um.es>,
	Jan Engelhardt <jengelh@...ux01.gwdg.de>,
	sfaibish <sfaibish@....com>,
	kernel list <linux-kernel@...r.kernel.org>
Subject: Re: [ANNOUNCE] DualFS: File System with Meta-data and Data Separation

On Sat, 17 February 2007 13:10:23 -0500, Bill Davidsen wrote:
> >  
> I missed that. Which corner case did you find triggers this in DualFS?

This is not specific to DualFS, it applies to any log-structured
filesystem.

Garbage collection always needs at least one spare segment to collect
valid data into.  Regular writes may require additional free segments,
so GC has to kick in and free those when space is getting tight.  (1)
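
In code, the reservation side of (1) is the easy part; roughly
something like this (a sketch with made-up names, nothing taken from
DualFS):

struct lfs_counters {
	unsigned long free_segments;	/* completely empty segments */
	unsigned long gc_reserve;	/* kept back for the cleaner */
};

/* Sketch only: refuse ordinary writes once we are down to the segments
 * reserved for the cleaner; the cleaner has to run first. */
static int can_allocate_segment(const struct lfs_counters *c)
{
	return c->free_segments > c->gc_reserve;
}

The hard part is choosing gc_reserve so that GC can always make
progress, which is what the rest of this mail is about.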

GC frees a segment by writing all the valid data in it into the spare
segment.  If there is space left in the spare segment, GC can move in
more data from further segments.  Nice and simple.
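
The cleaner loop itself is simple in outline.  Modelling a segment as
an array of blocks, some of them still live (again only a sketch, not
DualFS code):

#include <stdbool.h>
#include <stddef.h>

struct block { bool live; /* plus payload */ };

struct segment {
	struct block blocks[1024];
	size_t used;		/* blocks written into this segment */
};

/*
 * Copy the live blocks of 'victim' into 'spare'; afterwards the victim
 * is a free segment.  Assumes the live data still fits into the spare.
 * What the sketch leaves out is updating the indirect blocks that point
 * at the moved data; that is where the trouble starts.
 */
static size_t clean_segment(struct segment *victim, struct segment *spare)
{
	size_t copied = 0;

	for (size_t i = 0; i < victim->used; i++) {
		if (!victim->blocks[i].live)
			continue;
		spare->blocks[spare->used++] = victim->blocks[i];
		copied++;
	}
	victim->used = 0;
	return copied;
}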

The requirement is that GC *always* frees more segments than it uses up
doing so.  If that requirement is not fulfilled, GC will simply use up
its last spare segment without freeing a new one.  We have a deadlock.

Now imagine your filesystem is 90% full and all data is spread perfectly
across all segments.  The best segment you could pick for GC is 90%
full.  One would imagine that GC would only need to copy those 90% into
a spare segment and have freed 100%, making overall progress.

But most log-structured filesystems maintain a tree of some sort on the
medium.  If you move data elsewhere, you also need to update the
indirect block pointing to it.  So that has to get written as well.  If
you have doubly or triply indirect blocks, those need to get written.
So you can end up writing 180% or more to free 100%.  Deadlock.
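
Back-of-the-envelope, the 90% example with and without those metadata
writes (assuming, worst case, one indirect-block write per moved data
block; the exact factor depends on the filesystem):

#include <stdio.h>

int main(void)
{
	double live = 0.9;	/* fraction of the victim still valid */
	double freed = 1.0;	/* the victim itself, in segments */

	/* Naive accounting: only the data gets copied. */
	printf("data only:       write %.2f, free %.2f, net %+.2f\n",
	       live, freed, freed - live);

	/* One indirect-block update per moved block on top of the data. */
	double written = 2.0 * live;
	printf("data + indirect: write %.2f, free %.2f, net %+.2f\n",
	       written, freed, freed - written);
	return 0;
}

That prints a net gain of +0.10 segments per pass in the naive case and
a net of -0.80 once the metadata writes are counted.  As soon as the
net goes negative, every cleaner pass consumes spare segments instead
of producing them, and you hit the deadlock described above.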

And if you read the documentation of the original Sprite LFS or any of
the newer log-structured filesystems, you usually won't see a
solution to this problem, or even an acknowledgement that the problem
exists in the first place.  But there is no shortage of log-structured
filesystem projects that were abandoned years ago and have "cleaner" or
"garbage collector" as their top item on the todo-list.  Coincidence?


(1) GC may also kick in earlier, but that is just an optimization and
doesn't change the worst case, so that bit is irrelevant here.


Btw, the deadlock problem is solvable and I definitely don't want to
discourage further work in this area.  DualFS does look interesting.
But my solution for this problem will likely eat up all the performance
DualFS has gained and more, as it isn't aimed at hard disks.  So someone
has to come up with a different idea.

Jörn

-- 
To recognize individual spam features you have to try to get into the
mind of the spammer, and frankly I want to spend as little time inside
the minds of spammers as possible.
-- Paul Graham