Message-ID: <op.tnwuonh3rwwil4@muki-1mj5c5mxes>
Date:	Sat, 17 Feb 2007 15:47:01 -0500
From:	"Sorin Faibish" <sfaibish@....com>
To:	Jörn Engel <joern@...ybastard.org>,
	"Bill Davidsen" <davidsen@....com>
Cc:	"Juan Piernas Canovas" <piernas@...ec.um.es>,
	"Jan Engelhardt" <jengelh@...ux01.gwdg.de>,
	"kernel list" <linux-kernel@...r.kernel.org>
Subject: Re: [ANNOUNCE] DualFS: File System with Meta-data and Data Separation

On Sat, 17 Feb 2007 13:36:46 -0500, Jörn Engel <joern@...ybastard.org>  
wrote:

> On Sat, 17 February 2007 13:10:23 -0500, Bill Davidsen wrote:
>> >
>> I missed that. Which corner case did you find triggers this in DualFS?
>
> This is not specific to DualFS, it applies to any log-structured
> filesystem.
>
> Garbage collection always needs at least one spare segment to collect
> valid data into.  Regular writes may require additional free segments,
> so GC has to kick in and free those when space is getting tight.  (1)
>
> GC frees segments by writing all the valid data in them into the spare
> segment.  If there is remaining space in the spare segment, GC can move
> more data from further segments.  Nice and simple.
>
> The requirement is that GC *always* frees more segments than it uses up
> doing so.  If that requirement is not fulfilled, GC will simply use up
> its last spare segment without freeing a new one.  We have a deadlock.
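
A minimal sketch of that invariant (all numbers hypothetical, not taken
from any real filesystem): if each cleaning pass writes more than one
segment's worth per segment it reclaims, the spare pool can only shrink.

/* Toy model of the GC spare-segment invariant.  The 1.8 cleaning cost
 * (segments written per segment reclaimed, live data plus metadata
 * updates) is a hypothetical figure, not a measured DualFS number. */
#include <stdio.h>

int main(void)
{
        int spare = 2;          /* free segments GC can copy into */
        double cost = 1.8;      /* segments written per segment freed */
        double debt = 0.0;      /* pending writes, in segments */

        while (spare > 0) {
                debt += cost;           /* copy live data, update tree */
                spare++;                /* the victim segment is free now */
                while (debt >= 1.0) {   /* writes consume spare segments */
                        debt -= 1.0;
                        spare--;
                }
                printf("spare segments left: %d\n", spare);
        }
        printf("deadlock: GC used up its last spare segment\n");
        return 0;
}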
>
> Now imagine your filesystem is 90% full and all data is spread perfectly
> across all segments.  The best segment you could pick for GC is 90%
> full.  One would imagine that GC would only need to copy those 90% into
> a spare segment and have freed 100%, making overall progress.
>
> But most log-structured filesystems maintain a tree of some sort on the
> medium.  If you move data elsewhere, you also need to update the
> indirect block pointing to it.  So that has to get written as well.  If
> you have doubly or triply indirect blocks, those need to get written.
> So you can end up writing 180% or more to free 100%.  Deadlock.
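
To make that arithmetic concrete (assuming, hypothetically, one
indirect-block write per moved data block): cleaning a segment that is
90% live copies 0.9 segments of data and can trigger another 0.9
segments of indirect-block updates, i.e. 1.8 segments written to
reclaim 1.0, a net loss of 0.8 spare segments per cleaning pass.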
>
> And if you read the documentation of the original Sprite LFS or any
> of the newer log-structured filesystems, you usually won't see a
> solution to this problem, or even an acknowledgement that the problem
> exists in the first place.  But there is no shortage of log-structured
> filesystem projects that were abandoned years ago and have "cleaner" or
> "garbage collector" as their top item on the todo-list.  Coincidence?
>
>
> (1) GC may also kick in earlier, but that is just an optimization and
> doesn't change the worst case, so that bit is irrelevant here.
>
>
> Btw, the deadlock problem is solvable and I definitely don't want to
> discourage further work in this area.  DualFS does look interesting.
> But my solution for this problem will likely eat up all the performance
> DualFS has gained and more, as it isn't aimed at hard disks.  So someone
> has to come up with a different idea.
DualFS can probably get around this corner case, since it is up to the
user to select the size of the MD device.  If you want to prevent this
corner case you can always use an MD device larger than 10% of the data
device, which is exaggerated for any FS unless the directories are very
large (i.e. when you have billions of files with long names).
In general the problem you mention is mainly due to data blocks filling
the file system.  In DualFS's case you have the choice of selecting
different sizes for the MD and data volumes.  When the data volume gets
full the GC will have a problem, but the MD device will not.
It is my understanding that most of the GC problem you mention comes
from filling the FS with data, the result being an MD operation
disrupted by data blocks filling the FS.  As for the performance impact
of solving this problem, which, as you mentioned, all journaling FSs
share, I am sure that DualFS's impact will be smaller than the others',
if only because it uses one MD write instead of two.
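
As a back-of-the-envelope illustration of that sizing argument (the
numbers below are hypothetical, not DualFS defaults):

/* Hypothetical metadata-device sizing sketch; the ratios are made up
 * for illustration and are not DualFS defaults or tooling. */
#include <stdio.h>

int main(void)
{
        double data_gb = 500.0;         /* size of the data device */
        double md_ratio = 0.10;         /* MD device as fraction of data */
        double util_cap = 0.90;         /* max live fraction of MD log */

        double md_gb = data_gb * md_ratio;
        printf("MD device: %.0f GB; keeping live metadata under %.0f GB\n"
               "leaves the MD log's cleaner spare segments even when the\n"
               "data volume fills up\n", md_gb, md_gb * util_cap);
        return 0;
}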

>
> Jörn
>



-- 
Best Regards
  Sorin Faibish
Senior Technologist
Senior Consulting Software Engineer Network Storage Group

        EMC²            where information lives

Phone: 508-435-1000 x 48545
Cellphone: 617-510-0422
Email : sfaibish@....com

