Message-ID: <Pine.LNX.4.61.0702210507500.882@ditec.inf.um.es>
Date:	Wed, 21 Feb 2007 05:36:22 +0100 (CET)
From:	Juan Piernas Canovas <piernas@...ec.um.es>
To:	Jörn Engel <joern@...ybastard.org>
Cc:	Sorin Faibish <sfaibish@....com>,
	kernel list <linux-kernel@...r.kernel.org>
Subject: Re: [ANNOUNCE] DualFS: File System with Meta-data and Data Separation

Hi Jörn,

On Tue, 20 Feb 2007, Jörn Engel wrote:

> On Tue, 20 February 2007 00:57:50 +0100, Juan Piernas Canovas wrote:
>>
>> Actually, the GC may become a problem when the number of free segments is
>> 50% or less. If your LFS always guarantees, at least, 50% of free
>> "segments" (note that I am talking about segments, not free space), the
>> deadlock problem disappears, right? This is a quite naive solution, but it
>> works.
>
> I don't see how you can guarantee 50% free segments.  Can you explain
> that bit?
It is quite simple. If 50% of your segments are busy, the other 50% are 
free, and the file system needs a new segment, the cleaner starts freeing 
some of the busy ones. If the cleaner is unable to free at least one 
segment, your file system gets "full" (and it returns a nice ENOSPC 
error). This solution wastes half of your storage device, but it is 
deadlock-free. Obviously, there are better approaches.
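
To make that concrete, here is a minimal, self-contained sketch (my own 
illustration, not DualFS code; the names seg_fs, clean_one_segment and 
alloc_segment are made up) of an allocator that never lets the number of 
free segments drop below 50% and reports ENOSPC once the cleaner can make 
no progress:

#include <errno.h>
#include <stdio.h>

/* Toy model of the 50%-free-segments policy; all names are hypothetical. */
struct seg_fs {
        int total_segments;
        int free_segments;
        int cleanable_segments; /* busy segments the cleaner can still reclaim */
};

/* Pretend cleaner: reclaims one busy segment, if any can be reclaimed. */
static int clean_one_segment(struct seg_fs *fs)
{
        if (fs->cleanable_segments == 0)
                return -1;              /* no progress possible */
        fs->cleanable_segments--;
        fs->free_segments++;
        return 0;
}

/* Allocate a segment, but never let free segments fall below 50%. */
static int alloc_segment(struct seg_fs *fs)
{
        while (fs->free_segments <= fs->total_segments / 2) {
                if (clean_one_segment(fs) < 0)
                        return -ENOSPC; /* the file system is "full" */
        }
        fs->free_segments--;
        return 0;
}

int main(void)
{
        struct seg_fs fs = { 100, 60, 5 };

        while (alloc_segment(&fs) == 0)
                ;
        printf("ENOSPC with %d of %d segments still free\n",
               fs.free_segments, fs.total_segments);
        return 0;
}

Running it stops with 50 of the 100 segments still free, which is exactly 
the wasteful-but-deadlock-free guarantee described above.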

>
>> In a traditional LFS, with data and meta-data blocks, 50% of free segments
>> represents a huge amount of wasted disk space. But, in DualFS, 50% of free
>> segments in the meta-data device is not too much. In a typical Ext2,
>> or Ext3 file system, there are 20 data blocks for every meta-data block
>> (that is, meta-data blocks are 5% of the disk blocks used by files).
>> Since files are implemented in DualFS in the same way, we can suppose the
>> same ratio for DualFS (1).
>
> This will work fairly well for most people.  It is possible to construct
> metadata-heavy workloads, however.  Many large directories containing
> symlinks or special files (char/block devices, sockets, fifos,
> whiteouts) come to mind.  Most likely none of your users will ever want
> that, but a malicious attacker might.
>
Quotas, a bigger meta-data device, a cleverer cleaner, ... there are 
solutions :)
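
Just to put a rough, back-of-the-envelope figure on it (my own numbers, 
assuming the meta-data device is sized at about 5% of the total storage, 
as the 1:20 ratio above suggests):

        meta-data device      ~ 0.05 x total storage
        50% reserve of it     ~ 0.50 x 0.05 = 0.025 of total storage

that is, the 50% free-segment guarantee costs about 2.5% of the total 
storage in DualFS, not half of it.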

>> The point of all the above is that you must improve the common case, and
>> manage the worst case correctly. And that is the idea behind DualFS :)
>
> A fine principle to work with.  Surprisingly, what is the worst case for
> you is the common case for LogFS, so maybe I'm more interested in it
> than most people.  Or maybe I'm just more paranoid.
>

No, you are right. It is the common case for LogFS because it has data and 
meta-data blocks in the same address space, but that is not the case for 
DualFS. Anyway, I'm very interested in your work, because any solution to 
the GC problem will also be applicable to DualFS. So, keep it up. ;-)

 	Juan.
-- 
D. Juan Piernas Cánovas
Departamento de Ingeniería y Tecnología de Computadores
Facultad de Informática. Universidad de Murcia
Campus de Espinardo - 30080 Murcia (SPAIN)
Tel.: +34968367657    Fax: +34968364151
email: piernas@...ec.um.es
PGP public key:
http://pgp.rediris.es:11371/pks/lookup?search=piernas%40ditec.um.es&op=index

*** Please send me your documents in plain text, HTML, PDF or PostScript format :-) ***
