Date:	Tue, 19 Jun 2012 16:37:07 +0200 (CEST)
From:	Lukáš Czerner <lczerner@...hat.com>
To:	"Ted Ts'o" <tytso@....edu>
cc:	Lukáš Czerner <lczerner@...hat.com>,
	Spelic <spelic@...ftmail.org>, xfs@....sgi.com,
	linux-ext4@...r.kernel.org,
	device-mapper development <dm-devel@...hat.com>
Subject: Re: Ext4 and xfs problems in dm-thin on allocation and discard

On Tue, 19 Jun 2012, Ted Ts'o wrote:

> Date: Tue, 19 Jun 2012 10:19:33 -0400
> From: Ted Ts'o <tytso@....edu>
> To: Lukáš Czerner <lczerner@...hat.com>
> Cc: Spelic <spelic@...ftmail.org>, xfs@....sgi.com,
>     linux-ext4@...r.kernel.org,
>     device-mapper development <dm-devel@...hat.com>
> Subject: Re: Ext4 and xfs problems in dm-thin on allocation and discard
> 
> On Tue, Jun 19, 2012 at 04:09:48PM +0200, Lukáš Czerner wrote:
> > 
> > With thin provisioning you'll get a totally different file system
> > layout than on a fully provisioned disk as you push more and more
> > writes to the drive. This unfortunately has a great impact on
> > performance, since file systems have a lot of optimizations for
> > where to put data/metadata on the drive and how to read them back.
> > In the case of thinly provisioned storage those optimizations
> > do not help. And yes, you just have to expect lower performance
> > from the file system on top of dm-thin. It is not, and never will
> > be, an ideal solution for workloads where you expect the best
> > performance.
> 
> One of the things which would be nice to be able to easily set up is a
> configuration where we get the benefits of thin provisioning with
> respect to snapshots, but where the underlying block device used by
> the file system is contiguous.  That is, it would be really useful to
> *not* use thin provisioning for the underlying file system, but to use
> thin provisioned snapshots.  That way we only pay the thinp
> performance penalty for the snapshots, and not for normal file system
> operations.  This is something that would be very useful for both
> ext4 and xfs.
> 
> I talked to Alasdair about this a few months ago at the Collab Summit,
> and I think it's doable today, but it was somewhat complicated to set
> up.  I don't recall the details now, but perhaps someone who's more
> familiar with device mapper could outline the details, and perhaps we can
> either simplify it or abstract it away in a convenient front-end
> script?

Like ssm, for example? :)

Yes, this would definitely help, and I think there are actually more
possible optimizations like this.

If we "cripple" the dm-thin so that only snapshot feature is
provided, but the actual thinp feature is not used. It would
definitely help the performance for those who are only interested in
snapshots. You'll still have your file system layout mixed up once
you start using snapshot, but it'll be definitely better. Also some
king of fs/dm interface for optimizing the layout might helpful as
well.
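
For illustration, a rough sketch of how the snapshot-only arrangement
might be wired up with dmsetup, assuming the thin target's external
origin support (see Documentation/device-mapper/thin-provisioning.txt);
the device names and sizes below are made up:

  # fully provisioned base volume; the fs on it keeps its normal layout
  dmsetup create base --table "0 209715200 linear /dev/sdb 0"

  # thin pool used only for snapshots: metadata dev, data dev,
  # data block size (sectors), low water mark (blocks)
  dmsetup create pool --table "0 41943040 thin-pool /dev/sdc1 /dev/sdc2 128 32768"

  # create a thin device (id 0) and activate it with the base volume
  # as its external origin, i.e. a thin snapshot of a non-thin volume
  dmsetup message /dev/mapper/pool 0 "create_thin 0"
  dmsetup create snap --table "0 209715200 thin /dev/mapper/pool 0 /dev/mapper/base"

Reads of unprovisioned blocks fall through to the external origin, and
only blocks written to the snapshot consume pool space. The catch is
that the thin target treats the external origin as read-only, so the
writable-origin setup Ted describes would still need more plumbing.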

The other thing that could be done is to keep the thinp feature
available, but try to keep file systems on the dm-thin relatively
separate and contiguous (although probably not across their entire
size). It would certainly work only up to some thin pool utilization
threshold, but it is something. Also, if we could add some fs-side
optimization to avoid spanning the entire file system and utilize
smaller parts of it first (alter the block allocator so that it does
not allocate blocks from random groups across the entire fs, but
rather works with a smaller block group working set at the start),
that could be even more useful.
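
As a crude approximation of that from userspace, one could create the
file system smaller than the thin device and grow it online as it
fills, so the allocator only ever has a small block group working set
(sizes here are just an example):

  # make a 10G fs (4k blocks) on a much larger thin device
  mkfs.ext4 -b 4096 /dev/mapper/thin 2621440
  mount /dev/mapper/thin /mnt

  # later, when it gets close to full, grow it online in steps
  resize2fs /dev/mapper/thin 20G

This keeps allocations packed into the low block groups instead of
spread across the whole device, at the cost of having to monitor and
resize.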

-Lukas

> 
> 						- Ted
> 
