Date:	Mon, 2 Jun 2008 20:41:38 +0100
From:	Alan Cox <alan@...rguk.ukuu.org.uk>
To:	"Xiaoming Li" <forrubm2@...il.com>
Cc:	"Rik van Riel" <riel@...hat.com>,
	"Dave Chinner" <david@...morbit.com>,
	"Christoph Hellwig" <hch@...radead.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [help]How to block new write in a "Thin Provisioning" logical
 volume manager as a virtual device driver when physical spaces run out?

> However, I want to ask, is there _any need_ to implement "Thin
> Provisioning" at the block level rather than the FS level?

A good question.

> In my opinion, there are some reasons why we implemented "Thin
> Provisioning" at the block level:
> 1. We can use _all_ types of FS on our ASD device.

Except that you can run out of space and die.

> 2. In our current system, we use some other virtual device drivers to
> provide other features, like snapshots, cache management, exporting as
> an iSCSI target in a SAN, etc. Please note, all of these virtual
> device drivers have been developed already.

A cluster file system can do cache management and, in theory, snapshots.
The iSCSI target is a block-level property - the equivalent at the fs layer
would, I guess, be NFS. Most of those have been developed too ;)

> 3. Some storage vendors (e.g. EMC) have their own "block-based thin
> provisioning" product; they must have their reasons for doing so.

Some storage vendors do the most marvellously bizarre things. That
doesn't mean they are the right answers. EMC don't/didn't have a cluster
file system, so that rather limited their choice.

I think you missed one, however, and maybe one EMC considered - it's a
much easier way to do cross-platform, non-shared filestore as a device
than to add clustering file systems to do that.

However, if you overcommit you have a problem. It's interesting as a
front-end technology with an array of large, slow disks behind it (so you
don't overcommit but push old storage to the slow disks). I don't think
it's interesting in the general case except where you can carefully avoid
overcommit by management policies.
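To make the failure mode concrete, here is a rough userspace sketch - an
illustration of the idea only, not how any real device-mapper target or your
ASD driver is written, and the pool/volume sizes are made-up numbers. Each
virtual volume gets backing space on first write from a shared, overcommitted
pool; once the pool runs dry, further writes fail even though every
filesystem on top still reports plenty of free space:

/*
 * Userspace illustration of thin-provisioning overcommit (not a driver).
 * Ten 1024-extent virtual volumes share a 4096-extent physical pool,
 * i.e. 2.5x overcommitted.  A write allocates a physical extent on first
 * touch and fails once the pool is exhausted.
 */
#include <stdio.h>
#include <errno.h>

#define NVOLS     10
#define VEXTENTS  1024      /* advertised size of each virtual volume */
#define PEXTENTS  4096      /* real extents in the backing pool       */

static int map[NVOLS][VEXTENTS];   /* -1 = not yet backed by real space */
static int pool_used;

/* Back a virtual extent with a physical one on first write. */
static int thin_write(int vol, int vext)
{
	if (map[vol][vext] >= 0)
		return 0;               /* already mapped, overwrite is fine */
	if (pool_used >= PEXTENTS)
		return -ENOSPC;         /* pool exhausted: the write "dies"  */
	map[vol][vext] = pool_used++;
	return 0;
}

int main(void)
{
	int vol, vext, err;

	for (vol = 0; vol < NVOLS; vol++)
		for (vext = 0; vext < VEXTENTS; vext++)
			map[vol][vext] = -1;

	/* Fill the volumes round-robin until the shared pool gives out. */
	for (vext = 0; vext < VEXTENTS; vext++) {
		for (vol = 0; vol < NVOLS; vol++) {
			err = thin_write(vol, vext);
			if (err) {
				printf("write to vol %d extent %d failed (%d); "
				       "the fs still sees %d free extents\n",
				       vol, vext, err, VEXTENTS - vext);
				return 1;
			}
		}
	}
	return 0;
}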

It's also not helped by the fact that your storage layer needs to
understand the filesystems it supports in order to deduce which blocks are
free so that it can recover them.
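As a taste of what that means, here is a minimal userspace sketch that reads
the ext2/ext3 superblock straight off a block device to recover the
filesystem's own free-block count. It only skims the surface: a real
block-level implementation would also have to walk the per-group block
bitmaps to learn *which* blocks are free, and repeat the whole exercise for
every other filesystem format it claims to support. (Field offsets are those
of the on-disk ext2 superblock; the naive memcpy assumes a little-endian
host, since the on-disk fields are little-endian.)

/*
 * Illustration only: peek at an ext2/ext3 superblock from userspace to
 * recover the fs's own idea of free space.  A block-level thin
 * provisioning layer would need this kind of fs-specific knowledge for
 * every filesystem it supports in order to reclaim unused blocks.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

#define EXT2_SB_OFFSET  1024    /* superblock starts 1 KiB into the device */
#define EXT2_MAGIC      0xEF53

int main(int argc, char **argv)
{
	unsigned char sb[1024];
	uint32_t blocks, free_blocks, log_bsize;
	uint16_t magic;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <block device>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || pread(fd, sb, sizeof(sb), EXT2_SB_OFFSET) != sizeof(sb)) {
		perror(argv[1]);
		return 1;
	}
	close(fd);

	memcpy(&magic, sb + 56, sizeof(magic));              /* s_magic             */
	if (magic != EXT2_MAGIC) {
		fprintf(stderr, "not ext2/ext3 - can't deduce free blocks\n");
		return 1;
	}
	memcpy(&blocks,      sb + 4,  sizeof(blocks));       /* s_blocks_count      */
	memcpy(&free_blocks, sb + 12, sizeof(free_blocks));  /* s_free_blocks_count */
	memcpy(&log_bsize,   sb + 24, sizeof(log_bsize));    /* s_log_block_size    */

	printf("%u of %u blocks free (block size %u bytes)\n",
	       free_blocks, blocks, 1024u << log_bsize);
	return 0;
}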

Alan
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
