Message-ID: <20090816165943.GA26983@infradead.org>
Date: Sun, 16 Aug 2009 12:59:43 -0400
From: Christoph Hellwig <hch@...radead.org>
To: James Bottomley <James.Bottomley@...e.de>
Cc: Arjan van de Ven <arjan@...radead.org>,
Alan Cox <alan@...rguk.ukuu.org.uk>, Mark Lord <liml@....ca>,
Chris Worley <worleys@...il.com>,
Matthew Wilcox <matthew@....cx>,
Bryan Donlan <bdonlan@...il.com>, david@...g.hm,
Greg Freemyer <greg.freemyer@...il.com>,
Markus Trippelsdorf <markus@...ppelsdorf.de>,
Matthew Wilcox <willy@...ux.intel.com>,
Hugh Dickins <hugh.dickins@...cali.co.uk>,
Nitin Gupta <ngupta@...are.org>, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-scsi@...r.kernel.org, linux-ide@...r.kernel.org,
Linux RAID <linux-raid@...r.kernel.org>
Subject: Re: Discard support (was Re: [PATCH] swap: send callback when swap
slot is freed)
On Sun, Aug 16, 2009 at 10:52:07AM -0500, James Bottomley wrote:
> However, the enterprise has been doing UNMAP for a while, so we can draw
> inferences from them since the SSD FTL will operate similarly. For
> them, UNMAP is the same cost in terms of time regardless of the number
> of extents. The reason is that it's moving the blocks from the global
> in use list to the global free list. Part of the problem is that this
> involves locking and quiescing, so UNMAP ends up being quite expensive
> to the array but constant in terms of cost (hence they want as few
> unmaps for as many sectors as possible).
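For concreteness, the batching they are after maps directly onto the
SBC-3 wire format: a single UNMAP command carries a parameter list
whose 8-byte header is followed by one 16-byte block descriptor per
extent, so many extents can ride in one command.  A minimal userspace
sketch of packing such a list follows; the extent struct and helper
names are illustrative, not taken from any existing tool:

/* Minimal sketch: pack freed extents into one SBC-3 UNMAP parameter
 * list.  Header: 2-byte UNMAP data length, 2-byte block descriptor
 * data length, 4 reserved bytes.  Each descriptor: 8-byte starting
 * LBA, 4-byte block count, 4 reserved bytes.  All fields big-endian.
 */
#include <stdint.h>
#include <string.h>

struct extent {			/* illustrative, not a kernel type */
	uint64_t lba;
	uint32_t nblocks;
};

static void put_be16(uint8_t *p, uint16_t v)
{
	p[0] = v >> 8; p[1] = v;
}

static void put_be32(uint8_t *p, uint32_t v)
{
	p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
}

static void put_be64(uint8_t *p, uint64_t v)
{
	put_be32(p, v >> 32);
	put_be32(p + 4, v);
}

/* Returns the total parameter list length to place in the CDB,
 * or 0 if the buffer is too small. */
static size_t build_unmap_param_list(uint8_t *buf, size_t bufsize,
				     const struct extent *ext, int n)
{
	size_t desc_len = 16 * (size_t)n;
	int i;

	if (8 + desc_len > bufsize)
		return 0;
	memset(buf, 0, 8 + desc_len);
	put_be16(buf, 6 + desc_len);	/* UNMAP data length */
	put_be16(buf + 2, desc_len);	/* block descriptor data length */
	for (i = 0; i < n; i++) {
		put_be64(buf + 8 + 16 * i, ext[i].lba);
		put_be32(buf + 8 + 16 * i + 8, ext[i].nblocks);
	}
	return 8 + desc_len;
}

Whether the array really completes this in constant time regardless of
the descriptor count is exactly the behaviour worth measuring.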
How are they doing the unmaps?  Something similar to Mark's wiper
script, issuing the commands through SG_IO?  Because right now we do
not actually implement UNMAP support in the kernel.  I'd really love
to test the XFS batched discard support against a real UNMAP
implementation.
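Until the kernel grows native support, one way to poke at a device
from userspace is the SG_IO ioctl, roughly analogous to what the wiper
script does for ATA TRIM.  A hedged sketch, with a made-up device path
and LBA, and obviously destructive if pointed at real data:

/* Sketch: issue a single-descriptor UNMAP through the SG_IO ioctl.
 * /dev/sg1 and the LBA/length are placeholders; running this against
 * a real device discards data. */
#include <fcntl.h>
#include <scsi/sg.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	uint8_t cdb[10] = { 0x42 };	/* UNMAP opcode, SBC-3 */
	uint8_t param[8 + 16] = { 0 };	/* header + one descriptor */
	uint8_t sense[32];
	struct sg_io_hdr io;
	int fd;

	param[1] = 6 + 16;		/* UNMAP data length */
	param[3] = 16;			/* block descriptor data length */
	param[8 + 6] = 0x10;		/* starting LBA 0x1000, */
	param[8 + 7] = 0x00;		/* big-endian low bytes */
	param[8 + 10] = 0x08;		/* 0x800 = 2048 blocks */
	cdb[8] = sizeof(param);		/* parameter list length */

	fd = open("/dev/sg1", O_RDWR);	/* placeholder device */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&io, 0, sizeof(io));
	io.interface_id = 'S';
	io.dxfer_direction = SG_DXFER_TO_DEV;
	io.cmd_len = sizeof(cdb);
	io.cmdp = cdb;
	io.dxferp = param;
	io.dxfer_len = sizeof(param);
	io.sbp = sense;
	io.mx_sb_len = sizeof(sense);
	io.timeout = 20000;		/* milliseconds */

	if (ioctl(fd, SG_IO, &io) < 0)
		perror("SG_IO");
	else
		printf("UNMAP scsi status 0x%x\n", io.status);
	close(fd);
	return 0;
}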