Message-ID: <alpine.DEB.1.10.0908130931400.28013@asgard.lang.hm>
Date: Thu, 13 Aug 2009 09:33:39 -0700 (PDT)
From: david@...g.hm
To: Markus Trippelsdorf <markus@...ppelsdorf.de>
cc: Matthew Wilcox <willy@...ux.intel.com>,
Hugh Dickins <hugh.dickins@...cali.co.uk>,
Nitin Gupta <ngupta@...are.org>, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-scsi@...r.kernel.org, linux-ide@...r.kernel.org
Subject: Re: Discard support (was Re: [PATCH] swap: send callback when swap
slot is freed)
On Thu, 13 Aug 2009, Markus Trippelsdorf wrote:
> On Thu, Aug 13, 2009 at 08:13:12AM -0700, Matthew Wilcox wrote:
>> I am planning a complete overhaul of the discard work. Users can send
>> down discard requests as frequently as they like. The block layer will
>> cache them, and invalidate them if writes come through. Periodically,
>> the block layer will send down a TRIM or an UNMAP (depending on the
>> underlying device) and get rid of the blocks that have remained unwanted
>> in the interim.
>
> That is a very good idea. I tested your original TRIM implementation on
> my Vertex yesterday, and it was awful ;-). The SSD needs hundreds of
> milliseconds to digest a single TRIM command. And since your implementation
> sends a TRIM for each extent of each deleted file, the whole system is
> unusable after a short while.
> An optimal solution would be to consolidate the discard requests, bundle
> them, and send them to the drive as infrequently as possible.
Or queue them up and send them when the drive is idle (you would need to
keep track of the queued ranges to make sure the space isn't re-used in
the meantime).

As an example, at any point where you would consider spinning down a
drive, you could send the accumulated TRIM commands without hurting
performance.
David Lang