Message-ID: <4A86B605.5060701@rtr.ca>
Date: Sat, 15 Aug 2009 09:20:05 -0400
From: Mark Lord <liml@....ca>
To: Chris Worley <worleys@...il.com>
Cc: Greg Freemyer <greg.freemyer@...il.com>,
Matthew Wilcox <matthew@....cx>,
Bryan Donlan <bdonlan@...il.com>, david@...g.hm,
Markus Trippelsdorf <markus@...ppelsdorf.de>,
Matthew Wilcox <willy@...ux.intel.com>,
Hugh Dickins <hugh.dickins@...cali.co.uk>,
Nitin Gupta <ngupta@...are.org>, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-scsi@...r.kernel.org, linux-ide@...r.kernel.org,
Linux RAID <linux-raid@...r.kernel.org>
Subject: Re: Discard support (was Re: [PATCH] swap: send callback when swap
slot is freed)
Chris Worley wrote:
..
> So erase blocks are 512 bytes (if I write 512 bytes, an erase block is
> now freed)? Not true.
..
No, erase blocks are typically 512 KILO-bytes, or 1024 sectors.
Logical write blocks are only 512 bytes, but most drives out there
now actually use 4096 bytes as the native internal write size.
Lots of issues there.
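For concreteness, here's that size arithmetic as a trivial C program
(these are typical values I'm assuming, not something every drive guarantees):

#include <stdio.h>

int main(void)
{
	const unsigned long sector_size      = 512;        /* logical write block */
	const unsigned long native_write     = 4096;       /* typical internal write size */
	const unsigned long erase_block_size = 512 * 1024; /* typical erase block */

	printf("sectors per erase block:       %lu\n", erase_block_size / sector_size);
	printf("sectors per native write:      %lu\n", native_write / sector_size);
	printf("native writes per erase block: %lu\n", erase_block_size / native_write);
	return 0;
}

So freeing a single 512-byte sector says nothing about whether the 512 KB
erase block containing it can actually be erased.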
The only existing "in the wild" TRIM-capable SSDs today all incur
large overheads from TRIM --> they seem to run a garbage-collection
and erase cycle for each TRIM command, typically taking 100s of milliseconds
regardless of the amount being trimmed.
So it makes sense to gather small TRIMs into single, larger TRIMs.
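Something along these lines, purely as a sketch (not what any driver
actually does): collect the pending ranges, sort them by start sector,
and merge ranges that touch or overlap before sending them down as TRIMs:

#include <stdio.h>
#include <stdlib.h>

struct trim_range {
	unsigned long long sector;   /* start LBA */
	unsigned long long nsectors; /* length in sectors */
};

static int cmp_range(const void *a, const void *b)
{
	const struct trim_range *x = a, *y = b;

	if (x->sector < y->sector) return -1;
	if (x->sector > y->sector) return 1;
	return 0;
}

/* Merge adjacent/overlapping ranges in place; returns the new count. */
static size_t coalesce(struct trim_range *r, size_t n)
{
	size_t i, out = 0;

	if (n == 0)
		return 0;
	qsort(r, n, sizeof(*r), cmp_range);
	for (i = 1; i < n; i++) {
		struct trim_range *last = &r[out];

		if (r[i].sector <= last->sector + last->nsectors) {
			unsigned long long end = r[i].sector + r[i].nsectors;

			if (end > last->sector + last->nsectors)
				last->nsectors = end - last->sector;
		} else {
			r[++out] = r[i];
		}
	}
	return out + 1;
}

int main(void)
{
	/* made-up example ranges */
	struct trim_range r[] = { {0, 8}, {8, 8}, {100, 8}, {104, 16} };
	size_t i, n = coalesce(r, 4);

	for (i = 0; i < n; i++)
		printf("TRIM %llu..%llu\n", r[i].sector,
		       r[i].sector + r[i].nsectors - 1);
	return 0;
}

On that made-up set of ranges it turns four small TRIMs into two larger ones.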
But I think even better is to just not bother with the bookkeeping,
and instead have the filesystem periodically issue a TRIM for all
free blocks within a block group, cycling through the block groups
one by one over time.
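A rough userspace sketch of that idea, assuming we already have the
free-extent list for one block group from the filesystem (how we get
that list is hand-waved here); it just issues BLKDISCARD for each free
extent of the current group:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>		/* BLKDISCARD */

struct free_extent {
	uint64_t offset;	/* bytes from start of device */
	uint64_t length;	/* bytes */
};

/* Discard every free extent of one block group, one ioctl per extent. */
static int trim_block_group(int fd, const struct free_extent *ext, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++) {
		uint64_t range[2] = { ext[i].offset, ext[i].length };

		if (ioctl(fd, BLKDISCARD, &range) < 0) {
			perror("BLKDISCARD");
			return -1;
		}
	}
	return 0;
}

int main(int argc, char **argv)
{
	/* WARNING: BLKDISCARD irreversibly discards data -- only point this
	 * at a device whose contents you do not care about.
	 * Made-up extent standing in for "free blocks of one block group". */
	struct free_extent example[] = { { 1024ULL * 1024, 512ULL * 1024 } };
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <block-device>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDWR);
	if (fd < 0) {
		perror(argv[1]);
		return 1;
	}
	trim_block_group(fd, example, 1);
	close(fd);
	return 0;
}

The sweep would then move on to the next block group on the next pass,
so no persistent per-extent bookkeeping is needed.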
That's how I'd like it to work on my own machine here.
Server/enterprise users very likely want something different.
Pluggable architecture, anyone? :)
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/