Message-ID: <4A883D21.5020209@gmail.com>
Date: Sun, 16 Aug 2009 11:08:49 -0600
From: Robert Hancock <hancockrwd@...il.com>
To: jim owens <jowens@...com>
CC: James Bottomley <James.Bottomley@...e.de>, Mark Lord <liml@....ca>,
Chris Worley <worleys@...il.com>,
Matthew Wilcox <matthew@....cx>,
Bryan Donlan <bdonlan@...il.com>, david@...g.hm,
Greg Freemyer <greg.freemyer@...il.com>,
Markus Trippelsdorf <markus@...ppelsdorf.de>,
Matthew Wilcox <willy@...ux.intel.com>,
Hugh Dickins <hugh.dickins@...cali.co.uk>,
Nitin Gupta <ngupta@...are.org>, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-scsi@...r.kernel.org, linux-ide@...r.kernel.org,
Linux RAID <linux-raid@...r.kernel.org>
Subject: Re: Discard support (was Re: [PATCH] swap: send callback when swap
slot is freed)
On 08/15/2009 11:39 AM, jim owens wrote:
> ***begin rant***
>
> I have not seen any analysis of the benefit and cost to the
> end user of the TRIM or array UNMAP. We now see that TRIM
> as implemented by some (all?) SSDs will come at a high cost.
> The cost is all borne by the host. Do we get any benefit, or
> is it all for the device vendor? And when we subtract the cost
> from the benefit, does the user actually benefit and how?
>
> I'm tired of working around shit storage products and broken
> device protocols from the "T" committees. I suggest we just
> add a "white list" of devices that handle the discard fast
> and without us needing NCQ queue drain. Then only send TRIM
> to devices that are on the white list and throw the others
> away in the block device layer.
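
For concreteness, the whitelist check jim describes might look roughly
like the sketch below. This is purely illustrative userspace C, not
existing block-layer code; the function name, table, and model strings
are all made up.

/*
 * Hypothetical sketch of the whitelist idea quoted above: only send
 * TRIM to devices whose model string is known to handle it quickly,
 * and drop the discard in the block layer for everything else.
 * None of these names exist in the kernel; the model strings are
 * placeholders.
 */
#include <stdio.h>
#include <string.h>

static const char *trim_whitelist[] = {
	"ExampleSSD Model A",	/* placeholder, not a real product */
	"ExampleSSD Model B",
};

/* Return 1 if the device model is whitelisted for TRIM, else 0. */
static int trim_whitelisted(const char *model)
{
	size_t i;

	for (i = 0; i < sizeof(trim_whitelist) / sizeof(trim_whitelist[0]); i++)
		if (strcmp(model, trim_whitelist[i]) == 0)
			return 1;
	return 0;
}

int main(void)
{
	const char *model = "SomeOtherSSD";

	if (trim_whitelisted(model))
		printf("%s: send TRIM to the device\n", model);
	else
		printf("%s: drop the discard in the block layer\n", model);
	return 0;
}
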
They will all require an NCQ queue drain. It's an inherent requirement
of the protocol that NCQ and non-NCQ commands cannot be overlapped, and
the TRIM command is not NCQ.
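
To make that rule concrete, here is a toy model of when a command may
issue on a port. The names are hypothetical and the logic deliberately
simplified; this is not what libata actually does, just the protocol
constraint in miniature.

/*
 * NCQ and non-NCQ commands may not be outstanding on a port at the
 * same time, so a non-NCQ command such as TRIM can only issue once
 * every queued command has completed (the "queue drain").
 */
#include <stdio.h>

struct port {
	int ncq_active;		/* NCQ commands currently in flight */
	int non_ncq_active;	/* non-NCQ commands in flight (0 or 1) */
};

enum cmd_type { CMD_NCQ, CMD_NON_NCQ };

/* Return 1 if a command of this type may issue now, 0 if it must wait. */
static int can_issue(const struct port *p, enum cmd_type t)
{
	if (t == CMD_NON_NCQ)
		/* TRIM etc. must wait for the NCQ queue to drain and
		 * for any other non-queued command to finish */
		return p->ncq_active == 0 && p->non_ncq_active == 0;
	/* NCQ may overlap other NCQ commands, but not a non-NCQ one */
	return p->non_ncq_active == 0;
}

int main(void)
{
	struct port p = { .ncq_active = 3, .non_ncq_active = 0 };

	printf("TRIM with 3 NCQ in flight: %s\n",
	       can_issue(&p, CMD_NON_NCQ) ? "issue now" : "drain queue first");

	p.ncq_active = 0;
	printf("TRIM with queue drained:   %s\n",
	       can_issue(&p, CMD_NON_NCQ) ? "issue now" : "drain queue first");
	return 0;
}

The drain is the cost being complained about: a single TRIM stalls the
whole queue in both directions, whitelisted device or not.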