Message-ID: <4A899E73.6000505@redhat.com>
Date: Mon, 17 Aug 2009 14:16:19 -0400
From: Ric Wheeler <rwheeler@...hat.com>
To: James Bottomley <James.Bottomley@...e.de>
CC: Greg Freemyer <greg.freemyer@...il.com>,
Bill Davidsen <davidsen@....com>, Mark Lord <liml@....ca>,
Arjan van de Ven <arjan@...radead.org>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
Chris Worley <worleys@...il.com>,
Matthew Wilcox <matthew@....cx>,
Bryan Donlan <bdonlan@...il.com>, david@...g.hm,
Markus Trippelsdorf <markus@...ppelsdorf.de>,
Matthew Wilcox <willy@...ux.intel.com>,
Hugh Dickins <hugh.dickins@...cali.co.uk>,
Nitin Gupta <ngupta@...are.org>, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-scsi@...r.kernel.org, linux-ide@...r.kernel.org,
Linux RAID <linux-raid@...r.kernel.org>
Subject: Re: Discard support (was Re: [PATCH] swap: send callback when swap
slot is freed)
Chiming in here a bit late, but coalescing requests is also a good way
to prevent read-modify-write cycles.
Specifically, if I remember the concern correctly, for WRITE_SAME with the
unmap bit set, when the I/O is not evenly aligned on the "erase chunk"
(whatever they call it) boundary, the device can be forced to do a
read-modify-write (of zeroes) at the beginning or end of that region.
For a disk array, WRITE_SAME with the unmap bit, when issued cleanly on an
aligned boundary, can be handled entirely in the array's cache. The
read-modify-write can generate several reads to the back-end disks, which
are significantly slower....
ric
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/