Message-Id: <1250437927.3856.119.camel@mulgrave.site>
Date:	Sun, 16 Aug 2009 10:52:07 -0500
From:	James Bottomley <James.Bottomley@...e.de>
To:	Arjan van de Ven <arjan@...radead.org>
Cc:	Alan Cox <alan@...rguk.ukuu.org.uk>, Mark Lord <liml@....ca>,
	Chris Worley <worleys@...il.com>,
	Matthew Wilcox <matthew@....cx>,
	Bryan Donlan <bdonlan@...il.com>, david@...g.hm,
	Greg Freemyer <greg.freemyer@...il.com>,
	Markus Trippelsdorf <markus@...ppelsdorf.de>,
	Matthew Wilcox <willy@...ux.intel.com>,
	Hugh Dickins <hugh.dickins@...cali.co.uk>,
	Nitin Gupta <ngupta@...are.org>, Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	linux-scsi@...r.kernel.org, linux-ide@...r.kernel.org,
	Linux RAID <linux-raid@...r.kernel.org>
Subject: Re: Discard support (was Re: [PATCH] swap: send callback when swap
 slot is freed)

On Sun, 2009-08-16 at 08:34 -0700, Arjan van de Ven wrote:
> On Sun, 16 Aug 2009 15:05:30 +0100
> Alan Cox <alan@...rguk.ukuu.org.uk> wrote:
> 
> > On Sat, 15 Aug 2009 08:55:17 -0500
> > James Bottomley <James.Bottomley@...e.de> wrote:
> > 
> > > On Sat, 2009-08-15 at 09:22 -0400, Mark Lord wrote:
> > > > James Bottomley wrote:
> > > > >
> > > > > This means you have to drain the outstanding NCQ commands
> > > > > (stalling the device) before you can send a TRIM.   If we do
> > > > > this for every discard, the performance impact will be pretty
> > > > > devastating, hence the need to coalesce.  It's nothing really
> > > > > to do with device characteristics; it's an ATA protocol problem.
> > > > ..
> > > > 
> > > > I don't think that's really much of an issue -- we already have
> > > > to do that for cache-flushes whenever barriers are enabled.  Yes,
> > > > it costs, but not too much.
> > > 
> > > That's not really what the enterprise is saying about flush
> > > barriers. True, not all the performance problems are NCQ queue
> > > drain, but for a steady workload they are significant.
> > 
> > Flush barriers are a nightmare for more than the enterprise. Your
> > drive basically goes for a hike for a bit, which trashes
> > interactivity as well. If the device can't do trim and the like
> > without a drain, I don't see much point doing it at all, except
> > maybe to wait for idle devices and run a filesystem-managed
> > background 'strimmer' thread to just weed out blocks that have
> > stayed idle - e.g. by adding an inode holding all the deleted,
> > untrimmed blocks and emptying it at irregular intervals?
> 
> trim is mostly for SSDs though, and those tend to not have the "goes
> for a hike" behavior as much ...

Well, yes and no ... a lot of SSDs don't actually implement NCQ, so the
impact to them will be less ... although I think enterprise-class SSDs
do implement NCQ.
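
To make the protocol point concrete, here's a toy sketch (all the
names are invented; this is not libata code) of why each TRIM forces
an NCQ drain, and why coalescing freed ranges amortizes that stall:

#include <stdio.h>

/* Invented stand-ins for illustration only */
struct trim_range { unsigned long lba, nsectors; };
struct ssd { int queued_ncq_cmds; };

static void wait_for_ncq_drain(struct ssd *dev)
{
	/* TRIM is not an NCQ command, so every queued command must
	 * complete before it can be issued: this is the stall. */
	dev->queued_ncq_cmds = 0;
}

static void send_trim(struct ssd *dev, const struct trim_range *r,
		      unsigned int n)
{
	printf("TRIM: %u range(s), first lba %lu, queue depth %d\n",
	       n, r[0].lba, dev->queued_ncq_cmds);
}

/* One drain amortized over many coalesced ranges, instead of one
 * drain per discard. */
static void issue_coalesced_trim(struct ssd *dev,
				 const struct trim_range *r,
				 unsigned int n)
{
	wait_for_ncq_drain(dev);
	send_trim(dev, r, n);
}

int main(void)
{
	struct ssd dev = { .queued_ncq_cmds = 31 };
	struct trim_range freed[] = { { 0, 8 }, { 1024, 16 },
				      { 4096, 8 } };

	issue_coalesced_trim(&dev, freed, 3);
	return 0;
}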

> I wonder if it's worse to batch stuff up, because then the trim itself
> gets bigger and might take longer ...

So this is where we're getting into the realms of speculation.  There
really are only a couple of people out there with TRIM-implementing
SSDs, so that's not really enough to make any judgement.

However, the enterprise has been doing UNMAP for a while, so we can
draw inferences from it, since an SSD FTL will operate similarly.  For
the arrays, UNMAP takes the same time regardless of the number of
extents.  The reason is that it's moving the blocks from the global
in-use list to the global free list.  Part of the problem is that this
involves locking and quiescing, so UNMAP ends up being quite expensive
to the array but constant in cost (hence they want as few UNMAPs
covering as many sectors as possible).
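
As a back-of-the-envelope illustration (the 5ms per-command figure
below is purely made up), a constant per-command cost is exactly why
batching wins:

#include <stdio.h>

#define CMD_COST_US 5000	/* assumed fixed cost per UNMAP command */

/* Array-side cost is roughly constant per command, independent of
 * how many extents each command carries. */
static unsigned long unmap_cost_us(unsigned long commands)
{
	return commands * CMD_COST_US;
}

int main(void)
{
	unsigned long extents = 10000;

	/* one UNMAP per freed extent vs. 100 extents per command */
	printf("per-extent: %lu us\n", unmap_cost_us(extents));
	printf("coalesced:  %lu us\n", unmap_cost_us(extents / 100));
	return 0;
}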

For SSDs, the FTL has a separate operation: erase.  Now, one could see
the correct implementation simply moving the sectors from the in-use
list to the to-be-cleaned list and still doing the cleaning in the
background: that would be constant cost (but, again, likely expensive).
Of course, if SSD vendors decided to erase on the spot when seeing
TRIM, this wouldn't be true ...
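
Something like this minimal sketch (again, invented; no real FTL is
this simple) is what I mean by a constant-cost TRIM with deferred
erase:

#include <stdio.h>

#define NBLOCKS 8

enum blk_state { BLK_FREE, BLK_IN_USE, BLK_TO_CLEAN };

static enum blk_state state[NBLOCKS];

/* Constant-cost TRIM handler: no erase here, just a list move. */
static void ftl_trim(unsigned int blk)
{
	if (state[blk] == BLK_IN_USE)
		state[blk] = BLK_TO_CLEAN;
}

/* Background cleaner: pays the expensive erase cost later, when the
 * device is idle. */
static void ftl_background_clean(void)
{
	unsigned int i;

	for (i = 0; i < NBLOCKS; i++) {
		if (state[i] == BLK_TO_CLEAN) {
			/* erase of the flash block would go here --
			 * the slow part */
			state[i] = BLK_FREE;
		}
	}
}

int main(void)
{
	state[3] = BLK_IN_USE;
	ftl_trim(3);			/* cheap, constant cost */
	ftl_background_clean();		/* expensive, but deferred */
	printf("block 3 is %s\n",
	       state[3] == BLK_FREE ? "free" : "not free");
	return 0;
}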

James

