Date:	Mon, 17 Aug 2009 14:21:21 -0400
From:	Greg Freemyer <greg.freemyer@...il.com>
To:	James Bottomley <James.Bottomley@...e.de>
Cc:	Bill Davidsen <davidsen@....com>, Mark Lord <liml@....ca>,
	Arjan van de Ven <arjan@...radead.org>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Chris Worley <worleys@...il.com>,
	Matthew Wilcox <matthew@....cx>,
	Bryan Donlan <bdonlan@...il.com>, david@...g.hm,
	Markus Trippelsdorf <markus@...ppelsdorf.de>,
	Matthew Wilcox <willy@...ux.intel.com>,
	Hugh Dickins <hugh.dickins@...cali.co.uk>,
	Nitin Gupta <ngupta@...are.org>, Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	linux-scsi@...r.kernel.org, linux-ide@...r.kernel.org,
	Linux RAID <linux-raid@...r.kernel.org>
Subject: Re: Discard support (was Re: [PATCH] swap: send callback when swap 
	slot is freed)

On Mon, Aug 17, 2009 at 1:19 PM, James Bottomley <James.Bottomley@...e.de> wrote:
> On Mon, 2009-08-17 at 13:08 -0400, Greg Freemyer wrote:
>> All,
>>
>> Seems like the high-level wrap-up of all this is:
>>
>> There are hopes that highly efficient SSDs will appear on the market
>> that can leverage a passthru non-coalescing discard feature.  And that
>> a whitelist should be created to allow those SSDs to see discards
>> intermixed with the rest of the data i/o.
>
> That's not my conclusion.  Mine was that the NCQ drain would still be
> detrimental to interleaved trim even if the drive could do it for zero
> cost.

Maybe I misunderstood Jim Owens' earlier comment that designing for
devices that merely meet the spec was not his / Linus's preference.

Instead, they want a whitelist of drives that support trim / NCQ
without having to drain the queue.

I just re-read his post and he did not explicitly say that, so maybe
I'm misrepresenting it.
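
For illustration, such a whitelist could be modeled on the quirk table
libata already keeps in drivers/ata/libata-core.c.  The sketch below
mirrors the ata_blacklist_entry layout; the ATA_HORKAGE_TRIM_NODRAIN
flag is hypothetical and does not exist in the tree:

/* Sketch modeled on the ata_blacklist_entry table in
 * drivers/ata/libata-core.c.  ATA_HORKAGE_TRIM_NODRAIN is a
 * hypothetical flag; nothing like it exists in the tree. */
struct ata_trim_whitelist_entry {
        const char *model_num;
        const char *model_rev;
        unsigned long horkage;
};

#define ATA_HORKAGE_TRIM_NODRAIN    (1 << 15)   /* hypothetical */

static const struct ata_trim_whitelist_entry ata_trim_whitelist[] = {
        /* model string from IDENTIFY,  firmware rev,  flags */
        { "ExampleVendor SSD",  NULL,   ATA_HORKAGE_TRIM_NODRAIN },
        { }     /* terminator */
};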

>> For the other known cases:
>>
>> SSDs that meet the ATA-8 spec but don't exceed it
>> Enterprise SCSI
>
> No, SCSI will do WRITE_SAME/UNMAP as currently drafted in SBC-3
>
>> mdraid with SSD storage used to build raid5 / raid6 arrays
>>
>> Non-coalescing is believed to be detrimental,
>
> It is?  Why?

For the only compliant SSD in the wild, Mark has shown it to be true
via testing.

For Enterprise SCSI, I thought you said a coalescing solution is
preferred.  (I took that to mean non-coalescing is detrimental.  Not
true?)
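
For reference, SBC-3's UNMAP takes a parameter list of (LBA, count)
descriptors, which is part of why coalescing is natural there: many
freed ranges can ride in a single command.  A rough sketch of building
a one-descriptor payload, with the layout taken from the SBC-3 draft
and the names my own:

#include <stdint.h>
#include <string.h>

/* Rough sketch of an SBC-3 UNMAP parameter list holding one block
 * descriptor: an 8-byte header followed by a 16-byte (LBA, count)
 * descriptor, all fields big-endian.  Names are mine, not from any
 * kernel or library header. */
static void build_unmap_payload(uint8_t buf[24], uint64_t lba,
                                uint32_t nblocks)
{
        int i;

        memset(buf, 0, 24);
        buf[1] = 22;    /* UNMAP data length: bytes following this field */
        buf[3] = 16;    /* block descriptor data length */
        for (i = 0; i < 8; i++)         /* descriptor: starting LBA */
                buf[8 + i] = lba >> (56 - 8 * i);
        for (i = 0; i < 4; i++)         /* descriptor: number of blocks */
                buf[16 + i] = nblocks >> (24 - 8 * i);
}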

For mdraid, if the trims are not coalesced, mdraid will have to either
ignore them or coalesce them itself.  Having them arrive as bigger
discard ranges is clearly better (i.e. at least the size of a stripe,
so mdraid can adjust the start / end sectors to stripe boundaries).
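
To make that alignment step concrete, a minimal sketch of clipping a
discard range to whole stripes (the names are hypothetical, not md's):

#include <stdint.h>

/* Sketch only: clip a discard range [*start, *end) to whole stripes.
 * stripe is the stripe size in sectors; all names are hypothetical. */
static int clip_discard_to_stripes(uint64_t *start, uint64_t *end,
                                   uint64_t stripe)
{
        uint64_t first = (*start + stripe - 1) / stripe * stripe; /* round up */
        uint64_t last = *end / stripe * stripe;                 /* round down */

        if (first >= last)
                return 0;   /* no whole stripe covered: drop the discard */
        *start = first;
        *end = last;
        return 1;
}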

>> but regular flushing of the unused blocks/sectors via a tool like
>> the one Mark Lord has written should be acceptable.
>>
>> Mark, I don't believe your tool really addresses the mdraid situation,
>> do you agree?  i.e. since you're bypassing most of the block stack,
>> mdraid has no way of snooping on / adjusting the discards you are
>> sending out.
>>
>> Thus the 2 solutions that have been worked on already seem to address
>> the needs of everything but mdraid.
>
> I count three: Mark Lord's script via SG_IO, hch's enhanced script via
> XFS_TRIM, and willy's current inline discard, which he's considering
> adding coalescing to.

I missed XFS_TRIM somehow.  What benefit does XFS_TRIM provide at a
high level?  Is it part of the real-time file-delete path, or an
after-the-fact scanner?
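
Not having seen the patch, my guess at the shape of such an interface
is an after-the-fact scanner: an ioctl asking the filesystem to walk
its free-space map and discard whatever it finds.  The struct, names,
and ioctl number below are purely illustrative, not hch's actual
XFS_TRIM:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

/* Illustrative only -- not the actual XFS_TRIM patch.  The idea:
 * userspace hands the filesystem a range and a minimum extent size,
 * and the filesystem discards every free extent it finds there. */
struct trim_range {
        uint64_t start;     /* byte offset into the filesystem */
        uint64_t len;       /* number of bytes to scan */
        uint64_t minlen;    /* skip free extents smaller than this */
};

#define XFS_IOC_TRIM    _IOWR('X', 121, struct trim_range) /* made-up number */

static int trim_whole_fs(int fd)
{
        struct trim_range r = { 0, UINT64_MAX, 4096 };

        return ioctl(fd, XFS_IOC_TRIM, &r);
}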

> James

Greg
