Date:	Thu, 18 Nov 2010 12:04:31 -0800
From:	Greg Freemyer <greg.freemyer@...il.com>
To:	James Bottomley <James.Bottomley@...e.de>,
	Mark Lord <kernel@...savvy.com>
Cc:	Jeff Moyer <jmoyer@...hat.com>,
	Christoph Hellwig <hch@...radead.org>,
	Matthew Wilcox <matthew@....cx>,
	Josef Bacik <josef@...hat.com>,
	Lukas Czerner <lczerner@...hat.com>, tytso@....edu,
	linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, sandeen@...hat.com
Subject: Re: [PATCH 1/2] fs: Do not dispatch FITRIM through separate super_operation

Adding Mark Lord in CC:

On Thu, Nov 18, 2010 at 9:41 AM, James Bottomley
<James.Bottomley@...e.de> wrote:
> On Thu, 2010-11-18 at 12:22 -0500, Jeff Moyer wrote:
>> James Bottomley <James.Bottomley@...e.de> writes:
>>
>> > Not stepping into the debate: I'm happy to see punch go to the mapping
>> > data and FITRIM pick it up later.
>> >
>> > However, I think it's time to question whether we actually still want to
>> > allow online discard at all.  Most of the benchmarks show it to be a net
>>
>> Define online discard, please.
>
> Trims emitted inline as the FS operates (mount option -o discard)
>
>> > lose to almost everything (either SSD or Thinly Provisioned arrays), so
>> > it's become an "enable this to degrade performance" option with no
>> > upside.
>>
>> Some SSDs very much require TRIMming to perform well as they age.  If
>> you're suggesting that we move from doing discards in journal commits to
>> a batched discard, like the one Lukas implemented, then I think that's
>> fine.  If we need to reintroduce the finer-grained discards due to some
>> hardware changes in the future, we can always do that.
>
> Right, I'm suggesting we just rely on offline methods.  Regardless of
> what happens to FITRIM, we have wiper.sh now (although it does require
> unmounted use for most of the less than modern fs).
>
> James

I'm a fan of wiper.sh, but afaik it still cannot address a
multi-spindle LVM setup, nor an MD RAID setup, etc.

That's because it bypasses the block stack and talks directly to the
devices.  Thus it doesn't get the benefit of the logical-to-physical
sector remapping handled by the block stack.

Mark, please correct me if I'm wrong.

The LVM and MD RAID setups are important to support, and only "mount -o
discard" and Lukas's FITRIM support them.

So afaik we have 3 options, each with an opportunity for improvement:

1) mount -o discard - needs kernel tuning / new hardware to be a
performance win.

2) FITRIM doesn't leverage the fact that a TRIM command can handle
multiple ranges per command payload.  I haven't seen any FITRIM
vs. wiper.sh benchmarks, so I don't know what impact that has in
practice.  Mark Lord thought this missing feature could cause FITRIM
to take minutes or hours with some hardware, especially early-generation
SSDs.

3) wiper.sh does leverage multiple ranges per TRIM command, but it
really needs a new block-layer interface that would let it push
discard requests into the kernel via the block layer, rather than
straight to the physical drive.  The interface should accept multiple
ranges per invocation and issue TRIM commands to the SSD that carry a
multi-range discard payload.

So it seems that for now keeping all 3 is best.  My personal hope is
that the block layer grows the ability to accept multi-range discard
requests and that FITRIM is updated to leverage it.

Greg
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
