Message-ID: <87f94c370905120628r2352e923h43dca1645e197b6c@mail.gmail.com>
Date:	Tue, 12 May 2009 09:28:53 -0400
From:	Greg Freemyer <greg.freemyer@...il.com>
To:	Neil Brown <neilb@...e.de>
Cc:	Matthew Wilcox <matthew@....cx>, Theodore Tso <tytso@....edu>,
	Ric Wheeler <rwheeler@...hat.com>,
	"J?rn Engel" <joern@...fs.org>,
	Matthew Wilcox <willy@...ux.intel.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org,
	Linux RAID <linux-raid@...r.kernel.org>
Subject: Re: Is TRIM/DISCARD going to be a performance problem?

On Mon, May 11, 2009 at 7:38 PM, Neil Brown <neilb@...e.de> wrote:
> On Monday May 11, greg.freemyer@...il.com wrote:
>>
>> And since the mdraid layer is not currently planning to track what has
>> been discarded over time, when a re-shape comes along, it will
>> effectively un-trim everything and rewrite 100% of the FS.
>
> You might not call them "plans" exactly, but I have had thoughts
> about tracking which parts of a raid5 had 'live' data and which were
> trimmed.  I think that is the only way I could support TRIM, unless
> devices guarantee that all trimmed blocks read as zeros, and that seems
> unlikely.

Neil,

Re: RAID 5 etc.; no filesystem-level info/discussion here.

The latest T13 proposed spec I saw explicitly allows reads from
trimmed sectors to return non-determinate data on some devices.  There
is a per-device flag you can read to see whether a given device does
that or not.  I think mdraid needs to simply assume all trimmed
sectors return non-determinate data.  Either that, or check that
per-device flag and refuse to accept a drive that reports returning
non-determinate data.
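
For illustration, here is a rough userspace sketch of how you could
ask a SATA drive what it claims about reads after TRIM.  The word/bit
assignments (word 169 bit 0 for TRIM support, word 69 bit 14 for
deterministic reads after TRIM, word 69 bit 5 for reads returning
zeros) are my reading of the current ATA8-ACS drafts and could still
change, so treat this as a sketch of the idea, not anything final:

/*
 * Sketch only, not mdraid code: issue IDENTIFY DEVICE and report
 * what the drive claims about data read back from trimmed sectors.
 * Word/bit numbers are assumptions taken from the draft spec.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>

int main(int argc, char **argv)
{
	unsigned char args[4 + 512];
	unsigned short id[256];
	int fd, i;

	if (argc != 2) {
		fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY | O_NONBLOCK);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(args, 0, sizeof(args));
	args[0] = WIN_IDENTIFY;		/* IDENTIFY DEVICE, 0xEC */
	args[3] = 1;			/* one 512-byte sector of data back */
	if (ioctl(fd, HDIO_DRIVE_CMD, args) < 0) {
		perror("HDIO_DRIVE_CMD(identify)");
		return 1;
	}

	/* assemble the 256 little-endian 16-bit identify words */
	for (i = 0; i < 256; i++)
		id[i] = args[4 + 2 * i] | (args[4 + 2 * i + 1] << 8);

	printf("TRIM supported:           %s\n",
	       (id[169] & 0x0001) ? "yes" : "no");
	printf("deterministic after TRIM: %s\n",
	       (id[69] & 0x4000) ? "yes" : "no");
	printf("reads zeros after TRIM:   %s\n",
	       (id[69] & 0x0020) ? "yes" : "no");

	close(fd);
	return 0;
}

Note that if the last bit (reads return zeros after TRIM) were
guaranteed, your concern mostly goes away: an all-zero stripe still
satisfies p = d1 ^ d2.  It is the "deterministic but not zero" and
the fully indeterminate cases that force tracking.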

Regardless, ignoring reshape, why do you need to track it?

... thinking

Oh yes, you will have to track it at least at the stripe level.

If p = d1 ^ d2 is no longer guaranteed because the stripe was
discarded, and p, d1 and d2 are all potentially non-determinate,
everything is fine at first: nobody cares that d1 = p ^ d2 does not
hold for a discarded stripe, since d1 is effectively just random data
anyway.

But as soon as either d1 or d2 is written, you have to force the
entire stripe back into a determinate state, or else you will have
unprotected data sitting on that stripe.  You can only do that if you
know the entire stripe was previously indeterminate, so you have no
option but to track the state of the stripes if mdraid is going to
support discards on devices that advertise themselves as returning
indeterminate data.
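
To make that concrete, here is a toy sketch (my own illustration, not
md code; the stripe_discarded bitmap and the helpers are made up) of
the invariant involved.  The point is just that the first write into a
discarded stripe has to take the full-stripe path, and the code can
only know to do that if something remembered the discard:

/*
 * Toy model of one data point per device in a 3-disk raid5 stripe.
 * Live invariant: p == d1 ^ d2.  After a discard all three may read
 * back as garbage, so the first write must rebuild the whole stripe
 * rather than do the usual read-modify-write of parity.
 */
#include <stdio.h>
#include <stdint.h>

#define NSTRIPES 8
static uint8_t d1[NSTRIPES], d2[NSTRIPES], p[NSTRIPES];
static uint8_t stripe_discarded[NSTRIPES];	/* the state md must track */

static void discard_stripe(int s)
{
	/* model "indeterminate data": arbitrary leftover bytes */
	d1[s] = 0xde; d2[s] = 0xad; p[s] = 0x77;
	stripe_discarded[s] = 1;
}

static void write_d1(int s, uint8_t val)
{
	if (stripe_discarded[s]) {
		/* full-stripe write: put d2 into a known state too */
		d1[s] = val;
		d2[s] = 0;
		p[s] = d1[s] ^ d2[s];
		stripe_discarded[s] = 0;
	} else {
		/* normal read-modify-write of parity */
		p[s] ^= d1[s] ^ val;
		d1[s] = val;
	}
}

int main(void)
{
	discard_stripe(3);
	write_d1(3, 0x42);
	printf("stripe 3 parity consistent: %s\n",
	       (p[3] == (d1[3] ^ d2[3])) ? "yes" : "no");
	return 0;
}

Without the stripe_discarded check, write_d1() would do the usual
read-modify-write using whatever garbage happened to be sitting in d1
and p, and d2 would be left with no valid parity protecting it.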

So Neil, it looks like you need to move from thoughts about tracking
discards to planning to track discards.

FYI: I don't know if it is just for show, or if people really plan to
use them, but I have already seen several people build very
high-performance RAID arrays from SSDs.  It seems that about 8 SSDs
max out the current crop of SATA controllers, PCI Express lanes, etc.

Since SSDs with TRIM support should be even faster, I suspect these
ultra-high-performance setups will want to use them.

Greg
-- 
Greg Freemyer
Head of EDD Tape Extraction and Processing team
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
First 99 Days Litigation White Paper -
http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf

The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com
