Message-Id: <1242062325.9647.4.camel@localhost.localdomain>
Date:	Mon, 11 May 2009 13:18:45 -0400
From:	Chris Mason <chris.mason@...cle.com>
To:	Theodore Tso <tytso@....edu>
Cc:	Jens Axboe <jens.axboe@...cle.com>,
	Matthew Wilcox <willy@...ux.intel.com>,
	Ric Wheeler <rwheeler@...hat.com>,
	linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org
Subject: Re: Is TRIM/DISCARD going to be a performance problem?

On Mon, 2009-05-11 at 04:41 -0400, Theodore Tso wrote: 
> On Mon, May 11, 2009 at 10:12:16AM +0200, Jens Axboe wrote:
> > 
> > I largely agree with this. I think that trims should be queued and
> > postponed until the drive is largely idle. I don't want to put this IO
> > tracking in the block layer though, it's going to slow down our iops
> > rates for writes. Providing the functionality in the block layer does
> > make sense though, since it sits between that and the fs anyway. So just
> > not part of the generic IO path, but a set of helpers on the side.
> 
> Yes, I agree.  However, in that case, we need two things from the
> block I/O path.  (A) The discard management layer needs a way of
> knowing that the block device has become idle, and (B) ideally there
> should be a more efficient method for sending trim requests to the I/O
> submission path.

Just a quick me too on the performance problem.  The way btrfs does
trims today is going to be pretty slow as well.

For both btrfs and lvm, the filesystem is going to maintain its free
block information in terms of logical block numbers.  The generic trim
layer should probably operate on logical addresses too, with the state
stored per-bdi.
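
To make that concrete, here's a rough sketch of the per-bdi state I'm
imagining (every name below is made up for illustration, none of this
exists in any tree):

/*
 * Illustrative kernel-style sketch only.  Pending trims are tracked
 * in the fs's logical address space and hung off the
 * backing_dev_info, so each device keeps its own queue.  last_io
 * gives the discard helpers a cheap way to decide the device has
 * gone idle (Ted's point A).
 */
struct trim_extent {
	struct list_head	list;
	u64			logical_start;	/* fs logical address */
	u64			len;		/* length in bytes */
};

struct bdi_trim_state {
	spinlock_t		lock;
	struct list_head	pending;	/* not yet sent down */
	unsigned long		last_io;	/* jiffies of last write */
};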

The bdi will then need a callback to turn a logical-address-based trim
extent into physical extents across however many physical devices back
it.
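
The mapping hook might look something like this (again purely
illustrative; map_trim and struct phys_trim are stand-ins):

/*
 * The generic helper hands the bdi one logical extent and gets back
 * the physical extents it maps to, so lvm or a multi-device btrfs
 * can fan one trim out to all of the devices that back it.
 */
struct phys_trim {
	struct block_device	*bdev;
	sector_t		sector;		/* physical start */
	sector_t		nr_sects;	/* physical length */
};

struct bdi_trim_ops {
	/*
	 * Fill in up to max_phys physical extents covering the
	 * logical range; returns the number filled in, or a
	 * negative errno.
	 */
	int (*map_trim)(struct backing_dev_info *bdi,
			u64 logical_start, u64 len,
			struct phys_trim *phys, int max_phys);
};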

The tricky part is how the FS will decide a given block is actually
reusable.  We'll need a callback into the FS that indicates trim is
complete on a given logical extent.
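
Something along these lines, say (trim_done and the allocator call
are both invented for the example):

/*
 * Invented completion hook: the allocator must not hand the range
 * back out until the discard covering it has finished, so the
 * helpers call back into the fs once the discard completes.
 */
void (*trim_done)(struct super_block *sb, u64 logical_start,
		  u64 len, int error);

/* An fs-side handler would then be roughly: */
static void example_trim_done(struct super_block *sb,
			      u64 logical_start, u64 len, int error)
{
	/*
	 * Even on error the range is safe to reuse -- we only lost
	 * the benefit of the trim.  mark_extent_reusable() stands in
	 * for whatever the fs uses to return space to its allocator.
	 */
	mark_extent_reusable(sb, logical_start, len);
}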

-chris