Message-ID: <20090511084941.GN4694@kernel.dk>
Date:	Mon, 11 May 2009 10:49:41 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Theodore Tso <tytso@....edu>
Cc:	Matthew Wilcox <willy@...ux.intel.com>,
	Ric Wheeler <rwheeler@...hat.com>,
	linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org
Subject: Re: Is TRIM/DISCARD going to be a performance problem?

On Mon, May 11 2009, Theodore Tso wrote:
> On Mon, May 11, 2009 at 10:12:16AM +0200, Jens Axboe wrote:
> > 
> > I largely agree with this. I think that trims should be queued and
> > postponed until the drive is largely idle. I don't want to put this IO
> > tracking in the block layer though, it's going to slow down our iops
> > rates for writes. Providing the functionality in the block layer does
> > make sense though, since it sits between that and the fs anyway. So just
> > not part of the generic IO path, but a set of helpers on the side.
> 
> Yes, I agree.  However, in that case, we need two things from the
> block I/O path.  (A) The discard management layer needs a way of
> knowing that the block device has become idle, and (B) ideally there

We don't have to be informed of such a condition; the block layer can check
for existing pending trims itself and kick those off at an appropriate time.
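To make the idea concrete, here is a minimal userspace sketch of that scheme. All names (queue_trim, io_complete, the fixed-size pending array) are hypothetical illustrations, not the real block-layer API: the filesystem parks trims on a side list with no tracking in the hot write path, and the block layer flushes the list on its own when the in-flight count drops to zero.

```c
#include <stdio.h>

/* Hypothetical sketch, not real kernel code: the block layer keeps a
 * count of in-flight regular I/O and a side list of deferred trims.
 * Nobody has to notify it of idleness -- it checks the pending-trim
 * list itself whenever the last request completes. */

#define MAX_TRIMS 16

struct trim {
	unsigned long long start;	/* first sector of the range */
	unsigned long long len;		/* length in sectors */
};

static struct trim pending[MAX_TRIMS];
static int npending;
static int inflight;			/* in-flight regular I/O */

/* fs side: just park the trim; no I/O tracking in the write fast path */
static void queue_trim(unsigned long long start, unsigned long long len)
{
	if (npending < MAX_TRIMS) {
		pending[npending].start = start;
		pending[npending].len = len;
		npending++;
	}
}

/* block layer side: called when a regular request completes;
 * returns how many deferred trims were kicked off */
static int io_complete(void)
{
	int i, sent;

	if (--inflight > 0)
		return 0;		/* still busy, leave trims queued */

	/* device went idle: kick all pending trims now */
	sent = npending;
	for (i = 0; i < npending; i++)
		printf("discard %llu+%llu\n",
		       pending[i].start, pending[i].len);
	npending = 0;
	return sent;
}
```

With two writes in flight, trims queued in between sit untouched until the second completion, at which point the whole list goes out at once.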

> should be a more efficient method for sending trim requests to the I/O
> submission path.  If we batch the results, when we *do* send the
> discard requests, we may be sending several hundred discards, and it
> would be useful if we could pass into the I/O submission path a linked
> list of regions, so the queue can be drained *once*, and then a whole
> series of discards can be sent to the device all at once.
> 
> Does that make sense to you?

Agree, we definitely only want to do the queue quiesce once for passing
down a series of trims. With the delayed trim queuing, that isn't very
difficult.
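The single-quiesce batching could look roughly like the sketch below. The struct and the quiesce/unquiesce helpers are invented for illustration (the real interface would live in the block layer): the caller hands over a linked list of regions, the queue is drained exactly once, and the whole series of discards is pushed to the device back-to-back.

```c
#include <stdio.h>

/* Illustrative only: batch submission of discard regions.  The type
 * and helper names are hypothetical, not the actual block-layer API. */

struct discard_region {
	unsigned long long start;	/* first sector */
	unsigned long long len;		/* length in sectors */
	struct discard_region *next;
};

static int quiesce_count;		/* how many times the queue was drained */

static void quiesce_queue(void)   { quiesce_count++; }
static void unquiesce_queue(void) { }

/* Drain the queue once, then send every queued discard to the device;
 * returns the number of discards submitted. */
static int submit_discard_batch(struct discard_region *head)
{
	struct discard_region *r;
	int sent = 0;

	quiesce_queue();		/* one drain for the whole series */
	for (r = head; r; r = r->next) {
		printf("TRIM %llu+%llu\n", r->start, r->len);
		sent++;
	}
	unquiesce_queue();
	return sent;
}
```

Submitting a three-region list this way costs one quiesce instead of three, which is the whole point of passing a list rather than issuing discards one at a time.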

-- 
Jens Axboe
