Message-ID: <20080828010856.GB30189@disturbed>
Date:	Thu, 28 Aug 2008 11:08:56 +1000
From:	Dave Chinner <david@...morbit.com>
To:	david@...g.hm
Cc:	Jamie Lokier <jamie@...reable.org>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	gus3 <musicman529@...oo.com>,
	Szabolcs Szakacsits <szaka@...s-3g.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	xfs@....sgi.com
Subject: Re: XFS vs Elevators (was Re: [PATCH RFC] nilfs2: continuous
	snapshotting file system)

On Wed, Aug 27, 2008 at 02:54:28PM -0700, david@...g.hm wrote:
> On Wed, 27 Aug 2008, Dave Chinner wrote:
>
>> On Mon, Aug 25, 2008 at 08:50:14PM -0700, david@...g.hm wrote:
>>> it sounds as if the various flag definitions have been evolving, would it
>>> be worthwhile to step back and try to get the various filesystem folks to
>>> brainstorm together on what types of hints they would _like_ to see
>>> supported?
>>
>> Three types:
>>
>> 	1. immediate dispatch - merge first with adjacent requests
>> 	   then dispatch
>> 	2. delayed dispatch - queue for a short while to allow
>> 	   merging of requests from above
>> 	3. bulk data - queue and merge. dispatch is completely
>> 	   controlled by the elevator
>
> does this list change if you consider the fact that there may be a raid  
> array or some more complex structure for the block device instead of a  
> simple single disk partition?

No. The whole point of immediate dispatch is that those I/Os are
extremely latency sensitive (i.e. the whole fs can stall waiting on
them), so it doesn't matter what the end target is. The faster the
storage subsystem, the more important it is to dispatch those
I/Os immediately to keep the pipes filled...

> since I am suggesting re-thinking the filesystem <-> elevator interface,  
> is there anything you need to have the elevator tell the filesystem? (I'm 
> thinking that this may be the path for the filesystem to learn things  
> about the block device that's under it, is it a raid array, a solid-state 
> drive, etc)

Not so much the elevator, but the block layer in general. That is:

	- capability reporting
		- barriers and type
		- discard support
		- integrity support
		- maximum number of I/Os that can be in flight
		  before congestion occurs
	- geometry of the underlying storage
	- independent domains within the device (e.g. boundaries
		  of linear concatenations)
		- stripe unit/width per domain
		- optimal I/O size per domain
		- latency characteristics per domain
	- notifiers to indicate change of status due to device
	  hotplug back up to the filesystem
		- barrier status change
		- geometry changes due to on-line volume modification
		  (e.g. raid5/6 rebuild after adding a new disk,
		   added another disk to a linear concat, etc)

I'm sure there's more, but that's the list quickly off the top of
my head.

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
