Message-ID: <20070823024723.GN61154114@sgi.com>
Date:	Thu, 23 Aug 2007 12:47:23 +1000
From:	David Chinner <dgc@....com>
To:	Chris Mason <chris.mason@...cle.com>
Cc:	Fengguang Wu <wfg@...l.ustc.edu.cn>, Andrew Morton <akpm@...l.org>,
	Ken Chen <kenchen@...gle.com>, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, Jens Axboe <jens.axboe@...cle.com>
Subject: Re: [PATCH 0/6] writeback time order/delay fixes take 3

On Wed, Aug 22, 2007 at 08:42:01AM -0400, Chris Mason wrote:
> I think we should assume a full scan of s_dirty is impossible in the
> presence of concurrent writers.  We want to be able to pick a start
> time (right now) and find all the inodes older than that start time.
> New things will come in while we're scanning.  But perhaps that's what
> you're saying...
> 
> At any rate, we've got two types of lists now.  One keeps track of age
> and the other two keep track of what is currently being written.  I
> would try two things:
> 
> 1) s_dirty stays a list for FIFO.  s_io becomes a radix tree that
> indexes by inode number (or some arbitrary field the FS can set in the
> inode).  Radix tree tags are used to indicate which things in s_io are
> already in progress or are pending (hand waving because I'm not sure
> exactly).
> 
> inodes are pulled off s_dirty and the corresponding slot in s_io is
> tagged to indicate IO has started.  Any nearby inodes in s_io are also
> sent down.
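
So, roughly, something like the following? This is a sketch only:
the s_io_tree field and the tag names are invented here, though the
radix tree calls are the stock kernel API (locking and error
handling omitted):

#include <linux/fs.h>
#include <linux/radix-tree.h>

#define S_IO_TAG_PENDING	0	/* dirty, waiting for IO */
#define S_IO_TAG_WRITEBACK	1	/* IO already in flight */

/* queue a dirty inode in the (hypothetical) per-sb radix tree */
static void s_io_queue(struct super_block *sb, struct inode *inode)
{
	radix_tree_insert(&sb->s_io_tree, inode->i_ino, inode);
	radix_tree_tag_set(&sb->s_io_tree, inode->i_ino, S_IO_TAG_PENDING);
}

/* start IO on an inode and pull in pending neighbours as well */
static void s_io_start(struct super_block *sb, struct inode *inode)
{
	struct inode *near[16];
	unsigned int i, n;

	radix_tree_tag_clear(&sb->s_io_tree, inode->i_ino, S_IO_TAG_PENDING);
	radix_tree_tag_set(&sb->s_io_tree, inode->i_ino, S_IO_TAG_WRITEBACK);

	/* find pending inodes at nearby (higher) inode numbers */
	n = radix_tree_gang_lookup_tag(&sb->s_io_tree, (void **)near,
				       inode->i_ino + 1, 16,
				       S_IO_TAG_PENDING);
	for (i = 0; i < n; i++) {
		/* ... send near[i] down along with this inode ... */
	}
}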

The problem with this approach is that it only looks at inode
locality. Data locality is ignored completely here, and the data for
all the inodes that are close together could be splattered all over
the drive. In that case, clustering by inode location is exactly the
wrong thing to do.

For example, XFS changes allocation strategy at 1TB for 32-bit inode
filesystems, which makes the data get placed well away from the
inodes; i.e. inodes sit in AGs below 1TB, while all the data goes to
AGs above 1TB. Clustering by inode number for data writeback is
mostly useless in the >1TB case.

The inode32 (for <1TB) and inode64 allocators both try to keep data
close to the inode (i.e. in the same AG), so clustering by inode
number might work better there.

Also, it might be worthwhile allowing the filesystem to supply a
hint or mask for "closeness" for inode clustering. This would help
the generic code to only cluster inode writes for inodes that
fall into the same cluster as the first inode....
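
For instance, the mask could define which inode number bits identify
a cluster. A minimal sketch, assuming a hypothetical
s_inode_cluster_mask field that the filesystem fills in at mount
time:

static inline int inode_same_cluster(struct super_block *sb,
				     unsigned long a, unsigned long b)
{
	/* inodes are "close" if the bits above the mask match */
	return (a & ~sb->s_inode_cluster_mask) ==
	       (b & ~sb->s_inode_cluster_mask);
}

XFS could then set the mask so that a cluster never spans an
allocation group, and the generic code wouldn't need to know
anything about AGs.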

> > Notes:
> > (1) I'm not sure inode number is correlated to disk location in
> >     filesystems other than ext2/3/4. Or parent dir?
> 
> In general, it is a better assumption than sorting by time.  It may
> make sense to one day let the FS provide a clustering hint
> (corresponding to the first block in the file?), but for starters it
> makes sense to just go with the inode number.

Perhaps multiple hints are needed - one for data locality and one
for inode cluster locality.
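
i.e. something like this (hypothetical structure, just to show the
shape of it):

/*
 * Hypothetical per-filesystem hints: one key per locality domain.
 * Writeback would cluster data writes by data_key and inode writes
 * by inode_key.
 */
struct writeback_hint_ops {
	/* key for inode table locality, e.g. the inode number */
	unsigned long (*inode_key)(struct inode *inode);
	/* key for data locality, e.g. the file's first data block */
	unsigned long (*data_key)(struct inode *inode);
};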

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group
