lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <YQLQRGyKGdL00sQ7@casper.infradead.org>
Date:   Thu, 29 Jul 2021 16:59:00 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     Konstantin Komarov <almaz.alexandrovich@...agon-software.com>
Cc:     linux-fsdevel@...r.kernel.org, viro@...iv.linux.org.uk,
        linux-kernel@...r.kernel.org, pali@...nel.org, dsterba@...e.cz,
        aaptel@...e.com, rdunlap@...radead.org, joe@...ches.com,
        mark@...mstone.com, nborisov@...e.com,
        linux-ntfs-dev@...ts.sourceforge.net, anton@...era.com,
        dan.carpenter@...cle.com, hch@....de, ebiggers@...nel.org,
        andy.lavr@...il.com, kari.argillander@...il.com,
        oleksandr@...alenko.name
Subject: Re: [PATCH v27 02/10] fs/ntfs3: Add initialization of super block

On Thu, Jul 29, 2021 at 04:49:35PM +0300, Konstantin Komarov wrote:
> +static void ntfs_readahead(struct readahead_control *rac)
> +{
> +	struct address_space *mapping = rac->mapping;
> +	struct inode *inode = mapping->host;
> +	struct ntfs_inode *ni = ntfs_i(inode);
> +	u64 valid;
> +	loff_t pos;
> +
> +	if (is_resident(ni)) {
> +		/* no readahead for resident */
> +		return;
> +	}
> +
> +	if (is_compressed(ni)) {
> +		/* no readahead for compressed */
> +		return;

I'm sure this works, but it could perform better ;-)

The ->readpage path that you fall back to is synchronous (unless I
misunderstand something).  We now have readahead_expand() which lets
you ensure that there are pages in the page cache for the entire
range of the compressed extent.  So if you can create an asynchronous
decompression path (probably need your own custom bi_end_io), you can
start reads here and only unlock the pages after you've done the
decompression.

This should not gate your code being accepted upstream.  What I'd
really like to see is your buffered i/o path being converted from
buffer_heads to iomap, and iomap to gain the ability to handle
compressed extents.

> +	valid = ni->i_valid;
> +	pos = readahead_pos(rac);
> +
> +	if (valid < i_size_read(inode) && pos <= valid &&
> +	    valid < pos + readahead_length(rac)) {
> +		/* range cross 'valid'. read it page by page */
> +		return;

This, I do not understand.  Why can't ntfs_get_block / mpage_readahead
cope with a readahead that crosses i_valid?  AIUI, i_valid is basically
the same as i_size, and mpage_readahead() handles crossing i_size
just fine.

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ