Date:   Wed, 04 Dec 2019 10:53:50 +0300
From:   Vyacheslav Dubeyko <slava@...eyko.com>
To:     Eric Biggers <ebiggers@...nel.org>, linux-fscrypt@...r.kernel.org
Cc:     linux-ext4@...r.kernel.org, linux-f2fs-devel@...ts.sourceforge.net,
        linux-fsdevel@...r.kernel.org,
        Victor Hsieh <victorhsieh@...gle.com>
Subject: Re: [PATCH] fs-verity: implement readahead for FS_IOC_ENABLE_VERITY

On Tue, 2019-12-03 at 11:30 -0800, Eric Biggers wrote:
> From: Eric Biggers <ebiggers@...gle.com>
> 
> When it builds the first level of the Merkle tree, FS_IOC_ENABLE_VERITY
> sequentially reads each page of the file using read_mapping_page().
> This works fine if the file's data is already in pagecache, which should
> normally be the case, since this ioctl is normally used immediately
> after writing out the file.
> 
> But in any other case this implementation performs very poorly, since
> only one page is read at a time.
> 
> Fix this by implementing readahead using the functions from
> mm/readahead.c.
> 
> This improves performance in the uncached case by about 20x, as seen in
> the following benchmarks done on a 250MB file (on x86_64 with SHA-NI):
> 
>     FS_IOC_ENABLE_VERITY uncached (before) 3.299s
>     FS_IOC_ENABLE_VERITY uncached (after)  0.160s
>     FS_IOC_ENABLE_VERITY cached            0.147s
>     sha256sum uncached                     0.191s
>     sha256sum cached                       0.145s
> 
> Note: we could instead switch to kernel_read().  But that would mean
> we'd no longer be hashing the data directly from the pagecache, which is
> a nice optimization of its own.  And using kernel_read() would require
> allocating another temporary buffer, hashing the data and tree pages
> separately, and explicitly zero-padding the last page -- so it wouldn't
> really be any simpler than direct pagecache access, at least for now.
> 
> Signed-off-by: Eric Biggers <ebiggers@...gle.com>
> ---
>  fs/verity/enable.c | 46 ++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 40 insertions(+), 6 deletions(-)
> 
> diff --git a/fs/verity/enable.c b/fs/verity/enable.c
> index eabc6ac19906..f7eaffa60196 100644
> --- a/fs/verity/enable.c
> +++ b/fs/verity/enable.c
> @@ -13,14 +13,44 @@
>  #include <linux/sched/signal.h>
>  #include <linux/uaccess.h>
>  
> -static int build_merkle_tree_level(struct inode *inode, unsigned int level,
> +/*
> + * Read a file data page for Merkle tree construction.  Do aggressive readahead,
> + * since we're sequentially reading the entire file.
> + */
> +static struct page *read_file_data_page(struct inode *inode,
> +					struct file_ra_state *ra,
> +					struct file *filp,
> +					pgoff_t index,
> +					pgoff_t num_pages_in_file)
> +{
> +	struct page *page;
> +
> +	page = find_get_page(inode->i_mapping, index);
> +	if (!page || !PageUptodate(page)) {
> +		if (page)
> +			put_page(page);


It looks like there is an unnecessary check here. If page is a NULL
pointer, then we never go inside the inner branch. But if page is a
valid pointer, then we end up checking it twice. Am I correct?
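
To make sure I am reading the branch correctly, here is a small
userspace model of that condition (fake_page and check() are just
stand-ins I made up for illustration, not kernel code):

	/* Userspace model of the outer/inner checks in read_file_data_page(). */
	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	struct fake_page {
		bool uptodate;		/* stand-in for PageUptodate() */
	};

	static void check(const struct fake_page *page)
	{
		if (!page || !page->uptodate) {
			/* Reached either because page == NULL or because it is stale. */
			if (page)
				printf("valid but stale -> drop reference, then re-read\n");
			else
				printf("NULL -> nothing to drop, just read\n");
		} else {
			printf("cached and uptodate -> use it as is\n");
		}
	}

	int main(void)
	{
		struct fake_page stale = { .uptodate = false };
		struct fake_page fresh = { .uptodate = true };

		check(NULL);	/* not in pagecache at all */
		check(&stale);	/* in pagecache but not uptodate */
		check(&fresh);	/* fully cached */
		return 0;
	}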


> +		page_cache_sync_readahead(inode->i_mapping, ra, filp,
> +					  index, num_pages_in_file - index);
> +		page = read_mapping_page(inode->i_mapping, index, NULL);
> +		if (IS_ERR(page))
> +			return page;

Could we receive a NULL pointer here? Is the caller ready to handle a NULL return value?
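
If a NULL return is actually possible there, something like the
following might be worth adding (just a sketch on top of the posted
hunk; I have not verified whether read_mapping_page() can return NULL,
so maybe it is not needed at all):

		page = read_mapping_page(inode->i_mapping, index, NULL);
		if (IS_ERR_OR_NULL(page))
			return page ? page : ERR_PTR(-EIO);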

Thanks,
Viacheslav Dubeyko.

