Message-Id: <20200904110938.d9a2cb53a58e67a15c960f47@linux-foundation.org>
Date:   Fri, 4 Sep 2020 11:09:38 -0700
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     Bean Huo <huobean@...il.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        beanhuo@...ron.com
Subject: Re: [PATCH RFC] mm: Let readahead submit larger batches of pages in case of ra->ra_pages == 0

On Fri,  4 Sep 2020 16:48:07 +0200 Bean Huo <huobean@...il.com> wrote:

> From: Bean Huo <beanhuo@...ron.com>
> 
> Currently, generic_file_buffered_read() breaks larger batches of pages
> up and reads the data one page at a time when ra->ra_pages == 0. This
> patch allows it to pass the batch of pages down to the device, provided
> the device's supported maximum IO size is >= the requested size.
> 
> ...
>
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2062,6 +2062,7 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
>  	struct file *filp = iocb->ki_filp;
>  	struct address_space *mapping = filp->f_mapping;
>  	struct inode *inode = mapping->host;
> +	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
>  	struct file_ra_state *ra = &filp->f_ra;
>  	loff_t *ppos = &iocb->ki_pos;
>  	pgoff_t index;
> @@ -2098,9 +2099,14 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
>  		if (!page) {
>  			if (iocb->ki_flags & IOCB_NOIO)
>  				goto would_block;
> -			page_cache_sync_readahead(mapping,
> -					ra, filp,
> -					index, last_index - index);
> +
> +			if (!ra->ra_pages && bdi->io_pages >= last_index - index)
> +				__do_page_cache_readahead(mapping, filp, index,
> +							  last_index - index, 0);
> +			else
> +				page_cache_sync_readahead(mapping, ra, filp,
> +							  index,
> +							  last_index - index);
>  			page = find_get_page(mapping, index);
>  			if (unlikely(page == NULL))
>  				goto no_cached_page;

I assume this is a performance patch.  What are the observed changes in
behaviour?

What is special about ->ra_pages==0?  Wouldn't this optimization still
be valid if ->ra_pages==2?

Doesn't this defeat the purpose of having ->ra_pages==0?
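(Context for that last question: ->ra_pages is normally inherited from
the backing device when the file is opened -- a sketch of
file_ra_state_init() from mm/readahead.c, abbreviated:

	void
	file_ra_state_init(struct file_ra_state *ra,
			   struct address_space *mapping)
	{
		ra->ra_pages = inode_to_bdi(mapping->host)->ra_pages;
		ra->prev_pos = -1;
	}

so ->ra_pages == 0 usually means readahead was deliberately disabled on
the bdi, e.g. by writing 0 to the device's read_ahead_kb sysfs knob.)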
