Date:   Tue, 7 Jun 2022 15:01:38 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     Alistair Popple <apopple@...dia.com>
Cc:     akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/filemap.c: Always read one page in
 do_sync_mmap_readahead()

On Tue, Jun 07, 2022 at 06:37:14PM +1000, Alistair Popple wrote:
> ---
>  include/linux/pagemap.h |  7 +++---
>  mm/filemap.c            | 47 +++++++++++++----------------------------
>  2 files changed, 18 insertions(+), 36 deletions(-)

Love the diffstat ;-)

> @@ -3011,14 +3001,8 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  	}
>  #endif
>  
> -	/* If we don't want any read-ahead, don't bother */
> -	if (vmf->vma->vm_flags & VM_RAND_READ)
> -		return fpin;
> -	if (!ra->ra_pages)
> -		return fpin;
> -
> +	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
>  	if (vmf->vma->vm_flags & VM_SEQ_READ) {
> -		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
>  		page_cache_sync_ra(&ractl, ra->ra_pages);
>  		return fpin;
>  	}

Good.  Could even pull the maybe_unlock_mmap_for_io() all the way to the
top of the function and remove it from the VM_HUGEPAGE case?
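
Something like this, as a rough sketch from memory (not compile-tested,
and assuming nothing in the VM_HUGEPAGE path actually needs the mmap_lock
held across page_cache_ra_order()):

	static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
	{
		...
		struct file *fpin = NULL;

		/* Drop mmap_lock up front; covers every readahead path below */
		fpin = maybe_unlock_mmap_for_io(vmf, fpin);

	#ifdef CONFIG_TRANSPARENT_HUGEPAGE
		/* Use the readahead code, even if readahead is disabled */
		if (vmf->vma->vm_flags & VM_HUGEPAGE) {
			ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
			ra->size = HPAGE_PMD_NR;
			ra->async_size = HPAGE_PMD_NR;
			page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
			return fpin;
		}
	#endif
		...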

> @@ -3029,19 +3013,20 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  		WRITE_ONCE(ra->mmap_miss, ++mmap_miss);
>  
>  	/*
> -	 * Do we miss much more than hit in this file? If so,
> -	 * stop bothering with read-ahead. It will only hurt.
> +	 * mmap read-around. If we don't want any read-ahead or if we miss more
> +	 * than we hit don't bother with read-ahead and just read a single page.
>  	 */
> -	if (mmap_miss > MMAP_LOTSAMISS)
> -		return fpin;
> +	if ((vmf->vma->vm_flags & VM_RAND_READ) ||
> +	    !ra->ra_pages || mmap_miss > MMAP_LOTSAMISS) {
> +		ra->start = vmf->pgoff;
> +		ra->size = 1;
> +		ra->async_size = 0;
> +	} else {

I'd put the:
		/* mmap read-around */
here

> +		ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
> +		ra->size = ra->ra_pages;
> +		ra->async_size = ra->ra_pages / 4;
> +	}
>  
> -	/*
> -	 * mmap read-around
> -	 */
> -	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
> -	ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
> -	ra->size = ra->ra_pages;
> -	ra->async_size = ra->ra_pages / 4;
>  	ractl._index = ra->start;
>  	page_cache_ra_order(&ractl, ra, 0);
>  	return fpin;
> @@ -3145,9 +3130,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>  			filemap_invalidate_lock_shared(mapping);
>  			mapping_locked = true;
>  		}
> -		folio = __filemap_get_folio(mapping, index,
> -					  FGP_CREAT|FGP_FOR_MMAP,
> -					  vmf->gfp_mask);
> +		folio = filemap_get_folio(mapping, index);
>  		if (!folio) {
>  			if (fpin)
>  				goto out_retry;

I think we also should remove the filemap_invalidate_lock_shared()
here, no?

We also need to handle the !folio case differently.  Before, if it was
gone, that was definitely an OOM.  Now if it's gone it might have been
truncated, or removed due to memory pressure, or it might be an OOM
situation where readahead didn't manage to create the folio.
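
One way to keep that distinction (just a sketch, not something the patch
does) would be to fall back to the old FGP_CREAT lookup when the folio is
missing, so that only a failed explicit allocation is reported as OOM:

	folio = filemap_get_folio(mapping, index);
	if (!folio) {
		if (fpin)
			goto out_retry;
		/*
		 * Not necessarily OOM: the folio may have been truncated or
		 * reclaimed after readahead.  Try an explicit allocation
		 * before giving up.
		 */
		folio = __filemap_get_folio(mapping, index,
					    FGP_CREAT|FGP_FOR_MMAP,
					    vmf->gfp_mask);
		if (!folio) {
			filemap_invalidate_unlock_shared(mapping);
			return VM_FAULT_OOM;
		}
	}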
