Date:   Mon, 20 Jun 2022 19:06:38 +1000
From:   Alistair Popple <apopple@...dia.com>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/filemap.c: Always read one page in
 do_sync_mmap_readahead()


Alistair Popple <apopple@...dia.com> writes:

> Matthew Wilcox <willy@...radead.org> writes:
>
>> On Tue, Jun 07, 2022 at 06:37:14PM +1000, Alistair Popple wrote:
>>> ---
>>>  include/linux/pagemap.h |  7 +++---
>>>  mm/filemap.c            | 47 +++++++++++++----------------------------
>>>  2 files changed, 18 insertions(+), 36 deletions(-)
>>
>> Love the diffstat ;-)
>>
>>> @@ -3011,14 +3001,8 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>>>  	}
>>>  #endif
>>>
>>> -	/* If we don't want any read-ahead, don't bother */
>>> -	if (vmf->vma->vm_flags & VM_RAND_READ)
>>> -		return fpin;
>>> -	if (!ra->ra_pages)
>>> -		return fpin;
>>> -
>>> +	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
>>>  	if (vmf->vma->vm_flags & VM_SEQ_READ) {
>>> -		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
>>>  		page_cache_sync_ra(&ractl, ra->ra_pages);
>>>  		return fpin;
>>>  	}
>>
>> Good.  Could even pull the maybe_unlock_mmap_for_io() all the way to the
>> top of the function and remove it from the VM_HUGEPAGE case?
>
> Good idea. Also, while I'm here, is there a reason we don't update
> ra->start or mmap_miss for the VM_HUGEPAGE case?
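
A rough sketch of that reordering, just to make sure we're thinking of
the same thing (untested, everything else in do_sync_mmap_readahead()
as in the patch above):

static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
{
	struct file *fpin = NULL;
	/* ... other locals unchanged ... */

	/* Every path below may start I/O, so drop mmap_lock up front */
	fpin = maybe_unlock_mmap_for_io(vmf, fpin);

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	if (vmf->vma->vm_flags & VM_HUGEPAGE) {
		/* No separate maybe_unlock_mmap_for_io() needed here */
		/* ... existing VM_HUGEPAGE readahead ... */
		return fpin;
	}
#endif

	/* ... VM_SEQ_READ and read-around handling as in the patch ... */
}
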
>
>>> @@ -3029,19 +3013,20 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>>>  		WRITE_ONCE(ra->mmap_miss, ++mmap_miss);
>>>
>>>  	/*
>>> -	 * Do we miss much more than hit in this file? If so,
>>> -	 * stop bothering with read-ahead. It will only hurt.
>>> +	 * mmap read-around. If we don't want any read-ahead or if we miss more
>>> +	 * than we hit don't bother with read-ahead and just read a single page.
>>>  	 */
>>> -	if (mmap_miss > MMAP_LOTSAMISS)
>>> -		return fpin;
>>> +	if ((vmf->vma->vm_flags & VM_RAND_READ) ||
>>> +	    !ra->ra_pages || mmap_miss > MMAP_LOTSAMISS) {
>>> +		ra->start = vmf->pgoff;
>>> +		ra->size = 1;
>>> +		ra->async_size = 0;
>>> +	} else {
>>
>> I'd put the:
>> 		/* mmap read-around */
>> here
>>
>>> +		ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
>>> +		ra->size = ra->ra_pages;
>>> +		ra->async_size = ra->ra_pages / 4;
>>> +	}
>>>
>>> -	/*
>>> -	 * mmap read-around
>>> -	 */
>>> -	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
>>> -	ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
>>> -	ra->size = ra->ra_pages;
>>> -	ra->async_size = ra->ra_pages / 4;
>>>  	ractl._index = ra->start;
>>>  	page_cache_ra_order(&ractl, ra, 0);
>>>  	return fpin;
>>> @@ -3145,9 +3130,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>  			filemap_invalidate_lock_shared(mapping);
>>>  			mapping_locked = true;
>>>  		}
>>> -		folio = __filemap_get_folio(mapping, index,
>>> -					  FGP_CREAT|FGP_FOR_MMAP,
>>> -					  vmf->gfp_mask);
>>> +		folio = filemap_get_folio(mapping, index);
>>>  		if (!folio) {
>>>  			if (fpin)
>>>  				goto out_retry;
>>
>> I think we also should remove the filemap_invalidate_lock_shared()
>> here, no?
>
> Right, afaik filemap_invalidate_lock_shared() is needed when
> instantiating pages in the page cache during fault, which this patch
> does via page_cache_ra_order() in do_sync_mmap_readahead(), so I think
> you're right about removing it for filemap_get_folio().
>
> However, do_sync_mmap_readahead() is the way normal (i.e. !VM_RAND_READ)
> pages would get instantiated today. So shouldn't
> filemap_invalidate_lock_shared() be called before
> do_sync_mmap_readahead() anyway? Or am I missing something?

Never mind. I missed that this is normally done further down the call
stack (in page_cache_ra_unbounded()). That makes this clean-up somewhat
annoying though, because of this case:

	if (unlikely(!folio_test_uptodate(folio))) {
		/*
		 * The page was in cache and uptodate and now it is not.
		 * Strange but possible since we didn't hold the page lock all
		 * the time. Let's drop everything get the invalidate lock and
		 * try again.
		 */
		if (!mapping_locked) {

To handle it in this change we need to be able to call
do_sync_mmap_readahead() whilst holding the invalidate_lock, to ensure
we can successfully get an uptodate folio without it being removed by
e.g. hole punching while the folio lock is dropped.

I am experimenting with pulling all the filemap_invalidate_lock_shared()
calls further up the stack, but that creates its own problems.
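
Concretely, pulling it up would look something like the below in
filemap_fault(), assuming the filemap_invalidate_lock_shared() call
moves out of page_cache_ra_unbounded() and into its callers
(illustrative sketch only):

		/* No folio in the page cache at all */
		count_vm_event(PGMAJFAULT);
		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
		ret = VM_FAULT_MAJOR;
		/*
		 * Hold the invalidate lock across the readahead so the folio
		 * it instantiates can't be truncated or hole punched before
		 * we look it up again below.
		 */
		filemap_invalidate_lock_shared(mapping);
		mapping_locked = true;
		fpin = do_sync_mmap_readahead(vmf);
retry_find:
		folio = filemap_get_folio(mapping, index);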

>> We also need to handle the !folio case differently.  Before, if it was
>> gone, that was definitely an OOM.  Now if it's gone it might have been
>> truncated, or removed due to memory pressure, or it might be an OOM
>> situation where readahead didn't manage to create the folio.
>
> Good point, thanks for catching that.
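
One option for v2 might be to keep the explicit allocation as a
fallback, so a folio that went missing again is only reported as OOM
when that allocation really fails (rough sketch, names as in the
current filemap_fault()):

		folio = filemap_get_folio(mapping, index);
		if (!folio) {
			if (fpin)
				goto out_retry;	/* drop locks and retry the fault */
			/*
			 * The folio may be gone due to truncation or memory
			 * pressure rather than OOM, so fall back to an
			 * explicit allocation and only report VM_FAULT_OOM
			 * when that fails too.
			 */
			folio = __filemap_get_folio(mapping, index,
						  FGP_CREAT|FGP_FOR_MMAP,
						  vmf->gfp_mask);
			if (!folio) {
				filemap_invalidate_unlock_shared(mapping);
				return VM_FAULT_OOM;
			}
		}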
