Message-ID: <ba941c8f-381b-3db0-7ec6-ba1094759056@intel.com>
Date: Mon, 11 Sep 2023 15:24:34 +0800
From: Yin Fengwei <fengwei.yin@...el.com>
To: Matthew Wilcox <willy@...radead.org>,
syzbot <syzbot+55cc72f8cc3a549119df@...kaller.appspotmail.com>
CC: <akpm@...ux-foundation.org>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>, <syzkaller-bugs@...glegroups.com>
Subject: Re: [syzbot] [mm?] BUG: Bad page map (7)
Hi Matthew,
On 9/10/23 11:02, Matthew Wilcox wrote:
> On Sat, Sep 09, 2023 at 10:12:48AM -0700, syzbot wrote:
>> commit 617c28ecab22d98a3809370eb6cb50fa24b7bfe1
>> Author: Yin Fengwei <fengwei.yin@...el.com>
>> Date: Wed Aug 2 15:14:05 2023 +0000
>>
>> filemap: batch PTE mappings
>
> Hmm ... I don't know if this is the bug, but ...
I do think we should merge your patch here. LKP has already noticed some
performance regressions, and I suppose this patch can fix some of them.
I root-caused this "bad page map" issue in my local env. It's related to
ptes with protnone on x86_64. If a pte is not protnone, advancing it by
adding 1UL << PFN_PTE_SHIFT is correct. But if the pte is protnone, we
should subtract 1UL << PFN_PTE_SHIFT instead. I saw that pfn_pte() does
pfn ^= protnone_mask() and only then realized it.
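To illustrate, here is a minimal userspace sketch (not kernel code; the
constants and the explicit prot_none flag are simplifications of the real
x86 helpers, which derive the inversion from the pte bits themselves) of
why the addition goes wrong for protnone ptes:

#include <stdio.h>
#include <stdint.h>

#define PFN_PTE_SHIFT	12			/* pfn field starts at bit 12 */
#define PFN_MASK	(~0ULL << PFN_PTE_SHIFT)

/* models protnone_mask(): all-ones when the pte is protnone (L1TF) */
static uint64_t protnone_mask(int prot_none)
{
	return prot_none ? ~0ULL : 0;
}

/* models pfn_pte(): the pfn bits are stored inverted for protnone */
static uint64_t mk_pte(uint64_t pfn, int prot_none)
{
	return ((pfn << PFN_PTE_SHIFT) ^ protnone_mask(prot_none)) & PFN_MASK;
}

/* models pte_pfn(): undo the inversion before extracting the pfn */
static uint64_t pte_pfn(uint64_t pte, int prot_none)
{
	return ((pte ^ protnone_mask(prot_none)) & PFN_MASK) >> PFN_PTE_SHIFT;
}

int main(void)
{
	uint64_t pte = mk_pte(100, 1);		/* protnone pte for pfn 100 */

	pte += 1ULL << PFN_PTE_SHIFT;		/* "advance to the next page" */
	/* prints 99, not 101: incrementing the inverted bits
	 * decrements the real pfn, hence the need to subtract */
	printf("pfn after advance: %llu\n",
	       (unsigned long long)pte_pfn(pte, 1));
	return 0;
}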
The reproducer mmaps with PROT_NONE and then triggers SIGXFSZ to create a
core file. Writing the core file causes GUP with FOLL_FORCE, which creates
protnone ptes.
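For reference, a rough sketch of that flow (the actual syzkaller
reproducer differs; it assumes core dumps are enabled, and dumping
private file mappings may additionally require adjusting
/proc/self/coredump_filter):

#include <sys/mman.h>
#include <sys/resource.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("data", O_RDWR | O_CREAT, 0600);

	ftruncate(fd, 4 * 4096);
	/* PROT_NONE file mapping: faulting it in creates protnone ptes */
	mmap(NULL, 4 * 4096, PROT_NONE, MAP_PRIVATE, fd, 0);

	/* Lower RLIMIT_FSIZE so a write past it raises SIGXFSZ, whose
	 * default action is to dump core.  Writing the core file does
	 * GUP with FOLL_FORCE across the PROT_NONE vma. */
	struct rlimit rl = { .rlim_cur = 1, .rlim_max = 1 };
	setrlimit(RLIMIT_FSIZE, &rl);
	lseek(fd, 2 * 4096, SEEK_SET);
	write(fd, "x", 1);	/* position is past the limit -> SIGXFSZ */
	return 0;
}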
I submitted a request to syzbot to test the fix, which worked in my local
env. Thanks.
Regards
Yin, Fengwei
>
> #syz test
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 582f5317ff71..580d0b2b1a7c 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3506,7 +3506,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
> if (count) {
> set_pte_range(vmf, folio, page, count, addr);
> folio_ref_add(folio, count);
> - if (in_range(vmf->address, addr, count))
> + if (in_range(vmf->address, addr, count * PAGE_SIZE))
> ret = VM_FAULT_NOPAGE;
> }
>
> @@ -3520,7 +3520,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
> if (count) {
> set_pte_range(vmf, folio, page, count, addr);
> folio_ref_add(folio, count);
> - if (in_range(vmf->address, addr, count))
> + if (in_range(vmf->address, addr, count * PAGE_SIZE))
> ret = VM_FAULT_NOPAGE;
> }
>
>