Message-ID: <e51ee295-57c1-47e0-88d6-5ca0d12c2267@redhat.com>
Date: Fri, 10 Jan 2025 17:35:25 +0100
From: David Hildenbrand <david@...hat.com>
To: Hillf Danton <hdanton@...a.com>,
 syzbot <syzbot+c0673e1f1f054fac28c2@...kaller.appspotmail.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
 syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)

On 31.12.24 09:41, Hillf Danton wrote:
> On Fri, 27 Dec 2024 20:56:21 -0800
>> syzbot has found a reproducer for the following issue on:
>>
>> HEAD commit:    8155b4ef3466 Add linux-next specific files for 20241220
>> git tree:       linux-next
>> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1652fadf980000
> 
> #syz test
> 
> --- x/mm/filemap.c
> +++ y/mm/filemap.c
> @@ -3636,6 +3636,10 @@ static vm_fault_t filemap_map_folio_rang
>   		continue;
>   skip:
>   		if (count) {
> +			for (unsigned int i = 0; i < count; i++) {
> +				if (page_folio(page + i) != folio)
> +					goto out;
> +			}

IIRC, count <= nr_pages. Wouldn't that mean that we somehow pass in 
nr_pages that already exceeds the given folio+start?

When I last looked at this, I was not able to spot the error in the 
caller :(
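
For illustration, here is a quick user-space sketch, not kernel code, of the
invariant that warning is about: every page in [page, page + count) handed to
set_pte_range()/the rmap code must belong to the same folio. The toy
struct/function names below (toy_page, toy_folio, toy_page_folio(),
range_within_folio()) are made up for the example.

#include <stdbool.h>
#include <stdio.h>

struct toy_folio { unsigned long first_pfn; unsigned int nr_pages; };
struct toy_page  { unsigned long pfn; const struct toy_folio *folio; };

/* Toy stand-in for page_folio(): which folio does this page belong to? */
static const struct toy_folio *toy_page_folio(const struct toy_page *page)
{
	return page->folio;
}

/*
 * Toy stand-in for the range check: the first and the last page of the
 * range must map back to the same folio, otherwise the range crossed a
 * folio boundary and the sanity check would warn.
 */
static bool range_within_folio(const struct toy_folio *folio,
			       const struct toy_page *page, unsigned int count)
{
	return count > 0 &&
	       toy_page_folio(page) == folio &&
	       toy_page_folio(page + count - 1) == folio;
}

int main(void)
{
	struct toy_folio f = { .first_pfn = 100, .nr_pages = 4 };
	struct toy_page pages[6];

	for (unsigned int i = 0; i < 6; i++) {
		pages[i].pfn = 100 + i;
		pages[i].folio = (i < 4) ? &f : NULL;	/* pages 4,5 are foreign */
	}

	printf("count=4 stays in folio: %d\n", range_within_folio(&f, pages, 4));
	printf("count=6 stays in folio: %d\n", range_within_folio(&f, pages, 6));
	return 0;
}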

>   			set_pte_range(vmf, folio, page, count, addr);
>   			*rss += count;
>   			folio_ref_add(folio, count);
> @@ -3658,6 +3662,7 @@ skip:
>   			ret = VM_FAULT_NOPAGE;
>   	}
>   
> +out:
>   	vmf->pte = old_ptep;
>   
>   	return ret;
> @@ -3702,7 +3707,7 @@ vm_fault_t filemap_map_pages(struct vm_f
>   	struct file *file = vma->vm_file;
>   	struct address_space *mapping = file->f_mapping;
>   	pgoff_t file_end, last_pgoff = start_pgoff;
> -	unsigned long addr;
> +	unsigned long addr, pmd_end;
>   	XA_STATE(xas, &mapping->i_pages, start_pgoff);
>   	struct folio *folio;
>   	vm_fault_t ret = 0;
> @@ -3731,6 +3736,12 @@ vm_fault_t filemap_map_pages(struct vm_f
>   	if (end_pgoff > file_end)
>   		end_pgoff = file_end;
>   
> +	/* make vmf->pte[x] valid */
> +	pmd_end = ALIGN(addr, PMD_SIZE);
> +	pmd_end = (pmd_end - addr) >> PAGE_SHIFT;
> +	if (end_pgoff - start_pgoff > pmd_end)
> +		end_pgoff = start_pgoff + pmd_end;
> +

do_fault_around() comments "This way it's easier to guarantee that we 
don't cross page table boundaries."

It does some magic with PTRS_PER_PTE.
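
For reference, a rough user-space sketch of that kind of page-table-boundary
clamp, not the actual mm/memory.c code; the 4K page size and 512 PTEs per
table (the TOY_* constants) are assumptions picked for the example.

#include <stdio.h>

#define TOY_PAGE_SHIFT		12
#define TOY_PAGE_SIZE		(1UL << TOY_PAGE_SHIFT)
#define TOY_PTRS_PER_PTE	512UL
#define TOY_PMD_SIZE		(TOY_PTRS_PER_PTE * TOY_PAGE_SIZE)

/* PTE index of an address within its page table (in the spirit of pte_index()). */
static unsigned long toy_pte_index(unsigned long addr)
{
	return (addr >> TOY_PAGE_SHIFT) & (TOY_PTRS_PER_PTE - 1);
}

/*
 * Clamp a window of nr_pages starting at addr so it never crosses the
 * page-table (PMD) boundary; the same effect the PMD_SIZE arithmetic in
 * the diff above is after.
 */
static unsigned long clamp_to_page_table(unsigned long addr,
					 unsigned long nr_pages)
{
	unsigned long room = TOY_PTRS_PER_PTE - toy_pte_index(addr);

	return nr_pages < room ? nr_pages : room;
}

int main(void)
{
	/* addr sits 10 PTEs before the end of its page table. */
	unsigned long addr = 5 * TOY_PMD_SIZE - 10 * TOY_PAGE_SIZE;

	printf("want 64 pages -> clamped to %lu\n",
	       clamp_to_page_table(addr, 64));	/* prints 10 */
	printf("want  4 pages -> clamped to %lu\n",
	       clamp_to_page_table(addr, 4));	/* prints 4 */
	return 0;
}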

Your diff here seems to indicate that this is not the case?

But it's rather surprising that we see these issues pop up just now in 
-next.

-- 
Cheers,

David / dhildenb

