Date: Tue, 21 May 2024 11:18:41 +0200
From: David Hildenbrand <david@...hat.com>
To: Vincent Donnefort <vdonnefort@...gle.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
 Andrew Morton <akpm@...ux-foundation.org>,
 Dan Williams <dan.j.williams@...el.com>, rostedt@...dmis.org
Subject: Re: [PATCH v1 1/2] mm/memory: cleanly support zeropage in
 vm_insert_page*(), vm_map_pages*() and vmf_insert_mixed()

On 21.05.24 11:06, Vincent Donnefort wrote:
> On Tue, May 21, 2024 at 10:25:43AM +0200, David Hildenbrand wrote:
>> On 17.05.24 17:07, Vincent Donnefort wrote:
>>> Hi David,
>>>
>>> [...]
>>>
>>>> -static int validate_page_before_insert(struct page *page)
>>>> +static bool vm_mixed_zeropage_allowed(struct vm_area_struct *vma)
>>>> +{
>>>> +	VM_WARN_ON_ONCE(vma->vm_flags & VM_PFNMAP);
>>>> +	/*
>>>> +	 * Whoever wants to forbid the zeropage after some zeropages
>>>> +	 * might already have been mapped has to scan the page tables and
>>>> +	 * bail out on any zeropages. Zeropages in COW mappings can
>>>> +	 * be unshared using FAULT_FLAG_UNSHARE faults.
>>>> +	 */
>>>> +	if (mm_forbids_zeropage(vma->vm_mm))
>>>> +		return false;
>>>> +	/* zeropages in COW mappings are common and unproblematic. */
>>>> +	if (is_cow_mapping(vma->vm_flags))
>>>> +		return true;
>>>> +	/* Mappings that do not allow for writable PTEs are unproblematic. */
>>>> +	if (!(vma->vm_flags & (VM_WRITE | VM_MAYWRITE)))
>>>> +		return false;
>>>
>>> Shouldn't we return true here?
>>
>> Indeed, thanks! I wish we already had a user in the tree that could
>> exercise that code path.
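
For reference, the fixed check simply flips that return value -- a sketch
against the quoted hunk, not necessarily the final wording:

	/* Mappings that do not allow for writable PTEs are unproblematic. */
	if (!(vma->vm_flags & (VM_WRITE | VM_MAYWRITE)))
		return true;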
> 
> I have a patch ready to use this path from the memory map tracing! I can either
> send it once this one is picked up, or you can add it to your series?

Whatever works for you! To debug, it would be good if you could send me 
the patch and simple instructions on how to test it (do we have a 
selftest as well?).

> 
>>
>> [...]
>>
>>>> @@ -2043,7 +2085,7 @@ static int insert_page_in_batch_locked(struct vm_area_struct *vma, pte_t *pte,
>>>>    	if (!page_count(page))
>>>>    		return -EINVAL;
>>>
>>> This test here prevents inserting the zero-page.
>>
>> You mean the existing page_count() check, or the (wrong) vma->vm_flags check
>> in vm_mixed_zeropage_allowed()?
> 
> I meant the page_count() check here. As a quick test, I removed that check (and
> also fixed the vm_flags above) and the zero-page was properly mapped!

That's weird and might indicate another issue.
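
To narrow it down, a throwaway debug print just before that check might
help -- only a sketch (using is_zero_pfn() to spot the shared zeropage),
not something to keep:

	if (is_zero_pfn(page_to_pfn(page)))
		pr_info("zeropage refcount at insert: %d\n", page_count(page));
	if (!page_count(page))
		return -EINVAL;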

The refcount of the shared zeropage should be initialized to 1, just
like for any other reserved page
(mm/mm_init.c:__init_single_page()->init_page_count()).
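
For reference, init_page_count() boils down to roughly this (quoting the
helper from memory, so double-check include/linux/mm.h):

	static inline void init_page_count(struct page *page)
	{
		set_page_count(page, 1);
	}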

Hm ...

-- 
Cheers,

David / dhildenb

