Message-ID: <6e9c649f-5fc9-4fcc-928c-c4f46a74ca66@redhat.com>
Date: Wed, 13 Nov 2024 12:43:29 +0100
From: David Hildenbrand <david@...hat.com>
To: Qi Zheng <zhengqi.arch@...edance.com>
Cc: jannh@...gle.com, hughd@...gle.com, willy@...radead.org, mgorman@...e.de,
 muchun.song@...ux.dev, vbabka@...nel.org, akpm@...ux-foundation.org,
 zokeefe@...gle.com, rientjes@...gle.com, peterx@...hat.com,
 catalin.marinas@....com, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
 x86@...nel.org
Subject: Re: [PATCH v2 3/7] mm: introduce do_zap_pte_range()

On 13.11.24 03:40, Qi Zheng wrote:
> 
> 
> On 2024/11/13 01:00, David Hildenbrand wrote:
>> On 31.10.24 09:13, Qi Zheng wrote:
>>> This commit introduces do_zap_pte_range() to actually zap the PTEs, which
>>> will help improve code readability and facilitate secondary checking of
>>> the processed PTEs in the future.
>>>
>>> No functional change.
>>>
>>> Signed-off-by: Qi Zheng <zhengqi.arch@...edance.com>
>>> ---
>>>    mm/memory.c | 45 ++++++++++++++++++++++++++-------------------
>>>    1 file changed, 26 insertions(+), 19 deletions(-)
>>>
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index bd9ebe0f4471f..c1150e62dd073 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -1657,6 +1657,27 @@ static inline int zap_nonpresent_ptes(struct mmu_gather *tlb,
>>>        return nr;
>>>    }
>>> +static inline int do_zap_pte_range(struct mmu_gather *tlb,
>>> +                   struct vm_area_struct *vma, pte_t *pte,
>>> +                   unsigned long addr, unsigned long end,
>>> +                   struct zap_details *details, int *rss,
>>> +                   bool *force_flush, bool *force_break)
>>> +{
>>> +    pte_t ptent = ptep_get(pte);
>>> +    int max_nr = (end - addr) / PAGE_SIZE;
>>> +
>>> +    if (pte_none(ptent))
>>> +        return 1;
>>
>> Maybe we should just skip all applicable pte_none() here directly.
> 
> Do you mean we should keep the pte_none() case in zap_pte_range()? Like
> below:
> 

No, rather an add-on patch that will simply skip over all
consecutive pte_none entries, like:

if (pte_none(ptent)) {
	int nr;

	/* Scan forward over the whole run of consecutive pte_none entries. */
	for (nr = 1; nr < max_nr; nr++) {
		ptent = ptep_get(pte + nr);
		if (!pte_none(ptent))
			break;
	}

	max_nr -= nr;
	if (!max_nr)
		return nr;
	addr += nr * PAGE_SIZE;
	pte += nr;
}

Assuming that it's likely more common to have larger pte_none() holes
than single ones, this optimizes out the need_resched()+force_break
checks and the incremental pte/addr increments, etc.
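
Purely as a stand-alone illustration (userspace toy, nothing from the
series): the same skip-a-run-of-empty-slots pattern, with an int array
standing in for the PTE table and 0 standing in for pte_none(), so the
caller pays the per-iteration bookkeeping once per run instead of once
per entry:

#include <stdio.h>

/* Return how many entries the caller should consume starting at index i. */
static int consume_entry_range(const int *entries, int i, int max_nr)
{
	int nr;

	if (entries[i] != 0)
		return 1;	/* "present" entry, handled individually */

	/* Skip the whole run of consecutive empty entries in one go. */
	for (nr = 1; nr < max_nr; nr++)
		if (entries[i + nr] != 0)
			break;
	return nr;
}

int main(void)
{
	int entries[] = { 0, 0, 0, 7, 0, 0, 5, 0 };
	int n = (int)(sizeof(entries) / sizeof(entries[0]));
	int i = 0;

	while (i < n) {
		int nr = consume_entry_range(entries, i, n - i);

		printf("index %d: consumed %d entries\n", i, nr);
		i += nr;	/* one index/addr advance per batch */
	}
	return 0;
}

Five loop iterations cover the eight entries above; without the
batching it would be eight, each paying the full per-entry overhead.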

-- 
Cheers,

David / dhildenb

