Message-ID: <c11a4b2e-6895-43b7-9ff6-620793bf8551@arm.com>
Date: Mon, 23 Jun 2025 14:26:29 +0530
From: Dev Jain <dev.jain@....com>
To: Alexander Gordeev <agordeev@...ux.ibm.com>,
 Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: move mask update out of the atomic context


On 23/06/25 1:34 pm, Alexander Gordeev wrote:
> There is no need to modify the page table synchronization mask
> while apply_to_pte_range() holds the user page table spinlock.

I don't follow: what is the problem with the current code?
Are you just concerned about how long the lock is held?

>
> Signed-off-by: Alexander Gordeev <agordeev@...ux.ibm.com>
> ---
>   mm/memory.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 8eba595056fe..6849ab4e44bf 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3035,12 +3035,13 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
>   			}
>   		} while (pte++, addr += PAGE_SIZE, addr != end);
>   	}
> -	*mask |= PGTBL_PTE_MODIFIED;
>   
>   	arch_leave_lazy_mmu_mode();
>   
>   	if (mm != &init_mm)
>   		pte_unmap_unlock(mapped_pte, ptl);
> +	*mask |= PGTBL_PTE_MODIFIED;
> +
>   	return err;
>   }
>   
