Message-ID: <f5c7ed26-b034-4600-ba29-26761eb1eef5@arm.com>
Date: Mon, 23 Jun 2025 15:39:13 +0530
From: Dev Jain <dev.jain@....com>
To: Alexander Gordeev <agordeev@...ux.ibm.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: move mask update out of the atomic context
On 23/06/25 3:07 pm, Alexander Gordeev wrote:
> On Mon, Jun 23, 2025 at 02:26:29PM +0530, Dev Jain wrote:
>> On 23/06/25 1:34 pm, Alexander Gordeev wrote:
>>> There is no need to modify the page table synchronization mask
>>> while apply_to_pte_range() holds the user page table spinlock.
>> I don't follow; what is the problem with the current code?
>> Are you just concerned about how long the lock is held?
> Yes.
It hardly matters in practice, but it is still a correct change:
Reviewed-by: Dev Jain <dev.jain@....com>
>
>>> Signed-off-by: Alexander Gordeev <agordeev@...ux.ibm.com>
>>> ---
>>> mm/memory.c | 3 ++-
>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index 8eba595056fe..6849ab4e44bf 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -3035,12 +3035,13 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
>>> }
>>> } while (pte++, addr += PAGE_SIZE, addr != end);
>>> }
>>> - *mask |= PGTBL_PTE_MODIFIED;
>>> arch_leave_lazy_mmu_mode();
>>> if (mm != &init_mm)
>>> pte_unmap_unlock(mapped_pte, ptl);
>>> + *mask |= PGTBL_PTE_MODIFIED;
>>> +
>>> return err;
>>> }
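For what it's worth, the pattern generalizes: 'mask' points at a
caller-local accumulator (the pgtbl_mod_mask kept by the caller), so it
needs no locking at all, and setting the bit after the unlock shortens
the critical section without changing the result. Below is a minimal
userspace sketch of the same idea, with a pthread mutex standing in for
the page-table spinlock; all names (apply_to_table, shared_table) are
hypothetical, not kernel code:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_table[16];	/* the state the lock actually protects */

/*
 * Hypothetical analogue of apply_to_pte_range(): 'mask' points at a
 * caller-local accumulator, so it does not need the lock.  Setting
 * the bit after the unlock shortens the critical section without
 * changing the outcome.
 */
static int apply_to_table(unsigned int *mask)
{
	int err = 0;

	pthread_mutex_lock(&table_lock);
	for (int i = 0; i < 16; i++)
		shared_table[i]++;	/* work that needs the lock */
	pthread_mutex_unlock(&table_lock);

	*mask |= 1u;	/* bookkeeping, safe outside the lock */

	return err;
}

int main(void)
{
	unsigned int mask = 0;	/* caller-local, like pgtbl_mod_mask */

	if (!apply_to_table(&mask))
		printf("mask = %#x\n", mask);
	return 0;
}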