Message-ID: <296f9c4b-8c93-4464-9b75-e06770a4bd31@arm.com>
Date: Wed, 3 Dec 2025 12:37:48 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: Jianpeng Chang <jianpeng.chang.cn@...driver.com>,
catalin.marinas@....com, will@...nel.org, ardb@...nel.org,
ying.huang@...ux.alibaba.com
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [v2 PATCH] arm64: mm: Fix kexec failure after pte_mkwrite_novma()
change
On 02/12/25 1:25 PM, Jianpeng Chang wrote:
>
> On 12/2/25 2:57 PM, Anshuman Khandual wrote:
>>
>> On 02/12/25 7:57 AM, Jianpeng Chang wrote:
>>> Commit 143937ca51cc ("arm64, mm: avoid always making PTE dirty in
>>> pte_mkwrite()") modified pte_mkwrite_novma() to only clear PTE_RDONLY
>>> when the page is already dirty (PTE_DIRTY is set). While this optimization
>>> prevents unnecessary dirty page marking in normal memory management paths,
>>> it breaks kexec on some platforms like NXP LS1043.
>>>
>>> The issue occurs in the kexec code path:
>>> 1. machine_kexec_post_load() calls trans_pgd_create_copy() to create a
>>> writable copy of the linear mapping
>>> 2. _copy_pte() calls pte_mkwrite_novma() to ensure all pages in the copy
>>> are writable for the new kernel image copying
>>> 3. With the new logic, clean pages (without PTE_DIRTY) remain read-only
>>> 4. When kexec tries to copy the new kernel image through the linear
>>> mapping, it fails on read-only pages, causing the system to hang
>>> after "Bye!"
>>>
>>> The same issue affects hibernation which uses the same trans_pgd code path.
>>>
>>> Fix this by explicitly clearing PTE_RDONLY in _copy_pte() for both
>> via pte_mkdirty()?
> Sorry about that.
>
> Fix this by marking pages dirty with pte_mkdirty() in _copy_pte(), which
> ensures pte_mkwrite_novma() clears PTE_RDONLY for both kexec and
> hibernation, making all pages in the temporary mapping writable
> regardless of their dirty state.
Fair enough.
>>
>>> kexec and hibernation, ensuring all pages in the temporary mapping are
>>> writable regardless of their dirty state. This preserves the original
>>> commit's optimization for normal memory management while fixing the
>>> kexec/hibernation regression.
>>>
>>> Fixes: 143937ca51cc ("arm64, mm: avoid always making PTE dirty in pte_mkwrite()")
>>> Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@...driver.com>
>>> ---
>>> v2:
>>> - Use pte_mkwrite_novma(pte_mkdirty(pte)) instead of manual bit manipulation
>>> - Updated comments to clarify pte_mkwrite_novma() alone cannot be used
>>> v1: https://lore.kernel.org/all/20251127034350.3600454-1-jianpeng.chang.cn@windriver.com/
>>>
>>> arch/arm64/mm/trans_pgd.c | 9 +++++++--
>>> 1 file changed, 7 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
>>> index 18543b603c77..08f5ee6643e1 100644
>>> --- a/arch/arm64/mm/trans_pgd.c
>>> +++ b/arch/arm64/mm/trans_pgd.c
>>> @@ -40,8 +40,13 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
>>> * Resume will overwrite areas that may be marked
>>> * read only (code, rodata). Clear the RDONLY bit from
>>> * the temporary mappings we use during restore.
>>> + *
>>> + * For kexec/hibernation, we need writable access to all
>>> + * pages in the linear mapping to copy the new kernel image.
>>> + * Mark pages dirty first to ensure pte_mkwrite_novma()
>>> + * clears PTE_RDONLY.
>>> */
>> /*
>> * For both kexec and hibernation, writable accesses are required
>> * for all pages in the linear map to copy over the new kernel image.
>> * Hence mark these pages dirty first via pte_mkdirty() to ensure
>> * pte_mkwrite_novma() subsequently clears PTE_RDONLY - providing
>> * required write access for the pages.
>> */
> I will change it.
>>
>>> - __set_pte(dst_ptep, pte_mkwrite_novma(pte));
>>> + __set_pte(dst_ptep, pte_mkwrite_novma(pte_mkdirty(pte)));
>>> } else if (!pte_none(pte)) {
>>> /*
>>> * debug_pagealloc will removed the PTE_VALID bit if
>>> @@ -57,7 +62,7 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
>>> */
>>> BUG_ON(!pfn_valid(pte_pfn(pte)));
>>>
>> The comments should be replicated here as well given the same special situation.
> I will change it.
>>
>>> - __set_pte(dst_ptep, pte_mkvalid(pte_mkwrite_novma(pte)));
>>> + __set_pte(dst_ptep, pte_mkvalid(pte_mkwrite_novma(pte_mkdirty(pte))));
>>> }
>>> }
>>>
>> static inline pte_t pte_mkwrite_novma(pte_t pte)
>> {
>> pte = set_pte_bit(pte, __pgprot(PTE_WRITE));
>> if (pte_sw_dirty(pte))
>> pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
>> return pte;
>> }
>>
>> static inline pte_t pte_mkdirty(pte_t pte)
>> {
>> pte = set_pte_bit(pte, __pgprot(PTE_DIRTY));
>>
>> if (pte_write(pte))
>> pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
>>
>> return pte;
>> }
>>
>> So if pte_write() is true, there will be a redundant PTE_RDONLY clearing, which is OK.
>> Should this be mentioned in the commit message?
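To make that concrete, here is a minimal user-space model of the two helpers quoted above. The bit positions and helper bodies are simplified stand-ins, not the real arm64 definitions, so treat it as a sketch of the redundancy rather than the actual kernel code:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in bit positions, made up for this sketch only. */
#define PTE_WRITE  (1UL << 0)
#define PTE_DIRTY  (1UL << 1)
#define PTE_RDONLY (1UL << 2)

typedef uint64_t pte_t;

/* Simplified pte_mkdirty(): set DIRTY, and clear RDONLY if already writable. */
static pte_t pte_mkdirty(pte_t pte)
{
	pte |= PTE_DIRTY;
	if (pte & PTE_WRITE)		/* pte_write() */
		pte &= ~PTE_RDONLY;	/* first RDONLY clear */
	return pte;
}

/* Simplified pte_mkwrite_novma(): set WRITE, and clear RDONLY if dirty. */
static pte_t pte_mkwrite_novma(pte_t pte)
{
	pte |= PTE_WRITE;
	if (pte & PTE_DIRTY)		/* pte_sw_dirty() */
		pte &= ~PTE_RDONLY;	/* second, redundant RDONLY clear */
	return pte;
}

int main(void)
{
	/* Writable but clean PTE: PTE_WRITE set, PTE_RDONLY still set. */
	pte_t pte = PTE_WRITE | PTE_RDONLY;

	pte = pte_mkwrite_novma(pte_mkdirty(pte));

	assert(!(pte & PTE_RDONLY));	/* writable either way */
	printf("final pte: %#llx\n", (unsigned long long)pte);
	return 0;
}

On this path both helpers end up performing the RDONLY clear - redundant, but harmless since _copy_pte() is not a hot path.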
> TBH, I don't have a better idea - any suggestions?
>
> Or, let's add some lines:
>
> Using pte_mkdirty() causes redundant bit operations, but this is
> acceptable since it's not a hot path.
Makes sense - better to call out potential redundancies.
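For completeness, the same kind of sketch (again with made-up bit values rather than the real arm64 ones) for the case that actually broke kexec - a clean, read-only linear-map entry such as rodata:

#include <assert.h>
#include <stdint.h>

/* Stand-in bit positions, made up for this sketch only. */
#define PTE_WRITE  (1UL << 0)
#define PTE_DIRTY  (1UL << 1)
#define PTE_RDONLY (1UL << 2)

typedef uint64_t pte_t;

static pte_t pte_mkdirty(pte_t pte)
{
	pte |= PTE_DIRTY;
	if (pte & PTE_WRITE)
		pte &= ~PTE_RDONLY;
	return pte;
}

static pte_t pte_mkwrite_novma(pte_t pte)
{
	pte |= PTE_WRITE;
	if (pte & PTE_DIRTY)
		pte &= ~PTE_RDONLY;
	return pte;
}

int main(void)
{
	/* Clean, read-only PTE, e.g. a rodata page in the linear map. */
	pte_t clean_ro = PTE_RDONLY;

	/* Old _copy_pte(): not dirty, so RDONLY survives and the copy
	 * through the temporary mapping fails. */
	assert(pte_mkwrite_novma(clean_ro) & PTE_RDONLY);

	/* Patched _copy_pte(): mark dirty first, so RDONLY is cleared. */
	assert(!(pte_mkwrite_novma(pte_mkdirty(clean_ro)) & PTE_RDONLY));
	return 0;
}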