Date: Wed, 10 Nov 2021 18:54:25 +0800
From: Qi Zheng <zhengqi.arch@...edance.com>
To: akpm@...ux-foundation.org, tglx@...utronix.de,
	kirill.shutemov@...ux.intel.com, mika.penttila@...tfour.com,
	david@...hat.com, jgg@...dia.com
Cc: linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, songmuchun@...edance.com,
	zhouchengming@...edance.com, Qi Zheng <zhengqi.arch@...edance.com>
Subject: [PATCH v3 12/15] mm/pte_ref: update the pmd entry in move_normal_pmd()

The ->pmd member records the pmd entry that maps the user PTE page
table page. When the pmd entry changes, ->pmd needs to be updated
synchronously.

Signed-off-by: Qi Zheng <zhengqi.arch@...edance.com>
---
 mm/mremap.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/mremap.c b/mm/mremap.c
index 088a7a75cb4b..4661cdec79dc 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -278,6 +278,7 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 
 	VM_BUG_ON(!pmd_none(*new_pmd));
 	pmd_populate(mm, new_pmd, pmd_pgtable(pmd));
+	pte_update_pmd(pmd, new_pmd);
 	flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
 	if (new_ptl != old_ptl)
 		spin_unlock(new_ptl);
-- 
2.11.0