Date:   Tue, 29 Mar 2022 21:48:48 +0800
From:   Muchun Song <songmuchun@...edance.com>
To:     dan.j.williams@...el.com, willy@...radead.org, jack@...e.cz,
        viro@...iv.linux.org.uk, akpm@...ux-foundation.org,
        apopple@...dia.com, shy828301@...il.com, rcampbell@...dia.com,
        hughd@...gle.com, xiyuyang19@...an.edu.cn,
        kirill.shutemov@...ux.intel.com, zwisler@...nel.org,
        hch@...radead.org
Cc:     linux-fsdevel@...r.kernel.org, nvdimm@...ts.linux.dev,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        duanxiongchun@...edance.com, smuchun@...il.com,
        Muchun Song <songmuchun@...edance.com>,
        Christoph Hellwig <hch@....de>
Subject: [PATCH v6 1/6] mm: rmap: fix cache flush on THP pages

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache.
However, for a THP it covers only the head page, not the full huge page.
Replace it with flush_cache_range() to fix this issue. So far no problems
have been observed from this, probably because few architectures have
virtually indexed caches.
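
To make the ranges concrete, here is a minimal userspace sketch (not
kernel code; the constants and the example address are assumptions,
using the common x86-64 values of a 4 KiB base page and a 2 MiB
PMD-mapped THP) showing how much of the huge page each call covers:

#include <stdio.h>

/* Illustrative constants; the real values are architecture-specific. */
#define PAGE_SIZE      4096UL             /* base page size            */
#define HPAGE_PMD_SIZE (512 * PAGE_SIZE)  /* 2 MiB PMD-mapped THP      */

int main(void)
{
	unsigned long address = 0x200000UL;  /* hypothetical THP-aligned VA */

	/* Before the fix: only one base page of the THP was flushed. */
	printf("flush_cache_page:  [%#lx, %#lx)\n",
	       address, address + PAGE_SIZE);

	/* After the fix: the whole PMD-mapped range is flushed. */
	printf("flush_cache_range: [%#lx, %#lx)\n",
	       address, address + HPAGE_PMD_SIZE);

	return 0;
}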

Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song <songmuchun@...edance.com>
Reviewed-by: Yang Shi <shy828301@...il.com>
Reviewed-by: Dan Williams <dan.j.williams@...el.com>
Reviewed-by: Christoph Hellwig <hch@....de>
---
 mm/rmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index fc46a3d7b704..723682ddb9e8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -970,7 +970,8 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
 				continue;
 
-			flush_cache_page(vma, address, folio_pfn(folio));
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			entry = pmdp_invalidate(vma, address, pmd);
 			entry = pmd_wrprotect(entry);
 			entry = pmd_mkclean(entry);
-- 
2.11.0
