lists.openwall.net
Open Source and information security mailing list archives
Date: Sun, 3 Apr 2022 13:39:53 +0800
From: Muchun Song <songmuchun@...edance.com>
To: dan.j.williams@...el.com, willy@...radead.org, jack@...e.cz, viro@...iv.linux.org.uk, akpm@...ux-foundation.org, apopple@...dia.com, shy828301@...il.com, rcampbell@...dia.com, hughd@...gle.com, xiyuyang19@...an.edu.cn, kirill.shutemov@...ux.intel.com, zwisler@...nel.org, hch@...radead.org
Cc: linux-fsdevel@...r.kernel.org, nvdimm@...ts.linux.dev, linux-kernel@...r.kernel.org, linux-mm@...ck.org, duanxiongchun@...edance.com, smuchun@...il.com, Muchun Song <songmuchun@...edance.com>, Christoph Hellwig <hch@....de>
Subject: [PATCH v7 2/6] dax: fix cache flush on PMD-mapped pages

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache, so
for a THP it covers only the head page, not the full set of subpages.
Replace it with flush_cache_range() to fix this issue.

This is just a documentation issue with respect to properly documenting the
expected usage of cache flushing before modifying the PMD. In practice it is
not a problem, because DAX is not available on architectures with virtually
indexed caches, per:

  commit d92576f1167c ("dax: does not work correctly with virtual aliasing caches")

Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song <songmuchun@...edance.com>
Reviewed-by: Dan Williams <dan.j.williams@...el.com>
Reviewed-by: Christoph Hellwig <hch@....de>
---
 fs/dax.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 67a08a32fccb..a372304c9695 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -845,7 +845,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
 				goto unlock_pmd;

-			flush_cache_page(vma, address, pfn);
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			pmd = pmdp_invalidate(vma, address, pmdp);
 			pmd = pmd_wrprotect(pmd);
 			pmd = pmd_mkclean(pmd);
--
2.11.0