Message-ID: <CAHbLzkqzu+20TJc8RGDDCyDaFmG+Q7xjkVgpJF5-uPqubMN2HA@mail.gmail.com>
Date: Fri, 21 Jan 2022 10:05:51 -0800
From: Yang Shi <shy828301@...il.com>
To: Muchun Song <songmuchun@...edance.com>
Cc: Dan Williams <dan.j.williams@...el.com>,
Matthew Wilcox <willy@...radead.org>, Jan Kara <jack@...e.cz>,
Alexander Viro <viro@...iv.linux.org.uk>,
Andrew Morton <akpm@...ux-foundation.org>,
Alistair Popple <apopple@...dia.com>,
Ralph Campbell <rcampbell@...dia.com>,
Hugh Dickins <hughd@...gle.com>, xiyuyang19@...an.edu.cn,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
zwisler@...nel.org,
Linux FS-devel Mailing List <linux-fsdevel@...r.kernel.org>,
nvdimm@...ts.linux.dev,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>
Subject: Re: [PATCH 1/5] mm: rmap: fix cache flush on THP pages
On Thu, Jan 20, 2022 at 11:56 PM Muchun Song <songmuchun@...edance.com> wrote:
>
> The flush_cache_page() only removes a PAGE_SIZE sized range from the cache.
> However, for a THP it covers only the head page, not the full huge page.
> Replace it with flush_cache_range() to fix this issue. So far no problems
> have been observed from this, perhaps because few architectures have
> virtually indexed caches.
Yeah, actually flush_cache_page()/flush_cache_range() are no-ops on
most architectures that support THP, i.e. x86, aarch64, powerpc, etc.
And currently only tmpfs and read-only files support PMD-mapped THP,
and neither of them does writeback. It seems DAX doesn't do writeback
either, since it uses __set_page_dirty_no_writeback() for
set_page_dirty. So IIUC this code path should never be reached.
But anyway your fix looks correct to me. Reviewed-by: Yang Shi
<shy828301@...il.com>
>
> Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
> Signed-off-by: Muchun Song <songmuchun@...edance.com>
> ---
> mm/rmap.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index b0fd9dc19eba..65670cb805d6 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -974,7 +974,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
> if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
> continue;
>
> - flush_cache_page(vma, address, page_to_pfn(page));
> + flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
> entry = pmdp_invalidate(vma, address, pmd);
> entry = pmd_wrprotect(entry);
> entry = pmd_mkclean(entry);
> --
> 2.11.0
>