Message-ID: <Z4Fukra_N1cRxFYs@casper.infradead.org>
Date: Fri, 10 Jan 2025 19:01:38 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Yang Shi <yang@...amperecomputing.com>
Cc: Liu Shixin <liushixin2@...wei.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Chengming Zhou <chengming.zhou@...ux.dev>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
Nanyong Sun <sunnanyong@...wei.com>,
Muchun Song <muchun.song@...ux.dev>,
Qi Zheng <zhengqi.arch@...edance.com>,
Johannes Weiner <hannes@...xchg.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: khugepaged: fix call hpage_collapse_scan_file() for
anonymous vma
On Fri, Jan 10, 2025 at 10:04:42AM -0800, Yang Shi wrote:
> On 1/9/25 8:31 PM, Matthew Wilcox wrote:
> > On Thu, Jan 09, 2025 at 09:00:24AM -0800, Yang Shi wrote:
> > > Thanks for catching this. It sounds a little weird to have vm_file set for
> > > an anonymous VMA. I'm not sure why we should keep such a special case. It
> > > seems a shared mapping is treated as a shmem file mapping. So can we set
> > > vm_file to NULL when mmap'ing /dev/zero for a private mapping? Something like:
> > >
> > > diff --git a/drivers/char/mem.c b/drivers/char/mem.c
> > > index 169eed162a7f..fc332efc5c11 100644
> > > --- a/drivers/char/mem.c
> > > +++ b/drivers/char/mem.c
> > > @@ -527,6 +527,7 @@ static int mmap_zero(struct file *file, struct vm_area_struct *vma)
> > >  	if (vma->vm_flags & VM_SHARED)
> > >  		return shmem_zero_setup(vma);
> > >  	vma_set_anonymous(vma);
> > > +	vma->vm_file = NULL;
> > >  	return 0;
> > >  }
> > I'm wary this might cause other bugs somewhere. rc6 is a bit late to be
> > introducing such a subtle change.
>
> Thanks for the extra caution. Applying the proposed fix in the khugepaged
> code is fine with me too. We can try to kill the special case later.
>
> Looking at the code further, I think we should do more to make private
> /dev/zero mapping an anonymous mapping:
I'm still nervous about this. We map device inodes in a lot of places.