Message-ID: <20200915173948.GK5449@casper.infradead.org>
Date: Tue, 15 Sep 2020 18:39:48 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Muchun Song <songmuchun@...edance.com>
Cc: Jonathan Corbet <corbet@....net>,
Mike Kravetz <mike.kravetz@...cle.com>,
Thomas Gleixner <tglx@...utronix.de>, mingo@...hat.com,
bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org,
Peter Zijlstra <peterz@...radead.org>, viro@...iv.linux.org.uk,
Andrew Morton <akpm@...ux-foundation.org>, paulmck@...nel.org,
mchehab+huawei@...nel.org, pawan.kumar.gupta@...ux.intel.com,
Randy Dunlap <rdunlap@...radead.org>, oneukum@...e.com,
anshuman.khandual@....com, jroedel@...e.de, almasrymina@...gle.com,
David Rientjes <rientjes@...gle.com>,
linux-doc@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
linux-fsdevel@...r.kernel.org
Subject: Re: [External] Re: [RFC PATCH 00/24] mm/hugetlb: Free some vmemmap
pages of hugetlb page
On Wed, Sep 16, 2020 at 01:32:46AM +0800, Muchun Song wrote:
> On Tue, Sep 15, 2020 at 11:42 PM Matthew Wilcox <willy@...radead.org> wrote:
> >
> > On Tue, Sep 15, 2020 at 11:28:01PM +0800, Muchun Song wrote:
> > > On Tue, Sep 15, 2020 at 10:32 PM Matthew Wilcox <willy@...radead.org> wrote:
> > > >
> > > > On Tue, Sep 15, 2020 at 08:59:23PM +0800, Muchun Song wrote:
> > > > > This patch series will free some vmemmap pages (struct page structures)
> > > > > associated with each hugetlb page when it is preallocated, to save memory.
> > > >
> > > > It would be lovely to be able to do this. Unfortunately, it's completely
> > > > impossible right now. Consider, for example, get_user_pages() called
> > > > on the fifth page of a hugetlb page.
> > >
> > > Can you elaborate on the problem? Thanks so much.
> >
> > OK, let's say you want to do a 2kB I/O to offset 0x5000 of a 2MB page
> > on a 4kB base page system. Today, that results in a bio_vec containing
> > {head+5, 0, 0x800}. Then we call page_to_phys() on that (head+5) struct
> > page to get the physical address of the I/O, and we turn it into a struct
> > scatterlist, which similarly has a reference to the page (head+5).
>
> As far as I know, in this case get_user_pages() will take a reference
> on the head page (head+0) before returning, so that the hugetlb
> page cannot be freed. Although get_user_pages() returns the page
> (head+5) and the scatterlist holds a reference to the page (head+5),
> this patch series can handle that situation. I cannot figure out
> where the problem is. What am I missing? Thanks.
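
To restate the quoted scenario concretely, here is a rough,
userspace-only sketch of the offset arithmetic (not the actual block
layer code; BASE_PAGE_SHIFT and the io_offset/io_len values are just
the numbers from the example above):

#include <stdio.h>

#define BASE_PAGE_SHIFT 12                      /* 4kB base pages */
#define BASE_PAGE_SIZE  (1UL << BASE_PAGE_SHIFT)
#define BASE_PAGE_MASK  (BASE_PAGE_SIZE - 1)

int main(void)
{
        unsigned long io_offset = 0x5000;       /* offset into the 2MB page */
        unsigned long io_len    = 0x800;        /* 2kB I/O */

        unsigned long subpage   = io_offset >> BASE_PAGE_SHIFT; /* -> 5 */
        unsigned long pg_offset = io_offset & BASE_PAGE_MASK;   /* -> 0 */

        /*
         * The { head+5, 0, 0x800 } triple from the example above:
         * page, offset within that base page, length.
         */
        printf("{ head+%lu, 0x%lx, 0x%lx }\n", subpage, pg_offset, io_len);
        return 0;
}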
You freed pages 4-511 from the vmemmap so they could be used for
something else. Page 5 isn't there any more. So if you return head+5,
then when we complete the I/O, we'll look for the compound_head() of
head+5 and we won't find head.
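
Roughly, this is the lookup that goes wrong. A toy userspace model, not
the kernel's real struct page layout or compound_head() implementation,
just to show where the dependency on the tail's struct page memory
comes in:

#include <stdio.h>

struct fake_page {
        unsigned long compound_head;    /* head pointer | 1 on tail pages */
};

static struct fake_page *fake_compound_head(struct fake_page *p)
{
        unsigned long head = p->compound_head;

        if (head & 1)
                return (struct fake_page *)(head - 1);
        return p;
}

int main(void)
{
        /* 512 struct pages back one 2MB hugetlb page with 4kB base pages. */
        static struct fake_page vmemmap[512];

        for (int i = 1; i < 512; i++)
                vmemmap[i].compound_head = (unsigned long)&vmemmap[0] | 1;

        /* I/O completion maps the tail it was handed back to its head. */
        printf("compound_head(head+5) = %p, head = %p\n",
               (void *)fake_compound_head(&vmemmap[5]), (void *)&vmemmap[0]);

        /*
         * If the memory backing the tail struct pages has been freed and
         * reused, the load of vmemmap[5].compound_head reads whatever now
         * occupies that memory, and the head can no longer be found.
         */
        return 0;
}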