Message-ID: <CAMZfGtW3S8kGJwff6oH14QWPXKTuQEAGdYwcLRUZxuJ7q8s7sA@mail.gmail.com>
Date: Wed, 16 Sep 2020 02:03:15 +0800
From: Muchun Song <songmuchun@...edance.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Jonathan Corbet <corbet@....net>,
Mike Kravetz <mike.kravetz@...cle.com>,
Thomas Gleixner <tglx@...utronix.de>, mingo@...hat.com,
bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org,
Peter Zijlstra <peterz@...radead.org>, viro@...iv.linux.org.uk,
Andrew Morton <akpm@...ux-foundation.org>, paulmck@...nel.org,
mchehab+huawei@...nel.org, pawan.kumar.gupta@...ux.intel.com,
Randy Dunlap <rdunlap@...radead.org>, oneukum@...e.com,
anshuman.khandual@....com, jroedel@...e.de, almasrymina@...gle.com,
David Rientjes <rientjes@...gle.com>,
linux-doc@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
linux-fsdevel@...r.kernel.org
Subject: Re: [External] Re: [RFC PATCH 00/24] mm/hugetlb: Free some vmemmap
pages of hugetlb page
On Wed, Sep 16, 2020 at 1:39 AM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Wed, Sep 16, 2020 at 01:32:46AM +0800, Muchun Song wrote:
> > On Tue, Sep 15, 2020 at 11:42 PM Matthew Wilcox <willy@...radead.org> wrote:
> > >
> > > On Tue, Sep 15, 2020 at 11:28:01PM +0800, Muchun Song wrote:
> > > > On Tue, Sep 15, 2020 at 10:32 PM Matthew Wilcox <willy@...radead.org> wrote:
> > > > >
> > > > > On Tue, Sep 15, 2020 at 08:59:23PM +0800, Muchun Song wrote:
> > > > > > This patch series will free some vmemmap pages (struct page structures)
> > > > > > associated with each hugetlbpage when preallocated, to save memory.
> > > > >
> > > > > It would be lovely to be able to do this. Unfortunately, it's completely
> > > > > impossible right now. Consider, for example, get_user_pages() called
> > > > > on the fifth page of a hugetlb page.
> > > >
> > > > Can you elaborate on the problem? Thanks so much.
> > >
> > > OK, let's say you want to do a 2kB I/O to offset 0x5000 of a 2MB page
> > > on a 4kB base page system. Today, that results in a bio_vec containing
> > > {head+5, 0, 0x800}. Then we call page_to_phys() on that (head+5) struct
> > > page to get the physical address of the I/O, and we turn it into a struct
> > > scatterlist, which similarly has a reference to the page (head+5).
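
A standalone sketch of that arithmetic (plain userspace C, not kernel
code; the 4kB base page and 2MB hugetlb sizes are the assumptions used
above):

/*
 * Standalone illustration (plain userspace C, not kernel code) of the
 * arithmetic above, assuming 4kB base pages: a 2kB I/O at offset 0x5000
 * into a 2MB page lands in sub-page head+5 at offset 0 with length 0x800.
 */
#include <stdio.h>

#define BASE_PAGE_SIZE 0x1000UL		/* assumed 4kB base page */

int main(void)
{
	unsigned long io_offset = 0x5000;	/* offset into the 2MB page */
	unsigned long io_len    = 0x800;	/* 2kB I/O */

	unsigned long tail_index  = io_offset / BASE_PAGE_SIZE;	/* 5 */
	unsigned long page_offset = io_offset % BASE_PAGE_SIZE;	/* 0 */

	/* Mirrors the {head+5, 0, 0x800} triple quoted above. */
	printf("{ head+%lu, 0x%lx, 0x%lx }\n", tail_index, page_offset, io_len);
	return 0;
}
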
> >
> > As far as I know, in this case get_user_pages() takes a reference
> > on the head page (head+0) before returning, so the hugetlb page
> > cannot be freed. Although get_user_pages() returns the page
> > (head+5) and the scatterlist holds a reference to the page
> > (head+5), this patch series can handle this situation. I cannot
> > figure out where the problem is. What did I miss? Thanks.
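
A toy model of that claim (userspace C, not the kernel's gup code;
toy_page and toy_gup are made-up names for illustration only):

/*
 * Toy model of the point above: the pin taken by get_user_pages() lands
 * on the head page's refcount, even though the caller is handed a
 * pointer to a tail page such as head+5.
 */
#include <stdio.h>

struct toy_page {
	int refcount;			/* meaningful on the head only */
	struct toy_page *head;		/* NULL for the head itself */
};

/* Pin sub-page @idx of a compound page: ref the head, return the tail. */
static struct toy_page *toy_gup(struct toy_page *pages, int idx)
{
	struct toy_page *p = &pages[idx];
	struct toy_page *head = p->head ? p->head : p;

	head->refcount++;		/* the reference goes to the head */
	return p;			/* but the tail is what is returned */
}

int main(void)
{
	static struct toy_page huge[512];
	int i;

	for (i = 1; i < 512; i++)
		huge[i].head = &huge[0];

	struct toy_page *tail = toy_gup(huge, 5);

	printf("returned head+%td, head refcount = %d\n",
	       tail - huge, huge[0].refcount);	/* head+5, refcount 1 */
	return 0;
}
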
>
> You freed pages 4-511 from the vmemmap so they could be used for
> something else. Page 5 isn't there any more. So if you return head+5,
> then when we complete the I/O, we'll look for the compound_head() of
> head+5 and we won't find head.
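
A simplified model of that objection (userspace C; fake_page and
fake_compound_head are made-up stand-ins, not the kernel's struct page
layout or compound_head()):

/*
 * compound_head() of a tail page works by reading a head pointer stored
 * in that tail's struct page, so the tail's struct page must still be
 * backed by memory when the I/O completes.
 */
#include <stdio.h>

struct fake_page {
	unsigned long compound_head;	/* (head | 1) for tail pages */
};

static struct fake_page *fake_compound_head(struct fake_page *p)
{
	/* This load is the problem if p's backing memory has been freed. */
	if (p->compound_head & 1)
		return (struct fake_page *)(p->compound_head - 1);
	return p;
}

int main(void)
{
	static struct fake_page pages[8];
	int i;

	for (i = 1; i < 8; i++)		/* mark 1..7 as tails of pages[0] */
		pages[i].compound_head = (unsigned long)&pages[0] | 1;

	printf("head of page 5: %p (expect %p)\n",
	       (void *)fake_compound_head(&pages[5]), (void *)&pages[0]);
	return 0;
}
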
>
We do not free pages 4-511 from the vmemmap. Actually, we only free
pages 128-511 from the vmemmap.

The 512 struct pages occupy 8 pages of physical memory, but we free
only 6 of those physical page frames to the buddy allocator. Instead,
we create a new mapping: the virtual addresses of the freed pages are
remapped to the second page frame, so the second page frame is reused.

When a hugetlb page is preallocated, we can change the mapping to the
following:

  hugetlbpage                  struct page(8 pages)          page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     |     1     | -------------> |     1     |
 |           |                     |     2     | -------------> +-----------+
 |           |                     |     3     | -----------------^ ^ ^ ^ ^
 |           |                     |     4     | -------------------+ | | |
 |     2M    |                     |     5     | ---------------------+ | |
 |           |                     |     6     | -----------------------+ |
 |           |                     |     7     | -------------------------+
 |           |                     +-----------+
 |           |
 |           |
 +-----------+

As you can see, we reuse the first tail page.
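
For reference, a back-of-the-envelope check of the numbers above (plain
userspace C; the 4kB base page, 64-byte struct page and 2MB hugetlb page
are the usual x86-64 assumptions, not something defined here):

#include <stdio.h>

#define BASE_PAGE_SIZE		4096UL
#define HUGETLB_PAGE_SIZE	(2UL * 1024 * 1024)
#define STRUCT_PAGE_SIZE	64UL	/* assumed sizeof(struct page) */

int main(void)
{
	unsigned long nr_struct_pages = HUGETLB_PAGE_SIZE / BASE_PAGE_SIZE;
	unsigned long vmemmap_frames =
		nr_struct_pages * STRUCT_PAGE_SIZE / BASE_PAGE_SIZE;
	unsigned long kept_frames  = 2;	/* frames 0 and 1 in the diagram */
	unsigned long freed_frames = vmemmap_frames - kept_frames;
	unsigned long structs_per_frame = BASE_PAGE_SIZE / STRUCT_PAGE_SIZE;

	printf("struct pages per 2MB hugetlb page: %lu\n", nr_struct_pages); /* 512 */
	printf("vmemmap page frames used:          %lu\n", vmemmap_frames);  /* 8 */
	printf("frames freed to the buddy:         %lu\n", freed_frames);    /* 6 */
	printf("first struct page whose frame is freed: %lu\n",
	       kept_frames * structs_per_frame);	/* 128, i.e. pages 128-511 */
	return 0;
}
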
--
Yours,
Muchun