Message-ID: <CAFgQCTt2M3NVD5Xmip3YX=eYM_wJn9mWLjZq8z-jXuvT5q-naQ@mail.gmail.com>
Date: Mon, 24 Jun 2019 09:21:07 +0800
From: Pingfan Liu <kernelfans@...il.com>
To: Ira Weiny <ira.weiny@...el.com>
Cc: Linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
John Hubbard <jhubbard@...dia.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Christoph Hellwig <hch@....de>,
Keith Busch <keith.busch@...el.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
LKML <Linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm/gup: speed up check_and_migrate_cma_pages() on huge page
On Sat, Jun 22, 2019 at 2:13 AM Ira Weiny <ira.weiny@...el.com> wrote:
>
> On Fri, Jun 21, 2019 at 06:15:16PM +0800, Pingfan Liu wrote:
> > Both hugetlb and thp pages lie within a pageblock of a single
> > migration type, since they are allocated from the free_list[] of that
> > order. Based on this fact, it is enough to check a single subpage to
> > decide the migration type of the whole huge page. This saves
> > (2M/4K - 1) loop iterations per pmd-sized huge page on x86, and a
> > similar amount on other archs.
> >
> > Furthermore, when executing isolate_huge_page(), it avoids taking the
> > global hugetlb_lock many times, and the meaningless removal from and
> > re-adding to the local list cma_page_list.
> >
> > Signed-off-by: Pingfan Liu <kernelfans@...il.com>
> > Cc: Andrew Morton <akpm@...ux-foundation.org>
> > Cc: Ira Weiny <ira.weiny@...el.com>
> > Cc: Mike Rapoport <rppt@...ux.ibm.com>
> > Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
> > Cc: Thomas Gleixner <tglx@...utronix.de>
> > Cc: John Hubbard <jhubbard@...dia.com>
> > Cc: "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>
> > Cc: Christoph Hellwig <hch@....de>
> > Cc: Keith Busch <keith.busch@...el.com>
> > Cc: Mike Kravetz <mike.kravetz@...cle.com>
> > Cc: Linux-kernel@...r.kernel.org
> > ---
> > mm/gup.c | 13 +++++++++----
> > 1 file changed, 9 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index ddde097..2eecb16 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -1342,16 +1342,19 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> >  	LIST_HEAD(cma_page_list);
> >
> >  check_again:
> > -	for (i = 0; i < nr_pages; i++) {
> > +	for (i = 0; i < nr_pages;) {
> > +
> > +		struct page *head = compound_head(pages[i]);
> > +		long step = 1;
> > +
> > +		if (PageCompound(head))
> > +			step = (1 << compound_order(head)) - (pages[i] - head);
> >  		/*
> >  		 * If we get a page from the CMA zone, since we are going to
> >  		 * be pinning these entries, we might as well move them out
> >  		 * of the CMA zone if possible.
> >  		 */
> >  		if (is_migrate_cma_page(pages[i])) {
>
> I like this, but for consistency I would change this pages[i] to head,
> even though it is not required.
Yes, agreed, that is more consistent. Thank you for the suggestion.
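An untested sketch of how the loop would read in v2 with that change
applied (deciding the migration type on the head page only):

	for (i = 0; i < nr_pages;) {
		struct page *head = compound_head(pages[i]);
		long step = 1;

		if (PageCompound(head))
			step = (1 << compound_order(head)) -
			       (pages[i] - head);
		/*
		 * All subpages of a huge page share the head's
		 * pageblock, so checking head covers them all.
		 */
		if (is_migrate_cma_page(head)) {
			...
		}

		i += step;
	}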
Regards,
Pingfan
>
> Ira
>
> > -
> > -			struct page *head = compound_head(pages[i]);
> > -
> >  			if (PageHuge(head)) {
> >  				isolate_huge_page(head, &cma_page_list);
> >  			} else {
> > @@ -1369,6 +1372,8 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> >  				}
> >  			}
> >  		}
> > +
> > +		i += step;
> >  	}
> >
> >  	if (!list_empty(&cma_page_list)) {
> > --
> > 2.7.5
> >
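For reference, the saving described in the commit log can be modeled
with a trivial userspace program (plain C, not kernel code; it only
mirrors the stepping arithmetic and assumes x86 with 2MB huge pages,
i.e. order 9):

	#include <stdio.h>

	int main(void)
	{
		long order = 9;                   /* 2MB huge page = 512 4K subpages */
		long nr_subpages = 1L << order;
		long offset = 0;                  /* pages[i] - head; 0 = pin starts at head */
		long step = nr_subpages - offset; /* same arithmetic as the patch */

		/*
		 * One migration-type check now covers 'step' subpages, where
		 * the old loop did one check per subpage.
		 */
		printf("checks saved per huge page: %ld\n", step - 1); /* 2M/4K - 1 = 511 */
		return 0;
	}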