Message-ID: <20200708074103.GD7271@dhcp22.suse.cz>
Date:   Wed, 8 Jul 2020 09:41:03 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Joonsoo Kim <js1304@...il.com>
Cc:     Vlastimil Babka <vbabka@...e.cz>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, kernel-team@....com,
        Christoph Hellwig <hch@...radead.org>,
        Roman Gushchin <guro@...com>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Subject: Re: [PATCH v4 04/11] mm/hugetlb: make hugetlb migration callback CMA
 aware

On Wed 08-07-20 16:16:02, Joonsoo Kim wrote:
> On Tue, Jul 07, 2020 at 01:22:31PM +0200, Vlastimil Babka wrote:
> > On 7/7/20 9:44 AM, js1304@...il.com wrote:
> > > From: Joonsoo Kim <iamjoonsoo.kim@....com>
> > > 
> > > new_non_cma_page() in gup.c, which tries to allocate a migration target
> > > page, needs to allocate a new page that is not on the CMA area.
> > > new_non_cma_page() implements this by removing the __GFP_MOVABLE flag.  This
> > > works well for THP and normal pages but not for hugetlb pages.
> > > 
> > > The hugetlb page allocation process consists of two steps.  The first is
> > > dequeuing from the pool.  The second is, if there is no available page on
> > > the queue, allocating from the page allocator.
> > > 
> > > new_non_cma_page() can control allocation from the page allocator by
> > > specifying the correct gfp flags.  However, dequeuing could not be
> > > controlled until now, so new_non_cma_page() skips dequeuing completely.
> > > This is suboptimal since new_non_cma_page() cannot utilize hugetlb pages
> > > already on the queue, so this patch tries to fix this situation.
> > > 
> > > This patch makes the dequeue function of hugetlb CMA aware and skips CMA
> > > pages if the newly added skip_cma argument is passed as true.
> > 
> > Hmm, can't you instead change dequeue_huge_page_node_exact() to test the PF_
> > flag and avoid adding bool skip_cma everywhere?
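
For the archive, a rough sketch of what I believe that would look like,
assuming the flag meant here is PF_MEMALLOC_NOCMA as set by
memalloc_nocma_save(), and with hugetlb's bookkeeping only approximated
(so not the posted patch):

static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
{
	struct page *page;
	bool nocma = !!(current->flags & PF_MEMALLOC_NOCMA);

	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
		/* skip pages that back a CMA area while inside a nocma scope */
		if (nocma && is_migrate_cma_page(page))
			continue;
		/* never hand out a poisoned page */
		if (PageHWPoison(page))
			continue;
		goto found;
	}
	return NULL;

found:
	list_move(&page->lru, &h->hugepage_activelist);
	set_page_refcounted(page);
	h->free_huge_pages--;
	h->free_huge_pages_node[nid]--;
	return page;
}

That would keep the CMA awareness contained in the dequeue path instead of
threading a new bool argument through every caller.
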
> 
> Okay! Please check the following patch.
> > 
> > I think that's what Michal suggested [1] except he said "the code already does
> > by memalloc_nocma_{save,restore} API". It needs extending a bit though, AFAICS.
> > __gup_longterm_locked() indeed does the save/restore, but restore comes before
> > check_and_migrate_cma_pages() and thus new_non_cma_page() is called, so an
> > adjustment is needed there, but that's all?
> > 
> > Hm, the adjustment should also be done because save/restore is done around
> > __get_user_pages_locked(), but check_and_migrate_cma_pages() also calls
> > __get_user_pages_locked(), and that call not being between nocma save and
> > restore is thus also a correctness issue?
> 
> Simply put, I call memalloc_nocma_{save,restore} in new_non_cma_page(). It
> would not cause any problem.
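
To make sure we are talking about the same thing, I read that as roughly
the following (a sketch only, reduced to the order-0 case; the real
new_non_cma_page() also handles hugetlb, THP and highmem pages):

static struct page *new_non_cma_page(struct page *page, unsigned long private)
{
	gfp_t gfp_mask = GFP_USER | __GFP_NOWARN;
	int nid = page_to_nid(page);
	unsigned int cma_flags;
	struct page *newpage;

	/*
	 * Enter a nocma scope so that the page allocator, and the hugetlb
	 * dequeue path once it honours PF_MEMALLOC_NOCMA, avoid CMA backed
	 * pages for the migration target.
	 */
	cma_flags = memalloc_nocma_save();
	newpage = __alloc_pages_node(nid, gfp_mask, 0);
	memalloc_nocma_restore(cma_flags);

	return newpage;
}

That would cover the allocation itself, but the scope placement in
__gup_longterm_locked() still looks wrong to me.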

I believe a proper fix is the following. The scope is really defined for
FOLL_LONGTERM pins; pushing it inside check_and_migrate_cma_pages() would
solve the problem as well, but IMHO it makes more sense to do it in the
caller, the same way we do for any others.

Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages allocated from CMA region")

I am not sure this is worth backporting to stable yet.

diff --git a/mm/gup.c b/mm/gup.c
index de9e36262ccb..75980dd5a2fc 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1794,7 +1794,6 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 				     vmas_tmp, NULL, gup_flags);
 
 	if (gup_flags & FOLL_LONGTERM) {
-		memalloc_nocma_restore(flags);
 		if (rc < 0)
 			goto out;
 
@@ -1802,11 +1801,13 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 			for (i = 0; i < rc; i++)
 				put_page(pages[i]);
 			rc = -EOPNOTSUPP;
+			memalloc_nocma_restore(flags);
 			goto out;
 		}
 
 		rc = check_and_migrate_cma_pages(tsk, mm, start, rc, pages,
 						 vmas_tmp, gup_flags);
+		memalloc_nocma_restore(flags);
 	}
 
 out:
-- 
Michal Hocko
SUSE Labs
