Message-Id: <20190904114820.42d9c4daf445ded3d0da52ab@linux-foundation.org>
Date: Wed, 4 Sep 2019 11:48:20 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: zhong jiang <zhongjiang@...wei.com>, mhocko@...nel.org,
anshuman.khandual@....com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Ira Weiny <ira.weiny@...el.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
Subject: Re: [PATCH] mm: Unsigned 'nr_pages' always larger than zero
On Wed, 4 Sep 2019 13:24:58 +0200 Vlastimil Babka <vbabka@...e.cz> wrote:
> On 9/4/19 12:26 PM, zhong jiang wrote:
> > With the help of unsigned_lesser_than_zero.cocci we found that the
> > unsigned 'nr_pages' is compared with zero, while __get_user_pages_locked()
> > returns a signed long value. Hence, using a long for the comparison
> > with zero is feasible.
>
> It would be nicer if the parameter nr_pages was long again instead of unsigned
> long (note there are two variants of the function, so both should be changed).
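For reference, that alternative would look roughly like this (a sketch
only, assuming the v5.3-era parameter list; both the CONFIG_CMA
definition and the !CONFIG_CMA stub would need the same change):

static long check_and_migrate_cma_pages(struct task_struct *tsk,
					struct mm_struct *mm,
					unsigned long start,
					long nr_pages,	/* was: unsigned long */
					struct page **pages,
					struct vm_area_struct **vmas,
					unsigned int gup_flags);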
nr_pages should be unsigned - it's a count of pages!
The bug is that __get_user_pages_locked() returns a signed long which
can be a negative errno.
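To see why the current check can never catch that error, here is a
minimal userspace sketch (not kernel code; the hypothetical fake_gup()
stands in for __get_user_pages_locked()):

#include <stdio.h>

/* Stand-in for __get_user_pages_locked(): returns a page count on
 * success or a negative errno on failure.
 */
static long fake_gup(void)
{
	return -12;			/* -ENOMEM */
}

int main(void)
{
	/* Storing the signed return value in an unsigned count turns
	 * the negative errno into a huge positive number...
	 */
	unsigned long nr_pages = fake_gup();

	/* ...so for a negative errno this test is still true and the
	 * error is silently lost.
	 */
	if (nr_pages > 0)
		printf("\"succeeded\" with %lu pages\n", nr_pages);
	return 0;
}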
I think it's best if check_and_migrate_cma_pages() gets itself a new
local with the same type as __get_user_pages_locked()'s return value.
Something like:
--- a/mm/gup.c~a
+++ a/mm/gup.c
@@ -1450,6 +1450,7 @@ static long check_and_migrate_cma_pages(
 	bool drain_allow = true;
 	bool migrate_allow = true;
 	LIST_HEAD(cma_page_list);
+	long ret;
 
 check_again:
 	for (i = 0; i < nr_pages;) {
@@ -1511,17 +1512,18 @@ check_again:
 		 * again migrating any new CMA pages which we failed to isolate
 		 * earlier.
 		 */
-		nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
+		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
 						   pages, vmas, NULL,
 						   gup_flags);
-		if ((nr_pages > 0) && migrate_allow) {
+		nr_pages = ret;
+		if (ret > 0 && migrate_allow) {
 			drain_allow = true;
 			goto check_again;
 		}
 	}
 
-	return nr_pages;
+	return ret;
 }
 #else
 static long check_and_migrate_cma_pages(struct task_struct *tsk,
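With that, a negative errno from __get_user_pages_locked() propagates
out of check_and_migrate_cma_pages() unmodified, while nr_pages keeps
its role as an unsigned count driving the check_again retry loop.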