Message-ID: <alpine.DEB.2.10.1601221347080.27098@chino.kir.corp.google.com>
Date: Fri, 22 Jan 2016 13:49:58 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Mika Penttilä <mika.penttila@...tfour.com>
cc: LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
Pekka Enberg <penberg@...nel.org>,
Rusty Russell <rusty@...tcorp.com.au>
Subject: Re: [PATCH, REGRESSION v4] mm: make apply_to_page_range more
robust
On Fri, 22 Jan 2016, Mika Penttilä wrote:
> diff --git a/mm/memory.c b/mm/memory.c
> index 30991f8..9178ee6 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1871,7 +1871,9 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
> unsigned long end = addr + size;
> int err;
>
> - BUG_ON(addr >= end);
> + if (WARN_ON(addr >= end))
> + return -EINVAL;
> +
> pgd = pgd_offset(mm, addr);
> do {
> next = pgd_addr_end(addr, end);
This would be fine as a second patch in a 2-patch series. The first patch
should fix change_memory_common() for numpages == 0 by returning without
ever calling this function and triggering the WARN_ON(). Let's fix the
problem.