Message-Id: <20181011060850.GA19822@rapoport-lnx>
Date: Thu, 11 Oct 2018 09:08:50 +0300
From: Mike Rapoport <rppt@...ux.vnet.ibm.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, Catalin Marinas <catalin.marinas@....com>,
Chris Zankel <chris@...kel.net>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
Guan Xuetao <gxt@....edu.cn>, Ingo Molnar <mingo@...hat.com>,
Matt Turner <mattst88@...il.com>,
Michael Ellerman <mpe@...erman.id.au>,
Michal Hocko <mhocko@...e.com>,
Michal Simek <monstr@...str.eu>,
Paul Burton <paul.burton@...s.com>,
Richard Weinberger <richard@....at>,
Russell King <linux@...linux.org.uk>,
Thomas Gleixner <tglx@...utronix.de>,
Tony Luck <tony.luck@...el.com>, linux-alpha@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-ia64@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-m68k@...r.kernel.org,
linux-mips@...ux-mips.org, linuxppc-dev@...ts.ozlabs.org,
linux-um@...ts.infradead.org
Subject: Re: [PATCH] memblock: stop using implicit alignment to SMP_CACHE_BYTES

On Fri, Oct 05, 2018 at 03:19:34PM -0700, Andrew Morton wrote:
> On Fri, 5 Oct 2018 00:07:04 +0300 Mike Rapoport <rppt@...ux.vnet.ibm.com> wrote:
>
> > When memblock allocation APIs are called with align = 0, the alignment is
> > implicitly set to SMP_CACHE_BYTES.
> >
> > Replace all such uses of memblock APIs with the 'align' parameter explicitly
> > set to SMP_CACHE_BYTES and stop implicit alignment assignment in the
> > memblock internal allocation functions.
> >
> > For the case when memblock APIs are used via helper functions, e.g.
> > iommu_arena_new_node() on Alpha, the helper functions were detected with
> > Coccinelle's help and then manually examined and updated where appropriate.
> >
> > ...
> >
> > --- a/mm/memblock.c
> > +++ b/mm/memblock.c
> > @@ -1298,9 +1298,6 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
> >  {
> >  	phys_addr_t found;
> >
> > -	if (!align)
> > -		align = SMP_CACHE_BYTES;
> > -
>
> Can we add a WARN_ON_ONCE(!align) here? To catch unconverted code
> which sneaks in later on.
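
For reference, an "unconverted" call site here is simply one that still
passes align = 0 and leans on the old implicit default; the conversion
just spells the alignment out. A minimal sketch, where memblock_alloc()
only stands in for whichever memblock wrapper a given call site actually
uses:

	/*
	 * memblock_alloc() below is just a stand-in wrapper name for
	 * illustration, not a specific call site from the tree.
	 */
	void *p;

	/* unconverted: align == 0 relied on the implicit SMP_CACHE_BYTES
	 * fallback and would now trip the warning added below */
	p = memblock_alloc(size, 0);

	/* converted: the same alignment, stated explicitly */
	p = memblock_alloc(size, SMP_CACHE_BYTES);
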
Here it goes:

From baec825c58e8bc11371433d3a4b20b2216877a50 Mon Sep 17 00:00:00 2001
From: Mike Rapoport <rppt@...ux.vnet.ibm.com>
Date: Mon, 8 Oct 2018 11:22:10 +0300
Subject: [PATCH] memblock: warn if zero alignment was requested

After updating all memblock users to explicitly specify SMP_CACHE_BYTES
alignment rather than 0, it is still possible that unconverted users may
sneak in. Add a WARN_ON_ONCE for such cases.

Signed-off-by: Mike Rapoport <rppt@...ux.vnet.ibm.com>
---
 mm/memblock.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/memblock.c b/mm/memblock.c
index 0bbae56..5fefc70 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1298,6 +1298,9 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 {
 	phys_addr_t found;
 
+	if (WARN_ON_ONCE(!align))
+		align = SMP_CACHE_BYTES;
+
 	found = memblock_find_in_range_node(size, align, start, end, nid,
 					    flags);
 	if (found && !memblock_reserve(found, size)) {
@@ -1420,6 +1423,9 @@ static void * __init memblock_alloc_internal(
 	if (WARN_ON_ONCE(slab_is_available()))
 		return kzalloc_node(size, GFP_NOWAIT, nid);
 
+	if (WARN_ON_ONCE(!align))
+		align = SMP_CACHE_BYTES;
+
 	if (max_addr > memblock.current_limit)
 		max_addr = memblock.current_limit;
 again:
--
2.7.4
--
Sincerely yours,
Mike.