Message-ID: <alpine.DEB.2.11.1412092150050.16275@nanos>
Date: Tue, 9 Dec 2014 21:51:44 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Pranith Kumar <bobby.prani@...il.com>
cc: Ingo Molnar <mingo@...hat.com>, "H. Peter Anvin" <hpa@...or.com>,
"maintainer:X86 ARCHITECTURE..." <x86@...nel.org>,
Toshi Kani <toshi.kani@...com>,
Igor Mammedov <imammedo@...hat.com>,
David Rientjes <rientjes@...gle.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Lan Tianyu <tianyu.lan@...el.com>,
"open list:X86 ARCHITECTURE..." <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] smpboot: Check for successful allocation of cpumask vars
On Tue, 9 Dec 2014, Pranith Kumar wrote:
> On Tue, Dec 9, 2014 at 3:10 PM, Thomas Gleixner <tglx@...utronix.de> wrote:
> > On Tue, 9 Dec 2014, Pranith Kumar wrote:
> >> zalloc_cpumask_var() can return 0 on allocation failure when
> >> CONFIG_CPUMASK_OFFSTACK is set. Check the return value and WARN() if
> >> an allocation fails in that case.
> >
> > And that warning helps in which way?
> >
> > It just prints a completely useless backtrace and breaks out of the
> > loop, but it does not prevent later code from tripping over the
> > non-allocated per-cpu data.
> >
>
> I agree. Maybe just a pr_warn() saying that an allocation failed?
Yep.
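Something like this perhaps (sketch only, not a patch; cpu_sibling_map is
just an example mask here, the real loop in smpboot allocates several
masks per cpu):

	unsigned int i;

	for_each_possible_cpu(i) {
		if (!zalloc_cpumask_var(&per_cpu(cpu_sibling_map, i),
					GFP_KERNEL))
			pr_warn("smpboot: cpumask allocation failed for CPU %u\n",
				i);
	}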
> To prevent further accesses, we can clear the cpu's bit from the cpu
> masks (online/possible/present) for the failed cpu and continue trying
> to allocate for the other cpus, without breaking out of the loop.
> Removing the cpu from the cpu masks will prevent accesses to the
> non-allocated per-cpu data.
>
> What do you suggest we do in such cases?
Pretty much what you said, but we should definitely break out of the
loop. There is no point in trying more allocations if the first one
failed.
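Roughly (again only a sketch, using the same example mask as above; which
masks to clear and the exact call site are details for the actual patch):

	unsigned int i;

	for_each_possible_cpu(i) {
		if (!zalloc_cpumask_var(&per_cpu(cpu_sibling_map, i),
					GFP_KERNEL)) {
			pr_warn("smpboot: cpumask allocation failed for CPU %u\n",
				i);
			/* Keep later code away from the missing per cpu data */
			set_cpu_present(i, false);
			set_cpu_possible(i, false);
			/* No point in trying further allocations */
			break;
		}
	}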
Thanks,
tglx
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/