Message-ID: <b2b640ac-0cc3-b09e-1f6c-f01234295b3b@arm.com>
Date: Wed, 12 Feb 2020 14:54:45 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Suzuki Kuruppassery Poulose <suzuki.poulose@....com>,
Ionela Voinescu <ionela.voinescu@....com>,
catalin.marinas@....com, will@...nel.org, mark.rutland@....com,
maz@...nel.org, sudeep.holla@....com, lukasz.luba@....com,
rjw@...ysocki.net
Cc: peterz@...radead.org, mingo@...hat.com, vincent.guittot@...aro.org,
viresh.kumar@...aro.org, linux-arm-kernel@...ts.infradead.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-pm@...r.kernel.org
Subject: Re: [PATCH v3 1/7] arm64: add support for the AMU extension v1
On 12/02/2020 11:30, Suzuki Kuruppassery Poulose wrote:
>> +static bool has_amu(const struct arm64_cpu_capabilities *cap,
>> +		    int __unused)
>> +{
>> +	/*
>> +	 * The AMU extension is a non-conflicting feature: the kernel can
>> +	 * safely run a mix of CPUs with and without support for the
>> +	 * activity monitors extension. Therefore, if not disabled through
>> +	 * the kernel command line early parameter, enable the capability
>> +	 * to allow any late CPU to use the feature.
>> +	 *
>> +	 * With this feature enabled, the cpu_enable function will be called
>> +	 * for all CPUs that match the criteria, including secondary and
>> +	 * hotplugged, marking this feature as present on that respective CPU.
>> +	 * The enable function will also print a detection message.
>> +	 */
>> +
>> +	if (!disable_amu && !zalloc_cpumask_var(&amu_cpus, GFP_KERNEL)) {
>
> This looks problematic. Don't we end up allocating the memory during
> each per-CPU check and thus leaking memory? Do we really need to allocate
> this dynamically?
>
For the static vs dynamic question, I think it's not *too* important here
since the cpumask doesn't live on the stack, so we don't risk pwning the
stack with it. That said, if we are somewhat pedantic about memory usage,
the static allocation is sized against NR_CPUS whereas the dynamic one is
sized against nr_cpu_ids. Pretty inconsequential for a single cpumask, but
I guess it all adds up eventually...
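
To make that concrete, here is a rough sketch of the two options. This is
purely illustrative, not a proposed fix; the amu_cpus naming echoes the
quoted hunk, everything else (has_amu_example() included) is made up:

#include <linux/cpumask.h>
#include <linux/smp.h>

/*
 * Option 1: static. The bitmap is sized against NR_CPUS at build time
 * and needs no runtime allocation; the per-CPU check can then simply do
 * cpumask_set_cpu(smp_processor_id(), &amu_cpus_static).
 */
static struct cpumask amu_cpus_static;

/*
 * Option 2: dynamic. With CONFIG_CPUMASK_OFFSTACK the bitmap is
 * kmalloc'd and sized against nr_cpu_ids; without OFFSTACK,
 * cpumask_var_t is just a struct cpumask and zalloc_cpumask_var()
 * allocates nothing.
 */
static cpumask_var_t amu_cpus_dynamic;

static bool has_amu_example(void)
{
	/*
	 * If this allocation sits inside a match callback that runs once
	 * per CPU, every call after the first would (with OFFSTACK)
	 * allocate a fresh bitmap and leak the previous one, which is the
	 * concern raised above. Guarding with cpumask_available(), or
	 * allocating once from an initcall, avoids that.
	 */
	if (!cpumask_available(amu_cpus_dynamic) &&
	    !zalloc_cpumask_var(&amu_cpus_dynamic, GFP_KERNEL))
		return false;

	cpumask_set_cpu(smp_processor_id(), amu_cpus_dynamic);
	return true;
}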