Message-ID: <alpine.DEB.2.10.1508031346290.921@vshiva-Udesk>
Date: Mon, 3 Aug 2015 13:49:27 -0700 (PDT)
From: Vikas Shivappa <vikas.shivappa@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
cc: Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
linux-kernel@...r.kernel.org, vikas.shivappa@...el.com,
x86@...nel.org, hpa@...or.com, tglx@...utronix.de,
mingo@...nel.org, tj@...nel.org, matt.fleming@...el.com,
will.auld@...el.com, glenn.p.williamson@...el.com,
kanaka.d.juvva@...el.com
Subject: Re: [PATCH 9/9] x86/intel_rdt: Intel haswell Cache Allocation
enumeration
On Wed, 29 Jul 2015, Peter Zijlstra wrote:
> On Wed, Jul 01, 2015 at 03:21:10PM -0700, Vikas Shivappa wrote:
>> + boot_cpu_data.x86_cache_max_closid = 4;
>> + boot_cpu_data.x86_cache_max_cbm_len = 20;
>
> That's just vile. And I'm surprised it even works, I would've expected
> boot_cpu_data to be const.
This is updated only once, because the CPUID enumeration is not available on hsw
servers. These numbers are the same on all hsw servers, which is why they are
hardcoded. The comment says they are hardcoded; will update the comment to
include this information as well.
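To illustrate, a minimal sketch (not the patch itself; the probe helper name and
the family/model check are assumptions, the two field names are the ones from
the hunk quoted above):

	/*
	 * Sketch: Haswell server parts do not enumerate Cache Allocation via
	 * CPUID, so the CLOSID count and CBM length are fixed once at init
	 * time.  Helper name and model check are illustrative only.
	 */
	static inline bool cache_alloc_hsw_probe(void)
	{
		struct cpuinfo_x86 *c = &boot_cpu_data;

		/* Assumed check: Haswell server is family 6, model 63 (0x3f). */
		if (c->x86 != 6 || c->x86_model != 63)
			return false;

		/* Same on every hsw server SKU, hence hardcoded. */
		c->x86_cache_max_closid = 4;
		c->x86_cache_max_cbm_len = 20;

		return true;
	}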
>
> So the CQM code has paranoid things like:
>
> max_rmid = MAX_INT;
> for_each_possible_cpu(cpu)
> max_rmid = min(max_rmid, cpu_data(cpu)->x86_cache_max_rmid);
>
> And then uses max_rmid. This has the advantage that if you mix parts in
> a multi-socket environment and hotplug socket 0 to a later part which a
> bigger {rm,clos}id your allocation isn't suddenly too small.
>
> Please do similar things and only ever look at cpu_data once, at init
> time.
Cache alloc depends on CPU_SUP_INTEL, and all the cores should have the same
features. Cache alloc only uses the BSP's cpuinfo structure, which should carry
the minimum features.
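For comparison, a rough sketch of the init-time minimum you describe, transposed
from the CQM rmid case to closids (the function name is made up for
illustration, not from the patch):

	/*
	 * CQM-style clamping: take the minimum max_closid over all possible
	 * CPUs at init time, so a later hotplugged part with a larger id
	 * space cannot make the early sizing too small.
	 */
	static u32 __init intel_rdt_min_closid(void)
	{
		u32 min_closid = U32_MAX;
		int cpu;

		for_each_possible_cpu(cpu)
			min_closid = min(min_closid,
					 (u32)cpu_data(cpu).x86_cache_max_closid);

		return min_closid;
	}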
Thanks,
Vikas
>