Message-ID: <aCZbNjoOa5hHQAew@lx-t490>
Date: Thu, 15 May 2025 23:23:02 +0200
From: "Ahmed S. Darwish" <darwi@...utronix.de>
To: Sohil Mehta <sohil.mehta@...el.com>
Cc: Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Cooper <andrew.cooper3@...rix.com>,
"H. Peter Anvin" <hpa@...or.com>,
John Ogness <john.ogness@...utronix.de>, x86@...nel.org,
x86-cpuid@...ts.linux.dev, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v1 04/26] x86/cpuid: Introduce centralized CPUID data

On Tue, 13 May 2025, Sohil Mehta wrote:
>
> I am finding the structure names a bit confusing. Can we make it
> slightly more descriptive since they are directly used in common code?
>
> How about struct leaf_0xN_sl_M or struct leaf_0xN_subl_M?
>
> The actual struct names would be:
> leaf_0x1_sl_0 or leaf_0x1_subl_0
> leaf_0x4_sl_0 or leaf_0x4_subl_0
>
The problem is that at the call sites, even with abbreviated variable
names, the lines are already wide. Adding "sl_" makes things worse.
For example, in patch 23/26 ("x86/cacheinfo: Use scanned
CPUID(0x80000005) and CPUID(0x80000006)"), we have:

  const struct leaf_0x80000005_0 *el5 = cpudata_cpuid_index(c, 0x80000005, index);
  const struct leaf_0x80000006_0 *el6 = cpudata_cpuid_index(c, 0x80000006, index);

Making that even wider with an "sl_":

  const struct leaf_0x80000005_sl_0 *el5 = cpudata_cpuid_index(c, 0x80000005, index);
  const struct leaf_0x80000006_sl_0 *el6 = cpudata_cpuid_index(c, 0x80000006, index);

or "subl_":

  const struct leaf_0x80000005_subl_0 *el5 = cpudata_cpuid_index(c, 0x80000005, index);
  const struct leaf_0x80000006_subl_0 *el6 = cpudata_cpuid_index(c, 0x80000006, index);

makes everything overly verbose without, IMHO, much benefit.
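
Just for illustration, here is a minimal sketch of how such per-subleaf
structs might be declared under each naming scheme (the field layout
below is made up for this example, not taken from the actual series):

  /* Illustrative only: field layout is invented for this example */
  #include <stdint.h>

  /* Current naming: leaf_0x<leaf>_<subleaf> */
  struct leaf_0x80000005_0 {
          uint32_t eax, ebx, ecx, edx;    /* raw CPUID register output */
  };

  /* Proposed alternative: explicit "sl_" subleaf marker */
  struct leaf_0x80000005_sl_0 {
          uint32_t eax, ebx, ecx, edx;
  };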

I'll sleep on this a bit before sending v2.
>
> Avoid using "next commits". How about:
>
> Generic scanning logic for filling the CPUID data will be added later.
>
Makes sense, will do.

Thanks!

~ Ahmed