Message-ID: <7eb4762e-723b-51e8-3d70-1c28568ac4f5@intel.com>
Date: Fri, 10 Jun 2022 05:35:29 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Li kunyu <kunyu@...china.com>, chenhuacai@...nel.org,
rafael@...nel.org, len.brown@...el.com, pavel@....cz,
mingo@...hat.com, bp@...en8.de
Cc: tglx@...utronix.de, dave.hansen@...ux.intel.com, x86@...nel.org,
hpa@...or.com, linux-ia64@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org
Subject: Re: [PATCH] x86: Change the return type of acpi_map_cpu2node to void
On 6/10/22 03:44, Li kunyu wrote:
> Reduce eax register calls by removing unused return values.
Please stop sending these patches, at least with these repetitive,
inaccurate descriptions.
This patch has *ZERO* to do with EAX. For one, it's patching two
architectures, one of which might not even have an EAX. (I'm blissfully
unaware of what the ia64 calling conventions are and I want to keep it
that way.)
Second (and this is important), look carefully at the function in question:
static int acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
See the "static"? That tells the compiler that acpi_map_cpu2node() is
only used locally. It lets the compiler do all kinds of fancy things,
like inline the function which allows the compiler to do all kinds of
fun optimizations. Now, armed with that knowledge, please take a look
at what effect your patch has in practice.
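To make that concrete, here is a minimal standalone sketch of the same
situation (not the kernel code; set_node() is just a hypothetical
stand-in for numa_set_node()):

	extern void set_node(int cpu, int node);  /* stand-in for numa_set_node() */

	static int map_cpu2node(int cpu, int node)
	{
		set_node(cpu, node);
		return 0;
	}

	int map_cpu(int cpu, int node)
	{
		map_cpu2node(cpu, node);  /* return value never used */
		return 0;
	}

Compile that to an object file with optimizations on and the compiler
is free to inline map_cpu2node() into map_cpu(): the disassembly of
map_cpu() then shows a direct call to set_node(), no call to
map_cpu2node() at all, and the helper's unused "return 0" never touches
a register. Changing the helper's return type to void cannot remove
code that was never emitted in the first place.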
Take your patch, and disassemble acpi_map_cpu() before and after
applying it. First of all, even before your patch, do you see a:
call ffffffff81d0000d <acpi_map_cpu2node>
?
Do you see a call to numa_set_node()? That's odd considering that
acpi_map_cpu() doesn't directly call numa_set_node(). Right? Do you
see unnecessary manipulation of EAX? Now, apply your patch.
Disassemble the function again. What changed?
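(One way to do the comparison, assuming an x86 build where
acpi_map_cpu() lives in arch/x86/kernel/acpi/boot.c: run "objdump -dr"
on the corresponding boot.o before and after applying your patch, then
diff the two dumps.)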
Now, armed with the knowledge of what your patch actually does to the
code, would you like to try and write a better changelog? Or, better
yet, maybe it will dissuade you from sending this again.