Message-ID: <5d62c1d0-7425-d5bb-ecb5-1dc3b4d7d245@intel.com>
Date: Fri, 5 Aug 2022 11:47:10 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: ira.weiny@...el.com, Rik van Riel <riel@...riel.com>,
Borislav Petkov <bp@...en8.de>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: [RFC PATCH 5/5] x86/entry: Store CPU info on exception entry

On 8/5/22 10:30, ira.weiny@...el.com wrote:
> +static inline void arch_save_aux_pt_regs(struct pt_regs *regs)
> +{
> +	struct pt_regs_auxiliary *aux_pt_regs = &to_extended_pt_regs(regs)->aux;
> +
> +	aux_pt_regs->cpu = raw_smp_processor_id();
> +}
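
For anyone without the rest of the series handy: the helper above
relies on the extended-pt_regs layout introduced earlier in the
series.  A minimal sketch of the idea -- the struct and helper names
come from the quoted code, but the exact field layout here is my
guess, not the patch itself:

	struct pt_regs_auxiliary {
		u32	cpu;	/* CPU that took the exception/interrupt */
	};

	struct pt_regs_extended {
		struct pt_regs_auxiliary	aux;
		struct pt_regs			pt_regs __aligned(8);
	};

	static inline struct pt_regs_extended *
	to_extended_pt_regs(struct pt_regs *regs)
	{
		return container_of(regs, struct pt_regs_extended, pt_regs);
	}

Presumably the entry code reserves the aux space just below pt_regs on
the stack, so getting at it from a pt_regs pointer is plain pointer
arithmetic via container_of().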

This is in a fast path that every interrupt and exception entry uses,
so I was curious what the overhead is.

Code generation in irqentry_enter() gets a _bit_ more complicated
because arch_save_aux_pt_regs() has to be done on the way out of the
function, so the compiler can't (for instance) just do a

	mov    $0x1,%eax
	ret

to return.  But the gist of the change is still only two instructions
that read a pretty hot, read-only per-cpu cacheline:

	mov    %gs:0x7e21fa4a(%rip),%eax	# 15a38 <cpu_number>
	mov    %eax,-0x8(%rbx)

That doesn't seem too bad.
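
To make the "can't just ret" point concrete: with the hook, every
return path in irqentry_enter() picks up the store before returning,
so the shared two-instruction tail goes away.  A simplified sketch of
the resulting shape (not the actual patch; most of the body elided):

	irqentry_state_t noinstr irqentry_enter(struct pt_regs *regs)
	{
		irqentry_state_t ret = { .exit_rcu = false };

		if (user_mode(regs)) {
			irqentry_enter_from_user_mode(regs);
			arch_save_aux_pt_regs(regs);	/* hook on this exit path */
			return ret;
		}

		if (!IS_ENABLED(CONFIG_TINY_RCU) && is_idle_task(current)) {
			/* ... rcu_irq_enter() etc. ... */
			ret.exit_rcu = true;	/* this used to compile to mov $0x1,%eax; ret */
			arch_save_aux_pt_regs(regs);	/* ... and now it can't */
			return ret;
		}

		/* ... normal kernel-mode path ... */
		arch_save_aux_pt_regs(regs);
		return ret;
	}

The two instructions in the disassembly above are then just
raw_smp_processor_id() -- this_cpu_read(cpu_number) on x86, hence the
%gs-relative load -- plus the store into the aux slot of the extended
frame.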