Message-ID: <CAADnVQK0rzHWxxx7LMFSTuBx18A+6H6AEkKFHNDkPwbPUbsk4Q@mail.gmail.com>
Date: Thu, 6 Jul 2023 12:51:16 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Tero Kristo <tero.kristo@...ux.intel.com>
Cc: John Fastabend <john.fastabend@...il.com>,
Shuah Khan <shuah@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>, X86 ML <x86@...nel.org>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Ingo Molnar <mingo@...hat.com>,
Alexei Starovoitov <ast@...nel.org>,
"open list:KERNEL SELFTEST FRAMEWORK"
<linux-kselftest@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Andrii Nakryiko <andrii@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
bpf <bpf@...r.kernel.org>
Subject: Re: [PATCH 1/2] x86/tsc: Add new BPF helper call bpf_rdtsc
On Thu, Jul 6, 2023 at 4:59 AM Tero Kristo <tero.kristo@...ux.intel.com> wrote:
>
>
> On 06/07/2023 08:16, John Fastabend wrote:
> > Alexei Starovoitov wrote:
> >> On Mon, Jul 3, 2023 at 3:58 AM Tero Kristo <tero.kristo@...ux.intel.com> wrote:
> >>> Currently the raw TSC counter can be read within the kernel via
> >>> rdtsc_ordered() and friends, and even userspace has access to it via
> >>> the RDTSC assembly instruction. BPF programs, on the other hand, don't
> >>> have direct access to the TSC counter, but must instead go through the
> >>> performance subsystem (bpf_perf_event_read), which only provides a
> >>> value relative to the start point of the program, and is also much
> >>> slower than a direct read. Add a new BPF helper definition for
> >>> bpf_rdtsc() which can be used for any accurate profiling needs.
> >>>
> >>> One use-case for the new API is wakeup latency tracing via eBPF on
> >>> Intel architectures, where it is extremely beneficial to be able to
> >>> get raw TSC timestamps and compare them directly to the value
> >>> programmed into the MSR_IA32_TSC_DEADLINE register. This way a direct
> >>> latency value from the hardware interrupt to the execution of the
> >>> interrupt handler can be calculated. Having the functionality within
> >>> eBPF also has the added benefit of allowing filtering on other
> >>> relevant data, such as C-state residency values, and dropping
> >>> irrelevant data points directly in kernel context, without passing
> >>> all the data to userspace for post-processing.
> >>>
> >>> Signed-off-by: Tero Kristo <tero.kristo@...ux.intel.com>
> >>> ---
> >>> arch/x86/include/asm/msr.h | 1 +
> >>> arch/x86/kernel/tsc.c | 23 +++++++++++++++++++++++
> >>> 2 files changed, 24 insertions(+)
> >>>
> >>> diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
> >>> index 65ec1965cd28..3dde673cb563 100644
> >>> --- a/arch/x86/include/asm/msr.h
> >>> +++ b/arch/x86/include/asm/msr.h
> >>> @@ -309,6 +309,7 @@ struct msr *msrs_alloc(void);
> >>> void msrs_free(struct msr *msrs);
> >>> int msr_set_bit(u32 msr, u8 bit);
> >>> int msr_clear_bit(u32 msr, u8 bit);
> >>> +u64 bpf_rdtsc(void);
> >>>
> >>> #ifdef CONFIG_SMP
> >>> int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
> >>> diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
> >>> index 344698852146..ded857abef81 100644
> >>> --- a/arch/x86/kernel/tsc.c
> >>> +++ b/arch/x86/kernel/tsc.c
> >>> @@ -15,6 +15,8 @@
> >>> #include <linux/timex.h>
> >>> #include <linux/static_key.h>
> >>> #include <linux/static_call.h>
> >>> +#include <linux/btf.h>
> >>> +#include <linux/btf_ids.h>
> >>>
> >>> #include <asm/hpet.h>
> >>> #include <asm/timer.h>
> >>> @@ -29,6 +31,7 @@
> >>> #include <asm/intel-family.h>
> >>> #include <asm/i8259.h>
> >>> #include <asm/uv/uv.h>
> >>> +#include <asm/tlbflush.h>
> >>>
> >>> unsigned int __read_mostly cpu_khz; /* TSC clocks / usec, not used here */
> >>> EXPORT_SYMBOL(cpu_khz);
> >>> @@ -1551,6 +1554,24 @@ void __init tsc_early_init(void)
> >>> tsc_enable_sched_clock();
> >>> }
> >>>
> >>> +u64 bpf_rdtsc(void)
> >>> +{
> >>> + /* Check if Time Stamp is enabled only in ring 0 */
> >>> + if (cr4_read_shadow() & X86_CR4_TSD)
> >>> + return 0;
> >> Why check this? It's always enabled in the kernel, no?
>
> It is always enabled, but there are certain syscalls that can be used
> to disable TSC access for oneself: prctl(PR_SET_TSC, ...) and
> seccomp(SECCOMP_SET_MODE_STRICT, ...). Not having the check in place
> would in theory allow a restricted BPF program to circumvent this (if
> there ever were such a thing). But yes, I do agree it is debatable
> whether this part should be there at all.
What do you mean 'circumvent'?
It's a tracing bpf prog running in the kernel, loaded by root,
and reading tsc for the purposes of the kernel.
There is no unprivileged access to tsc here.
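
(For reference, the per-task restriction Tero mentions is set from
userspace roughly like this. This is a minimal sketch, assuming the
documented prctl(PR_SET_TSC) semantics on x86; it is not part of the
patch itself:)

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
            /* Ask the kernel to fault this task on any later RDTSC. */
            if (prctl(PR_SET_TSC, PR_TSC_SIGSEGV, 0, 0, 0)) {
                    perror("prctl(PR_SET_TSC)");
                    return 1;
            }
            /* From here on, RDTSC in this task raises SIGSEGV. */
            return 0;
    }

Whether a root-loaded tracing program should honor a per-task
restriction like this is exactly the question above.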
>
> >>> +
> >>> + return rdtsc_ordered();
> >> Why _ordered? Why not just rdtsc?
> >> Especially since you want to trace latency, the extra lfence will
> >> ruin the measurements.
> >>
> > If we used it as a fast way to order events across multiple CPUs, I
> > guess we would need the lfence? We use ktime_get_ns() now for things
> > like this when we just need an ordering counter. We have also
> > observed time going backwards with this and have heuristics to
> > correct it, but it's rare.
>
> Yeah, I think it is better to induce some extra latency instead of
> having some weird ordering issues with the timestamps.
lfence is not 'some extra latency'.
I suspect rdtsc_ordered() will be slower than bpf_ktime_get_ns().
What's the point of using it then?
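
(To make the cost concrete: on current x86, rdtsc_ordered() is
essentially a fenced RDTSC. Below is a simplified sketch of the two
variants; the real implementation in arch/x86/include/asm/msr.h picks
the fence via alternatives, and the names here are illustrative only:)

    static inline u64 rdtsc_plain(void)
    {
            u32 lo, hi;

            /* Plain RDTSC: cheap, but can execute speculatively out
             * of order with surrounding instructions. */
            asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
            return ((u64)hi << 32) | lo;
    }

    static inline u64 rdtsc_fenced(void)
    {
            u32 lo, hi;

            /* LFENCE waits for all preceding instructions to complete
             * before RDTSC executes; that pipeline drain, typically
             * tens of cycles, is the "extra latency" in question. */
            asm volatile("lfence; rdtsc" : "=a" (lo), "=d" (hi));
            return ((u64)hi << 32) | lo;
    }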
>
> Also, things like ftrace use rdtsc_ordered() as the underlying clock
> if you select x86-tsc as the trace clock (see
> arch/x86/kernel/trace_clock.c).
>
> -Tero
>