Message-ID: <20241116193123.GP22801@noisy.programming.kicks-ass.net>
Date: Sat, 16 Nov 2024 20:31:23 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Changwoo Min <multics69@...il.com>
Cc: tj@...nel.org, void@...ifault.com, mingo@...hat.com,
	changwoo@...lia.com, kernel-dev@...lia.com,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/5] sched_ext: Implement scx_bpf_clock_get_ns()

On Sun, Nov 17, 2024 at 01:01:24AM +0900, Changwoo Min wrote:
> Returns a high-performance monotonically non-decreasing clock for the
> current CPU. The clock returned is in nanoseconds.
> 
> It provides the following properties:
> 
> 1) High performance: Many BPF schedulers call bpf_ktime_get_ns()
>  frequently to account for execution time and track tasks' runtime
>  properties. Unfortunately, on some hardware platforms, bpf_ktime_get_ns()
>  -- which eventually reads a hardware timestamp counter -- is neither
>  performant nor scalable. scx_bpf_clock_get_ns() aims to provide a
>  high-performance clock by using the rq clock in the scheduler core
>  whenever possible.
> 
> 2) High enough resolution for the BPF scheduler use cases: In most BPF
>  scheduler use cases, the required clock resolution is lower than that of
>  the most accurate hardware clock (e.g., rdtsc on x86).
>  scx_bpf_clock_get_ns() basically uses the rq clock in the scheduler core
>  whenever it is valid. The rq clock is considered valid from the time it
>  is updated (update_rq_clock) until the rq is unlocked (rq_unpin_lock).
>  In addition, the rq clock is invalidated after long operations --
>  ops.running() and ops.update_idle() -- in the BPF scheduler.
> 
> 3) Monotonically non-decreasing clock for the same CPU:
>  scx_bpf_clock_get_ns() guarantees that the clock never goes backward
>  when readings are compared on the same CPU. There is no such guarantee
>  across CPUs -- a reading taken on one CPU may appear to go backward
>  relative to a reading taken on another. The clock is monotonically
>  *non-decreasing* rather than strictly increasing: two different
>  scx_bpf_clock_get_ns() calls on the same CPU may return the same value
>  during the same period in which the rq clock is valid.

Have you seen the insides of kernel/sched/clock.c ?
