Message-ID: <20250110083102.GA4213@noisy.programming.kicks-ass.net>
Date: Fri, 10 Jan 2025 09:31:02 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Changwoo Min <changwoo@...lia.com>
Cc: tj@...nel.org, void@...ifault.com, arighi@...dia.com, mingo@...hat.com,
kernel-dev@...lia.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 2/6] sched_ext: Implement scx_bpf_now()
On Thu, Jan 09, 2025 at 10:14:52PM +0900, Changwoo Min wrote:
> Returns a high-performance monotonically non-decreasing clock for the current
> CPU. The clock returned is in nanoseconds.
>
> It provides the following properties:
>
> 1) High performance: Many BPF schedulers call bpf_ktime_get_ns() frequently
> to account for execution time and track tasks' runtime properties.
> Unfortunately, on some hardware platforms, bpf_ktime_get_ns() -- which
> eventually reads a hardware timestamp counter -- is neither performant nor
> scalable. scx_bpf_now() aims to provide a high-performance clock by
> using the rq clock in the scheduler core whenever possible.
>
> 2) High enough resolution for BPF scheduler use cases: In most BPF
> scheduler use cases, the required clock resolution is lower than that of
> the most accurate hardware clock (e.g., rdtsc on x86). scx_bpf_now()
> uses the rq clock in the scheduler core whenever it is valid. The rq
> clock is considered valid from the time it is updated (update_rq_clock())
> until the rq is unlocked (rq_unpin_lock()).
>
> 3) Monotonically non-decreasing clock on the same CPU: scx_bpf_now()
> guarantees the clock never goes backward when compared on the same CPU.
> When comparing clocks across different CPUs, however, there is no such
> guarantee -- the clock can go backward. The clock is monotonically
> *non-decreasing* rather than strictly increasing, so two scx_bpf_now()
> calls on the same CPU during the same rq-clock-valid period may return
> the same value.
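>
> Conceptually, the per-CPU guarantee amounts to clamping each reading
> against the last value returned on that CPU. The following is only an
> illustrative sketch; the helper and per-CPU variable are made up for
> this example and are not the patch's actual code:
>
>     /* hypothetical per-CPU record of the last value returned */
>     static DEFINE_PER_CPU(u64, scx_prev_clock);
>
>     /* Clamp 'clock' so this CPU never observes it going backward.
>      * Assumes the caller runs with preemption disabled. */
>     static u64 scx_clamp_clock(u64 clock)
>     {
>             u64 *prev = this_cpu_ptr(&scx_prev_clock);
>
>             if (clock < *prev)
>                     clock = *prev;
>             else
>                     *prev = clock;
>             return clock;
>     }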
>
> An rq clock becomes valid when it is updated using update_rq_clock()
> and invalidated when the rq is unlocked using rq_unpin_lock().
>
> Let's suppose the following timeline in the scheduler core:
>
> T1. rq_lock(rq)
> T2. update_rq_clock(rq)
> T3. a sched_ext BPF operation
> T4. rq_unlock(rq)
> T5. a sched_ext BPF operation
> T6. rq_lock(rq)
> T7. update_rq_clock(rq)
>
> For [T2, T4), we consider the rq clock valid (SCX_RQ_CLK_VALID is
> set), so scx_bpf_now() calls during [T2, T4) (including T3) will
> return the rq clock updated at T2. For the duration [T4, T7), a BPF
> scheduler can still call scx_bpf_now() (T5), but we consider the rq
> clock invalid (SCX_RQ_CLK_VALID is cleared at T4); a call at T5
> therefore returns a fresh clock value by calling sched_clock_cpu()
> internally. Also, to prevent a newly loaded scheduler from reading
> outdated rq clocks left over by a previous scx scheduler, all rq
> clocks are invalidated when a BPF scheduler is unloaded.
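>
> Putting the pieces together, the selection logic is conceptually the
> following. This is a simplified sketch under assumed field names
> (rq->scx.flags, rq->scx.clock), reusing the hypothetical
> scx_clamp_clock() helper sketched above; it is not the actual patch:
>
>     u64 scx_bpf_now(void)
>     {
>             struct rq *rq;
>             u64 clock;
>
>             preempt_disable();
>             rq = this_rq();
>             if (rq->scx.flags & SCX_RQ_CLK_VALID)
>                     /* reuse the rq clock captured at update_rq_clock() */
>                     clock = rq->scx.clock;
>             else
>                     /* rq clock is stale; take a fresh per-CPU reading */
>                     clock = sched_clock_cpu(smp_processor_id());
>             /* never hand out a value older than the previous one */
>             clock = scx_clamp_clock(clock);
>             preempt_enable();
>
>             return clock;
>     }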
>
> One example of calling scx_bpf_now() while the rq clock is invalid
> (as at T5) is in scx_central [1]. The scx_central scheduler uses a BPF
> timer for preemptive scheduling. Every millisecond, the timer callback
> checks whether the currently running tasks have exceeded their
> timeslice. At the beginning of the BPF timer callback (central_timerfn
> in scx_central.bpf.c), scx_central reads the current time. When the
> BPF timer callback runs, the rq clock may be invalid, as at T5. In
> that case, scx_bpf_now() returns a fresh clock value rather than the
> stale one from T2.
>
> [1] https://github.com/sched-ext/scx/blob/main/scheds/c/scx_central.bpf.c
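>
> A minimal sketch of such a timer callback, in the spirit of
> scx_central; apart from scx_bpf_now() and bpf_timer_start(), the
> identifiers below are illustrative:
>
>     static int central_timerfn(void *map, int *key, struct bpf_timer *timer)
>     {
>             /* The rq clock may be invalid here (cf. T5); scx_bpf_now()
>              * then falls back to a fresh sched_clock_cpu() reading. */
>             u64 now = scx_bpf_now();
>
>             /* ... compare 'now' against each running task's start time
>              * plus its timeslice and preempt the overrunners ... */
>
>             bpf_timer_start(timer, NSEC_PER_MSEC, 0);   /* re-arm in 1 msec */
>             return 0;
>     }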
>
> Signed-off-by: Changwoo Min <changwoo@...lia.com>
This one looks good, thanks!
Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>