Message-ID: <1ae1700c-9102-44b1-93f0-1c2ebc4f433e@arm.com>
Date: Mon, 4 Aug 2025 14:02:47 +0100
From: Christian Loehle <christian.loehle@....com>
To: Andrea Righi <arighi@...dia.com>
Cc: tj@...nel.org, void@...ifault.com, linux-kernel@...r.kernel.org,
sched-ext@...ts.linux.dev, changwoo@...lia.com, hodgesd@...a.com,
mingo@...hat.com, peterz@...radead.org
Subject: Re: [PATCH v2 3/3] sched_ext: Guarantee rq lock on scx_bpf_cpu_rq()
On 8/4/25 13:41, Andrea Righi wrote:
> On Mon, Aug 04, 2025 at 12:27:43PM +0100, Christian Loehle wrote:
>> Most fields of the rq returned by scx_bpf_cpu_rq() assume that its
>> rq lock is held; without the rq lock they become meaningless, too.
>> Therefore, only return the rq from scx_bpf_cpu_rq() if we hold that
>> rq's lock.
>>
>> All upstream scx schedulers can be converted to use the new
>> scx_bpf_remote_curr() instead.
>>
>> Signed-off-by: Christian Loehle <christian.loehle@....com>
>> ---
>> kernel/sched/ext.c | 10 +++++++++-
>> 1 file changed, 9 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
>> index 1d9d9cbed0aa..0b05ddc1f100 100644
>> --- a/kernel/sched/ext.c
>> +++ b/kernel/sched/ext.c
>> @@ -7420,10 +7420,18 @@ __bpf_kfunc s32 scx_bpf_task_cpu(const struct task_struct *p)
>> */
>> __bpf_kfunc struct rq *scx_bpf_cpu_rq(s32 cpu)
>> {
>> + struct rq *rq;
>> +
>> if (!kf_cpu_valid(cpu, NULL))
>> return NULL;
>>
>> - return cpu_rq(cpu);
>> + rq = cpu_rq(cpu);
>> + if (rq != scx_locked_rq_state) {
>
> I think you want to check rq != scx_locked_rq(), since scx_locked_rq_state
> is a per-CPU variable.
Duh, of course. m(
>
> We may also want to add a preempt_disable/enable() for consistency. How
> about something like this?
>
> preempt_disable();
> rq = cpu_rq(cpu);
> if (rq != scx_locked_rq()) {
> scx_kf_error("Accessing CPU%d rq from CPU%d without holding its lock",
> cpu, smp_processor_id());
> rq = NULL;
> }
> preempt_enable();
Ack
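
For reference, folding both of Andrea's suggestions into the hunk above would give roughly the following (a sketch of what a v3 might look like, not a tested patch -- it compares against scx_locked_rq() instead of the per-CPU scx_locked_rq_state, and wraps the check in preempt_disable()/enable() so smp_processor_id() and the locked-rq lookup refer to the same CPU):

```c
__bpf_kfunc struct rq *scx_bpf_cpu_rq(s32 cpu)
{
	struct rq *rq;

	if (!kf_cpu_valid(cpu, NULL))
		return NULL;

	/*
	 * Disable preemption so the current CPU cannot change between
	 * reading the locked rq and reporting smp_processor_id().
	 */
	preempt_disable();
	rq = cpu_rq(cpu);
	if (rq != scx_locked_rq()) {
		scx_kf_error("Accessing CPU%d rq from CPU%d without holding its lock",
			     cpu, smp_processor_id());
		rq = NULL;
	}
	preempt_enable();

	return rq;
}
```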