Message-ID: <Y2t2DcHHLVB3tDfw@hirez.programming.kicks-ass.net>
Date: Wed, 9 Nov 2022 10:42:37 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
"Paul E . McKenney" <paulmck@...nel.org>,
Boqun Feng <boqun.feng@...il.com>,
"H . Peter Anvin" <hpa@...or.com>, Paul Turner <pjt@...gle.com>,
linux-api@...r.kernel.org, Christian Brauner <brauner@...nel.org>,
Florian Weimer <fw@...eb.enyo.de>, David.Laight@...lab.com,
carlos@...hat.com, Peter Oskolkov <posk@...k.io>,
Alexander Mikhalitsyn <alexander@...alicyn.com>,
Chris Kennelly <ckennelly@...gle.com>
Subject: Re: [PATCH v5 08/24] sched: Introduce per memory space current
virtual cpu id
On Thu, Nov 03, 2022 at 04:03:43PM -0400, Mathieu Desnoyers wrote:
> +void sched_vcpu_exit_signals(struct task_struct *t)
> +{
> +	struct mm_struct *mm = t->mm;
> +	unsigned long flags;
> +
> +	if (!mm)
> +		return;
> +	local_irq_save(flags);
> +	mm_vcpu_put(mm, t->mm_vcpu);
> +	t->mm_vcpu = -1;
> +	t->mm_vcpu_active = 0;
> +	local_irq_restore(flags);
> +}
> +
> +void sched_vcpu_before_execve(struct task_struct *t)
> +{
> +	struct mm_struct *mm = t->mm;
> +	unsigned long flags;
> +
> +	if (!mm)
> +		return;
> +	local_irq_save(flags);
> +	mm_vcpu_put(mm, t->mm_vcpu);
> +	t->mm_vcpu = -1;
> +	t->mm_vcpu_active = 0;
> +	local_irq_restore(flags);
> +}
> +
> +void sched_vcpu_after_execve(struct task_struct *t)
> +{
> +	struct mm_struct *mm = t->mm;
> +	unsigned long flags;
> +
> +	WARN_ON_ONCE((t->flags & PF_KTHREAD) || !t->mm);
> +
> +	local_irq_save(flags);
> +	t->mm_vcpu = mm_vcpu_get(mm);
> +	t->mm_vcpu_active = 1;
> +	local_irq_restore(flags);
> +	rseq_set_notify_resume(t);
> +}
> +static inline void mm_vcpu_put(struct mm_struct *mm, int vcpu)
> +{
> +	lockdep_assert_irqs_disabled();
> +	if (vcpu < 0)
> +		return;
> +	spin_lock(&mm->vcpu_lock);
> +	__cpumask_clear_cpu(vcpu, mm_vcpumask(mm));
> +	spin_unlock(&mm->vcpu_lock);
> +}
> +
> +static inline int mm_vcpu_get(struct mm_struct *mm)
> +{
> +	int ret;
> +
> +	lockdep_assert_irqs_disabled();
> +	spin_lock(&mm->vcpu_lock);
> +	ret = __mm_vcpu_get(mm);
> +	spin_unlock(&mm->vcpu_lock);
> +	return ret;
> +}
This:

	local_irq_disable()
	spin_lock()

thing is a PREEMPT_RT anti-pattern.

At the very least this should then be raw_spin_lock(), not in the
least because you're calling this from under rq->lock, which itself is a
raw_spin_lock_t.
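
That is (completely untested sketch, assuming vcpu_lock has no other
users that need it to be a sleeping lock on RT), something like:

```c
/*
 * On PREEMPT_RT, spinlock_t becomes a sleeping lock, so it must not be
 * taken with IRQs disabled or under rq->lock. Declaring vcpu_lock as
 * raw_spinlock_t keeps it a true spinlock on RT:
 *
 *	raw_spinlock_t vcpu_lock;	(in struct mm_struct)
 */
static inline void mm_vcpu_put(struct mm_struct *mm, int vcpu)
{
	lockdep_assert_irqs_disabled();
	if (vcpu < 0)
		return;
	raw_spin_lock(&mm->vcpu_lock);
	__cpumask_clear_cpu(vcpu, mm_vcpumask(mm));
	raw_spin_unlock(&mm->vcpu_lock);
}

static inline int mm_vcpu_get(struct mm_struct *mm)
{
	int ret;

	lockdep_assert_irqs_disabled();
	raw_spin_lock(&mm->vcpu_lock);
	ret = __mm_vcpu_get(mm);
	raw_spin_unlock(&mm->vcpu_lock);
	return ret;
}
```

Raw spinlocks disable preemption even on RT, so the critical section
behind vcpu_lock then needs to stay short and bounded.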