Date:   Wed, 9 Nov 2022 10:09:36 -0500
From:   Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
        "Paul E . McKenney" <paulmck@...nel.org>,
        Boqun Feng <boqun.feng@...il.com>,
        "H . Peter Anvin" <hpa@...or.com>, Paul Turner <pjt@...gle.com>,
        linux-api@...r.kernel.org, Christian Brauner <brauner@...nel.org>,
        Florian Weimer <fw@...eb.enyo.de>, David.Laight@...lab.com,
        carlos@...hat.com, Peter Oskolkov <posk@...k.io>,
        Alexander Mikhalitsyn <alexander@...alicyn.com>,
        Chris Kennelly <ckennelly@...gle.com>
Subject: Re: [PATCH v5 08/24] sched: Introduce per memory space current
 virtual cpu id

On 2022-11-09 04:42, Peter Zijlstra wrote:
> On Thu, Nov 03, 2022 at 04:03:43PM -0400, Mathieu Desnoyers wrote:
> 
>> +void sched_vcpu_exit_signals(struct task_struct *t)
>> +{
>> +	struct mm_struct *mm = t->mm;
>> +	unsigned long flags;
>> +
>> +	if (!mm)
>> +		return;
>> +	local_irq_save(flags);
>> +	mm_vcpu_put(mm, t->mm_vcpu);
>> +	t->mm_vcpu = -1;
>> +	t->mm_vcpu_active = 0;
>> +	local_irq_restore(flags);
>> +}
>> +
>> +void sched_vcpu_before_execve(struct task_struct *t)
>> +{
>> +	struct mm_struct *mm = t->mm;
>> +	unsigned long flags;
>> +
>> +	if (!mm)
>> +		return;
>> +	local_irq_save(flags);
>> +	mm_vcpu_put(mm, t->mm_vcpu);
>> +	t->mm_vcpu = -1;
>> +	t->mm_vcpu_active = 0;
>> +	local_irq_restore(flags);
>> +}
>> +
>> +void sched_vcpu_after_execve(struct task_struct *t)
>> +{
>> +	struct mm_struct *mm = t->mm;
>> +	unsigned long flags;
>> +
>> +	WARN_ON_ONCE((t->flags & PF_KTHREAD) || !t->mm);
>> +
>> +	local_irq_save(flags);
>> +	t->mm_vcpu = mm_vcpu_get(mm);
>> +	t->mm_vcpu_active = 1;
>> +	local_irq_restore(flags);
>> +	rseq_set_notify_resume(t);
>> +}
> 
>> +static inline void mm_vcpu_put(struct mm_struct *mm, int vcpu)
>> +{
>> +	lockdep_assert_irqs_disabled();
>> +	if (vcpu < 0)
>> +		return;
>> +	spin_lock(&mm->vcpu_lock);
>> +	__cpumask_clear_cpu(vcpu, mm_vcpumask(mm));
>> +	spin_unlock(&mm->vcpu_lock);
>> +}
>> +
>> +static inline int mm_vcpu_get(struct mm_struct *mm)
>> +{
>> +	int ret;
>> +
>> +	lockdep_assert_irqs_disabled();
>> +	spin_lock(&mm->vcpu_lock);
>> +	ret = __mm_vcpu_get(mm);
>> +	spin_unlock(&mm->vcpu_lock);
>> +	return ret;
>> +}
> 
> 
> This:
> 
> 	local_irq_disable()
> 	spin_lock()
> 
> thing is a PREEMPT_RT anti-pattern.
> 
> At the very very least this should then be raw_spin_lock(), not in the
> least because you're calling this from under rq->lock, which itself is a
> raw_spin_lock_t.

Very good point, will fix using raw_spinlock_t.
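For the record, here is a rough sketch of what that conversion could look like (not the actual follow-up patch; field and helper names are carried over from the hunks above). On PREEMPT_RT, spinlock_t becomes a sleeping rtmutex, so it cannot be taken with interrupts disabled or under rq->lock, which is itself a raw_spinlock_t; raw_spinlock_t keeps a true spinning lock on all configurations:

```c
/* In struct mm_struct, the lock type would change: */
/*	raw_spinlock_t vcpu_lock;    was: spinlock_t vcpu_lock; */

static inline void mm_vcpu_put(struct mm_struct *mm, int vcpu)
{
	lockdep_assert_irqs_disabled();
	if (vcpu < 0)
		return;
	/* raw variant: stays a spinning lock even on PREEMPT_RT */
	raw_spin_lock(&mm->vcpu_lock);
	__cpumask_clear_cpu(vcpu, mm_vcpumask(mm));
	raw_spin_unlock(&mm->vcpu_lock);
}

static inline int mm_vcpu_get(struct mm_struct *mm)
{
	int ret;

	lockdep_assert_irqs_disabled();
	raw_spin_lock(&mm->vcpu_lock);
	ret = __mm_vcpu_get(mm);
	raw_spin_unlock(&mm->vcpu_lock);
	return ret;
}
```

The initializer would likewise move from spin_lock_init() to raw_spin_lock_init(). Since the critical sections here only clear a cpumask bit or scan for a free vcpu id, they are short and bounded, which is the usual requirement for holding a raw_spinlock_t.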

Thanks,

Mathieu


-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
