Message-ID: <20181005163018.icbknlzymwjhdehi@linutronix.de>
Date:   Fri, 5 Oct 2018 18:30:18 +0200
From:   Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To:     Clark Williams <williams@...hat.com>,
        Alexander Potapenko <glider@...gle.com>,
        Dmitry Vyukov <dvyukov@...gle.com>, kasan-dev@...glegroups.com,
        linux-mm@...ck.org
Cc:     linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH] kasan: convert kasan/quarantine_lock to raw_spinlock

On 2018-09-18 10:29:31 [-0500], Clark Williams wrote:

So I received this from Clark:

> The static lock quarantine_lock is used in mm/kasan/quarantine.c to protect
> the quarantine queue data structures. It is taken inside the quarantine queue
> manipulation routines (quarantine_put(), quarantine_reduce() and
> quarantine_remove_cache()), with IRQs disabled. This is no problem on a stock
> kernel, but it is problematic on an RT kernel, where spinlocks are converted
> to rt_mutex based sleeping locks.
> 
> Convert the quarantine_lock to a raw spinlock. The usage of quarantine_lock
> is confined to quarantine.c and the work performed while the lock is held is limited.
> 
> Signed-off-by: Clark Williams <williams@...hat.com>

This is the minimum needed to get this working splat-free on RT. There is
one memory deallocation with IRQs off which should still work on RT in its
current form.
Once this and the on_each_cpu() invocation are dealt with, I was wondering if…
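
For illustration only (this is not part of Clark's patch, and the lock and
function names below are made up): roughly the pattern that splats on
PREEMPT_RT, next to the raw_spinlock_t variant that does not. On RT,
spinlock_t is substituted with a sleeping rt_mutex based lock, so taking it
with interrupts disabled trips the might-sleep check, while raw_spinlock_t
keeps real spinning semantics on both RT and !RT:

#include <linux/irqflags.h>
#include <linux/spinlock.h>

/* Hypothetical locks, named only for this sketch. */
static DEFINE_SPINLOCK(sleeps_on_rt);      /* becomes an rt_mutex on RT */
static DEFINE_RAW_SPINLOCK(always_spins);  /* spins on both RT and !RT  */

static void rt_lock_sketch(void)
{
	unsigned long flags;

	local_irq_save(flags);

	spin_lock(&sleeps_on_rt);      /* splat on RT: may sleep with IRQs off */
	spin_unlock(&sleeps_on_rt);

	raw_spin_lock(&always_spins);  /* fine: never sleeps */
	raw_spin_unlock(&always_spins);

	local_irq_restore(flags);
}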

> ---
>  mm/kasan/quarantine.c | 18 +++++++++---------
>  1 file changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
> index 3a8ddf8baf7d..b209dbaefde8 100644
> --- a/mm/kasan/quarantine.c
> +++ b/mm/kasan/quarantine.c
> @@ -103,7 +103,7 @@ static int quarantine_head;
>  static int quarantine_tail;
>  /* Total size of all objects in global_quarantine across all batches. */
>  static unsigned long quarantine_size;
> -static DEFINE_SPINLOCK(quarantine_lock);
> +static DEFINE_RAW_SPINLOCK(quarantine_lock);
>  DEFINE_STATIC_SRCU(remove_cache_srcu);
>  
>  /* Maximum size of the global queue. */
> @@ -190,7 +190,7 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
>  	if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) {
>  		qlist_move_all(q, &temp);
>  
> -		spin_lock(&quarantine_lock);
> +		raw_spin_lock(&quarantine_lock);
>  		WRITE_ONCE(quarantine_size, quarantine_size + temp.bytes);
>  		qlist_move_all(&temp, &global_quarantine[quarantine_tail]);
>  		if (global_quarantine[quarantine_tail].bytes >=
> @@ -203,7 +203,7 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
>  			if (new_tail != quarantine_head)
>  				quarantine_tail = new_tail;
>  		}
> -		spin_unlock(&quarantine_lock);
> +		raw_spin_unlock(&quarantine_lock);
>  	}
>  
>  	local_irq_restore(flags);
> @@ -230,7 +230,7 @@ void quarantine_reduce(void)
>  	 * expected case).
>  	 */
>  	srcu_idx = srcu_read_lock(&remove_cache_srcu);
> -	spin_lock_irqsave(&quarantine_lock, flags);
> +	raw_spin_lock_irqsave(&quarantine_lock, flags);
>  
>  	/*
>  	 * Update quarantine size in case of hotplug. Allocate a fraction of
> @@ -254,7 +254,7 @@ void quarantine_reduce(void)
>  			quarantine_head = 0;
>  	}
>  
> -	spin_unlock_irqrestore(&quarantine_lock, flags);
> +	raw_spin_unlock_irqrestore(&quarantine_lock, flags);
>  
>  	qlist_free_all(&to_free, NULL);
>  	srcu_read_unlock(&remove_cache_srcu, srcu_idx);
> @@ -310,17 +310,17 @@ void quarantine_remove_cache(struct kmem_cache *cache)
>  	 */
>  	on_each_cpu(per_cpu_remove_cache, cache, 1);
>  
> -	spin_lock_irqsave(&quarantine_lock, flags);
> +	raw_spin_lock_irqsave(&quarantine_lock, flags);
>  	for (i = 0; i < QUARANTINE_BATCHES; i++) {
>  		if (qlist_empty(&global_quarantine[i]))
>  			continue;
>  		qlist_move_cache(&global_quarantine[i], &to_free, cache);
>  		/* Scanning whole quarantine can take a while. */
> -		spin_unlock_irqrestore(&quarantine_lock, flags);
> +		raw_spin_unlock_irqrestore(&quarantine_lock, flags);
>  		cond_resched();
> -		spin_lock_irqsave(&quarantine_lock, flags);
> +		raw_spin_lock_irqsave(&quarantine_lock, flags);
>  	}
> -	spin_unlock_irqrestore(&quarantine_lock, flags);
> +	raw_spin_unlock_irqrestore(&quarantine_lock, flags);
>  
>  	qlist_free_all(&to_free, cache);
>  
> -- 
> 2.17.1
> 

Sebastian
