Message-ID: <20190104183715.GC187360@arrakis.emea.arm.com>
Date:   Fri, 4 Jan 2019 18:37:16 +0000
From:   Catalin Marinas <catalin.marinas@....com>
To:     zhe.he@...driver.com
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: kmemleak: Turn kmemleak_lock to spin lock and RCU
 primitives

On Fri, Jan 04, 2019 at 10:29:13PM +0800, zhe.he@...driver.com wrote:
> It's not necessary to keep consistency between readers and writers of
> kmemleak_lock. RCU is more appropriate for this case. To gain better
> performance, we turn the reader locks into RCU read locks and the writer
> locks into normal spinlocks.

This won't work.

> @@ -515,9 +515,7 @@ static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
>  	struct kmemleak_object *object;
>  
>  	rcu_read_lock();
> -	read_lock_irqsave(&kmemleak_lock, flags);
>  	object = lookup_object(ptr, alias);
> -	read_unlock_irqrestore(&kmemleak_lock, flags);

The comment on lookup_object() states that the kmemleak_lock must be
held. That's because we don't have an RCU-like mechanism for removing
objects from the object_tree_root:

>  
>  	/* check whether the object is still available */
>  	if (object && !get_object(object))
> @@ -537,13 +535,13 @@ static struct kmemleak_object *find_and_remove_object(unsigned long ptr, int ali
>  	unsigned long flags;
>  	struct kmemleak_object *object;
>  
> -	write_lock_irqsave(&kmemleak_lock, flags);
> +	spin_lock_irqsave(&kmemleak_lock, flags);
>  	object = lookup_object(ptr, alias);
>  	if (object) {
>  		rb_erase(&object->rb_node, &object_tree_root);
>  		list_del_rcu(&object->object_list);
>  	}
> -	write_unlock_irqrestore(&kmemleak_lock, flags);
> +	spin_unlock_irqrestore(&kmemleak_lock, flags);

So here, while list removal is RCU-safe, rb_erase() is not.
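
To make that concrete, here's a rough sketch (mine, illustrative only,
not the kmemleak code; the struct and names are made up) of removing an
object that sits on both an RCU-protected list and an rbtree:

#include <linux/list.h>
#include <linux/rbtree.h>
#include <linux/rculist.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct demo_object {
	struct rb_node rb_node;		/* in a tree protected by demo_lock */
	struct list_head list;		/* on an RCU-protected list */
	struct rcu_head rcu;
};

static DEFINE_SPINLOCK(demo_lock);
static struct rb_root demo_tree = RB_ROOT;

static void demo_remove(struct demo_object *obj)
{
	unsigned long flags;

	spin_lock_irqsave(&demo_lock, flags);
	/*
	 * Fine for rcu_read_lock() list walkers: list_del_rcu() leaves
	 * obj->list.next intact, so a reader already on obj can still
	 * step forward, and obj is only freed after a grace period.
	 */
	list_del_rcu(&obj->list);
	/*
	 * Not fine for lockless tree walkers: rb_erase() rebalances the
	 * tree and rewrites parent/child pointers of *other* nodes, so a
	 * reader walking the tree without demo_lock can follow a link
	 * mid-rotation and land on the wrong (or a freed) node.
	 */
	rb_erase(&obj->rb_node, &demo_tree);
	spin_unlock_irqrestore(&demo_lock, flags);

	kfree_rcu(obj, rcu);
}

Dropping kmemleak_lock around lookup_object() leaves the tree walk
racing against exactly that rb_erase().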

If you have time to implement an rb_erase_rcu(), then we could reduce
the locking in kmemleak.
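
The writer side could then stay roughly as in your hunk above, with only
the erase primitive swapped (rb_erase_rcu() is hypothetical here, it
does not exist today):

	spin_lock_irqsave(&kmemleak_lock, flags);
	object = lookup_object(ptr, alias);
	if (object) {
		/* hypothetical: would have to keep the tree walkable for
		 * concurrent rcu_read_lock()-only readers while rebalancing */
		rb_erase_rcu(&object->rb_node, &object_tree_root);
		list_del_rcu(&object->object_list);
	}
	spin_unlock_irqrestore(&kmemleak_lock, flags);

and the lockless lookup in find_and_get_object() would then be safe
under rcu_read_lock() alone.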

-- 
Catalin
