Date:	Mon, 8 Feb 2010 14:18:58 -0500
From:	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	linux-kernel@...r.kernel.org, laijs@...fujitsu.com,
	dipankar@...ibm.com, akpm@...ux-foundation.org,
	mathieu.desnoyers@...ymtl.ca, josh@...htriplett.org,
	dvhltc@...ibm.com, niv@...ibm.com, tglx@...utronix.de,
	peterz@...radead.org, rostedt@...dmis.org, Valdis.Kletnieks@...edu,
	dhowells@...hat.com
Subject: lockdep rcu-preempt and synchronize_srcu() awareness

Hi,

I just thought about the following deadlock scenario involving RCU-preempt and
mutexes. I see that lockdep does not warn about it, and it actually triggers a
deadlock on my box. It might be worth addressing for TREE_PREEMPT_RCU configs.

CPU A:
    mutex_lock(&test_mutex);
    synchronize_rcu();
    mutex_unlock(&test_mutex);

CPU B:
    rcu_read_lock();
    mutex_lock(&test_mutex);
    mutex_unlock(&test_mutex);
    rcu_read_unlock();

Given that it is not legitimate to take a mutex from within an RCU read-side
critical section in non-preemptible configs, I guess this is not much of a
real-life problem there. But I think SRCU is also affected, because there is no
lockdep annotation around synchronize_srcu().

So I think it would be good to mark rcu_read_lock()/rcu_read_unlock() as not
permitting might_sleep() in non-preemptible RCU configs, and having a look at
lockdep SRCU support might be worthwhile.

The following test module triggers the problem:


/* test-rcu-lockdep.c
 *
 * Test RCU-awareness of lockdep. Don't look at the interface, it's awful.
 * Run, in parallel:
 *
 * cat /proc/testa
 * cat /proc/testb
 */

#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/proc_fs.h>
#include <linux/sched.h>
#include <linux/delay.h>

static struct proc_dir_entry *pentrya;
static struct proc_dir_entry *pentryb;

static DEFINE_MUTEX(test_mutex);

static int my_opena(struct inode *inode, struct file *file)
{
	mutex_lock(&test_mutex);
	synchronize_rcu();
	mutex_unlock(&test_mutex);

	return -EPERM;
}


static const struct file_operations my_operationsa = {
	.open = my_opena,
};

static int my_openb(struct inode *inode, struct file *file)
{
	rcu_read_lock();
	mutex_lock(&test_mutex);
	ssleep(1);
	mutex_unlock(&test_mutex);
	rcu_read_unlock();


	return -EPERM;
}


static const struct file_operations my_operationsb = {
	.open = my_openb,
};

int init_module(void)
{
	pentrya = create_proc_entry("testa", 0444, NULL);
	if (pentrya)
		pentrya->proc_fops = &my_operationsa;

	pentryb = create_proc_entry("testb", 0444, NULL);
	if (pentryb)
		pentryb->proc_fops = &my_operationsb;

	return 0;
}

void cleanup_module(void)
{
	remove_proc_entry("testa", NULL);
	remove_proc_entry("testb", NULL);
}

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Mathieu Desnoyers");
MODULE_DESCRIPTION("lockdep rcu test");



Thanks,

Mathieu