Message-ID: <20090217172113.GA26459@redhat.com>
Date:	Tue, 17 Feb 2009 18:21:13 +0100
From:	Oleg Nesterov <oleg@...hat.com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Nick Piggin <npiggin@...e.de>,
	Jens Axboe <jens.axboe@...cle.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...e.hu>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Steven Rostedt <rostedt@...dmis.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] generic-smp: remove kmalloc()

On 02/17, Peter Zijlstra wrote:
>
> Ok, so this is on top of Nick's cleanup from earlier today, and folds
> everything.
>
> No more RCU games as the storage for per-cpu entries is permanent - cpu
> hotplug should be good because it does a synchronize_sched().
>
> What we do play games with is the global list: we can extract entries
> and place them at the front while it's being observed. This means that
> the list iteration can see some entries twice (not a problem, since we
> remove ourselves from the cpumask), but it cannot miss entries.

I think this is all correct.

But I wonder whether we have another problem. Before this patch,
smp_call_function_many(wait => 0) always succeeded, no matter which
locks the caller held.

After this patch we can deadlock: csd_lock() can spin forever if the
caller holds a lock that another func still in flight also needs.

IOW,
	void func(void *arg)
	{
		lock(LOCK);
		unlock(LOCK);
	}

CPU 0 does:

	smp_call_function(func, NULL, 0);
	lock(LOCK);
	smp_call_function(another_func, NULL, 0);
	unlock(LOCK);

If CPU 0 takes LOCK before CPU 1 gets to run func, then func spins on
LOCK, the per-cpu entry used by the first call is never released, and
the 2nd smp_call_function() hangs in csd_lock() while CPU 0 still
holds LOCK.
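
To make it explicit where the hang is: I am reading csd_lock() and
csd_unlock() as a plain spin on a per-entry flag, roughly the sketch
below (CSD_FLAG_LOCK, csd_lock_wait() and the barriers are my
assumption, the patch may differ in detail). If I read the patch
right, the per-cpu entry is only csd_unlock()'ed once every CPU has
run func, so with CPU 1 stuck in func() on LOCK, the 2nd
smp_call_function() on CPU 0 spins in csd_lock() forever while
holding LOCK.

	/*
	 * Sketch only, assuming reuse of the per-cpu entry is
	 * serialized by a CSD_FLAG_LOCK-style flag.
	 */
	static void csd_lock_wait(struct call_single_data *data)
	{
		/* wait until the previous user of this entry is done */
		while (data->flags & CSD_FLAG_LOCK)
			cpu_relax();
	}

	static void csd_lock(struct call_single_data *data)
	{
		csd_lock_wait(data);
		data->flags |= CSD_FLAG_LOCK;
		smp_mb();	/* order the flag vs. the func/info setup */
	}

	static void csd_unlock(struct call_single_data *data)
	{
		smp_mb();	/* let func()'s effects be seen before reuse */
		data->flags &= ~CSD_FLAG_LOCK;
	}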

I am not sure this is a real problem (even if I am right); perhaps
the answer is just "don't do that".

But, OTOH, AFAICS we can tweak generic_smp_call_function_interrupt()
a bit to avoid this problem. Something like:

	list_for_each_entry_rcu(data, &call_function.queue, csd.list) {
		void (*func)(void *);
		void *info;
		int refs, wait;

		spin_lock(&data->lock);
		if (!cpumask_test_cpu(cpu, data->cpumask)) {
			spin_unlock(&data->lock);
			continue;
		}
		cpumask_clear_cpu(cpu, data->cpumask);
		WARN_ON(data->refs == 0);
		refs = --data->refs;
		func = data->csd.func;
		info = data->csd.info;
		wait = (data->flags & CSD_FLAG_WAIT);
		spin_unlock(&data->lock);

		if (!refs) {
			spin_lock(&call_function.lock);
			list_del_rcu(&data->csd.list);
			spin_unlock(&call_function.lock);
			csd_unlock(&data->csd);
		}
		
		func(info);
		if (!refs && wait)
			csd_complete(&data->csd);
	}

I am afraid I may have missed something, and the code above looks
wrong at first glance because it does csd_unlock() first and
csd_complete() only after running func().

But if wait == T, then nobody can reuse this per-cpu entry: the caller
of smp_call_function_many() is still spinning in csd_wait() on the
same CPU, so it cannot submit another call that would reuse the entry.
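
(To spell out the assumption: I am reading csd_wait()/csd_complete()
as the usual CSD_FLAG_WAIT handshake, roughly the sketch below; the
names and barriers are my guess, not necessarily what the patch ends
up with. If that reading is right, the entry cannot be handed out
again until the waiter on the same CPU returns from csd_wait(), even
though csd_unlock() has already run.)

	/* Sketch only, assuming a CSD_FLAG_WAIT handshake. */
	static void csd_wait(struct call_single_data *data)
	{
		/* the caller spins here until csd_complete() */
		while (data->flags & CSD_FLAG_WAIT)
			cpu_relax();
	}

	static void csd_complete(struct call_single_data *data)
	{
		smp_wmb();	/* make func()'s results visible to the waiter */
		data->flags &= ~CSD_FLAG_WAIT;
	}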

What do you think?

Oleg.

