Message-Id: <20090128140725.782f5cc1.akpm@linux-foundation.org>
Date:	Wed, 28 Jan 2009 14:07:25 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
	mingo@...e.hu, tglx@...utronix.de, peterz@...radead.org,
	arjan@...radead.org, rusty@...tcorp.com.au, jens.axboe@...cle.com
Subject: Re: Buggy IPI and MTRR code on low memory

On Wed, 28 Jan 2009 16:23:32 -0500 (EST)
Steven Rostedt <rostedt@...dmis.org> wrote:

> 
> On Wed, 28 Jan 2009, Andrew Morton wrote:
> 
> > On Wed, 28 Jan 2009 13:12:02 -0800
> > Andrew Morton <akpm@...ux-foundation.org> wrote:
> > 
> > > Thought: do we need to do the kmalloc at all?  Perhaps we can instead
> > > use a statically allocated per-cpu call_single_data local to
> > > kernel/smp.c?  It would need a spinlock or something to protect it...
> > 
> > (not a spinlock - get_cpu_var/put_cpu_var will suffice)
> 
> Is that enough?
> 
> The IPI handlers may still be processing the data after smp_call_function 
> returns.  What happens if two smp_call_functions are executed one after 
> the other?  The second one may corrupt the data if the IPI function has 
> not executed yet.
> 
> We may still need that "RELEASE" flag for that case.
> 

Good point.

Forget I said that - smp_call_function_single() is calling a function
on a *different* CPU, so data which is local to the calling CPU is of
course in the wrong place.

So if we're going to use per-cpu data then we'd need to protect it with
a lock.  We could (should?) have a separate lock for each destination
CPU.
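
Something along these lines for the static per-cpu data, perhaps (an
untested sketch - the csd_lock/csd_data names are invented here, not
what kernel/smp.c currently has):

static DEFINE_PER_CPU(spinlock_t, csd_lock);
static DEFINE_PER_CPU(struct call_single_data, csd_data);

static int __init csd_lock_init(void)
{
	int cpu;

	/* per-cpu spinlocks want explicit runtime initialisation */
	for_each_possible_cpu(cpu)
		spin_lock_init(&per_cpu(csd_lock, cpu));
	return 0;
}
early_initcall(csd_lock_init);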

We could make smp_call_function_single() block until the IPI handler
has consumed the call_single_data, in which case we might as well put
the call_single_data onto the caller's stack, as you've done.
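
That's roughly the on-stack-and-wait pattern, something like this (from
memory, not the exact kernel/smp.c code - iirc CSD_FLAG_WAIT is the flag
the generic code uses for this):

	struct call_single_data d = {
		.func	= func,
		.info	= info,
		.flags	= CSD_FLAG_WAIT,
	};

	/* ...queue it and send the IPI, then spin until the IPI
	   handler clears the flag... */
	while (d.flags & CSD_FLAG_WAIT)
		cpu_relax();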

Or we could take the per-cpu spinlock in smp_call_function_single(),
and release it in the IPI handler, after the call_single_data has been
consumed, which is a bit more efficient.  But I have a suspicion that
this is AB/BA deadlockable.

<tries to think of any scenarios>

<fails>

So we have

smp_call_function_single(int cpu)
{
	spin_lock(&per_cpu(locks, cpu));
	per_cpu(call_single_data, cpu) = <stuff>;
	send_ipi(cpu);
	return;
}

ipi_handler(...)
{
	int cpu = smp_processor_id();
	struct call_single_data csd = per_cpu(call_single_data, cpu);

	spin_unlock(&per_cpu(locks, cpu));
	use(csd);
}

does that work?

Dunno if it's any better than what you have now.  It does however
remove the unpleasant "try kmalloc and if that failed, try something
else" mess.
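
For reference, the pattern I mean is roughly this (paraphrasing from
memory, not the exact code):

	data = kmalloc(sizeof(*data), GFP_ATOMIC);
	if (data) {
		/* the IPI handler will kfree() it for us */
		data->flags = CSD_FLAG_ALLOC;
	} else {
		/* allocation failed: fall back to the on-stack copy
		   and block until the handler has consumed it */
		data = &d;
		data->flags = CSD_FLAG_WAIT;
		wait = 1;
	}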
