Message-ID: <20090129161712.GC28984@elte.hu>
Date:	Thu, 29 Jan 2009 17:17:12 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Steven Rostedt <rostedt@...dmis.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Rusty Russell <rusty@...tcorp.com.au>, npiggin@...e.de,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Arjan van de Ven <arjan@...radead.org>,
	jens.axboe@...cle.com
Subject: Re: [PATCH -v2] use per cpu data for single cpu ipi calls


* Peter Zijlstra <peterz@...radead.org> wrote:

> On Thu, 2009-01-29 at 10:08 -0500, Steven Rostedt wrote:
> > The smp_call_function can be passed a wait parameter telling it to
> > wait for all the functions running on other CPUs to complete before
> > returning, or to return without waiting. Unfortunately, the no-wait
> > option is currently just a suggestion and not mandatory. That is,
> > smp_call_function may decide to wait anyway, even when asked not to.
> > 
> > The reason is that it uses kmalloc to allocate the storage sent to
> > the called CPU, and that CPU frees it when it is done. But if the
> > allocation fails, the stack is used instead, which means we must
> > wait for the called CPU to finish before continuing.
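> > 
> > Roughly, the single-CPU path today ends up doing something like this
> > (a simplified sketch, names approximate):
> > 
> > 	struct call_single_data d;
> > 	struct call_single_data *data = &d;
> > 
> > 	if (!wait) {
> > 		data = kmalloc(sizeof(*data), GFP_ATOMIC);
> > 		if (data)
> > 			data->flags = CSD_FLAG_ALLOC;
> > 		else {
> > 			/* allocation failed: fall back to the stack
> > 			   and force the wait */
> > 			data = &d;
> > 			wait = 1;
> > 		}
> > 	}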
> > 
> > Unfortunately, some callers do not abide by this hint and depend on
> > the non-wait option actually not waiting. The MTRR code, for
> > instance, will deadlock if smp_call_function is made to wait: the
> > smp_call_function will wait for the other CPUs to finish their
> > called functions, but those functions are themselves waiting on the
> > caller to continue.
> > 
> > This patch changes the generic smp_call_function code to use per cpu
> > data if the allocation of the data fails for a single CPU call.
> > smp_call_function_many falls back to smp_call_function_single if its
> > own allocation fails, and smp_call_function_single is modified so
> > that it no longer forces the wait state.
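> > 
> > For the single CPU call the no-memory case now grabs the calling
> > CPU's per cpu slot instead of the stack, along these lines (just an
> > illustration, not the exact diff; csd_lock() stands for the "wait
> > for the bit and take it" step described below):
> > 
> > 	static DEFINE_PER_CPU(struct call_single_data, csd_data);
> > 
> > 	data = kmalloc(sizeof(*data), GFP_ATOMIC);
> > 	if (data)
> > 		data->flags = CSD_FLAG_ALLOC;
> > 	else {
> > 		data = &__get_cpu_var(csd_data);
> > 		/* take the per cpu slot; see the locking below */
> > 		csd_lock(data);
> > 	}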
> > 
> > Since we are now using a single data element per cpu, we must
> > synchronize the callers to prevent a second caller from modifying
> > the data before the first caller's IPI function completes. To do so,
> > I added a flag to call_single_data called CSD_FLAG_LOCK. When the
> > single CPU is called (which can also happen when a many call fails
> > its alloc), we set the LOCK bit on this per cpu data. When the
> > called IPI function finishes, the callee clears the LOCK bit.
> > 
> > The caller must wait until the LOCK bit is cleared before setting
> > it; once it is cleared, no IPI function is using the data.
> > A spinlock is used to synchronize the setting of the bit between
> > callers. Since only one callee can be running at a time, and it
> > is the only thing that clears the bit, the IPI handler does not
> > need any locking.
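> > 
> > In pseudo code the handshake looks roughly like this (again only a
> > sketch; csd_data_lock stands for the per cpu spinlock):
> > 
> > 	/* caller side, before reusing the per cpu slot: */
> > 	spin_lock_irqsave(&__get_cpu_var(csd_data_lock), flags);
> > 	while (data->flags & CSD_FLAG_LOCK)
> > 		cpu_relax();
> > 	data->flags = CSD_FLAG_LOCK;
> > 	spin_unlock_irqrestore(&__get_cpu_var(csd_data_lock), flags);
> > 
> > 	/* callee side, in the IPI handler, no locking needed: */
> > 	data->func(data->info);
> > 	if (data->flags & CSD_FLAG_LOCK) {
> > 		/* make the function's effects visible before releasing */
> > 		smp_wmb();
> > 		data->flags &= ~CSD_FLAG_LOCK;
> > 	}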
> > 
> >  [
> >    changes for v2:
> > 
> >    -- kept kmalloc and only use per cpu if kmalloc fails.
> >           (Requested by Peter Zijlstra)
> > 
> >    -- added per cpu spinlocks
> >           (Requested by Andrew Morton and Peter Zijlstra)
> >  ]
> > 
> > Signed-off-by: Steven Rostedt <srostedt@...hat.com>
> 
> Looks nice, thanks!
> 
> Acked-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>

started testing it in tip/core/urgent, thanks guys!

	Ingo