Message-Id: <200807291430.08220.nickpiggin@yahoo.com.au>
Date:	Tue, 29 Jul 2008 14:30:07 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Jeremy Fitzhardinge <jeremy@...p.org>,
	Jens Axboe <jens.axboe@...cle.com>, Andi Kleen <ak@...e.de>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>
Subject: Re: x86: Is there still value in having a special tlb flush IPI vector?

On Tuesday 29 July 2008 09:34, Ingo Molnar wrote:
> * Jeremy Fitzhardinge <jeremy@...p.org> wrote:
> > Now that normal smp_function_call is no longer an enormous bottleneck,
> > is there still value in having a specialised IPI vector for tlb
> > flushes?  It seems like quite a lot of duplicate code.
> >
> > The 64-bit tlb flush multiplexes the various cpus across 8 vectors to
> > increase scalability. If this is a big issue, then the smp function
> > call code can (and should) do the same thing.  (Though looking at it
> > more closely, the way the code uses the 8 vectors is actually a less
> > general way of doing what smp_call_function is doing anyway.)

It definitely is not a clear win. The two paths do not have the same
characteristics, so numbers will be needed.

smp_call_function is now properly scalable in its smp_call_function_single
form. The more general multiple-target case is not so easy: it still
takes a global lock and touches global cachelines.

I don't think it is a good use of time, honestly. Do you have a good reason?


> yep, and we could eliminate the reschedule IPI as well.

No. The rewrite makes it very good at synchronously sending a function
to a single other CPU.

Sending asynchronously requires a slab allocation on the sender and then a
remote slab free (which is nasty for the slab allocator) at the other end,
plus bouncing of locks and cachelines. No way you want to do that in the
reschedule IPI.

Not to mention the minor problem that it still deadlocks when called with
interrupts disabled ;)
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
