Message-ID: <20080327103707.GH12346@kernel.dk>
Date:	Thu, 27 Mar 2008 11:37:08 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	linux-kernel@...r.kernel.org, npiggin@...e.de, paulus@...ba.org,
	tglx@...utronix.de, mingo@...hat.com, tony.luck@...el.com,
	Alan.Brunelle@...com
Subject: Re: [PATCH 0/5] Generic smp_call_function(), improvements, and smp_call_function_single()

On Thu, Mar 27 2008, Ingo Molnar wrote:
> 
> * Jens Axboe <jens.axboe@...cle.com> wrote:
> 
> > which is pretty much identical to io-cpu-affinity, except it uses 
> > kernel threads for completion.
> > 
> > The reason why I dropped the kthread approach is that it was slower. 
> > Time from signal to run was about 33% faster with IPI than with 
> > wake_up_process(). In benchmark runs, the IPI approach also won 
> > hands down on cache misses.
> 
> with irq threads we'll have all irq context run in kthread context 
> again. Could you show me how you measured the performance of the kthread 
> approach versus the raw-IPI approach?

There were 3 different indicators that the irq thread approach was
slower:

- Time from signal to actual run of the trigger was ~2usec with IPI vs
  ~3usec with the kthread. That was a microbenchmark (sketched below).
- Cache misses were higher with the kthread approach.
- Actual performance in non-micro benchmarks was lower with the kthread
  approach.
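
For reference, the latency microbenchmark had roughly this shape (a
from-memory sketch, all names made up -- not the actual test code):
take a timestamp, trigger the remote work either by IPI or by waking a
kthread, and take a second timestamp when the work actually runs.

/*
 * Hedged sketch only: function and variable names are invented for
 * illustration, and the smp_call_function_single() argument list
 * varies between trees (0 here means "don't wait for completion").
 */
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/smp.h>

static u64 t_start;

/* IPI path: handler runs in interrupt context on the target CPU */
static void ipi_handler(void *data)
{
	printk(KERN_INFO "IPI signal-to-run: %llu ns\n",
	       sched_clock() - t_start);
}

/* kthread path: thread sleeps until woken with wake_up_process() */
static int bench_thread(void *data)
{
	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		schedule();
		printk(KERN_INFO "kthread signal-to-run: %llu ns\n",
		       sched_clock() - t_start);
	}
	return 0;
}

static void trigger_ipi(int cpu)
{
	t_start = sched_clock();
	smp_call_function_single(cpu, ipi_handler, NULL, 0);
}

static void trigger_kthread(struct task_struct *t)
{
	t_start = sched_clock();
	wake_up_process(t);
}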

I'll defer to Alan for the actual numbers, most of this was done in
private mails back and forth doing performance analysis. The initial
testing was done with the IPI hack, then we moved to the kthread
approach. Later the two were pitted against each other and the kthread
part was definitely slower. It ended up using more system time than the
IPI approach. So the kthread approach was then abandoned, and all testing
has been on the smp_call_function_single() branch since.

I very much wanted the kthread approach to work, since it's easier to
work with. It's not for lack of will or trying... I'll be happy to
supply you with otherwise identical patches, the only difference being
kthread vs IPI completions, if you want to play with it.
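
For completeness, the two completion flavours looked roughly like this
(my reconstruction, not the actual patches; 'struct completion_queue'
and the helper names are made up):

#include <linux/blkdev.h>
#include <linux/kthread.h>
#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/spinlock.h>

/* the actual per-request completion work (details elided) */
static void run_completion(void *data)
{
	struct request *rq = data;
	/* ... usual end-of-request processing for rq ... */
}

/* IPI flavour: run the completion in IRQ context on the target CPU */
static void complete_via_ipi(struct request *rq, int cpu)
{
	smp_call_function_single(cpu, run_completion, rq, 0);
}

/* kthread flavour: queue the request and wake a per-CPU thread */
struct completion_queue {
	spinlock_t lock;
	struct list_head list;
	struct task_struct *thread;
};

static DEFINE_PER_CPU(struct completion_queue, comp_queues);

static void complete_via_kthread(struct request *rq, int cpu)
{
	struct completion_queue *cq = &per_cpu(comp_queues, cpu);
	unsigned long flags;

	spin_lock_irqsave(&cq->lock, flags);
	list_add_tail(&rq->queuelist, &cq->list);
	spin_unlock_irqrestore(&cq->lock, flags);

	wake_up_process(cq->thread);
}

The extra wakeup and scheduler round trip in the kthread flavour is
where the additional latency and system time showed up.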

> we can do a million kthread context switches per CPU per second, so 
> kthread context-switch cost cannot be a true performance limit, unless 
> you micro-benchmarked this.

At which point you won't be doing much else, so a context-switch
microbenchmark is not really that interesting.
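
(By "context-switch microbenchmark" I mean the classic pipe ping-pong
test -- something like the userspace sketch below, where every bounced
byte forces two context switches. Illustrative only, not a test we ran.)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>

#define ITERS 100000

int main(void)
{
	int p1[2], p2[2];
	struct timeval t0, t1;
	double secs;
	char c = 0;
	long i;

	if (pipe(p1) || pipe(p2))
		exit(1);

	if (fork() == 0) {
		/* child: echo every byte straight back */
		for (i = 0; i < ITERS; i++) {
			read(p1[0], &c, 1);
			write(p2[1], &c, 1);
		}
		exit(0);
	}

	gettimeofday(&t0, NULL);
	for (i = 0; i < ITERS; i++) {
		write(p1[1], &c, 1);
		read(p2[0], &c, 1);
	}
	gettimeofday(&t1, NULL);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%.0f context switches/sec\n", 2.0 * ITERS / secs);
	return 0;
}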

-- 
Jens Axboe
