Date:	Fri, 31 Oct 2008 12:57:13 -0700
From:	Stephen Hemminger <shemminger@...tta.com>
To:	Eric Dumazet <dada1@...mosbay.com>
Cc:	David Miller <davem@...emloft.net>, ilpo.jarvinen@...sinki.fi,
	zbr@...emap.net, rjw@...k.pl, mingo@...e.hu,
	s0mbre@...rvice.net.ru, a.p.zijlstra@...llo.nl,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	efault@....de, akpm@...ux-foundation.org
Subject: Re: [tbench regression fixes]: digging out smelly deadmen.

On Fri, 31 Oct 2008 11:45:33 +0100
Eric Dumazet <dada1@...mosbay.com> wrote:

> David Miller wrote:
> > From: "Ilpo Järvinen" <ilpo.jarvinen@...sinki.fi>
> > Date: Fri, 31 Oct 2008 11:40:16 +0200 (EET)
> > 
> >> Let me remind that it is just a single process, so no ping-pong & other 
> >> lock related cache effects should play any significant role here, no? (I'm 
> >> no expert though :-)).
> > 
> > Not locks or ping-pongs perhaps, I guess.  So it just sends and
> > receives over a socket, implementing both ends of the communication
> > in the same process?
> > 
> > If hash chain conflicts do happen for those 2 sockets, just traversing
> > the chain 2 entries deep could show up.
> 
> tbench is very sensitive to cache line ping-pongs (on SMP machines, of course)
> 
> Just to prove my point, I coded the following patch and tried it
> on an HP BL460c G1. This machine has 2 quad-core CPUs
> (Intel(R) Xeon(R) CPU E5450 @ 3.00GHz).
> 
> tbench 8 went from 2240 MB/s to 2310 MB/s with this patch applied.
> 
> [PATCH] net: Introduce netif_set_last_rx() helper
> 
> On SMP machines, the loopback device (and possibly other net devices)
> should avoid dirtying the memory cache line containing the "last_rx"
> field. Got a 3% increase in tbench on an 8-cpu machine.
> 
> Signed-off-by: Eric Dumazet <dada1@...mosbay.com>
> ---
>  drivers/net/loopback.c    |    2 +-
>  include/linux/netdevice.h |   16 ++++++++++++++++
>  2 files changed, 17 insertions(+), 1 deletion(-)
> 
> 

Why bother with last_rx at all on loopback? I have been thinking
we should figure out a way to get rid of last_rx altogether. It only
seems to be used by bonding, and the bonding driver could do the
calculation in its own receive handling.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
