Date:	Thu, 15 Apr 2010 04:40:06 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Stephen Hemminger <shemminger@...tta.com>
Cc:	Changli Gao <xiaosuo@...il.com>, Tom Herbert <therbert@...gle.com>,
	hadi@...erus.ca, netdev@...r.kernel.org, robert@...julf.net,
	David Miller <davem@...emloft.net>,
	Andi Kleen <andi@...stfloor.org>
Subject:	Re: rps performance WAS(Re: rps: question

On Wednesday, 14 April 2010 at 16:02 -0700, Stephen Hemminger wrote:
> On Thu, 15 Apr 2010 06:51:29 +0800
> Changli Gao <xiaosuo@...il.com> wrote:
> 
> > On Thu, Apr 15, 2010 at 4:57 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> > >
> > > RPS can be tuned (Changli wants a finer tuning...), it would be
> > > interesting to tune multiqueue devices too. I don't know if it's
> > > possible right now.
> > >
> > > On my Nehalem machine (16 logical CPUs), its NetXtreme II BCM57711E
> > > 10Gigabit NIC has 16 queues. Depending on your results, it might be
> > > good to use fewer queues on some workloads, and perhaps apply RPS
> > > as a second layer.
> > 
> > My idea is to run a daemon in userland that monitors the softnet
> > statistics and tunes the RPS settings if necessary. It seems that the
> > current softnet statistics data isn't correct.
> > 
> > A long time ago, I ran a test, and the conclusion was that the
> > call_function_single IPI was more expensive than the resched IPI, so I
> > moved packet processing from softirq to a kernel thread. I'll redo
> > the test later.
> > 
> 
> The big thing is data, data, data... Performance can only be evaluated
> with real, hard data on multiple different kinds of hardware.  Also, check
> for regressions in lmbench and TPC benchmarks. Yes, this is hard, but
> papers on this would allow for rational rather than speculative choices.
> 
> Adding more tuning knobs is not the answer unless you can show when
> the tuning helps.
> 

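For reference, the per-queue RPS steering discussed in the quote above is
configured by writing a hexadecimal CPU bitmask to sysfs. Below is a minimal
sketch in C; the interface name "eth0", queue index 0 and mask 0x0f are
illustrative assumptions, not values taken from this thread.

/* Minimal sketch: set the RPS CPU mask for one RX queue via sysfs.
 * The device name, queue index and mask are illustrative only. */
#include <stdio.h>
#include <stdlib.h>

static int set_rps_cpus(const char *dev, int rxq, const char *hexmask)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/class/net/%s/queues/rx-%d/rps_cpus", dev, rxq);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return -1;
	}
	/* The kernel expects a hexadecimal bitmask of the CPUs allowed to
	 * process packets received on this queue. */
	fprintf(f, "%s\n", hexmask);
	fclose(f);
	return 0;
}

int main(void)
{
	/* Example: steer eth0/rx-0 onto CPUs 0-3 (mask 0x0f). */
	return set_rps_cpus("eth0", 0, "0f") ? EXIT_FAILURE : EXIT_SUCCESS;
}
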
Agreed 100%, and irqbalance is the existing daemon. It should be used,
and changed if necessary.
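
A minimal sketch of what such a monitoring daemon (or an extended
irqbalance) might sample: the per-CPU counters in /proc/net/softnet_stat.
Only the first three hexadecimal fields (packets processed, dropped,
time_squeeze) are parsed here; any policy acting on them is left out.

/* Minimal sketch: dump per-CPU softnet counters.
 * /proc/net/softnet_stat has one line per online CPU; the first three
 * hex fields are packets processed, packets dropped and time_squeeze. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/net/softnet_stat", "r");
	unsigned int processed, dropped, squeezed;
	char line[512];
	int cpu = 0;

	if (!f) {
		perror("/proc/net/softnet_stat");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "%x %x %x", &processed, &dropped, &squeezed) == 3)
			printf("cpu%d: processed=%u dropped=%u time_squeeze=%u\n",
			       cpu, processed, dropped, squeezed);
		cpu++;
	}
	fclose(f);
	return 0;
}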

Changli, my strongest argument about your patches is that our scheduler
and memory affinity API (numactl driven) is bitmask oriented, giving the
same weight to each individual CPU or memory node.
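
To illustrate that point: with sched_setaffinity() a CPU is either in the
mask or it is not, so there is no way to express "prefer CPU 2 over CPU 3".
A minimal sketch (the CPU numbers are arbitrary):

/* Minimal sketch: CPU affinity is a plain bitmask, so every CPU that is
 * set carries exactly the same weight; no preference can be expressed. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(2, &set);	/* each CPU contributes exactly one bit...   */
	CPU_SET(3, &set);	/* ...and all set bits carry the same weight */

	/* pid 0 means "the calling task" */
	if (sched_setaffinity(0, sizeof(set), &set) != 0) {
		perror("sched_setaffinity");
		return 1;
	}
	printf("task restricted to CPUs 2 and 3, equally weighted\n");
	return 0;
}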



