Message-ID: <497F5F86.9010101@hp.com>
Date: Tue, 27 Jan 2009 11:24:54 -0800
From: Rick Jones <rick.jones2@...com>
To: Netfilter Developers <netfilter-devel@...r.kernel.org>
CC: Patrick McHardy <kaber@...sh.net>,
Eric Dumazet <dada1@...mosbay.com>,
Linux Network Development list <netdev@...r.kernel.org>,
Stephen Hemminger <shemminger@...tta.com>
Subject: Re: 32 core net-next stack/netfilter "scaling"
>> I will give it a try and let folks know the results - unless told
>> otherwise, I will ass-u-me I only need rerun the "full_iptables" test
>> case.
>
> The runemomniagg2.sh script is still running, but the initial cycles
> profile suggests that the main change is converting the write_lock time
> into spinlock contention time with 78.39% of the cycles spent in
> ia64_spinlock_contention. When the script completes I'll upload the
> profiles and the netperf results to the same base URL as in the base note
> under "contrack01/".
The script completed - although at some point I hit an fd limit - I think I have
an fd leak in netperf somewhere :(
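For the leak hunting, the sort of check I have in mind is something along these
lines - a minimal, Linux-specific sketch that counts the entries under
/proc/self/fd; calling it before and after each omni test iteration and watching
the count creep upward should point at the leaking path (the function name is
just for illustration, it isn't anything in netperf today):

#include <dirent.h>
#include <stddef.h>

/* Count this process's open file descriptors by walking /proc/self/fd.
 * Linux-specific. The DIR handle opened here appears in the listing
 * itself, so it is subtracted back out before returning. */
static int count_open_fds(void)
{
    DIR *d = opendir("/proc/self/fd");
    struct dirent *de;
    int n = 0;

    if (d == NULL)
        return -1;
    while ((de = readdir(d)) != NULL)
        if (de->d_name[0] != '.')   /* skip "." and ".." */
            n++;
    closedir(d);
    return n - 1;   /* exclude the descriptor opendir() itself used */
}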
Anyhow, there are still some netperfs that end up kicking the bucket during the
run. I suspect starvation: in the other configs (no iptables, and empty
iptables) each netperf seems to consume about 50% of a CPU - stands to reason
with 64 netperfs on 32 cores - but in the "full" case I see many netperfs
consuming 100% of a CPU. My gut says that one or more netperf contexts get
stuck doing work on behalf of others. There is also ksoftirqd time for a few of
those processes.
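For anyone wanting to poke at the same symptom, one way to tell whether those
100%-CPU netperfs are burning user or kernel time is to sample utime and stime
out of /proc/<pid>/stat and see which one grows between samples - again just a
sketch, with the caveat that the fscanf below assumes the comm field contains
no spaces:

#include <stdio.h>

/* Read a process's accumulated user and system CPU time, in clock
 * ticks, from fields 14 (utime) and 15 (stime) of /proc/<pid>/stat.
 * A netperf "stuck doing work on behalf of others" should show stime
 * growing much faster than utime between samples. */
static int cpu_times(int pid, unsigned long *utime, unsigned long *stime)
{
    char path[64];
    FILE *f;
    int rc;

    snprintf(path, sizeof(path), "/proc/%d/stat", pid);
    f = fopen(path, "r");
    if (f == NULL)
        return -1;
    /* skip pid, comm and state, then fields 4 through 13 */
    rc = fscanf(f, "%*d %*s %*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u "
                   "%lu %lu", utime, stime);
    fclose(f);
    return (rc == 2) ? 0 : -1;
}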
Anyhow, the spread in trans/s per netperf is now 600 to 5000 or 6000, which does
represent an improvement.
rick jones
PS - just to be certain that running out of fds didn't skew the results, I'm
rerunning the script with ulimit -n 10240 and will see if that changes the
results any. And I suppose I need to go fd-leak hunting in the netperf omni
code :(
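(For completeness: the ulimit -n 10240 in the wrapper shell could also be done
from inside the harness with setrlimit() - a sketch, with 10240 simply matching
the value above, called early in main() before any sockets are opened:)

#include <stdio.h>
#include <sys/resource.h>

/* Roughly the programmatic equivalent of "ulimit -n 10240": raise the
 * soft fd limit, clamping to the hard limit so an unprivileged process
 * doesn't get EPERM back from setrlimit(). */
static int raise_fd_limit(rlim_t want)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return -1;
    }
    rl.rlim_cur = want;
    if (rl.rlim_max != RLIM_INFINITY && rl.rlim_cur > rl.rlim_max)
        rl.rlim_cur = rl.rlim_max;  /* clamp to the hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return -1;
    }
    return 0;
}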