Message-ID: <49D14B43.6030203@hp.com>
Date: Mon, 30 Mar 2009 15:44:19 -0700
From: Rick Jones <rick.jones2@...com>
To: Eric Dumazet <dada1@...mosbay.com>
CC: Jesper Dangaard Brouer <hawk@...u.dk>,
netdev <netdev@...r.kernel.org>,
Netfilter Developers <netfilter-devel@...r.kernel.org>
Subject: Re: [PATCH] netfilter: finer grained nf_conn locking
> Indeed, tbench is a mix of tcp and process scheduler test/bench
If I were inclined to run networking tests (eg netperf) over loopback and wanted
to maximize the trips up and down the protocol stack while minimizing scheduler
overheads, I might be inclined to configure --enable-burst with netperf and then
run N/2 concurrent instances of something like:
netperf -T M,N -t TCP_RR -l 30 -- -b 128 -D &
where M and N were chosen to have each netperf and netserver pair bound to a pair
of suitable cores, and the value in the -b option was picked to maximize the CPU
utilization on those cores. Then, in theory there would be little to no process
to process context switching and presumably little in the way of scheduler effect.
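The setup above might be scripted along these lines. This is just a sketch: the NCORES value, the even/odd core-pairing scheme, and the dry-run-by-default RUN variable are my assumptions; the netperf flags (-T M,N, -t TCP_RR, -b 128, -D) are as given above.

```shell
#!/bin/sh
# Launch N/2 concurrent netperf TCP_RR instances over loopback, each pair
# bound to its own pair of cores via the global -T lcpu,rcpu option.
# Assumes netperf was built with --enable-burst and netserver is running.

NCORES=${NCORES:-8}       # total cores on the box (assumption; tune per machine)
PAIRS=$((NCORES / 2))     # N/2 concurrent netperf instances
RUN=${RUN:-echo}          # default: just print the commands; set RUN= to launch

i=0
while [ "$i" -lt "$PAIRS" ]; do
    M=$((2 * i))          # core for this netperf
    N=$((2 * i + 1))      # core for its netserver
    # -b 128 keeps 128 transactions in flight; -D sets TCP_NODELAY
    $RUN netperf -T "$M,$N" -t TCP_RR -l 30 -- -b 128 -D &
    i=$((i + 1))
done
wait
```

The -b value would then be walked up or down until the bound cores saturate.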
What I don't know is if such a setup would have both netperf and netserver each
consuming 100% of a CPU or if one of them might "peg" before the other. If one
did peg before the other, I might be inclined to switch to running N concurrent
instances, with -T M to bind each netperf/netserver pair to the same core.
There would then be the process to process context switching though it would be
limited to "related" processes.
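The fallback variant, with N instances and each netperf/netserver pair on the same core, might look like this sketch (again, NCORES and the dry-run RUN default are my assumptions; a single value to -T binding both ends to one core is as described above):

```shell
#!/bin/sh
# Launch N concurrent netperf TCP_RR instances, each netperf/netserver pair
# bound to the SAME core via -T M, accepting related-process context switches.

NCORES=${NCORES:-8}       # total cores on the box (assumption; tune per machine)
RUN=${RUN:-echo}          # default: just print the commands; set RUN= to launch

c=0
while [ "$c" -lt "$NCORES" ]; do
    $RUN netperf -T "$c" -t TCP_RR -l 30 -- -b 128 -D &
    c=$((c + 1))
done
wait
```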
rick jones