Message-ID: <49B5FA84.9000301@cosmosbay.com>
Date: Tue, 10 Mar 2009 06:28:36 +0100
From: Eric Dumazet <dada1@...mosbay.com>
To: Brian Bloniarz <bmb@...enacr.com>
CC: kchang@...enacr.com, netdev@...r.kernel.org
Subject: Re: Multicast packet loss
Brian Bloniarz wrote:
> Eric Dumazet wrote:
>> Here is a patch that helps. It's still an RFC of course, since its
>> somewhat ugly :)
>
> Hi Eric,
>
> I did some experimenting with this patch today -- we're users, not
> kernel hackers, but the performance looks great. We see no loss with
> mcasttest, and no loss with our internal test programs (which do much
> more user-space work). We're very encouraged :)
>
> One thing I'm curious about: previously, setting
> /proc/irq/<eth0>/smp_affinity to one CPU made things perform better,
> but with this patch, performance is better with smp_affinity == ff
> than with smp_affinity == 1. Do you know why that is? Our tests are
> all with bnx2 msi_disable=1. I can investigate with oprofile tomorrow.
>
Well, smp_affinity can help, in my opinion, if you dedicate one CPU to
the NIC and the others to the user apps, and the average work done per
packet is large. If the load is light, it's better to let the same CPU
perform all the work, since no expensive bus traffic is needed between
CPUs to exchange cache lines.
If you only change /proc/irq/<eth0>/smp_affinity and let the scheduler
choose any CPU for your user-space work, which can have long latencies,
I would not expect better performance.
Try to CPU-affine your tasks to 0xFE to get better determinism.
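
[Editor's note: as an illustration only, not code from this thread, here
is roughly what that split could look like from user space. The IRQ
number is a placeholder you would look up in /proc/interrupts for eth0,
and the 0xFE mask assumes an 8-CPU box; writing smp_affinity needs root.]

/*
 * Sketch: NIC interrupt pinned to CPU 0, the receiving process pinned
 * to CPUs 1-7 (mask 0xFE), as suggested above.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

#define IRQ_NR "16"	/* placeholder: eth0's IRQ from /proc/interrupts */

static void pin_irq_to_cpu0(void)
{
	FILE *f = fopen("/proc/irq/" IRQ_NR "/smp_affinity", "w");

	if (!f) {
		perror("smp_affinity");
		return;
	}
	fprintf(f, "1\n");	/* bitmask 0x01 -> CPU 0 only */
	fclose(f);
}

static void pin_self_to_mask_fe(void)
{
	cpu_set_t set;
	int cpu;

	CPU_ZERO(&set);
	for (cpu = 1; cpu < 8; cpu++)	/* mask 0xFE: every CPU but 0 */
		CPU_SET(cpu, &set);

	if (sched_setaffinity(0, sizeof(set), &set) < 0)
		perror("sched_setaffinity");
}

int main(void)
{
	pin_irq_to_cpu0();
	pin_self_to_mask_fe();

	/* ... open the multicast sockets and run the receive loop here ... */
	return 0;
}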