Message-ID: <49B6F645.2070408@athenacr.com>
Date: Tue, 10 Mar 2009 19:22:45 -0400
From: Brian Bloniarz <bmb@...enacr.com>
To: Eric Dumazet <dada1@...mosbay.com>
CC: kchang@...enacr.com, netdev@...r.kernel.org
Subject: Re: Multicast packet loss
Hi Eric,
FYI: with your patch applied and lockdep enabled, I see:
[ 39.114628] ================================================
[ 39.121964] [ BUG: lock held when returning to user space! ]
[ 39.127704] ------------------------------------------------
[ 39.133461] msgtest/5242 is leaving the kernel with locks still held!
[ 39.140132] 1 lock held by msgtest/5242:
[ 39.144287] #0: (clock-AF_INET){-.-?}, at: [<ffffffff8041f5b9>] sock_def_readable+0x19/0xb0
I can't reproduce this with the mcasttest program yet; it showed
up with an internal test program that does some userspace
processing on the messages. I'll let you know if I find a way
to reproduce it with a simple program I can share.
> Well, smp_affinity could help in my opinion if you dedicate
> one cpu for the NIC, and others for user apps, if the average
> work done per packet is large. If load is light, it's better
> to use the same cpu to perform all the work, since no expensive
> bus traffic is needed between CPUs to exchange memory lines.
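For concreteness, dedicating a CPU to the NIC means writing a hex
CPU mask to /proc/irq/<N>/smp_affinity (same as
"echo 1 > /proc/irq/<N>/smp_affinity" as root). A minimal C sketch;
the IRQ number (32) is a placeholder, the real one comes from
/proc/interrupts:

/*
 * Sketch: pin IRQ 32 to CPU0 (mask 0x1). The IRQ number is a
 * placeholder -- find the NIC's IRQ in /proc/interrupts.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/irq/32/smp_affinity", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "1\n");	/* hex mask: bit 0 == CPU0 */
	return fclose(f) ? 1 : 0;
}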
I tried this setup as well: an 8-core box with 4 userspace
processes, each pinned to its own CPU (CPU1 through CPU4), with
the IRQ on CPU0. On most kernels this setup loses fewer packets
than the default affinity (though both lose some). With your patch
applied, the default affinity loses 0 packets while this setup
still loses some; a sketch of the per-process pinning follows.
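This is a sketch of how the pinning can be done with
sched_setaffinity(2), not our actual test program -- the CPU
number comes from the command line and the multicast receive
loop is elided:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	cpu_set_t set;
	int cpu;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <cpu>\n", argv[0]);
		return 1;
	}
	cpu = atoi(argv[1]);	/* e.g. 1..4, one per process */

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	/* pid 0 == the calling process */
	if (sched_setaffinity(0, sizeof(set), &set) < 0) {
		perror("sched_setaffinity");
		return 1;
	}
	/* ... join the multicast group and process messages here ... */
	return 0;
}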
Thanks,
Brian Bloniarz