Message-ID: <4A6A2125329CFD4D8CC40C9E8ABCAB9F2497EFC946@MILEXCH2.ds.jdsu.net>
Date: Wed, 19 May 2010 03:36:59 -0700
From: Jon Zhou <Jon.Zhou@...u.com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: any change in socket systemcall or packet_mmap regarding multiqueue nic?
-----Original Message-----
From: Eric Dumazet [mailto:eric.dumazet@...il.com]
Sent: Wednesday, May 19, 2010 12:25 PM
To: Jon Zhou
Cc: netdev@...r.kernel.org
Subject: Re: any change in socket systemcall or packet_mmap regarding multiqueue nic?
On Tuesday, 18 May 2010 at 19:55 -0700, Jon Zhou wrote:
> hi
> Multiqueue networking can use multiple cores to process packets from a multiqueue NIC,
> but is there any corresponding change in the userspace API, such as the socket system calls or packet_mmap? Can these userspace APIs also use multiple cores to process packets from the kernel?
> Otherwise they have to read the data serially.
>
That's a rather general question. Work is in progress.
So far, you can use a new condition in filters to match a given queue
index for incoming packets. A sniffer could set up N different sockets to
receive data from N NIC queues.
jon-> Is it something like ioctl(fd, SOL_SOCKET, queue_id, ...)? Could you tell me the keyword?
For TCP flows, nothing is needed, since all packets of a given flow
should use the same queue.
jon-> btw, do you think RFS is helpful for this?
However, the current tx queue selection is based on sk->sk_hash, a
Linux-side computed value, and this differs from the rx queue selection
done by the NIC firmware. So tx packets use a different queue than rx
packets for a given TCP flow. This is suboptimal: tcp_ack()
can run on a different cpu than the TX completion handler.
The TX completion handler touches the cloned skb that TCP used to transmit
the buffer. Freeing it touches the dataref atomic counter in the packet.
This should be addressed somehow.