Message-ID: <1313159376.2354.26.camel@edumazet-HP-Compaq-6005-Pro-SFF-PC>
Date: Fri, 12 Aug 2011 16:29:36 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Jason Wang <jasowang@...hat.com>
Cc: mst@...hat.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org, davem@...emloft.net,
krkumar2@...ibm.com, rusty@...tcorp.com.au, qemu-devel@...gnu.org,
kvm@...r.kernel.org, mirq-linux@...e.qmqm.pl
Subject: Re: [net-next RFC PATCH 4/7] tuntap: multiqueue support
On Friday, 12 August 2011 at 09:55 +0800, Jason Wang wrote:
> +	rxq = skb_get_rxhash(skb);
> +	if (rxq) {
> +		tfile = rcu_dereference(tun->tfiles[rxq % numqueues]);
> +		if (tfile)
> +			goto out;
> +	}
You can avoid an expensive divide with the following trick:
u32 idx = ((u64)rxq * numqueues) >> 32;
> -static struct tun_struct *tun_get(struct file *file)
> +static void tun_detach_all(struct net_device *dev)
>  {
> -	return __tun_get(file->private_data);
> +	struct tun_struct *tun = netdev_priv(dev);
> +	struct tun_file *tfile, *tfile_list[MAX_TAP_QUEUES];
> +	int i, j = 0;
> +
> +	spin_lock(&tun_lock);
> +
> +	for (i = 0; i < MAX_TAP_QUEUES && tun->numqueues; i++) {
> +		tfile = rcu_dereference_protected(tun->tfiles[i],
> +						  lockdep_is_held(&tun_lock));
> +		if (tfile) {
> +			wake_up_all(&tfile->wq.wait);
> +			tfile_list[i++] = tfile;
Typo here: you want tfile_list[j++] = tfile;
> +			rcu_assign_pointer(tun->tfiles[i], NULL);
> +			rcu_assign_pointer(tfile->tun, NULL);
> +			--tun->numqueues;
> +		}
> +	}
> +	BUG_ON(tun->numqueues != 0);
> +	spin_unlock(&tun_lock);
> +
> +	synchronize_rcu();
> +	for (--j; j >= 0; j--)
> +		sock_put(&tfile_list[j]->sk);
>  }
>
Could you take a look at net/packet/af_packet.c to check how David did
the whole fanout thing?
__fanout_unlink()
Trick is to not leave NULL entries in the tun->tfiles[] array.
It makes things easier in hot path.