Date:	Fri, 29 Jan 2010 00:34:23 +0530
From:	Krishna Kumar2 <krkumar2@...ibm.com>
To:	Ben Hutchings <bhutchings@...arflare.com>
Cc:	linux-net-drivers@...arflare.com, netdev@...r.kernel.org
Subject: Re: [RFC] [PATCH] net: Add support for ndo_select_queue() functions to
 cache the queue mapping

> Ben Hutchings <bhutchings@...arflare.com>
>
> Re: [RFC] [PATCH] net: Add support for ndo_select_queue() functions
> to cache the queue mapping
>
> On Fri, 2010-01-29 at 00:11 +0530, Krishna Kumar2 wrote:
> > > Ben Hutchings <bhutchings@...arflare.com>
> > >
> > > On Thu, 2010-01-28 at 23:39 +0530, Krishna Kumar2 wrote:
> [...]
> > > > Other than that, I noticed that netif_sk_tx_queue_set is not called.
> > > > Also, dev_pick_tx already caps the queue index automatically, so you
> > > > probably don't need another cap here?
> > >
> > > Only the return value of ndo_select_queue() is capped; the cached
> > > value is assumed to be valid.
> >
> > +void netif_sk_tx_queue_set(struct net_device *dev, struct sock *sk,
> > +                             u16 queue_index)
> > +{
> > +     sk_tx_queue_set(sk, dev_cap_txqueue(dev, queue_index));
> > +}
> >
> > I guess I didn't understand this, then - who calls this function?
> [...]
>
> The driver's ndo_select_queue() implementation calls it before
> returning, if and only if sk_may_set_tx_queue() is true and its
> selection is dependent only on the flow id.
>
> As an example, ixgbe's selection function:
>
>         static u16 ixgbe_select_queue(struct net_device *dev,
>                                       struct sk_buff *skb)
>         {
>            struct ixgbe_adapter *adapter = netdev_priv(dev);
>            int txq = smp_processor_id();
>
>            if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE)
>               return txq;
>
>         #ifdef IXGBE_FCOE
>            if ((adapter->flags & IXGBE_FLAG_FCOE_ENABLED) &&
>                (skb->protocol == htons(ETH_P_FCOE))) {
>               txq &= (adapter->ring_feature[RING_F_FCOE].indices - 1);
>               txq += adapter->ring_feature[RING_F_FCOE].mask;
>               return txq;
>            }
>         #endif
>            if (adapter->flags & IXGBE_FLAG_DCB_ENABLED)
>               return (skb->vlan_tci & IXGBE_TX_FLAGS_VLAN_PRIO_MASK) >>
13;
>
>            return skb_tx_hash(dev, skb);
>         }
>
> would not call netif_sk_tx_queue_set() in the first two cases, but could
> do so in the last two cases if sk_may_set_tx_queue(skb->sk) is true.
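
Ah, I see - thanks. So the tail of ixgbe_select_queue() above could end
up looking roughly like this (just a sketch on top of the
sk_may_set_tx_queue()/netif_sk_tx_queue_set() helpers in this RFC,
untested):

        if (adapter->flags & IXGBE_FLAG_DCB_ENABLED) {
                txq = (skb->vlan_tci & IXGBE_TX_FLAGS_VLAN_PRIO_MASK) >> 13;
                /* Priority is a property of the flow, so cache it */
                if (sk_may_set_tx_queue(skb->sk))
                        netif_sk_tx_queue_set(dev, skb->sk, txq);
                return txq;
        }

        txq = skb_tx_hash(dev, skb);
        /* The hash is a property of the flow as well, so cache it too */
        if (sk_may_set_tx_queue(skb->sk))
                netif_sk_tx_queue_set(dev, skb->sk, txq);
        return txq;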

Caching the mapping this way is a good optimization - it saves the call
into the driver and the hash calculation for every skb (for devices that
implement ndo_select_queue() but find that they can cache the queue after
the first packet of a flow).
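
To make that concrete, the fast path in dev_pick_tx() would then reduce
to roughly the following (sketched from the existing sk_tx_queue_*()
helpers; the exact code in the patch may differ):

        static struct netdev_queue *dev_pick_tx(struct net_device *dev,
                                                struct sk_buff *skb)
        {
                struct sock *sk = skb->sk;
                u16 queue_index;

                if (sk && sk_tx_queue_recorded(sk)) {
                        /* A queue was cached for this socket earlier:
                         * no ndo_select_queue() call, no skb_tx_hash(). */
                        queue_index = sk_tx_queue_get(sk);
                } else if (dev->netdev_ops->ndo_select_queue) {
                        queue_index = dev_cap_txqueue(dev,
                                dev->netdev_ops->ndo_select_queue(dev, skb));
                } else {
                        queue_index = skb_tx_hash(dev, skb);
                }

                skb_set_queue_mapping(skb, queue_index);
                return netdev_get_tx_queue(dev, queue_index);
        }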

Thanks,

- KK
