Date:	Fri, 29 Jan 2010 00:11:42 +0530
From:	Krishna Kumar2 <krkumar2@...ibm.com>
To:	Ben Hutchings <bhutchings@...arflare.com>
Cc:	linux-net-drivers@...arflare.com, netdev@...r.kernel.org
Subject:	Re: [RFC] [PATCH] net: Add support for ndo_select_queue() functions to cache the queue mapping

> Ben Hutchings <bhutchings@...arflare.com>
>
> On Thu, 2010-01-28 at 23:39 +0530, Krishna Kumar2 wrote:
> > > Ben Hutchings <bhutchings@...arflare.com>
> > >
> > > We think it's worth matching up RX and TX queue selection for a
> > > socket when the queues share an interrupt.  Currently the default
> > > TX queue selection is unlikely to match RX queue selection.  TX
> > > queue selection can be overridden by the driver to match RX queue
> > > selection, but at the expense of caching.  We found that without
> > > caching, the cost of recalculating the hash in software for each
> > > packet outweighed the benefit of good queue selection.
> > >
> > > This adds two simple helper functions: sk_may_set_tx_queue()
> > > reports whether the queue mapping should be cacheable for a
> > > socket, and netif_sk_tx_queue_set() sets it after validating the
> > > queue index.  This will allow drivers to evaluate expensive queue
> > > selection functions (such as Toeplitz hash functions) only once
> > > for connected sockets, and to avoid doing so for other skbs.
> [...]
> > I am confused about this - aren't the txq# and rxq# already
> > matched due to the check for skb_rx_queue_recorded()?
>
> When forwarding, yes.
>
> > sk_dst_cache is also not set for these routing/forwarding
> > workloads, so this can be relied on only for locally connected
> > xmits.
>
> Correct, that's exactly what it's for.
>
> > Other than that, I saw that netif_sk_tx_queue_set() is not called.
> > And dev_pick_tx() already caps the queue index automatically, so
> > you probably don't need another cap here?
>
> Only the return value of ndo_select_queue() is capped; the cached value
> is assumed to be valid.
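
From the description above, I would guess sk_may_set_tx_queue() just
checks whether the socket is locally connected (i.e. has a cached
destination), so that forwarding workloads never trigger caching.
Guessing at the shape of it - this is not taken from the actual patch:

static inline int sk_may_set_tx_queue(const struct sock *sk)
{
        /* Pure guesswork from the description, NOT the RFC code:
         * caching only makes sense for a locally connected socket,
         * i.e. one with a cached destination. */
        return sk != NULL && sk->sk_dst_cache != NULL;
}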

+void netif_sk_tx_queue_set(struct net_device *dev, struct sock *sk,
+                             u16 queue_index)
+{
+     sk_tx_queue_set(sk, dev_cap_txqueue(dev, queue_index));
+}
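
For reference, dev_cap_txqueue() is the existing helper in
net/core/dev.c that clamps an out-of-range queue index; paraphrasing
from memory rather than quoting the tree, it is roughly:

static u16 dev_cap_txqueue(struct net_device *dev, u16 queue_index)
{
        if (unlikely(queue_index >= dev->real_num_tx_queues)) {
                /* The in-tree helper also prints a ratelimited
                 * warning naming the offending device here. */
                return 0;
        }
        return queue_index;
}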

I guess I didn't understand this then: who calls netif_sk_tx_queue_set()?
Is it called after getting the txq from the driver? In that case, are you
planning to do something like:

static struct netdev_queue *dev_pick_tx(struct net_device *dev,
                                        struct sk_buff *skb)
{
        u16 queue_index;
        struct sock *sk = skb->sk;

        if (sk_tx_queue_recorded(sk)) {
                queue_index = sk_tx_queue_get(sk);
        } else {
                const struct net_device_ops *ops = dev->netdev_ops;

                if (ops->ndo_select_queue) {
                        queue_index = ops->ndo_select_queue(dev, skb);
                        queue_index = dev_cap_txqueue(dev, queue_index);
                } else {
                        queue_index = 0;
                        if (dev->real_num_tx_queues > 1)
                                queue_index = skb_tx_hash(dev, skb); /* already capped */
                }

                if (sk_may_set_tx_queue(sk))
                        sk_tx_queue_set(sk, queue_index);
        }
        skb_set_queue_mapping(skb, queue_index);
        return netdev_get_tx_queue(dev, queue_index);
}
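
Presumably the driver side would then look something like the sketch
below. This is purely illustrative - mydrv_select_queue() and
mydrv_flow_hash() are made-up names, not from any real driver - but it
shows where the expensive hash would only be paid when dev_pick_tx()
is going to cache the result on the socket:

static u16 mydrv_select_queue(struct net_device *dev, struct sk_buff *skb)
{
        /* Forwarded traffic: reuse the recorded RX queue so TX
         * processing stays on the same interrupt as RX. */
        if (skb_rx_queue_recorded(skb))
                return skb_get_rx_queue(skb) % dev->real_num_tx_queues;

        /* Locally originated traffic: an expensive software hash
         * (e.g. Toeplitz) is only worth computing if the result
         * will be cached on the socket. */
        if (sk_may_set_tx_queue(skb->sk))
                return mydrv_flow_hash(skb) % dev->real_num_tx_queues;

        /* Otherwise fall back to the cheap default. */
        return skb_tx_hash(dev, skb);
}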

Thanks,

- KK

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
