Message-ID: <CAF=yD-JDvEHrWLx7w6zUpct0KbNvWUofG_LRuFFVbozCb-UGkA@mail.gmail.com>
Date: Wed, 25 Apr 2018 15:00:08 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Magnus Karlsson <magnus.karlsson@...il.com>
Cc: Björn Töpel <bjorn.topel@...il.com>,
"Karlsson, Magnus" <magnus.karlsson@...el.com>,
Alexander Duyck <alexander.h.duyck@...el.com>,
Alexander Duyck <alexander.duyck@...il.com>,
John Fastabend <john.fastabend@...il.com>,
Alexei Starovoitov <ast@...com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Daniel Borkmann <daniel@...earbox.net>,
"Michael S. Tsirkin" <mst@...hat.com>,
Network Development <netdev@...r.kernel.org>,
michael.lundkvist@...csson.com,
"Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
"Singhai, Anjali" <anjali.singhai@...el.com>,
"Zhang, Qi Z" <qi.z.zhang@...el.com>
Subject: Re: [PATCH bpf-next 13/15] xsk: support for Tx
>>> +{
>>> + struct net_device *dev = skb->dev;
>>> + struct sk_buff *orig_skb = skb;
>>> + struct netdev_queue *txq;
>>> + int ret = NETDEV_TX_BUSY;
>>> + bool again = false;
>>> +
>>> + if (unlikely(!netif_running(dev) || !netif_carrier_ok(dev)))
>>> + goto drop;
>>> +
>>> + skb = validate_xmit_skb_list(skb, dev, &again);
>>> + if (skb != orig_skb)
>>> + return NET_XMIT_DROP;
>>
>> Need to free generated segment list on error, see packet_direct_xmit.
>
> I do not use segments in the TX code for reasons of simplicity and the
> free is in the calling function. But as I will create a common
> packet_direct_xmit according to your suggestion, it will have a
> kfree_skb_list() there as in af_packet.c.
Ah yes. For these sockets it is guaranteed that skbs are not gso skbs.
Of course, makes sense.
>> +static inline struct xdp_desc *xskq_peek_desc(struct xsk_queue *q,
>> + struct xdp_desc *desc)
>> +{
>> + struct xdp_rxtx_ring *ring;
>> +
>> + if (q->cons_tail == q->cons_head) {
>> + WRITE_ONCE(q->ring->consumer, q->cons_tail);
>> + q->cons_head = q->cons_tail + xskq_nb_avail(q, RX_BATCH_SIZE);
>> +
>> + /* Order consumer and data */
>> + smp_rmb();
>> +
>> + return xskq_validate_desc(q, desc);
>> + }
>> +
>> + ring = (struct xdp_rxtx_ring *)q->ring;
>> + *desc = ring->desc[q->cons_tail & q->ring_mask];
>> + return desc;
>>
>> This only validates descriptors if taking the branch.
>
> Yes, that is because we only want to validate the descriptors once
> even if we call this function multiple times for the same entry.
Then I am probably misreading this function. But isn't head increased
by up to RX_BATCH_SIZE frames at once? If so, then for many frames
the branch is not taken.