Message-Id: <20100901.133733.223467599.davem@davemloft.net>
Date: Wed, 01 Sep 2010 13:37:33 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: davej@...hat.com
Cc: simon.kagstrom@...insight.net, netdev@...r.kernel.org
Subject: Re: via-velocity dma-debug warnings again. (2.6.35.2)
From: David Miller <davem@...emloft.net>
Date: Wed, 01 Sep 2010 13:35:47 -0700 (PDT)
> From: David Miller <davem@...emloft.net>
> Date: Wed, 01 Sep 2010 13:34:14 -0700 (PDT)
>
>> Ugh, while writing this I spotted another bug. It can't do this
>> ETH_ZLEN thing, it has to use skb_padto(). Otherwise it's just
>> transmitting arbitrary kernel memory at the end of the SKB
>> buffer onto the network which is a big no-no. I'll fix that
>> with another patch.
>
> Actually, these ETH_ZLEN things in the length calculation can
> just be deleted. It does in fact use skb_padto() properly earlier
> in the xmit function.
New patch:
via-velocity: Fix TX buffer unmapping.
Fix several bugs in TX buffer DMA unmapping:
1) Use pci_unmap_page() as appropriate.
2) Don't try to fetch the length from the DMA descriptor,
   the chip can modify that value. Use the correct lengths,
   calculated the same way as is done at map time.
3) Kill meaningless NULL checks (against embedded sized
arrays which can never be NULL, and against the address
of the non-zero indexed entry of an array).
4) max() on ETH_ZLEN is not necessary and just adds
confusion, since the xmit function does a proper
skb_padto() very early on.
Reported-by: Dave Jones <davej@...hat.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
diff --git a/drivers/net/via-velocity.c b/drivers/net/via-velocity.c
index fd69095..4167e1f 100644
--- a/drivers/net/via-velocity.c
+++ b/drivers/net/via-velocity.c
@@ -1705,28 +1705,21 @@ err_free_dma_rings_0:
  *	recycle it, if not then unmap the buffer.
  */
 static void velocity_free_tx_buf(struct velocity_info *vptr,
-		struct velocity_td_info *tdinfo, struct tx_desc *td)
+		struct velocity_td_info *tdinfo)
 {
 	struct sk_buff *skb = tdinfo->skb;
+	int i;
 
-	/*
-	 *	Don't unmap the pre-allocated tx_bufs
-	 */
-	if (tdinfo->skb_dma) {
-		int i;
-
-		for (i = 0; i < tdinfo->nskb_dma; i++) {
-			size_t pktlen = max_t(size_t, skb->len, ETH_ZLEN);
+	pci_unmap_single(vptr->pdev, tdinfo->skb_dma[0],
+			 skb_headlen(skb), PCI_DMA_TODEVICE);
 
-			/* For scatter-gather */
-			if (skb_shinfo(skb)->nr_frags > 0)
-				pktlen = max_t(size_t, pktlen,
-						td->td_buf[i].size & ~TD_QUEUE);
+	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
 
-			pci_unmap_single(vptr->pdev, tdinfo->skb_dma[i],
-					le16_to_cpu(pktlen), PCI_DMA_TODEVICE);
-		}
+		pci_unmap_page(vptr->pdev, tdinfo->skb_dma[i + 1],
+			       frag->size, PCI_DMA_TODEVICE);
 	}
+
 	dev_kfree_skb_irq(skb);
 	tdinfo->skb = NULL;
 }
@@ -1739,22 +1732,8 @@ static void velocity_free_td_ring_entry(struct velocity_info *vptr,
 		int q, int n)
 {
 	struct velocity_td_info *td_info = &(vptr->tx.infos[q][n]);
-	int i;
 
-	if (td_info == NULL)
-		return;
-
-	if (td_info->skb) {
-		for (i = 0; i < td_info->nskb_dma; i++) {
-			if (td_info->skb_dma[i]) {
-				pci_unmap_single(vptr->pdev, td_info->skb_dma[i],
-						 td_info->skb->len, PCI_DMA_TODEVICE);
-				td_info->skb_dma[i] = 0;
-			}
-		}
-		dev_kfree_skb(td_info->skb);
-		td_info->skb = NULL;
-	}
+	velocity_free_tx_buf(vptr, td_info);
 }
 
 /**
@@ -1925,7 +1904,7 @@ static int velocity_tx_srv(struct velocity_info *vptr)
 				stats->tx_packets++;
 				stats->tx_bytes += tdinfo->skb->len;
 			}
-			velocity_free_tx_buf(vptr, tdinfo, td);
+			velocity_free_tx_buf(vptr, tdinfo);
 			vptr->tx.used[qnum]--;
 		}
 		vptr->tx.tail[qnum] = idx;
@@ -2534,9 +2513,7 @@ static netdev_tx_t velocity_xmit(struct sk_buff *skb,
 		return NETDEV_TX_OK;
 	}
 
-	pktlen = skb_shinfo(skb)->nr_frags == 0 ?
-			max_t(unsigned int, skb->len, ETH_ZLEN) :
-			skb_headlen(skb);
+	pktlen = skb_headlen(skb);
 
 	spin_lock_irqsave(&vptr->lock, flags);
 