Open Source and information security mailing list archives
Date: Wed, 03 Sep 2008 16:25:34 +0200
From: Eric Dumazet <dada1@...mosbay.com>
To: Lennert Buytenhek <buytenh@...tstofly.org>
Cc: netdev@...r.kernel.org
Subject: Re: [PATCH 2/2] mv643xx_eth: hook up skb recycling

Lennert Buytenhek wrote:

> This increases the maximum loss-free packet forwarding rate in
> routing workloads by typically about 25%.
>
> Signed-off-by: Lennert Buytenhek <buytenh@...vell.com>

Interesting...

> 	refilled = 0;
> 	while (refilled < budget && rxq->rx_desc_count < rxq->rx_ring_size) {
> 		struct sk_buff *skb;
> 		int unaligned;
> 		int rx;
>
> -		skb = dev_alloc_skb(skb_size + dma_get_cache_alignment() - 1);
> +		skb = __skb_dequeue(&mp->rx_recycle);

Here you take one skb from the head of the queue.

> +		if (skb == NULL)
> +			skb = dev_alloc_skb(mp->skb_size +
> +					    dma_get_cache_alignment() - 1);
> +
> 		if (skb == NULL) {
> 			mp->work_rx_oom |= 1 << rxq->index;
> 			goto oom;
> @@ -600,8 +591,8 @@ static int rxq_refill(struct rx_queue *rxq, int budget)
> 		rxq->rx_used_desc = 0;
>
> 		rxq->rx_desc_area[rx].buf_ptr = dma_map_single(NULL, skb->data,
> -					skb_size, DMA_FROM_DEVICE);
> -		rxq->rx_desc_area[rx].buf_size = skb_size;
> +					mp->skb_size, DMA_FROM_DEVICE);
> +		rxq->rx_desc_area[rx].buf_size = mp->skb_size;
> 		rxq->rx_skb[rx] = skb;
> 		wmb();
> 		rxq->rx_desc_area[rx].cmd_sts = BUFFER_OWNED_BY_DMA |
> @@ -905,8 +896,13 @@ static int txq_reclaim(struct tx_queue *txq, int budget, int force)
> 		else
> 			dma_unmap_page(NULL, addr, count, DMA_TO_DEVICE);
>
> -		if (skb)
> -			dev_kfree_skb(skb);
> +		if (skb != NULL) {
> +			if (skb_queue_len(&mp->rx_recycle) < 1000 &&
> +			    skb_recycle_check(skb, mp->skb_size))
> +				__skb_queue_tail(&mp->rx_recycle, skb);
> +			else
> +				dev_kfree_skb(skb);
> +		}

And here you put the skb at the tail of the queue, so you use the recycle list in FIFO order. To get the best performance (CPU-cache-hot buffers), you might try LIFO order instead (use __skb_queue_head())?
Could you give us your actual benchmark results (number of packets received per second, number of transmitted packets per second) and your machine setup?

Thank you