Message-ID: <AANLkTimorcNAEbSpuE_dUXKuOBZDcDcGSJ2cGefniThc@mail.gmail.com>
Date: Thu, 24 Feb 2011 15:27:55 +0800
From: Po-Yu Chuang <ratbert.chuang@...il.com>
To: David Miller <davem@...emloft.net>
Cc: mirqus@...il.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, bhutchings@...arflare.com,
eric.dumazet@...il.com, joe@...ches.com, dilinger@...ued.net,
ratbert@...aday-tech.com
Subject: Re: [PATCH v4] net: add Faraday FTMAC100 10/100 Ethernet driver
Hi David,
On Tue, Feb 1, 2011 at 12:35 PM, David Miller <davem@...emloft.net> wrote:
> From: Po-Yu Chuang <ratbert.chuang@...il.com>
> Date: Tue, 1 Feb 2011 11:56:16 +0800
>
>> If I simply allocate a page for each rx ring entry, I still need to allocate
>> an skb and copy at least the packet header from the first page to skb->data,
>> then add the page with the rest of the payload to the skb by skb_fill_page_desc().
>
> You should attach the pages, then use __pskb_pull_tail() to bring in the
> headers to the linear skb->data area.
>
> See drivers/net/niu.c:niu_process_rx_pkt().
I tried two ways to implement zero-copy.
One is to preallocate an skb big enough for any rx packet and use that skb
itself as the rx buffer.
The other is to use a page as the rx buffer, add the data page to the skb
with skb_fill_page_desc(), and then pull only the headers into the linear
area with __pskb_pull_tail(), as you suggested.
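For reference, the second approach looked roughly like this. This is only a
simplified sketch, not the actual driver code; the function and variable
names (ftmac100_rx_page_sketch, rx page/length handling) are made up for
illustration:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/etherdevice.h>
#include <linux/mm.h>

/* Simplified page-based rx: attach the DMA page as a fragment and pull
 * only the protocol headers into the linear area. */
static void ftmac100_rx_page_sketch(struct net_device *netdev,
				    struct page *page, unsigned int length)
{
	struct sk_buff *skb;

	/* small skb: the linear area only needs to hold the pulled headers */
	skb = netdev_alloc_skb(netdev, 128);
	if (!skb) {
		put_page(page);
		return;
	}

	/* attach the page carrying the whole received frame as frag 0 */
	skb_fill_page_desc(skb, 0, page, 0, length);
	skb->len += length;
	skb->data_len += length;
	skb->truesize += PAGE_SIZE;

	/* copy just the headers into skb->data, as suggested */
	if (!__pskb_pull_tail(skb, min_t(unsigned int, length, 64))) {
		dev_kfree_skb(skb);
		return;
	}

	skb->protocol = eth_type_trans(skb, netdev);
	netif_receive_skb(skb);
}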
Both implementations are slower than the original memcpy() version
(benchmarked with iperf).
I guess the problem is a hardware restriction: the rx buffer must be 64-bit
aligned. Since I cannot make the rx buffer start at an offset of 2 bytes, the
IP header, TCP header and data are not 4-byte aligned, and the performance
drops drastically.
Therefore, I will later submit a v6 which still uses memcpy().
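The copying path avoids the alignment problem because the DMA buffer can stay
64-bit aligned for the hardware while the copy destination is shifted by
NET_IP_ALIGN (2 bytes), so the IP header after the 14-byte Ethernet header
lands on a 4-byte boundary. Again only a simplified sketch with made-up
names, not the actual driver code:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/etherdevice.h>
#include <linux/string.h>

/* Simplified copying rx: rx_buf is the 64-bit aligned DMA buffer required
 * by the hardware; the skb data is offset by NET_IP_ALIGN before the copy. */
static struct sk_buff *ftmac100_rx_copy_sketch(struct net_device *netdev,
					       const void *rx_buf,
					       unsigned int length)
{
	struct sk_buff *skb;

	skb = netdev_alloc_skb(netdev, length + NET_IP_ALIGN);
	if (!skb)
		return NULL;

	/* shift skb->data by 2 bytes so the IP header is 4-byte aligned */
	skb_reserve(skb, NET_IP_ALIGN);
	memcpy(skb_put(skb, length), rx_buf, length);

	skb->protocol = eth_type_trans(skb, netdev);
	return skb;
}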
best regards,
Po-Yu Chuang