Date:	Fri, 4 Apr 2014 14:52:08 +0800
From:	Zhangfei Gao <zhangfei.gao@...il.com>
To:	Russell King - ARM Linux <linux@....linux.org.uk>
Cc:	Arnd Bergmann <arnd@...db.de>, Mark Rutland <mark.rutland@....com>,
	"devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
	Florian Fainelli <f.fainelli@...il.com>,
	"eric.dumazet@...il.com" <eric.dumazet@...il.com>,
	Sergei Shtylyov <sergei.shtylyov@...entembedded.com>,
	netdev <netdev@...r.kernel.org>,
	David Laight <David.Laight@...lab.com>,
	Zhangfei Gao <zhangfei.gao@...aro.org>,
	"David S. Miller" <davem@...emloft.net>,
	linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH 3/3] net: hisilicon: new hip04 ethernet driver

Dear Russell,

On Thu, Apr 3, 2014 at 11:27 PM, Russell King - ARM Linux
<linux@....linux.org.uk> wrote:
> On Wed, Apr 02, 2014 at 11:21:45AM +0200, Arnd Bergmann wrote:
>> - As David Laight pointed out earlier, you must also ensure that
>>   you don't have too much /data/ pending in the descriptor ring
>>   when you stop the queue. For a 10mbit connection, you have already
>>   tested (as we discussed on IRC) that 64 descriptors with 1500 byte
>>   frames gives you a 68ms round-trip ping time, which is too much.
>>   Conversely, on 1gbit, having only 64 descriptors actually seems
>>   a little low, and you may be able to get better throughput if
>>   you extend the ring to e.g. 512 descriptors.
>
> You don't manage that by stopping the queue - there's separate interfaces
> where you report how many bytes you've queued (netdev_sent_queue()) and
> how many bytes/packets you've sent (netdev_tx_completed_queue()).  This
> allows the netdev schedulers to limit how much data is held in the queue,
> preserving interactivity while allowing the advantages of larger rings.

My god, it's awesome.
The latency issue can be solved by adding netdev_sent_queue() in xmit
and netdev_completed_queue() in reclaim, roughly as sketched below.
In my experiment, iperf -P 3 gets 930 Mbit/s while ping responses stay
within 0.4 ms at the same time.
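For reference, a minimal sketch of the pairing, not the actual patch
(sketch_priv, TX_RING_SIZE and the tx_skb/tx_head/tx_tail fields are
hypothetical; error paths and the hardware-done check are omitted):

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	#define TX_RING_SIZE 64

	struct sketch_priv {
		struct sk_buff *tx_skb[TX_RING_SIZE];	/* in-flight skbs */
		unsigned int tx_head;			/* next slot to fill */
		unsigned int tx_tail;			/* next slot to reclaim */
	};

	static netdev_tx_t sketch_xmit(struct sk_buff *skb,
				       struct net_device *ndev)
	{
		struct sketch_priv *priv = netdev_priv(ndev);

		priv->tx_skb[priv->tx_head] = skb;
		priv->tx_head = (priv->tx_head + 1) % TX_RING_SIZE;

		/* tell BQL how many bytes are now in flight */
		netdev_sent_queue(ndev, skb->len);

		/* ... fill the descriptor and kick the hardware ... */
		return NETDEV_TX_OK;
	}

	static void sketch_tx_reclaim(struct net_device *ndev)
	{
		struct sketch_priv *priv = netdev_priv(ndev);
		unsigned int pkts = 0, bytes = 0;

		/* hardware-done check omitted for brevity */
		while (priv->tx_tail != priv->tx_head) {
			struct sk_buff *skb = priv->tx_skb[priv->tx_tail];

			pkts++;
			bytes += skb->len;
			dev_kfree_skb(skb);
			priv->tx_tail = (priv->tx_tail + 1) % TX_RING_SIZE;
		}

		/* credit completed work back so BQL can reopen the queue */
		netdev_completed_queue(ndev, pkts, bytes);
	}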

Does that mean the timer-driven reclaim should be removed entirely?
The background is:
1. There is no xmit-complete interrupt.
2. Reclaiming used buffers only from xmit achieves the best throughput,
so a timer was added to reclaim in case no further xmit arrives
(sketched below).
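For context, the timer fallback currently looks roughly like this
(using the timer API of this kernel generation; the txtimer field and
the 10 ms period are hypothetical, and sketch_tx_reclaim() is the
sketch above):

	#include <linux/timer.h>
	#include <linux/jiffies.h>

	static void sketch_reclaim_timer(unsigned long data)
	{
		struct net_device *ndev = (struct net_device *)data;
		struct sketch_priv *priv = netdev_priv(ndev);

		sketch_tx_reclaim(ndev);

		/* re-arm only while descriptors are still outstanding */
		if (priv->tx_tail != priv->tx_head)
			mod_timer(&priv->txtimer,
				  jiffies + msecs_to_jiffies(10));
	}

	/* in open():
	 * setup_timer(&priv->txtimer, sketch_reclaim_timer,
	 *             (unsigned long)ndev);
	 */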

>
>> > +       phys = dma_map_single(&ndev->dev, skb->data, skb->len, DMA_TO_DEVICE);
>> > +       if (dma_mapping_error(&ndev->dev, phys)) {
>> > +               dev_kfree_skb(skb);
>> > +               return NETDEV_TX_OK;
>> > +       }
>> > +
>> > +       priv->tx_skb[tx_head] = skb;
>> > +       priv->tx_phys[tx_head] = phys;
>> > +       desc->send_addr = cpu_to_be32(phys);
>> > +       desc->send_size = cpu_to_be16(skb->len);
>> > +       desc->cfg = cpu_to_be32(DESC_DEF_CFG);
>> > +       phys = priv->tx_desc_dma + tx_head * sizeof(struct tx_desc);
>> > +       desc->wb_addr = cpu_to_be32(phys);
>>
>> One detail: since you don't have cache-coherent DMA, "desc" will
>> reside in uncached memory, so you should try to minimize the number
>> of accesses.
>> It's probably faster if you build the descriptor on the stack and
>> then atomically copy it over, rather than assigning each member at
>> a time.
>
> DMA coherent memory is write combining, so multiple writes will be
> coalesced.  This also means that barriers may be required to ensure the
> descriptors are pushed out in a timely manner if something like writel()
> is not used in the transmit-triggering path.
>
Currently writel() is used in xmit, and regmap_write() -> writel() is
used in poll. A sketch of the descriptor/doorbell ordering follows
below.
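For reference, a sketch combining Arnd's stack-copy suggestion with the
barrier point (TX_KICK_REG, priv->base, priv->tx_desc and
priv->tx_desc_dma are hypothetical; the tx_desc fields follow the
quoted patch):

	#include <linux/io.h>
	#include <linux/skbuff.h>

	struct tx_desc {
		__be32 send_addr;
		__be16 send_size;
		__be32 cfg;
		__be32 wb_addr;
	};

	static void sketch_tx_push(struct sketch_priv *priv,
				   struct sk_buff *skb, dma_addr_t phys,
				   unsigned int tx_head)
	{
		struct tx_desc d;	/* build in cached stack memory first */

		d.send_addr = cpu_to_be32(phys);
		d.send_size = cpu_to_be16(skb->len);
		d.cfg       = cpu_to_be32(DESC_DEF_CFG);
		d.wb_addr   = cpu_to_be32(priv->tx_desc_dma +
					  tx_head * sizeof(struct tx_desc));

		/* one burst into the write-combining descriptor ring */
		priv->tx_desc[tx_head] = d;

		/* writel() implies the barrier that flushes the combined
		 * writes ahead of the doorbell; writel_relaxed() would
		 * need an explicit wmb() first */
		writel(tx_head, priv->base + TX_KICK_REG);
	}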

Thanks