Message-ID: <20140403152746.GQ7528@n2100.arm.linux.org.uk>
Date:	Thu, 3 Apr 2014 16:27:46 +0100
From:	Russell King - ARM Linux <linux@....linux.org.uk>
To:	Arnd Bergmann <arnd@...db.de>
Cc:	Zhangfei Gao <zhangfei.gao@...aro.org>, davem@...emloft.net,
	f.fainelli@...il.com, sergei.shtylyov@...entembedded.com,
	mark.rutland@....com, David.Laight@...lab.com,
	eric.dumazet@...il.com, linux-arm-kernel@...ts.infradead.org,
	netdev@...r.kernel.org, devicetree@...r.kernel.org
Subject: Re: [PATCH 3/3] net: hisilicon: new hip04 ethernet driver

On Wed, Apr 02, 2014 at 11:21:45AM +0200, Arnd Bergmann wrote:
> - As David Laight pointed out earlier, you must also ensure that
>   you don't have too much /data/ pending in the descriptor ring
>   when you stop the queue. For a 10Mbit connection, you have already
>   tested (as we discussed on IRC) that 64 descriptors with 1500 byte
>   frames gives you a 68ms round-trip ping time, which is too much.
>   Conversely, on 1Gbit, having only 64 descriptors actually seems
>   a little low, and you may be able to get better throughput if
>   you extend the ring to e.g. 512 descriptors.

You don't manage that by stopping the queue - there are separate
interfaces through which you report how many bytes you've queued
(netdev_sent_queue()) and how many bytes/packets you've completed
(netdev_tx_completed_queue()).  This allows the netdev schedulers to
limit how much data is held in the queue, preserving interactivity
while still allowing the throughput advantages of larger rings.
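
To make the pairing concrete, here is a minimal sketch, not code from
this patch: the reclaim loop, desc_complete() and TX_DESC_NUM are
assumptions, and netdev_sent_queue()/netdev_completed_queue() are the
single-queue wrappers around the netdev_tx_*_queue() variants:

	/* In ndo_start_xmit, once the skb is mapped and its descriptor
	 * has been handed to the hardware:
	 */
	netdev_sent_queue(ndev, skb->len);

	/* In the TX completion path, after walking the ring: */
	unsigned int pkts = 0, bytes = 0;

	while (tx_tail != tx_head && desc_complete(&desc[tx_tail])) {
		struct sk_buff *skb = priv->tx_skb[tx_tail];

		bytes += skb->len;
		pkts++;
		dma_unmap_single(&ndev->dev, priv->tx_phys[tx_tail],
				 skb->len, DMA_TO_DEVICE);
		dev_kfree_skb(skb);
		tx_tail = (tx_tail + 1) % TX_DESC_NUM;
	}
	netdev_completed_queue(ndev, pkts, bytes);

Once the in-flight byte count drops below the limit BQL computes, the
stack wakes the queue again, so the ring can be enlarged without
hurting latency.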

> > +       phys = dma_map_single(&ndev->dev, skb->data, skb->len, DMA_TO_DEVICE);
> > +       if (dma_mapping_error(&ndev->dev, phys)) {
> > +               dev_kfree_skb(skb);
> > +               return NETDEV_TX_OK;
> > +       }
> > +
> > +       priv->tx_skb[tx_head] = skb;
> > +       priv->tx_phys[tx_head] = phys;
> > +       desc->send_addr = cpu_to_be32(phys);
> > +       desc->send_size = cpu_to_be16(skb->len);
> > +       desc->cfg = cpu_to_be32(DESC_DEF_CFG);
> > +       phys = priv->tx_desc_dma + tx_head * sizeof(struct tx_desc);
> > +       desc->wb_addr = cpu_to_be32(phys);
> 
> One detail: since you don't have cache-coherent DMA, "desc" will
> reside in uncached memory, so you try to minimize the number of accesses.
> It's probably faster if you build the descriptor on the stack and
> then atomically copy it over, rather than assigning each member at
> a time.

DMA coherent memory is write-combining, so multiple writes will be
coalesced.  This also means that barriers may be required to ensure the
descriptors are pushed out in a timely manner if something like writel()
is not used in the transmit-triggering path.
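
Putting both suggestions together, a rough sketch (TX_TRIGGER and the
register layout are hypothetical; the point is the ordering):

	struct tx_desc d;	/* build in ordinary cached stack memory */

	d.send_addr = cpu_to_be32(phys);
	d.send_size = cpu_to_be16(skb->len);
	d.cfg       = cpu_to_be32(DESC_DEF_CFG);
	d.wb_addr   = cpu_to_be32(priv->tx_desc_dma +
				  tx_head * sizeof(struct tx_desc));

	*desc = d;		/* one burst into the write-combining buffer */

	wmb();			/* drain the WC buffer before triggering TX */
	writel_relaxed(tx_head, priv->base + TX_TRIGGER);

With a plain writel() the explicit wmb() can be dropped, since writel()
already orders prior normal-memory stores before the MMIO write.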

-- 
FTTC broadband for 0.8mile line: now at 9.7Mbps down 460kbps up... slowly
improving, and getting towards what was expected from it.