Date:	Wed, 30 Apr 2014 10:09:43 +0200
From:	Stefan Wahren <stefan.wahren@...e.com>
To:	Arnd Bergmann <arnd@...db.de>
CC:	davem@...emloft.net, robh+dt@...nel.org, pawel.moll@....com,
	mark.rutland@....com, ijc+devicetree@...lion.org.uk,
	galak@...eaurora.org, f.fainelli@...il.com, netdev@...r.kernel.org,
	devicetree@...r.kernel.org
Subject: Re: [PATCH RFC 2/2] net: qualcomm: new Ethernet over SPI driver for
 QCA7000

Hi,

On 29.04.2014 20:14, Arnd Bergmann wrote:
> On Tuesday 29 April 2014 17:54:12 Stefan Wahren wrote:
>>> As far as I know it's also not mandatory.
>>>
>>> If the hardware interfaces require calling sleeping functions, it
>>> may not actually be possible, but if you can use it, it normally
>>> provides better performance.
>> As I understood it, NAPI is good for high load on 1000 MBit Ethernet,
>> but the QCA7000 has, in the best case, only a 10 MBit powerline
>> connection. Additionally, these packets must be transferred over a
>> half-duplex SPI bus. So I don't think the current driver implementation
>> is the bottleneck.
> Ok, makes sense. What is the slowest speed you might see then?

A typical Homeplug GreenPHY connection reaches nearly 8 MBit within one
network. The more powerline networks exist, the slower the connection
becomes; this comes from the time sharing on the physical layer.
Unfortunately I don't have the equipment to test many parallel networks
and give you precise numbers.

> You already have a relatively small queue of at most 10 frames,
> but if this goes below 10 Mbit, that can still cause noticeable
> bufferbloat.
>
> Try adding calls to netdev_sent_queue, netdev_completed_queue and
> netdev_reset_queue to let the network stack know how much data
> is currently queued up for the tx thread.

Okay, I'll try that. Thanks for the hints.
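Roughly like this, I suppose (untested sketch only; where exactly the
completion side lives in the SPI tx thread is just an assumption on my
part):

        /* in qcaspi_netdev_xmit(), once the frame has been queued for
         * the tx thread: */
        netdev_sent_queue(dev, skb->len);

        /* in the SPI thread, after a frame has actually been written
         * to the QCA7000 and its skb is freed: */
        netdev_completed_queue(qca->net_dev, 1, skb->len);

        /* on ndo_stop() or whenever the tx ring is flushed: */
        netdev_reset_queue(qca->net_dev);

That should tell the stack how many bytes are really in flight, so it
can keep the software queue short.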

>
> On a related note, there is one part I don't understand:
>
> +netdev_tx_t
> +qcaspi_netdev_xmit(struct sk_buff *skb, struct net_device *dev)
> +{
> +       u32 frame_len;
> +       u8 *ptmp;
> +       struct qcaspi *qca = netdev_priv(dev);
> +       u32 new_tail;
> +       struct sk_buff *tskb;
> +       u8 pad_len = 0;
> +
> +       if (skb->len < QCAFRM_ETHMINLEN)
> +               pad_len = QCAFRM_ETHMINLEN - skb->len;
> +
> +       if (qca->txq.skb[qca->txq.tail]) {
> +               netdev_warn(qca->net_dev, "queue was unexpectedly full!\n");
> +               netif_stop_queue(qca->net_dev);
> +               qca->stats.queue_full++;
> +               return NETDEV_TX_BUSY;
> +       }
>
> You print a 'netdev_warn' message here when the queue is full, expecting
> this to be rare. If the device is so slow, why doesn't this happen
> all the time?
>
> 	Arnd

Until now, I have never seen the queue run full, but I will do some
tests to try to reproduce this.
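
If it does fill up under load, I guess the usual approach would be to
stop the queue from the xmit path as soon as the next ring slot is
still occupied, instead of only warning once it is already full.
A rough, untested sketch (the ring size constant is only an assumed
name here):

        /* after storing the skb at qca->txq.tail, advance the tail and
         * stop the queue before it can overflow, so we never have to
         * return NETDEV_TX_BUSY */
        new_tail = qca->txq.tail + 1;
        if (new_tail >= TX_RING_SIZE)   /* assumed name of the ring size */
                new_tail = 0;
        qca->txq.tail = new_tail;

        if (qca->txq.skb[qca->txq.tail])
                netif_stop_queue(qca->net_dev);

        /* and netif_wake_queue() from the tx thread once a slot frees up */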

Stefan
