Message-ID: <Yyj5emcw/VIrfaan@lunn.ch>
Date:   Tue, 20 Sep 2022 01:21:30 +0200
From:   Andrew Lunn <andrew@...n.ch>
To:     Vladimir Oltean <vladimir.oltean@....com>
Cc:     "mattias.forsblad@...il.com" <mattias.forsblad@...il.com>,
        netdev <netdev@...r.kernel.org>,
        Florian Fainelli <f.fainelli@...il.com>,
        Christian Marangi <ansuelsmth@...il.com>
Subject: Re: [PATCH rfc v0 4/9] net: dsa: qca8k: dsa_inband_request: More
 normal return values

On Mon, Sep 19, 2022 at 11:02:14PM +0000, Vladimir Oltean wrote:
> On Tue, Sep 20, 2022 at 12:18:48AM +0200, Andrew Lunn wrote:
> > wait_for_completion_timeout() has unusual return values.  It can
> > return negative error conditions. If it times out, it returns 0, and
> > on success it returns the number of remaining jiffies for the timeout.
> 
> The one that also returns negative errors is wait_for_completion_interruptible()
> (and its variants).  In my experience the interruptible version is also
> a huge foot gun, since user space can kill the process waiting for the
> RMU response, and the RMU response can still come afterwards, while no
> one is waiting for it.  The noninterruptible wait that we use here
> really returns an unsigned long, so no negatives.

The driver needs to handle the reply coming later, independent of ^C
handling, etc. The qca8k has a timeout of 5ms. I don't know whether that
is actually enough if 1G of traffic is being passed over the interface,
the TX queue is full, and the request frame does not get put at the
head of the queue. And if there is 1G of traffic also being received
from the switch, how long are the queues for the reply? Does the
switch put the reply at the head of the queue?

This is one thing I want to play with sometime soon: heavily load the
CPU link and see how well the RMU interface to the mv88e6xxx works. Are
the timeouts big enough? Do frames get dropped, and are retries needed?
Do we need to play with the QoS bits of the skb to make Linux put the
RMU packets at the head of the queue, etc.?

I would also like to have another look at the code and make sure it is
sane for exactly this case.

     Andrew
