Date:   Mon, 10 May 2021 14:36:08 +0200
From:   Marc Kleine-Budde <mkl@...gutronix.de>
To:     Dario Binacchi <dariobin@...ero.it>
Cc:     linux-kernel@...r.kernel.org,
        "David S. Miller" <davem@...emloft.net>,
        Gianluca Falavigna <gianluca.falavigna@...ind.it>,
        Jakub Kicinski <kuba@...nel.org>,
        Oliver Hartkopp <socketcan@...tkopp.net>,
        Vincent Mailhol <mailhol.vincent@...adoo.fr>,
        Wolfgang Grandegger <wg@...ndegger.com>,
        linux-can@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH 3/3] can: c_can: cache frames to operate as a true FIFO

On 10.05.2021 14:25:15, Marc Kleine-Budde wrote:
> On 09.05.2021 14:43:09, Dario Binacchi wrote:
> > As noted by a comment in c_can_start_xmit(), this was not a true FIFO.
> > The C/D_CAN controller sends out the buffers by priority, so the lowest
> > buffer number wins.
> > 
> > What did c_can_start_xmit() do if it found tx_active == 0x80000000? It
> > waited until the only frame in the FIFO had actually been transmitted by
> > the controller. Even with just one message in the FIFO, we had to wait
> > for it to empty completely to ensure that the messages were transmitted
> > in the order in which they were loaded.
> > 
> > By storing the frames in the FIFO without immediately requesting their
> > transmission, we will be able to use the full size of the FIFO even in
> > cases such as the one described above. The transmission interrupt will
> > trigger their transmission only once all the messages previously loaded
> > into lower-numbered (higher-priority) buffer positions have been
> > transmitted.
> 
> The algorithm you implemented looks a bit too complicated to me. Let me
> sketch the algorithm that's implemented by several other drivers.
> 
> - use a power-of-two number of TX objects
> - add the number of TX objects (tx_num) to your struct priv
>   (or make it a define, if the number of TX objects is fixed at compile
>   time)
> - add two "unsigned int" variables to your struct priv,
>   one "tx_head", one "tx_tail"
> - hard_start_xmit() writes to the TX object at index
>   priv->tx_head & (priv->tx_num - 1)
> - increment tx_head
> - stop the tx_queue if there is no space or if the object with the
>   lowest prio has been written
> - in the TX complete IRQ, handle the priv->tx_tail object
> - increment tx_tail
> - wake the queue if there is space, but don't wake it while we're
>   waiting for the lowest prio object to be TX completed.
> 
> Special care needs to be taken to implement this lock-less and race
> free. I suggest looking at the mcp251xfd driver.

After converting the driver to the implementation outlined above, it
should be more straightforward to add the caching you implemented.
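
To make that concrete, here is a minimal stand-alone C model of the
tx_head/tx_tail bookkeeping. It is just an illustration, not c_can code:
the hardware access and the netif_stop/wake_queue() calls are replaced by
printf(), locking is ignored, and the "lowest prio object" restriction
from the last two points above is left out.

/*
 * Minimal user-space model of the tx_head/tx_tail scheme sketched above.
 * TX_NUM must be a power of two so that "& (TX_NUM - 1)" wraps the
 * free-running counters onto object numbers.
 */
#include <stdbool.h>
#include <stdio.h>

#define TX_NUM 16u                      /* number of TX objects, power of two */

struct priv {
        unsigned int tx_head;           /* next object to fill in xmit */
        unsigned int tx_tail;           /* next object to complete in the IRQ */
        bool queue_stopped;
};

static unsigned int tx_free(const struct priv *priv)
{
        return TX_NUM - (priv->tx_head - priv->tx_tail);
}

/* hard_start_xmit() side: returns false if the "queue" is stopped */
static bool model_xmit(struct priv *priv, int frame)
{
        if (!tx_free(priv))
                return false;

        printf("xmit: frame %d -> TX object %u\n",
               frame, priv->tx_head & (TX_NUM - 1));
        priv->tx_head++;

        if (!tx_free(priv)) {           /* last free object used -> stop queue */
                priv->queue_stopped = true;
                printf("xmit: queue stopped\n");
        }
        return true;
}

/* TX complete interrupt side */
static void model_tx_complete(struct priv *priv)
{
        if (priv->tx_head == priv->tx_tail)
                return;                 /* nothing pending */

        printf("irq:  TX object %u done\n", priv->tx_tail & (TX_NUM - 1));
        priv->tx_tail++;

        if (priv->queue_stopped && tx_free(priv)) {
                priv->queue_stopped = false;
                printf("irq:  queue woken\n");
        }
}

int main(void)
{
        struct priv priv = { 0 };

        /* push 20 frames through the 16-object "FIFO" */
        for (int i = 0; i < 20; i++)
                while (!model_xmit(&priv, i))
                        model_tx_complete(&priv);

        while (priv.tx_head != priv.tx_tail)
                model_tx_complete(&priv);

        return 0;
}

The head and tail counters run freely and only the "& (TX_NUM - 1)"
masking maps them onto object numbers, which is why the number of TX
objects has to be a power of two.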

regards,
Marc

-- 
Pengutronix e.K.                 | Marc Kleine-Budde           |
Embedded Linux                   | https://www.pengutronix.de  |
Vertretung West/Dortmund         | Phone: +49-231-2826-924     |
Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555 |
