Date:   Tue, 3 May 2022 17:21:16 +1000
From:   Andrew Dennison <andrew.dennison@...ec.com.au>
To:     Marc Kleine-Budde <mkl@...gutronix.de>
Cc:     Pavel Pisa <pisa@....felk.cvut.cz>, linux-can@...r.kernel.org,
        Oliver Hartkopp <socketcan@...tkopp.net>,
        Wolfgang Grandegger <wg@...ndegger.com>,
        David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        Marin Jerabek <martin.jerabek01@...il.com>,
        Ondrej Ille <ondrej.ille@...il.com>,
        Jiri Novak <jnovak@....cvut.cz>,
        Jaroslav Beran <jara.beran@...il.com>,
        Petr Porazil <porazil@...ron.com>, Pavel Machek <pavel@....cz>,
        Carsten Emde <c.emde@...dl.org>,
        Drew Fustini <pdp7pdp7@...il.com>,
        Matej Vasilevski <matej.vasilevski@...il.com>
Subject: Re: [PATCH v1 0/4] can: ctucanfd: cleanup according to the actual
 rules and documentation linking

plain text this time...

On Tue, 3 May 2022 at 16:46, Marc Kleine-Budde <mkl@...gutronix.de> wrote:
>
> On 03.05.2022 16:32:32, Andrew Dennison wrote:
> > > > When the value is configurable, then for an (uncommon) number
> > > > of buffers which is not a power of two, there will likely be a
> > > > problem with the way the buffer queue is implemented
> > >
> >
> > Only a power of 2 makes sense to me: I didn't consider those corner
> > cases, but the driver could just round down to the nearest power of 2
> > and warn about a misconfiguration of the IP core.
>
> +1
>
> > I added the dynamic detection because the IP core default had changed
> > to 2 TX buffers and this broke some hard-coded assumptions in the
> > driver in a rather obscure way that had me debugging for a bit...
>
> The mainline driver still uses a hard-coded default of 4... Can you
> provide that patch soonish?

I was using the out-of-tree driver but can have a look at this, unless
Pavel wants to merge this into his tree and submit it?
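
Roughly what I had in mind for the rounding, as an untested sketch (the
helper name is made up, and reading the buffer count out of the core is
left out entirely):

#include <linux/log2.h>
#include <linux/netdevice.h>

/* Round a detected TXT buffer count down to a power of two and warn,
 * so the queue index arithmetic keeps working. */
static unsigned int ctucanfd_sanitize_ntxbufs(struct net_device *ndev,
                                              unsigned int ntxbufs)
{
        if (ntxbufs && !is_power_of_2(ntxbufs)) {
                unsigned int rounded = rounddown_pow_of_two(ntxbufs);

                netdev_warn(ndev,
                            "IP core reports %u TXT buffers, not a power of 2, using %u\n",
                            ntxbufs, rounded);
                ntxbufs = rounded;
        }

        return ntxbufs;
}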

>
> > > You can make use of more TX buffers, if you implement (fully
> > > hardware based) TX IRQ coalescing (== handle more than one TX
> > > complete interrupt at a time) like in the mcp251xfd driver, or BQL
> > > support (== send more than one TX CAN frame at a time). I've played
> > > a bit with BQL support on the mcp251xfd driver (which is attached by
> > > SPI), but with mixed results. Probably an issue with proper
> > > configuration.
> >
> > Reducing CAN IRQ load would be good.
>
> IRQ coalescing comes at the price of increased latency, but if you have
> a timeout in hardware you can configure the latencies precisely.
>
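
On the BQL side, it looks like it is mostly a matter of bracketing the
TX path with the byte-queue-limits helpers; a rough, untested sketch
(the function names and the completed-buffer list are placeholders, not
actual ctucanfd code):

#include <linux/can/dev.h>
#include <linux/can/skb.h>
#include <linux/netdevice.h>

/* ndo_start_xmit() side, after the frame was written to TXT buffer
 * txtb_idx: account the on-wire length with BQL (error handling of
 * can_put_echo_skb() skipped). */
static void ctucanfd_bql_sent(struct net_device *ndev, struct sk_buff *skb,
                              unsigned int txtb_idx)
{
        unsigned int frame_len = can_skb_get_frame_len(skb);

        can_put_echo_skb(skb, ndev, txtb_idx, frame_len);
        netdev_sent_queue(ndev, frame_len);
}

/* TX-done interrupt side: report all buffers finished in this IRQ to
 * BQL in one go (netdev_reset_queue() on open/flush is omitted). */
static void ctucanfd_bql_completed(struct net_device *ndev,
                                   const unsigned int *done, unsigned int n)
{
        unsigned int i, frame_len, bytes = 0;

        for (i = 0; i < n; i++) {
                can_get_echo_skb(ndev, done[i], &frame_len);
                bytes += frame_len;
        }
        netdev_completed_queue(ndev, n, bytes);
}
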
> > > > We need a 2 * priv->ntxbufs range to distinguish an empty queue
> > > > from a full one... But modulo is not nice either, so I will
> > > > probably come up with some other solution. In the long term, I
> > > > want to implement virtual queues to allow multiqueue to use
> > > > dynamic Tx priority across up to 8 buffers...
> > >
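
(On the empty/full point above: one common way to avoid the modulo is
to let head/tail be free-running unsigned indices and mask only when
picking a buffer, which works as long as ntxbufs is a power of 2; a
generic ring sketch, not the actual driver code:)

#include <linux/types.h>

struct txtb_queue {
        unsigned int head;      /* next buffer to fill */
        unsigned int tail;      /* next buffer to complete */
        unsigned int ntxbufs;   /* power of 2 */
};

static bool txtb_queue_empty(const struct txtb_queue *q)
{
        return q->head == q->tail;
}

static bool txtb_queue_full(const struct txtb_queue *q)
{
        /* unsigned subtraction handles wraparound */
        return q->head - q->tail == q->ntxbufs;
}

static unsigned int txtb_queue_slot(const struct txtb_queue *q,
                                    unsigned int idx)
{
        return idx & (q->ntxbufs - 1);  /* mask instead of modulo */
}
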
> > > ACK, multiqueue TX support would be nice for things like the
> > > Earliest TX Time First (ETF) scheduler: one TX queue for ETF, the
> > > other for bulk messages.
> >
> > That would be nice. I have multi-queue in the CAN layer I wrote for
> > a little RTOS (it predates SocketCAN) and have used it for a while.
>
> Out of interest:
> What are the use cases? How did you decide which queue to use?

I had a queue per fd, with queues sorted by the ID of the next message;
for hardware with a single queue, the lowest ID was sent next. For
hardware with lots of buffers there was a hardware buffer per queue. I
didn't have to deal with the generic cases that would need to be
handled in Linux. I must say ctucanfd has a much nicer interface than
the other CAN hardware I've used.
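
On the multiqueue side, the CAN core already has alloc_candev_mqs() for
allocating the netdev with several TX queues; a rough sketch of a
queue-per-TXT-buffer setup (assuming the mainline ctucan_priv /
ctucanfd.h names; the mapping policy is just one option):

#include <linux/can/dev.h>

#include "ctucanfd.h"   /* struct ctucan_priv */

static struct net_device *ctucanfd_alloc_mq(unsigned int ntxbufs)
{
        /* One TX queue per TXT buffer, a single RX queue; userspace can
         * then steer traffic classes (e.g. ETF vs. bulk) onto separate
         * queues with tc/mqprio, or the driver can add its own
         * ndo_select_queue(). */
        return alloc_candev_mqs(sizeof(struct ctucan_priv), ntxbufs,
                                ntxbufs, 1);
}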

Kind regards,

Andrew
