[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <d174364c12366b1f5eeb616cba078f6682d629f5.camel@redhat.com>
Date: Mon, 05 Feb 2024 16:49:33 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Herve Codina <herve.codina@...tlin.com>
Cc: Vadim Fedorenko <vadim.fedorenko@...ux.dev>, "David S. Miller"
<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski
<kuba@...nel.org>, linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, Andrew Lunn <andrew@...n.ch>, Mark Brown
<broonie@...nel.org>, Christophe Leroy <christophe.leroy@...roup.eu>,
Thomas Petazzoni <thomas.petazzoni@...tlin.com>
Subject: Re: [PATCH v2 1/6] net: wan: Add support for QMC HDLC
On Mon, 2024-02-05 at 15:22 +0100, Herve Codina wrote:
> Hi Paolo,
>
> On Thu, 01 Feb 2024 12:41:32 +0100
> Paolo Abeni <pabeni@...hat.com> wrote:
>
> [...]
> > > +static inline struct qmc_hdlc *netdev_to_qmc_hdlc(struct net_device *netdev)
> > > +{
> > > + return dev_to_hdlc(netdev)->priv;
> > > +}
> >
> > Please, no 'inline' function in c files. You could move this function
> > and the struct definition into a new, local, header file.
>
> 'inline' function specifier will be removed in the next iteration on the series.
>
> >
> > > +static int qmc_hdlc_recv_queue(struct qmc_hdlc *qmc_hdlc, struct qmc_hdlc_desc *desc, size_t size);
> > > +
> > > +#define QMC_HDLC_RX_ERROR_FLAGS (QMC_RX_FLAG_HDLC_OVF | \
> > > + QMC_RX_FLAG_HDLC_UNA | \
> > > + QMC_RX_FLAG_HDLC_ABORT | \
> > > + QMC_RX_FLAG_HDLC_CRC)
> > > +
> > > +static void qmc_hcld_recv_complete(void *context, size_t length, unsigned int flags)
> > > +{
> > > + struct qmc_hdlc_desc *desc = context;
> > > + struct net_device *netdev = desc->netdev;
> > > + struct qmc_hdlc *qmc_hdlc = netdev_to_qmc_hdlc(netdev);
> > > + int ret;
> >
> > Please, respect the reverse x-mas tree order for local variable
> > definition.
>
> desc depends on context, netdev depends on desc and qmc_hdlc depends on netdev.
> I think the declaration order is correct here even it doesn't respect the reverse
> x-mas tree.
>
> [...]
> > > +static netdev_tx_t qmc_hdlc_xmit(struct sk_buff *skb, struct net_device *netdev)
> > > +{
> > > + struct qmc_hdlc *qmc_hdlc = netdev_to_qmc_hdlc(netdev);
> > > + struct qmc_hdlc_desc *desc;
> > > + unsigned long flags;
> > > + int ret;
> > > +
> > > + spin_lock_irqsave(&qmc_hdlc->tx_lock, flags);
> > > + desc = &qmc_hdlc->tx_descs[qmc_hdlc->tx_out];
> > > + if (WARN_ONCE(!desc->skb, "No tx descriptors available\n")) {
> > > + /* Should never happen.
> > > + * Previous xmit should have already stopped the queue.
> > > + */
> > > + netif_stop_queue(netdev);
> > > + spin_unlock_irqrestore(&qmc_hdlc->tx_lock, flags);
> > > + return NETDEV_TX_BUSY;
> > > + }
> > > + spin_unlock_irqrestore(&qmc_hdlc->tx_lock, flags);
> > > +
> > > + desc->netdev = netdev;
> > > + desc->dma_size = skb->len;
> > > + desc->skb = skb;
> > > + ret = qmc_hdlc_xmit_queue(qmc_hdlc, desc);
> > > + if (ret) {
> > > + desc->skb = NULL; /* Release the descriptor */
> > > + if (ret == -EBUSY) {
> > > + netif_stop_queue(netdev);
> > > + return NETDEV_TX_BUSY;
> > > + }
> > > + dev_kfree_skb(skb);
> > > + netdev->stats.tx_dropped++;
> > > + return NETDEV_TX_OK;
> > > + }
> > > +
> > > + qmc_hdlc->tx_out = (qmc_hdlc->tx_out + 1) % ARRAY_SIZE(qmc_hdlc->tx_descs);
> > > +
> > > + spin_lock_irqsave(&qmc_hdlc->tx_lock, flags);
> > > + if (qmc_hdlc->tx_descs[qmc_hdlc->tx_out].skb)
> > > + netif_stop_queue(netdev);
> > > + spin_unlock_irqrestore(&qmc_hdlc->tx_lock, flags);
> >
> > The locking schema is quite bad, as the drivers acquires and releases 3
> > locks for each tx packet. You could improve that using the qmc_chan-
> > > tx_lock to protect the whole qmc_hdlc_xmit() function, factoring out a
> > lockless variant of qmc_hdlc_xmit_queue(), and using it here.
>
> I will change on next iteration and keep 2 lock/unlock instead of 3:
> - one in qmc_hdlc_xmit()
> - one in qmc_hdlc_xmit_complete()
> to protect the descriptors accesses.
>
> >
> > In general is quite bad that the existing infra does not allow
> > leveraging NAPI. Have you considered expanding the QMC to accomodate
> > such user?
>
> I cannot mask/unmask the 'end of transfer' interrupt.
> Indeed, other streams use this interrupt among them audio streams and so
> masking the interrupt for HDLC data will also mask the interrupt for audio
> data.
Uhm... I fear the above makes the available options list empty :(
> At the HDLC driver level, the best I can to is to store a queue of complete
> HDLC skbs (queue filled on interrupts) and send them to the network stack
> when the napi poll() is called.
>
> I am not sure that this kind of queue (additional level between always
> enabled interrupts and the network stack) makes sense.
>
> Do you have any opinion about this additional queue management for NAPI
> support?
With such idea in place, what HDLC-level data will be accessed by the
napi context? The RX interrupts will remain unmasked after the
interrupt and before the napi poll right? That would be
problematic/could cause drop if the ingress pkt/interrupt rate will be
higher that what the napi could process - and that in turn could bring
back old bad livelock times :(
Cheers,
Paolo
Powered by blists - more mailing lists