Message-ID: <20220517091134.4b67b4a0@pc-20.home>
Date: Tue, 17 May 2022 09:11:34 +0200
From: Maxime Chevallier <maxime.chevallier@...tlin.com>
To: Vladimir Oltean <vladimir.oltean@....com>
Cc: "davem@...emloft.net" <davem@...emloft.net>,
Rob Herring <robh+dt@...nel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
"thomas.petazzoni@...tlin.com" <thomas.petazzoni@...tlin.com>,
Andrew Lunn <andrew@...n.ch>,
Florian Fainelli <f.fainelli@...il.com>,
Heiner Kallweit <hkallweit1@...il.com>,
Russell King <linux@...linux.org.uk>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
Luka Perkov <luka.perkov@...tura.hr>,
Robert Marko <robert.marko@...tura.hr>
Subject: Re: [PATCH net-next v2 1/5] net: ipqess: introduce the Qualcomm
IPQESS driver
Hello Vlad,
On Sat, 14 May 2022 20:44:38 +0000
Vladimir Oltean <vladimir.oltean@....com> wrote:
> On Sat, May 14, 2022 at 05:06:52PM +0200, Maxime Chevallier wrote:
> > +/* locking is handled by the caller */
> > +static int ipqess_rx_buf_alloc_napi(struct ipqess_rx_ring *rx_ring)
> > +{
> > + struct ipqess_buf *buf = &rx_ring->buf[rx_ring->head];
> > +
> > + buf->skb = napi_alloc_skb(&rx_ring->napi_rx, IPQESS_RX_HEAD_BUFF_SIZE);
> > + if (!buf->skb)
> > + return -ENOMEM;
> > +
> > + return ipqess_rx_buf_prepare(buf, rx_ring);
> > +}
> > +
> > +static int ipqess_rx_buf_alloc(struct ipqess_rx_ring *rx_ring)
> > +{
> > + struct ipqess_buf *buf = &rx_ring->buf[rx_ring->head];
> > +
> > + buf->skb = netdev_alloc_skb_ip_align(rx_ring->ess->netdev,
> > +                                      IPQESS_RX_HEAD_BUFF_SIZE);
> > +
> > + if (!buf->skb)
> > + return -ENOMEM;
> > +
> > + return ipqess_rx_buf_prepare(buf, rx_ring);
> > +}
> > +
> > +static void ipqess_refill_work(struct work_struct *work)
> > +{
> > + struct ipqess_rx_ring_refill *rx_refill = container_of(work,
> > +         struct ipqess_rx_ring_refill, refill_work);
> > + struct ipqess_rx_ring *rx_ring = rx_refill->rx_ring;
> > + int refill = 0;
> > +
> > + /* don't let this loop by accident. */
> > + while (atomic_dec_and_test(&rx_ring->refill_count)) {
> > + napi_disable(&rx_ring->napi_rx);
> > + if (ipqess_rx_buf_alloc(rx_ring)) {
> > + refill++;
> > + dev_dbg(rx_ring->ppdev,
> > +         "Not all buffers were reallocated");
> > + }
> > + napi_enable(&rx_ring->napi_rx);
> > + }
> > +
> > + if (atomic_add_return(refill, &rx_ring->refill_count))
> > + schedule_work(&rx_refill->refill_work);
> > +}
> > +
> > +static int ipqess_rx_poll(struct ipqess_rx_ring *rx_ring, int
> > budget) +{
>
> > + while (done < budget) {
>
> > + num_desc += atomic_xchg(&rx_ring->refill_count, 0);
> > + while (num_desc) {
> > + if (ipqess_rx_buf_alloc_napi(rx_ring)) {
> > + num_desc = atomic_add_return(num_desc,
> > +                              &rx_ring->refill_count);
> > + if (num_desc >= ((4 * IPQESS_RX_RING_SIZE + 6) / 7))
>
> DIV_ROUND_UP(IPQESS_RX_RING_SIZE * 4, 7)
> Also, why this number?
Ah, this was from the original out-of-tree driver... I'll try to figure
out what's going on and replace it with some #define that makes more
sense.
> > + schedule_work(&rx_ring->ess->rx_refill[rx_ring->ring_id].refill_work);
> > + break;
> > + }
> > + num_desc--;
> > + }
> > + }
> > +
> > + ipqess_w32(rx_ring->ess, IPQESS_REG_RX_SW_CONS_IDX_Q(rx_ring->idx),
> > +            rx_ring_tail);
> > + rx_ring->tail = rx_ring_tail;
> > +
> > + return done;
> > +}
>
> > +static void ipqess_rx_ring_free(struct ipqess *ess)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < IPQESS_NETDEV_QUEUES; i++) {
> > + int j;
> > +
> > + atomic_set(&ess->rx_ring[i].refill_count, 0);
> > + cancel_work_sync(&ess->rx_refill[i].refill_work);
>
> When refill_work is currently scheduled and executing the while loop,
> will refill_count underflow due to the possibility of calling
> atomic_dec_and_test(0)?
Good question, I'll double-check; you might be correct. Nice catch!
> > +
> > + for (j = 0; j < IPQESS_RX_RING_SIZE; j++) {
> > + dma_unmap_single(&ess->pdev->dev,
> > +                  ess->rx_ring[i].buf[j].dma,
> > +                  ess->rx_ring[i].buf[j].length,
> > +                  DMA_FROM_DEVICE);
> > +
> > + dev_kfree_skb_any(ess->rx_ring[i].buf[j].skb);
> > + }
> > + }
> > +
Thanks,
Maxime