Message-ID: <CAGngYiVpToas=offBuPgQ6t8wru64__NQ7MnNDYb0i0E+m6ebw@mail.gmail.com>
Date: Tue, 8 Dec 2020 17:23:08 -0500
From: Sven Van Asbroeck <thesven73@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Bryan Whitehead <bryan.whitehead@...rochip.com>,
Microchip Linux Driver Support <UNGLinuxDriver@...rochip.com>,
David S Miller <davem@...emloft.net>,
Andrew Lunn <andrew@...n.ch>, netdev <netdev@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net v1 1/2] lan743x: improve performance: fix
rx_napi_poll/interrupt ping-pong
On Tue, Dec 8, 2020 at 2:50 PM Jakub Kicinski <kuba@...nel.org> wrote:
>
> >
> > +done:
> > /* update RX_TAIL */
> > lan743x_csr_write(adapter, RX_TAIL(rx->channel_number),
> > rx_tail_flags | rx->last_tail);
> > -done:
> > +
>
> I assume this rings the doorbell to let the device know that more
> buffers are available? If so it's a little unusual to do this at the
> end of NAPI poll. The more usual place would be to do this every n
> times a new buffer is allocated (in lan743x_rx_init_ring_element()?)
> That's to say for example ring the doorbell every time a buffer is put
> at an index divisible by 16.
Yes, I believe it tells the device that new buffers have become available.
I wonder why it's so unusual to do this at the end of a napi poll? Omitting
this could result in sub-optimal use of buffers, right?
Example:
- tail is at position 0
- core calls napi_poll(weight=64)
- napi poll consumes 15 buffers and pushes 15 skbs, then ring empty
- index not divisible by 16, so tail is not updated
- weight not reached, so napi poll re-enables interrupts and bails out
Result: now there are 15 buffers which the device could potentially use, but
because the tail wasn't updated, it doesn't know about them.
That said, would it make sense to also update the tail more frequently,
rather than only at the end of the napi poll?
I'm new to napi polling, so I'm quite interested to learn about this.
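Just to check I understand the suggestion: something like the sketch
below? (untested; I'm ignoring the rx_tail_flags handling, and
lan743x_rx_update_tail_batched() is just a name I made up - it would be
called from lan743x_rx_init_ring_element(), or wherever a buffer gets
re-queued. I'm also assuming rx->adapter points at the lan743x_adapter.)

/* tell the device about new buffers in batches of 16, instead of
 * touching the RX_TAIL register on every single refill
 */
static void lan743x_rx_update_tail_batched(struct lan743x_rx *rx)
{
	if ((rx->last_tail & 15) == 0)
		lan743x_csr_write(rx->adapter,
				  RX_TAIL(rx->channel_number),
				  rx->last_tail);
}

with the existing write at the end of the poll kept as a catch-all for
whatever buffers are left over.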
> >
> > + /* up to half of elements in a full rx ring are
> > + * extension frames. these do not generate skbs.
> > + * to prevent napi/interrupt ping-pong, limit default
> > + * weight to the smallest no. of skbs that can be
> > + * generated by a full rx ring.
> > + */
> > netif_napi_add(adapter->netdev,
> > &rx->napi, lan743x_rx_napi_poll,
> > - rx->ring_size - 1);
> > + (rx->ring_size - 1) / 2);
>
> This is rather unusual, drivers should generally pass NAPI_POLL_WEIGHT
> here.
I agree. The problem is that a full ring of 64 buffers will only contain
32 buffers with network data - the others are timestamps (extension
frames). So napi_poll(weight=64) can never exhaust its weight. Even with
a full ring it always looks as if the work is done, so the driver
completes napi and re-enables interrupts, which results in an
interrupt/poll ping-pong.
Would it be better to fix the weight counting? Increase the count
for every buffer consumed, instead of for every skb pushed?
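Something like this, roughly? (sketch only: lan743x_rx_process_buffer()
is a placeholder name for the existing per-descriptor work, and the
interrupt re-enable is only hinted at)

static int lan743x_rx_napi_poll(struct napi_struct *napi, int budget)
{
	struct lan743x_rx *rx = container_of(napi, struct lan743x_rx, napi);
	int count;

	for (count = 0; count < budget; count++) {
		/* consume one ring element; extension (timestamp)
		 * frames count against the budget too, even though
		 * they push no skb up the stack
		 */
		if (!lan743x_rx_process_buffer(rx))
			break;	/* ring empty */
	}

	/* RX_TAIL / doorbell update as in the patch */

	if (count < budget && napi_complete_done(napi, count)) {
		/* re-enable the rx interrupt for this channel here */
	}

	return count;
}

That way the weight passed to netif_napi_add() could just stay at
NAPI_POLL_WEIGHT, as you suggested.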