Message-ID: <20221104105955.2c3c74a7@kernel.org>
Date: Fri, 4 Nov 2022 10:59:55 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Nick Child <nnac123@...ux.ibm.com>
Cc: netdev@...r.kernel.org, nick.child@....com, bjking1@...ux.ibm.com,
ricklind@...ibm.com, dave.taht@...il.com
Subject: Re: [PATCH v2 net] ibmveth: Reduce maximum tx queues to 8
On Fri, 4 Nov 2022 09:06:02 -0500 Nick Child wrote:
> On 11/3/22 22:59, Jakub Kicinski wrote:
> > On Wed, 2 Nov 2022 13:38:37 -0500 Nick Child wrote:
> >> Previously, the maximum number of transmit queues allowed was 16. Due to
> >> resource concerns, limit to 8 queues instead.
> >>
> >> Since the driver is virtualized away from the physical NIC, the purpose
> >> of multiple queues is purely to allow for parallel calls to the
> >> hypervisor. Therefore, reducing the queue count to 8 has no noticeable
> >> effect on performance.
> >
> > I'm not sure if that's the point Dave was making, but we should be
> > influencing the default, not the MAX. Why limit the MAX?
>
> The MAX is always allocated in the driver's probe function. In the
> driver's open and ethtool set_channels functions we set
> real_num_tx_queues. So the number of allocated queues is always MAX,
> but the number of queues actually in use may differ and can be set by
> the user.
> I hope this explains it; otherwise, please let me know.
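
(For context, a minimal sketch of the allocate-MAX, set-real-count
pattern described above. IBMVETH_MAX_QUEUES is the driver's constant;
the surrounding handler name is illustrative, not the literal ibmveth
code.)

	/* probe time: the netdev is allocated with the MAX number of tx
	 * queues, which exist for the life of the device. */
	netdev = alloc_etherdev_mqs(sizeof(struct ibmveth_adapter),
				    IBMVETH_MAX_QUEUES, /* tx queues */
				    1);                 /* rx queues */

	/* open / ethtool -L: only the count of queues actually in use
	 * changes; nothing is reallocated. Illustrative handler: */
	static int example_set_channels(struct net_device *dev,
					struct ethtool_channels *channels)
	{
		return netif_set_real_num_tx_queues(dev, channels->tx_count);
	}
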
Perhaps I don't understand the worry. Is allowing 16 queues a problem
because it limits how many instances the hypervisor can support?
Or is the concern coming from your recent work on BQL, where having
many queues exacerbates bufferbloat?