Message-ID: <CAF=yD-J4kS5SiEum-d7SfY7wF+go746QMkoer49G+knVSXtVGg@mail.gmail.com>
Date: Tue, 24 Jul 2018 18:31:54 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: caleb.raitto@...il.com, Jason Wang <jasowang@...hat.com>,
David Miller <davem@...emloft.net>,
Network Development <netdev@...r.kernel.org>,
Caleb Raitto <caraitto@...gle.com>
Subject: Re: [PATCH net-next] virtio_net: force_napi_tx module param.
On Tue, Jul 24, 2018 at 6:23 PM Michael S. Tsirkin <mst@...hat.com> wrote:
>
> On Tue, Jul 24, 2018 at 04:52:53PM -0400, Willem de Bruijn wrote:
> > From the above linked patch, I understand that there are yet
> > other special cases in production, such as a hard cap on #tx queues to
> > 32 regardless of number of vcpus.
>
> I don't think upstream kernels have this limit - we can
> now use vmalloc for higher number of queues.
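The kvmalloc fallback being referred to can be sketched roughly as follows. This is an illustrative fragment in the driver's style, not the actual virtio_net code: the per-queue arrays are sized by the number of queue pairs and allocated with kvzalloc(), which attempts a physically contiguous kmalloc() first and transparently falls back to vmalloc() when the queue count makes the array too large for that.

```c
/* Illustrative sketch, not verbatim driver code: allocating the
 * send-queue array for max_queue_pairs queues. kvzalloc() removes
 * the need to cap the queue count to keep the allocation small,
 * since large arrays fall back to vmalloc() automatically.
 */
vi->sq = kvzalloc(array_size(vi->max_queue_pairs, sizeof(*vi->sq)),
		  GFP_KERNEL);
if (!vi->sq)
	return -ENOMEM;
```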
Yes, that patch* mentioned it as a Google Compute Engine imposed
limit. It is exactly such cloud-provider-imposed rules that I'm
concerned about working around in upstream drivers.
* for reference, I mean https://patchwork.ozlabs.org/patch/725249/