Date:	Wed, 10 Dec 2014 02:56:45 -0500 (EST)
From:	Pankaj Gupta <pagupta@...hat.com>
To:	Jason Wang <jasowang@...hat.com>,
	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	davem@...emloft.net, dgibson@...hat.com, vfalico@...il.com,
	edumazet@...gle.com, vyasevic@...hat.com, hkchu@...gle.com,
	wuzhy@...ux.vnet.ibm.com, xemul@...allels.com, therbert@...gle.com,
	bhutchings@...arflare.com, xii@...gle.com,
	stephen@...workplumber.org, jiri@...nulli.us,
	Sergei Shtylyov <sergei.shtylyov@...entembedded.com>
Subject: Re: [PATCH v3 net-next 2/2] tuntap: Increase the number of queues in tun.


> 
> On 12/04/2014 06:20 PM, Michael S. Tsirkin wrote:
> > On Thu, Dec 04, 2014 at 03:03:34AM +0800, Jason Wang wrote:
> >> > 
> >> > 
> >> > On Wed, Dec 3, 2014 at 5:52 PM, Michael S. Tsirkin <mst@...hat.com>
> >> > wrote:
> >>> > >On Wed, Dec 03, 2014 at 12:49:37PM +0530, Pankaj Gupta wrote:
> >>>> > >> Networking under kvm works best if we allocate a per-vCPU RX and TX
> >>>> > >> queue in a virtual NIC. This requires a per-vCPU queue on the host
> >>>> > >>side.
> >>>> > >> It is now safe to increase the maximum number of queues.
> >>>> > >> Preceding patche: 'net: allow large number of rx queues'
> >>> > >
> >>> > >s/patche/patch/
> >>> > >
> >>>> > >> made sure this won't cause failures due to high order memory
> >>>> > >> allocations. Increase it to 256: this is the max number of vCPUs
> >>>> > >> KVM supports.
> >>>> > >> Signed-off-by: Pankaj Gupta <pagupta@...hat.com>
> >>>> > >> Reviewed-by: David Gibson <dgibson@...hat.com>
> >>> > >
> >>> > >Hmm, it's kind of nasty that each tun device now uses 16x the
> >>> > >memory. Maybe we should look at using a flex array instead, and
> >>> > >removing the limitation altogether (e.g. make it INT_MAX)?
> >> > 
> >> > But this only happens when IFF_MULTIQUEUE is used.
> > I refer to this field:
> >         struct tun_file __rcu   *tfiles[MAX_TAP_QUEUES];
> > if we make MAX_TAP_QUEUES 256, this will use 4K bytes,
> > apparently unconditionally.
> >
> >
> 
> How about just allocating one tfile if IFF_MULTIQUEUE is disabled?

Yes, we can also go with one tfile for the single-queue case.

tfiles is an array of 256 'tun_file' pointers. With multiqueue we could
use up to 256 queues, but for a single queue allocating just one tfile
makes sense.
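
To make the shape of that change concrete, here is a minimal sketch
(not the actual patch; tun_alloc_tfiles() and max_queues are
hypothetical names, the other names follow drivers/net/tun.c) of
sizing the array at create time instead of embedding MAX_TAP_QUEUES
pointers in every device:

/* Sketch only: replace the fixed tfiles[MAX_TAP_QUEUES] array with a
 * pointer array sized when the device is created. */
struct tun_struct {
	struct tun_file __rcu	**tfiles;	/* was tfiles[MAX_TAP_QUEUES] */
	unsigned int		max_queues;	/* 1 or MAX_TAP_QUEUES */
	/* ... remaining fields unchanged ... */
};

static int tun_alloc_tfiles(struct tun_struct *tun, bool multiqueue)
{
	unsigned int n = multiqueue ? MAX_TAP_QUEUES : 1;

	/* Array of pointers only; the tun_file objects themselves are
	 * still allocated per queue on attach. */
	tun->tfiles = kcalloc(n, sizeof(*tun->tfiles), GFP_KERNEL);
	if (!tun->tfiles)
		return -ENOMEM;

	tun->max_queues = n;
	return 0;
}

The common single-queue case then carries one pointer instead of 256,
at the cost of an extra dereference on the fast path.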

Also, we are making sure to avoid allocation failures by falling back
to vzalloc, as the preceding 'net: allow large number of rx queues'
patch does.
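
For context, that fallback pattern looks roughly like this (a sketch;
alloc_queue_array() is a made-up name, and the generic kvzalloc()
helper did not exist yet):

/* Sketch of the kzalloc-then-vzalloc fallback: try a physically
 * contiguous allocation first, quietly, and fall back to vmalloc
 * space if high-order pages are not available. Free the result with
 * kvfree(), which handles both cases. */
static void *alloc_queue_array(size_t size)
{
	void *p = kzalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);

	return p ?: vzalloc(size);
}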

