Date:	Thu, 4 Dec 2014 05:42:54 -0500 (EST)
From:	Pankaj Gupta <pagupta@...hat.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	Jason Wang <jasowang@...hat.com>, linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org, davem@...emloft.net, dgibson@...hat.com,
	vfalico@...il.com, edumazet@...gle.com, vyasevic@...hat.com,
	hkchu@...gle.com, wuzhy@...ux.vnet.ibm.com, xemul@...allels.com,
	therbert@...gle.com, bhutchings@...arflare.com, xii@...gle.com,
	stephen@...workplumber.org, jiri@...nulli.us,
	sergei shtylyov <sergei.shtylyov@...entembedded.com>
Subject: Re: [PATCH v3 net-next 2/2] tuntap: Increase the number of queues in
 tun.


> 
> On Thu, Dec 04, 2014 at 03:03:34AM +0008, Jason Wang wrote:
> > 
> > 
> > On Wed, Dec 3, 2014 at 5:52 PM, Michael S. Tsirkin <mst@...hat.com> wrote:
> > >On Wed, Dec 03, 2014 at 12:49:37PM +0530, Pankaj Gupta wrote:
> > >> Networking under kvm works best if we allocate a per-vCPU RX and TX
> > >> queue in a virtual NIC. This requires a per-vCPU queue on the host
> > >>side.
> > >> It is now safe to increase the maximum number of queues.
> > >> Preceding patche: 'net: allow large number of rx queues'
> > >
> > >s/patche/patch/
> > >
> > >> made sure this won't cause failures due to high order memory
> > >> allocations. Increase it to 256: this is the max number of vCPUs
> > >> KVM supports.
> > >> Signed-off-by: Pankaj Gupta <pagupta@...hat.com>
> > >> Reviewed-by: David Gibson <dgibson@...hat.com>
> > >
> > >Hmm it's kind of nasty that each tun device is now using x16 memory.
> > >Maybe we should look at using a flex array instead, and removing the
> > >limitation altogether (e.g. make it INT_MAX)?
> > 
> > But this only happens when IFF_MULTIQUEUE is used.
> 
> I refer to this field:
>         struct tun_file __rcu   *tfiles[MAX_TAP_QUEUES];
> if we make MAX_TAP_QUEUES 256, this will use 4K bytes,
> apparently unconditionally.

Are you saying we should use a flex array for tfiles, in place of the array of
tun_file pointers, and grow it dynamically when/if needed?

If yes, I agree it would keep everything at order-1 allocations, but it adds a
level of indirection, as DaveM pointed out for the flow table; this time for
tfiles. Still, allocating the flex array dynamically as usage grows would help
minimise the memory pressure, which in this case is high at 256 entries.
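[For illustration only: a minimal userspace model of the grow-on-demand idea.
The names (tfiles_table, tfiles_ensure) are made up, and plain realloc() stands
in for whatever the kernel side would actually use (krealloc with RCU care, or
the lib/flex_array API); this is a sketch of the shape, not kernel code.]

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for struct tun_file; its contents are irrelevant here. */
struct tun_file { int queue_index; };

struct tfiles_table {
	struct tun_file **slots;  /* grows on demand, not [MAX_TAP_QUEUES] */
	unsigned int capacity;
};

/* Grow the pointer table so that 'index' is addressable.  Doubling keeps
 * each step a small allocation instead of paying the worst-case fixed
 * array up front; new slots are zero-initialised. */
static int tfiles_ensure(struct tfiles_table *t, unsigned int index)
{
	unsigned int cap = t->capacity ? t->capacity : 4;
	struct tun_file **slots;

	while (cap <= index)
		cap *= 2;
	if (cap == t->capacity)
		return 0;

	slots = realloc(t->slots, cap * sizeof(*slots));
	if (!slots)
		return -1;
	memset(slots + t->capacity, 0,
	       (cap - t->capacity) * sizeof(*slots));
	t->slots = slots;
	t->capacity = cap;
	return 0;
}
```

A device that never enables IFF_MULTIQUEUE would then stay at the initial small
table, while a 256-vCPU guest grows it to the full size only when those queues
are actually attached.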

> 
> 
> > And core has vmalloc() fallback.
> > So probably not a big issue?
> > >
> > >
> > >
> > >> ---
> > >>  drivers/net/tun.c | 9 +++++----
> > >>  1 file changed, 5 insertions(+), 4 deletions(-)
> > >> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
> > >> index e3fa65a..a19dc5f8 100644
> > >> --- a/drivers/net/tun.c
> > >> +++ b/drivers/net/tun.c
> > >> @@ -113,10 +113,11 @@ struct tap_filter {
> > >>  	unsigned char	addr[FLT_EXACT_COUNT][ETH_ALEN];
> > >>  };
> > >>    -/* DEFAULT_MAX_NUM_RSS_QUEUES were chosen to let the rx/tx queues
> > >>allocated for
> > >> - * the netdevice to be fit in one page. So we can make sure the
> > >>success of
> > >> - * memory allocation. TODO: increase the limit. */
> > >> -#define MAX_TAP_QUEUES DEFAULT_MAX_NUM_RSS_QUEUES
> > >> +/* MAX_TAP_QUEUES 256 is chosen to allow rx/tx queues to be equal
> > >> + * to max number of vCPUS in guest. Also, we are making sure here
> > >> + * queue memory allocation do not fail.
> > >
> > >It's not queue memory allocation anymore, is it?
> > >I would say "
> > >This also helps the tfiles field fit in 4K, so the whole tun
> > >device only needs an order-1 allocation.
> > >"
> > >
> > >> + */
> > >> +#define MAX_TAP_QUEUES 256
> > >>  #define MAX_TAP_FLOWS  4096
> > >>  #define TUN_FLOW_EXPIRE (3 * HZ)
> > >> --  1.8.3.1
> > >> --
> > >> To unsubscribe from this list: send the line "unsubscribe netdev" in
> > >> the body of a message to majordomo@...r.kernel.org
> > >> More majordomo info at  http://vger.kernel.org/majordomo-info.html
