Date:   Thu, 20 Oct 2016 20:01:37 +0000
From:   "Vatsavayi, Raghu" <Raghu.Vatsavayi@...ium.com>
To:     David Miller <davem@...emloft.net>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "Chickles, Derek" <Derek.Chickles@...ium.com>,
        "Burla, Satananda" <Satananda.Burla@...ium.com>,
        "Manlunas, Felix" <Felix.Manlunas@...ium.com>
Subject: RE: [PATCH net-next V2 1/9] liquidio CN23XX: HW config for VF support



> -----Original Message-----
> From: David Miller [mailto:davem@...emloft.net]
> Sent: Thursday, October 20, 2016 11:13 AM
> To: Vatsavayi, Raghu
> Cc: netdev@...r.kernel.org; Vatsavayi, Raghu; Chickles, Derek; Burla,
> Satananda; Manlunas, Felix
> Subject: Re: [PATCH net-next V2 1/9] liquidio CN23XX: HW config for VF
> support
> 
> From: Raghu Vatsavayi <rvatsavayi@...iumnetworks.com>
> Date: Wed, 19 Oct 2016 22:40:38 -0700
> 
> > +/* Default behaviour of Liquidio is to provide one queue per VF. But
> > +Liquidio
> > + * can also provide multiple queues to each VF. If user wants to
> > +change the
> > + * default behaviour HW should be provided configuration info at init
> > +time,
> > + * based on which it will create control queues for communicating with
> FW.
> > + */
> > +static u32 max_vfs[2] = { 0, 0 };
> > +module_param_array(max_vfs, int, NULL, 0444);
> > +MODULE_PARM_DESC(max_vfs, "Assign two comma-separated unsigned
> > +integers that specify max number of VFs for PF0 (left of the comma)
> > +and PF1 (right of the comma); for 23xx only. By default HW will
> > +configure as many VFs as queues after allocating PF queues. To
> > +increase queues for VF use this parameter. Use sysfs to create these
> > +VFs.");
> > +
> > +static unsigned int num_queues_per_pf[2] = { 0, 0 };
> > +module_param_array(num_queues_per_pf, uint, NULL, 0444);
> > +MODULE_PARM_DESC(num_queues_per_pf, "two comma-separated
> unsigned
> > +integers that specify number of queues per PF0 (left of the comma)
> > +and PF1 (right of the comma); for 23xx only");
> > +
> >  static int ptp_enable = 1;
> 
> We cannot continue to allow drivers to add custom module parameters to
> control this.  It is the worst user experience possible.
> 
> We need a tree-wide generic, consistent, manner in which to configure and
> control this kind of thing.

Sure Dave, I will remove the max_vfs module parameter and use the tree-wide
generic sysfs interface to enable VFs. However, because of the way the Liquidio
HW works, if the user wants multiple queues per VF we still need the
num_queues_per_pf and num_queues_per_vf module parameters at HW/module init
time: the HW has to carve out these queues before the FW can start
communicating with the PF/VF host drivers. Since this applies only to the
non-default multi-queue-per-VF case, we may keep those two parameters.

I will soon forward you the patches with the changes that you recommended.

Thanks Much.
Raghu.
