Date:   Tue, 25 Jan 2022 13:00:33 +0100
From:   Alexander Lobakin <alexandr.lobakin@...el.com>
To:     Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc:     Alexander Lobakin <alexandr.lobakin@...el.com>,
        bpf@...r.kernel.org, ast@...nel.org, daniel@...earbox.net,
        netdev@...r.kernel.org, magnus.karlsson@...el.com
Subject: Re: [PATCH bpf-next v4 2/8] ice: xsk: force rings to be sized to power of 2

From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Date: Tue, 25 Jan 2022 12:49:36 +0100

> On Tue, Jan 25, 2022 at 12:42:02PM +0100, Alexander Lobakin wrote:
> > From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> > Date: Tue, 25 Jan 2022 12:28:52 +0100
> > 
> > > On Tue, Jan 25, 2022 at 12:23:06PM +0100, Alexander Lobakin wrote:
> > > > From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> > > > Date: Mon, 24 Jan 2022 17:55:41 +0100
> > > > 
> > > > > With the upcoming introduction of batching to the XSK data path,
> > > > > it will be best for performance to have the ring descriptor count
> > > > > aligned to a power of 2.
> > > > > 
> > > > > Check that the sizes of the rings to which the user is attaching the
> > > > > XSK socket fulfill the condition above. For the Tx side, although the
> > > > > check is done against the Tx queue while the socket will in the end
> > > > > be attached to the XDP queue, this is fine, since XDP queues take
> > > > > their ring->count setting from the Tx queues.
> > > > > 
> > > > > Suggested-by: Alexander Lobakin <alexandr.lobakin@...el.com>
> > > > > Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> > > > > ---
> > > > >  drivers/net/ethernet/intel/ice/ice_xsk.c | 9 +++++++++
> > > > >  1 file changed, 9 insertions(+)
> > > > > 
> > > > > diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > > > > index 2388837d6d6c..0350f9c22c62 100644
> > > > > --- a/drivers/net/ethernet/intel/ice/ice_xsk.c
> > > > > +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > > > > @@ -327,6 +327,14 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
> > > > >  	bool if_running, pool_present = !!pool;
> > > > >  	int ret = 0, pool_failure = 0;
> > > > >  
> > > > > +	if (!is_power_of_2(vsi->rx_rings[qid]->count) ||
> > > > > +	    !is_power_of_2(vsi->tx_rings[qid]->count)) {
> > > > > +		netdev_err(vsi->netdev,
> > > > > +			   "Please align ring sizes at idx %d to power of 2\n", qid);
> > > > 
> > > > Ideally I'd pass xdp->extack from ice_xdp() to report this message
> > > > directly to userspace (note that NL_SET_ERR_MSG{,_MOD}() don't
> > > > support string formatting, but the user already knows the QID at
> > > > this point).
> > > 
> > > I thought about that as well, but it seemed kind of off to me to have
> > > a single extack usage in here. Updating the rest of the error paths in
> > > ice_xsk_pool_setup() to make use of extack is, to me, a candidate for
> > > a separate patch.
> > > 
> > > WDYT?
> > 
> > The rest use string formatting to print the error code, and would
> > thus lose their meaning. This one, to me, is more of the same kind
> > as, say, the "MTU too large for XDP" message, i.e. a failed check
> > of a user-configured constraint. But I'm fine with it if you'd
> > prefer to keep a single source of output messages throughout the
> > function.
> 
> Doubling the logs wouldn't hurt - keep the current netdev_err with the
> ret codes and have more meaningful messages carried up to userspace via
> NL_SET_ERR_MSG_MOD.

Ah, right, this works as well. Let's leave it as it is for now then.
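
[Editor's note: for reference, a minimal sketch of the dual-reporting
idea discussed above, assuming ice_xsk_pool_setup() were handed the
extack from the XDP_SETUP_XSK_POOL request -- the function takes no
extack parameter in this patch:

	if (!is_power_of_2(vsi->rx_rings[qid]->count) ||
	    !is_power_of_2(vsi->tx_rings[qid]->count)) {
		/* Formatted details stay in the kernel log... */
		netdev_err(vsi->netdev,
			   "Please align ring sizes at idx %d to power of 2\n",
			   qid);
		/* ...while a static string travels back to userspace.
		 * NL_SET_ERR_MSG_MOD() takes no format arguments, but
		 * the caller already knows which QID it asked for.
		 */
		NL_SET_ERR_MSG_MOD(extack, "Ring sizes must be power of 2");
		pool_failure = -EINVAL;
		goto failure;
	}
]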

> 
> > 
> > > 
> > > > 
> > > > > +		pool_failure = -EINVAL;
> > > > > +		goto failure;
> > > > > +	}
> > > > > +
> > > > >  	if_running = netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi);
> > > > >  
> > > > >  	if (if_running) {
> > > > > @@ -349,6 +357,7 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
> > > > >  			netdev_err(vsi->netdev, "ice_qp_ena error = %d\n", ret);
> > > > >  	}
> > > > >  
> > > > > +failure:
> > > > >  	if (pool_failure) {
> > > > >  		netdev_err(vsi->netdev, "Could not %sable buffer pool, error = %d\n",
> > > > >  			   pool_present ? "en" : "dis", pool_failure);
> > > > > -- 
> > > > > 2.33.1
> > > > 
> > > > Thanks,
> > > > Al
> > 
> > Al

Al
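
[Editor's note: for context on the commit message's performance
rationale: with a power-of-2 descriptor count, wrapping a ring index
reduces to a bitwise AND with count - 1 instead of a modulo, which is
what keeps batched descriptor processing cheap. A minimal illustrative
sketch with hypothetical helper names, not taken from the ice driver:

	#include <stdbool.h>

	/* With a power-of-2 count, the next ring index is computed
	 * with a mask instead of a divide.
	 */
	static inline unsigned int ring_next_idx(unsigned int idx,
						 unsigned int count)
	{
		return (idx + 1) & (count - 1); /* valid only for count == 2^n */
	}

	/* The check the patch adds boils down to is_power_of_2(): a
	 * power of two has exactly one bit set, so n & (n - 1) clears
	 * it to zero.
	 */
	static inline bool my_is_power_of_2(unsigned long n)
	{
		return n != 0 && (n & (n - 1)) == 0;
	}
]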
