Message-Id: <20220125152439.831365-1-alexandr.lobakin@intel.com>
Date: Tue, 25 Jan 2022 16:24:39 +0100
From: Alexander Lobakin <alexandr.lobakin@...el.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: Alexander Lobakin <alexandr.lobakin@...el.com>,
bpf@...r.kernel.org, ast@...nel.org, daniel@...earbox.net,
netdev@...r.kernel.org, magnus.karlsson@...el.com
Subject: Re: [PATCH bpf-next v4 2/8] ice: xsk: force rings to be sized to power of 2
From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Date: Tue, 25 Jan 2022 16:01:00 +0100
> On Tue, Jan 25, 2022 at 01:00:33PM +0100, Alexander Lobakin wrote:
> > From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> > Date: Tue, 25 Jan 2022 12:49:36 +0100
> >
> > > On Tue, Jan 25, 2022 at 12:42:02PM +0100, Alexander Lobakin wrote:
> > > > From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> > > > Date: Tue, 25 Jan 2022 12:28:52 +0100
> > > >
> > > > > On Tue, Jan 25, 2022 at 12:23:06PM +0100, Alexander Lobakin wrote:
> > > > > > From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> > > > > > Date: Mon, 24 Jan 2022 17:55:41 +0100
> > > > > >
> > > > > > > With the upcoming introduction of batching to the XSK data path,
> > > > > > > it will be best, performance-wise, to have the ring descriptor
> > > > > > > count aligned to a power of 2.
> > > > > > >
> > > > > > > Check whether the sizes of the rings that the user is going to
> > > > > > > attach the XSK socket to fulfill the condition above. For the Tx
> > > > > > > side, although the check is done against the Tx queue while the
> > > > > > > socket will in the end be attached to the XDP queue, this is fine,
> > > > > > > since XDP queues get their ring->count setting from the Tx queues.
> > > > > > >
> > > > > > > Suggested-by: Alexander Lobakin <alexandr.lobakin@...el.com>
> > > > > > > Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> > > > > > > ---
> > > > > > > drivers/net/ethernet/intel/ice/ice_xsk.c | 9 +++++++++
> > > > > > > 1 file changed, 9 insertions(+)
> > > > > > >
> > > > > > > diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > > > > > > index 2388837d6d6c..0350f9c22c62 100644
> > > > > > > --- a/drivers/net/ethernet/intel/ice/ice_xsk.c
> > > > > > > +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > > > > > > @@ -327,6 +327,14 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
> > > > > > >  	bool if_running, pool_present = !!pool;
> > > > > > >  	int ret = 0, pool_failure = 0;
> > > > > > >
> > > > > > > +	if (!is_power_of_2(vsi->rx_rings[qid]->count) ||
> > > > > > > +	    !is_power_of_2(vsi->tx_rings[qid]->count)) {
> > > > > > > +		netdev_err(vsi->netdev,
> > > > > > > +			   "Please align ring sizes at idx %d to power of 2\n", qid);
> > > > > >
> > > > > > Ideally I'd pass xdp->extack from ice_xdp() to print this message
> > > > > > directly in userspace (note that NL_SET_ERR_MSG{,_MOD}() don't
> > > > > > support string formatting, but the user already knows QID at this
> > > > > > point).
> > > > >
> > > > > I thought about that as well, but it seemed kinda off to me to
> > > > > have a single extack usage in here. Updating the rest of the
> > > > > error paths in ice_xsk_pool_setup() to make use of extack is, to
> > > > > me, a candidate for a separate patch.
> > > > >
> > > > > WDYT?
> > > >
> > > > The rest of the messages use string formatting to print the error
> > > > code and would thus lose their meaning. This one, to me, is more of
> > > > the same kind as, say, the "MTU too large for XDP" message, i.e. a
> > > > failed check of user-config constraints. But I'm fine if you'd
> > > > prefer to keep a single source of output messages throughout the
> > > > function.
> > >
> > > Doubling the logs wouldn't hurt - keep current netdev_err with ret codes
> > > and have more meaningful messages carried up to userspace via
> > > NL_SET_ERR_MSG_MOD.
> >
> > Ah, right, this works as well. Let's leave it as it is for now then.
>
> Well, I had a feeling that we don't utilize extack for a reason. Turns out
> for XDP_SETUP_XSK_POOL we simply don't provide it.
>
> struct netdev_bpf {
> 	enum bpf_netdev_command command;
> 	union {
> 		/* XDP_SETUP_PROG */
> 		struct {
> 			u32 flags;
> 			struct bpf_prog *prog;
> 			struct netlink_ext_ack *extack;
> 		};
> 		/* BPF_OFFLOAD_MAP_ALLOC, BPF_OFFLOAD_MAP_FREE */
> 		struct {
> 			struct bpf_offloaded_map *offmap;
> 		};
> 		/* XDP_SETUP_XSK_POOL */
> 		struct {
> 			struct xsk_buff_pool *pool;
> 			u16 queue_id;
> 		} xsk;
> 	};
>
> I forgot about that :<
Wooh, I missed that completely at some point. I thought it was always
there and available.
From me for the series (writing it here instead of replying to 0/8
since you're going to drop v5 soon anyways :p):
Reviewed-by: Alexander Lobakin <alexandr.lobakin@...el.com>
>
> };
> > > > > > > +		pool_failure = -EINVAL;
> > > > > > > +		goto failure;
> > > > > > > +	}
> > > > > > > +
> > > > > > >  	if_running = netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi);
> > > > > > >
> > > > > > >  	if (if_running) {
> > > > > > > @@ -349,6 +357,7 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
> > > > > > >  		netdev_err(vsi->netdev, "ice_qp_ena error = %d\n", ret);
> > > > > > >  	}
> > > > > > >
> > > > > > > +failure:
> > > > > > >  	if (pool_failure) {
> > > > > > >  		netdev_err(vsi->netdev, "Could not %sable buffer pool, error = %d\n",
> > > > > > >  			   pool_present ? "en" : "dis", pool_failure);
> > > > > > > --
> > > > > > > 2.33.1
Thanks!
Al