Message-ID: <20230507081053.GD525452@unreal>
Date: Sun, 7 May 2023 11:10:53 +0300
From: Leon Romanovsky <leon@...nel.org>
To: longli@...rosoft.com
Cc: Jason Gunthorpe <jgg@...pe.ca>, Ajay Sharma <sharmaajay@...rosoft.com>,
	Dexuan Cui <decui@...rosoft.com>, "K. Y. Srinivasan" <kys@...rosoft.com>,
	Haiyang Zhang <haiyangz@...rosoft.com>, Wei Liu <wei.liu@...nel.org>,
	"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	linux-rdma@...r.kernel.org, linux-hyperv@...r.kernel.org,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] RDMA/mana_ib: Use v2 version of cfg_rx_steer_req to
 enable RX coalescing

On Fri, May 05, 2023 at 11:51:48AM -0700, longli@...uxonhyperv.com wrote:
> From: Long Li <longli@...rosoft.com>
> 
> With RX coalescing, one CQE entry can be used to indicate multiple packets
> on the receive queue. This saves processing time and PCI bandwidth over
> the CQ.
> 
> Signed-off-by: Long Li <longli@...rosoft.com>
> ---
>  drivers/infiniband/hw/mana/qp.c |  5 ++++-
>  include/net/mana/mana.h         | 17 +++++++++++++++++
>  2 files changed, 21 insertions(+), 1 deletion(-)

Why didn't you change mana_cfg_vport_steering() too?

> 
> diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c
> index 54b61930a7fd..83c768f96506 100644
> --- a/drivers/infiniband/hw/mana/qp.c
> +++ b/drivers/infiniband/hw/mana/qp.c
> @@ -13,7 +13,7 @@ static int mana_ib_cfg_vport_steering(struct mana_ib_dev *dev,
>  				      u8 *rx_hash_key)
>  {
>  	struct mana_port_context *mpc = netdev_priv(ndev);
> -	struct mana_cfg_rx_steer_req *req = NULL;
> +	struct mana_cfg_rx_steer_req_v2 *req = NULL;

There is no need for NULL here, req is going to be overwritten almost
immediately.

Thanks

>  	struct mana_cfg_rx_steer_resp resp = {};
>  	mana_handle_t *req_indir_tab;
>  	struct gdma_context *gc;
> @@ -33,6 +33,8 @@ static int mana_ib_cfg_vport_steering(struct mana_ib_dev *dev,
>  	mana_gd_init_req_hdr(&req->hdr, MANA_CONFIG_VPORT_RX, req_buf_size,
>  			     sizeof(resp));
>  
> +	req->hdr.req.msg_version = GDMA_MESSAGE_V2;
> +
>  	req->vport = mpc->port_handle;
>  	req->rx_enable = 1;
>  	req->update_default_rxobj = 1;
> @@ -46,6 +48,7 @@ static int mana_ib_cfg_vport_steering(struct mana_ib_dev *dev,
>  	req->num_indir_entries = MANA_INDIRECT_TABLE_SIZE;
>  	req->indir_tab_offset = sizeof(*req);
>  	req->update_indir_tab = true;
> +	req->cqe_coalescing_enable = true;
>  
>  	req_indir_tab = (mana_handle_t *)(req + 1);
>  	/* The ind table passed to the hardware must have
> diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
> index cd386aa7c7cc..f8314b7c386c 100644
> --- a/include/net/mana/mana.h
> +++ b/include/net/mana/mana.h
> @@ -596,6 +596,23 @@ struct mana_cfg_rx_steer_req {
>  	u8 hashkey[MANA_HASH_KEY_SIZE];
>  }; /* HW DATA */
>  
> +struct mana_cfg_rx_steer_req_v2 {
> +	struct gdma_req_hdr hdr;
> +	mana_handle_t vport;
> +	u16 num_indir_entries;
> +	u16 indir_tab_offset;
> +	u32 rx_enable;
> +	u32 rss_enable;
> +	u8 update_default_rxobj;
> +	u8 update_hashkey;
> +	u8 update_indir_tab;
> +	u8 reserved;
> +	mana_handle_t default_rxobj;
> +	u8 hashkey[MANA_HASH_KEY_SIZE];
> +	u8 cqe_coalescing_enable;
> +	u8 reserved2[7];
> +}; /* HW DATA */
> +
>  struct mana_cfg_rx_steer_resp {
>  	struct gdma_resp_hdr hdr;
>  }; /* HW DATA */
> -- 
> 2.17.1
> 