Date: Mon, 26 Feb 2024 10:09:46 +0200
From: Leon Romanovsky <leon@...nel.org>
To: Junxian Huang <huangjunxian6@...ilicon.com>
Cc: Jason Gunthorpe <jgg@...pe.ca>, linux-rdma@...r.kernel.org,
	linuxarm@...wei.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 for-next 2/2] RDMA/hns: Support userspace configuring
 congestion control algorithm with QP granularity

On Thu, Feb 22, 2024 at 03:06:20PM +0800, Junxian Huang wrote:
> 
> 
> On 2024/2/21 23:52, Jason Gunthorpe wrote:
> > On Thu, Feb 08, 2024 at 11:50:38AM +0800, Junxian Huang wrote:
> >> Support userspace configuring congestion control algorithm with
> >> QP granularity. If the algorithm is not specified in userspace,
> >> use the default one.
> >>
> >> Signed-off-by: Junxian Huang <huangjunxian6@...ilicon.com>
> >> ---
> >>  drivers/infiniband/hw/hns/hns_roce_device.h | 23 +++++--
> >>  drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 14 +---
> >>  drivers/infiniband/hw/hns/hns_roce_hw_v2.h  |  3 +-
> >>  drivers/infiniband/hw/hns/hns_roce_main.c   |  3 +
> >>  drivers/infiniband/hw/hns/hns_roce_qp.c     | 71 +++++++++++++++++++++
> >>  include/uapi/rdma/hns-abi.h                 | 17 +++++
> >>  6 files changed, 112 insertions(+), 19 deletions(-)
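
For context, a minimal self-contained sketch of the intended usage from the
userspace provider side. All field and enum names below are illustrative
stand-ins for what this patch defines in include/uapi/rdma/hns-abi.h, not the
authoritative definitions:

#include <linux/types.h>

/* Illustrative sketch only: stand-in names for the new UAPI pieces. */
enum hns_roce_congest_type_sketch {
	HNS_ROCE_CONGEST_DCQCN = 1 << 0,
	HNS_ROCE_CONGEST_LDCP  = 1 << 1,
	HNS_ROCE_CONGEST_HC3   = 1 << 2,
	HNS_ROCE_CONGEST_DIP   = 1 << 3,
};

struct hns_roce_create_qp_cmd_sketch {
	__aligned_u64 comp_mask;          /* marks which optional fields are valid */
	__aligned_u64 congest_type_flags; /* requested per-QP algorithm */
};

/* The provider sets the comp_mask bit and the algorithm when creating a QP;
 * if the bit is left clear, the kernel keeps the default algorithm, as the
 * commit message describes. */
static void request_dcqcn(struct hns_roce_create_qp_cmd_sketch *cmd)
{
	cmd->comp_mask |= 1ULL << 0;
	cmd->congest_type_flags = HNS_ROCE_CONGEST_DCQCN;
}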

<...>

> >> +
> >> +enum hns_roce_create_qp_comp_mask {
> >> +	HNS_ROCE_CREATE_QP_MASK_CONGEST_TYPE = 1 << 1,
> > 
> > Why 1<<1 not 1<<0?
> 
> This is to keep consistent with our internal ABI; there are some
> features that are not upstream yet.
> 
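
For illustration, what the review comment is asking for is that new comp_mask
bits in the upstream header start at bit 0, with no gaps held for out-of-tree
features. A minimal sketch (not necessarily the final merged definition):

enum hns_roce_create_qp_comp_mask {
	/* Allocate comp_mask bits contiguously from bit 0; the upstream UAPI
	 * cannot reserve bits for features that are not upstream. */
	HNS_ROCE_CREATE_QP_MASK_CONGEST_TYPE = 1 << 0,
};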

<...>

> >> @@ -114,6 +128,9 @@ struct hns_roce_ib_alloc_ucontext_resp {
> >>  	__u32	reserved;
> >>  	__u32	config;
> >>  	__u32	max_inline_data;
> >> +	__u8	reserved0;
> >> +	__u8	congest_type;
> > 
> > Why this layout?
> >
> > Jason
> 
> Same as the 1<<1 issue, to keep consistent with our internal ABI.

We are talking about the upstream kernel UAPI; there is no internal ABI here.

Please fix it.
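
For illustration, one conventional way to extend the response struct is to add
the new byte followed by explicit padding that keeps the struct size 64-bit
aligned, rather than a reserved byte that only mirrors an out-of-tree layout.
A sketch of the tail of the struct (the padding size here is illustrative, not
the final merged layout):

struct hns_roce_ib_alloc_ucontext_resp {
	/* ... existing fields unchanged ... */
	__u32	reserved;
	__u32	config;
	__u32	max_inline_data;
	__u8	congest_type;
	__u8	reserved0[7];
};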

Thanks

> 
> Thanks,
> Junxian
