Message-ID: <20250225075530.GD53094@unreal>
Date: Tue, 25 Feb 2025 09:55:30 +0200
From: Leon Romanovsky <leon@...nel.org>
To: Tatyana Nikolova <tatyana.e.nikolova@...el.com>
Cc: jgg@...dia.com, intel-wired-lan@...ts.osuosl.org,
	linux-rdma@...r.kernel.org, netdev@...r.kernel.org,
	david.m.ertman@...el.com
Subject: Re: [iwl-next v4 1/1] iidc/ice/irdma: Update IDC to support multiple
 consumers

On Mon, Feb 24, 2025 at 11:04:28PM -0600, Tatyana Nikolova wrote:
> From: Dave Ertman <david.m.ertman@...el.com>
> 
> To support RDMA for the E2000 product, the idpf driver will use the IDC
> interface with the irdma auxiliary driver, thus becoming a second
> consumer of it. This requires the IDC to be updated to support multiple
> consumers. The use of exported symbols no longer makes sense because it
> would require all core drivers (ice/idpf) that can interface with the
> irdma auxiliary driver to be loaded even if no hardware is present for
> those drivers.

In the auxiliary bus world, the core drivers (ice/idpf) need to create
auxiliary devices only if the specific device is present. That auxiliary
device will trigger the load of the specific module (irdma in our case).
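
Roughly like this (a sketch only, not the actual ice/idpf code; the
example_* names and the capability check are made up):

#include <linux/auxiliary_bus.h>
#include <linux/pci.h>
#include <linux/slab.h>

/* Hypothetical capability check - stands in for however the core
 * driver detects that this function is RDMA-capable.
 */
static bool example_hw_has_rdma(struct pci_dev *pdev)
{
	return true;
}

static void example_adev_release(struct device *dev)
{
	struct auxiliary_device *adev =
		container_of(dev, struct auxiliary_device, dev);

	kfree(adev);
}

static int example_plug_rdma_aux_dev(struct pci_dev *pdev)
{
	struct auxiliary_device *adev;
	int err;

	/* No RDMA-capable HW -> no auxiliary device -> irdma is never
	 * loaded on this system.
	 */
	if (!example_hw_has_rdma(pdev))
		return 0;

	adev = kzalloc(sizeof(*adev), GFP_KERNEL);
	if (!adev)
		return -ENOMEM;

	adev->name = "rdma";		/* matched as "<modname>.rdma" */
	adev->id = pci_dev_id(pdev);
	adev->dev.parent = &pdev->dev;
	adev->dev.release = example_adev_release;

	err = auxiliary_device_init(adev);
	if (err) {
		kfree(adev);
		return err;
	}

	/* Registering the device is what triggers the automatic load of
	 * the consumer module whose id_table lists "<modname>.rdma".
	 */
	err = auxiliary_device_add(adev);
	if (err) {
		auxiliary_device_uninit(adev);
		return err;
	}

	return 0;
}

On remove, the core driver tears this down with auxiliary_device_delete()
and auxiliary_device_uninit().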

EXPORT_SYMBOL won't trigger the load of the irdma driver, but the
opposite is true: loading irdma will trigger the load of the ice/idpf
drivers (depending on whose exported symbols it uses).
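
For completeness, on the consumer side the autoload comes from the
auxiliary id_table match, not from symbols. A sketch (illustrative
example_* names and device name, not the real irdma code):

#include <linux/auxiliary_bus.h>
#include <linux/module.h>

static int example_rdma_probe(struct auxiliary_device *adev,
			      const struct auxiliary_device_id *id)
{
	/* bind to the "rdma" function exposed by the core driver */
	return 0;
}

static void example_rdma_remove(struct auxiliary_device *adev)
{
}

static const struct auxiliary_device_id example_rdma_id_table[] = {
	/* "<core module name>.rdma" */
	{ .name = "example_core.rdma" },
	{}
};
MODULE_DEVICE_TABLE(auxiliary, example_rdma_id_table);

static struct auxiliary_driver example_rdma_driver = {
	.name = "rdma",
	.probe = example_rdma_probe,
	.remove = example_rdma_remove,
	.id_table = example_rdma_id_table,
};
module_auxiliary_driver(example_rdma_driver);

MODULE_LICENSE("GPL");

Any exported symbol the consumer references only adds a modpost
dependency in the other direction (modprobe of the consumer pulls in
the core module), it never causes the consumer to be loaded.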

> 
> To address this, implement an ops struct that will be a universal set
> of naked function pointers that will be populated by each core driver
> for the irdma auxiliary driver to call.

No, we worked very hard to get proper HW discovery and driver autoload,
let's not go back to that. For now, it is a no-go.

<...>

> +/* Following APIs are implemented by core PCI driver */
> +struct idc_rdma_core_ops {
> +	int (*vc_send_sync)(struct idc_rdma_core_dev_info *cdev_info, u8 *msg,
> +			    u16 len, u8 *recv_msg, u16 *recv_len);
> +	int (*vc_queue_vec_map_unmap)(struct idc_rdma_core_dev_info *cdev_info,
> +				      struct idc_rdma_qvlist_info *qvl_info,
> +				      bool map);
> +	/* vport_dev_ctrl is for RDMA CORE driver to indicate it is either ready
> +	 * for individual vport aux devices, or it is leaving the state where it
> +	 * can support vports and they need to be downed
> +	 */
> +	int (*vport_dev_ctrl)(struct idc_rdma_core_dev_info *cdev_info,
> +			      bool up);
> +	int (*request_reset)(struct idc_rdma_core_dev_info *cdev_info,
> +			     enum idc_rdma_reset_type reset_type);
> +};

The core driver can call callbacks in irdma, like you already have for
irdma_iidc_event_handler(), but all calls from irdma to the core driver
must go through exported symbols. That gives us a race-free world in the
whole driver except for one very specific place
(irdma_iidc_event_handler).
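
I.e. something like this split (illustrative example_* names, not the
final API; the two halves live in the core and irdma modules):

#include <linux/export.h>
#include <linux/types.h>

struct example_core_dev_info;		/* owned by the core driver */

/* Core driver side: exported, called directly by irdma. */
int example_core_vc_send_sync(struct example_core_dev_info *cdev_info,
			      u8 *msg, u16 len, u8 *recv_msg, u16 *recv_len)
{
	/* forward the message to FW and wait for the reply */
	return 0;
}
EXPORT_SYMBOL_GPL(example_core_vc_send_sync);

/* irdma side: the one set of callbacks the core driver may invoke,
 * analogous to irdma_iidc_event_handler().
 */
struct example_rdma_drv_ops {
	void (*event_handler)(struct example_core_dev_info *cdev_info,
			      unsigned int event);
};

That keeps the irdma -> core direction as plain symbol dependencies and
confines the core -> irdma direction to the event handler.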

Thanks
