Message-ID: <20220301220636.mqrzq7h3epfw3u3x@liuwe-devbox-debian-v2>
Date:   Tue, 1 Mar 2022 22:06:36 +0000
From:   Wei Liu <wei.liu@...nel.org>
To:     Iouri Tarassov <iourit@...ux.microsoft.com>
Cc:     kys@...rosoft.com, haiyangz@...rosoft.com, sthemmin@...rosoft.com,
        wei.liu@...nel.org, linux-hyperv@...r.kernel.org,
        linux-kernel@...r.kernel.org, spronovo@...rosoft.com,
        spronovo@...ux.microsoft.com, gregkh@...uxfoundation.org
Subject: Re: [PATCH v3 02/30] drivers: hv: dxgkrnl: Driver initialization and
 loading

I will skip things that are pointed out by Greg.

On Tue, Mar 01, 2022 at 11:45:49AM -0800, Iouri Tarassov wrote:
> - Create skeleton and add basic functionality for the
> hyper-v compute device driver (dxgkrnl).
> 
> - Register for PCI and VM bus driver notifications and
> handle initialization of VM bus channels.
> 
> - Connect the dxgkrnl module to the drivers/hv/ Makefile and Kconfig
> 
> - Create a MAINTAINERS entry
> 
> A VM bus channel is a communication interface between the hyper-v guest
> and the host. The are two type of VM bus channels, used in the driver:
>   - the global channel
>   - per virtual compute device channel
> 

Same comment regarding the spelling of VMBus and Hyper-V. Please fix
other instances in code and comments.

> A PCI device is created for each virtual compute device, projected
> by the host. The device vendor is PCI_VENDOR_ID_MICROSOFT and device
> id is PCI_DEVICE_ID_VIRTUAL_RENDER. dxg_pci_probe_device handles
> arrival of such devices. The PCI config space of the virtual compute
> device contains the LUID of the corresponding virtual compute device VM
> bus channel. This is how the compute device adapter objects are
> linked to VM bus channels.
> 
> VM bus interface version is exchanged by reading/writing the PCI config
> space of the virtual compute device.
> 
> The IO space is used to handle CPU accessible compute device
> allocations. Hyper-v allocates IO space for the global VM bus channel.
> 
> Signed-off-by: Iouri Tarassov <iourit@...ux.microsoft.com>
> ---
[...]
> +static inline void guid_to_luid(guid_t *guid, struct winluid *luid)
> +{
> +	*luid = *(struct winluid *)&guid->b[0];
> +}

This should be moved to the header where luid is defined -- presumably
this is useful for other things in the future too.

Also, please provide a comment on why this conversion is okay.
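
For example, something along these lines in that header (the comment body
below is only my guess at the rationale -- please replace it with the
actual guarantee the host provides):

/*
 * The host stores the LUID in the first 8 bytes of the channel GUID,
 * so those bytes can be reinterpreted as a struct winluid directly.
 */
static inline void guid_to_luid(guid_t *guid, struct winluid *luid)
{
	*luid = *(struct winluid *)&guid->b[0];
}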

[...]
> + * A helper function to read PCI config space.
> + */
> +static int dxg_pci_read_dwords(struct pci_dev *dev, int offset, int size,
> +			       void *val)
> +{
> +	int off = offset;
> +	int ret;
> +	int i;
> +

I think you should check the arguments here? size has to be a multiple
of sizeof(int), otherwise you risk reading across the boundary and
causing unintended side effects.
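
Something like this at the top of the function, perhaps (just a sketch;
pick whatever error code you prefer):

	/* Reject sizes/offsets that are not a whole number of dwords,
	 * so the loop below never reads past the caller's buffer.
	 */
	if ((size % sizeof(int)) != 0 || (offset % sizeof(int)) != 0)
		return -EINVAL;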

> +	for (i = 0; i < size / sizeof(int); i++) {
> +		ret = pci_read_config_dword(dev, off, &((int *)val)[i]);
> +		if (ret) {
> +			pr_err("Failed to read PCI config: %d", off);
> +			return ret;
> +		}
> +		off += sizeof(int);
> +	}
> +	return 0;
> +}
> +
> +static int dxg_pci_probe_device(struct pci_dev *dev,
> +				const struct pci_device_id *id)
> +{
> +	int ret;
> +	guid_t guid;
> +	u32 vmbus_interface_ver = DXGK_VMBUS_INTERFACE_VERSION;
> +	struct winluid vgpu_luid = {};
> +	struct dxgk_vmbus_guestcaps guest_caps = {.wsl2 = 1};
> +
> +	mutex_lock(&dxgglobal->device_mutex);
> +
> +	if (dxgglobal->vmbus_ver == 0)  {
> +		/* Report capabilities to the host */
> +
> +		ret = pci_write_config_dword(dev, DXGK_VMBUS_GUESTCAPS_OFFSET,
> +					guest_caps.guest_caps);
> +		if (ret)
> +			goto cleanup;
> +
> +		/* Negotiate the VM bus version */
> +
[...]
> +	pr_debug("Adapter channel: %pUb\n", &guid);
> +	pr_debug("Vmbus interface version: %d\n",
> +		dxgglobal->vmbus_ver);

No need to wrap the line here.

> +	pr_debug("Host vGPU luid: %x-%x\n",
> +		vgpu_luid.b, vgpu_luid.a);

Ditto.
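
I.e. these fit comfortably on one line each:

	pr_debug("Vmbus interface version: %d\n", dxgglobal->vmbus_ver);
	pr_debug("Host vGPU luid: %x-%x\n", vgpu_luid.b, vgpu_luid.a);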

> +
> +cleanup:
> +
> +	mutex_unlock(&dxgglobal->device_mutex);
> +
> +	if (ret)
> +		pr_debug("err: %s %d", __func__, ret);
> +	return ret;
> +}
> +
[...]
> +static void __exit dxg_drv_exit(void)
> +{
> +	dxgglobal_destroy();
> +}
> +
> +module_init(dxg_drv_init);
> +module_exit(dxg_drv_exit);
> +
> +MODULE_LICENSE("GPL");
> +MODULE_DESCRIPTION("Microsoft Dxgkrnl virtual GPU Driver");

Should this be "virtual compute device driver"? Please be consistent.
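
E.g.:

	MODULE_DESCRIPTION("Microsoft Dxgkrnl virtual compute device driver");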

Thanks,
Wei.
