Message-ID: <bdfe5413-547a-67b0-b822-9852d3f94cc5@linux.intel.com>
Date: Wed, 26 Mar 2025 17:29:31 +0200 (EET)
From: Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>
To: Michał Winiarski <michal.winiarski@...el.com>
cc: linux-pci@...r.kernel.org, intel-xe@...ts.freedesktop.org, 
    dri-devel@...ts.freedesktop.org, LKML <linux-kernel@...r.kernel.org>, 
    Bjorn Helgaas <bhelgaas@...gle.com>, 
    Christian König <christian.koenig@....com>, 
    Krzysztof Wilczyński <kw@...ux.com>, 
    Rodrigo Vivi <rodrigo.vivi@...el.com>, 
    Michal Wajdeczko <michal.wajdeczko@...el.com>, 
    Lucas De Marchi <lucas.demarchi@...el.com>, 
    Thomas Hellström <thomas.hellstrom@...ux.intel.com>, 
    Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>, 
    Maxime Ripard <mripard@...nel.org>, 
    Thomas Zimmermann <tzimmermann@...e.de>, David Airlie <airlied@...il.com>, 
    Simona Vetter <simona@...ll.ch>, Matt Roper <matthew.d.roper@...el.com>
Subject: Re: [PATCH v6 6/6] drm/xe/pf: Set VF LMEM BAR size

On Thu, 20 Mar 2025, Michał Winiarski wrote:

> LMEM is partitioned between multiple VFs and we expect that the more
> VFs we have, the less LMEM is assigned to each VF.
> This means we can achieve full LMEM BAR access without attempting a
> full VF LMEM BAR resize via pci_resize_resource().
> 
> Always set the largest possible BAR size that still fits the number of
> enabled VFs.
> 
> Signed-off-by: Michał Winiarski <michal.winiarski@...el.com>
> ---
>  drivers/gpu/drm/xe/regs/xe_bars.h |  1 +
>  drivers/gpu/drm/xe/xe_pci_sriov.c | 22 ++++++++++++++++++++++
>  2 files changed, 23 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/regs/xe_bars.h b/drivers/gpu/drm/xe/regs/xe_bars.h
> index ce05b6ae832f1..880140d6ccdca 100644
> --- a/drivers/gpu/drm/xe/regs/xe_bars.h
> +++ b/drivers/gpu/drm/xe/regs/xe_bars.h
> @@ -7,5 +7,6 @@
>  
>  #define GTTMMADR_BAR			0 /* MMIO + GTT */
>  #define LMEM_BAR			2 /* VRAM */
> +#define VF_LMEM_BAR			9 /* VF VRAM */
>  
>  #endif
> diff --git a/drivers/gpu/drm/xe/xe_pci_sriov.c b/drivers/gpu/drm/xe/xe_pci_sriov.c
> index aaceee748287e..57cdeb41ef1d9 100644
> --- a/drivers/gpu/drm/xe/xe_pci_sriov.c
> +++ b/drivers/gpu/drm/xe/xe_pci_sriov.c
> @@ -3,6 +3,10 @@
>   * Copyright © 2023-2024 Intel Corporation
>   */
>  
> +#include <linux/bitops.h>
> +#include <linux/pci.h>
> +
> +#include "regs/xe_bars.h"
>  #include "xe_assert.h"
>  #include "xe_device.h"
>  #include "xe_gt_sriov_pf_config.h"
> @@ -62,6 +66,18 @@ static void pf_reset_vfs(struct xe_device *xe, unsigned int num_vfs)
>  			xe_gt_sriov_pf_control_trigger_flr(gt, n);
>  }
>  
> +static int resize_vf_vram_bar(struct xe_device *xe, int num_vfs)
> +{
> +	struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
> +	u32 sizes;
> +
> +	sizes = pci_iov_vf_bar_get_sizes(pdev, VF_LMEM_BAR, num_vfs);
> +	if (!sizes)
> +		return 0;
> +
> +	return pci_iov_vf_bar_set_size(pdev, VF_LMEM_BAR, __fls(sizes));
> +}
> +
>  static int pf_enable_vfs(struct xe_device *xe, int num_vfs)
>  {
>  	struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
> @@ -88,6 +104,12 @@ static int pf_enable_vfs(struct xe_device *xe, int num_vfs)
>  	if (err < 0)
>  		goto failed;
>  
> +	if (IS_DGFX(xe)) {
> +		err = resize_vf_vram_bar(xe, num_vfs);
> +		if (err)
> +			xe_sriov_info(xe, "Failed to set VF LMEM BAR size: %d\n", err);

If you intended this error not to be fatal, please mention that in the 
changelog so it's recorded somewhere for those who have to look things 
up in the git history one day :-).

> +	}
> +
>  	err = pci_enable_sriov(pdev, num_vfs);
>  	if (err < 0)
>  		goto failed;

Seems pretty straightforward after reading the support code on the PCI 
core side,

Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>

-- 
 i.
