Message-ID: <db1669c3-a1d8-47cd-a321-b6cecffd8c6f@intel.com>
Date: Thu, 23 Oct 2025 22:25:08 +0200
From: Michal Wajdeczko <michal.wajdeczko@...el.com>
To: Michał Winiarski <michal.winiarski@...el.com>, "Alex
Williamson" <alex.williamson@...hat.com>, Lucas De Marchi
<lucas.demarchi@...el.com>, Thomas Hellström
<thomas.hellstrom@...ux.intel.com>, Rodrigo Vivi <rodrigo.vivi@...el.com>,
Jason Gunthorpe <jgg@...pe.ca>, Yishai Hadas <yishaih@...dia.com>, Kevin Tian
<kevin.tian@...el.com>, <intel-xe@...ts.freedesktop.org>,
<linux-kernel@...r.kernel.org>, <kvm@...r.kernel.org>, Matthew Brost
<matthew.brost@...el.com>
CC: <dri-devel@...ts.freedesktop.org>, Jani Nikula
<jani.nikula@...ux.intel.com>, Joonas Lahtinen
<joonas.lahtinen@...ux.intel.com>, Tvrtko Ursulin <tursulin@...ulin.net>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>, "Lukasz
Laguna" <lukasz.laguna@...el.com>
Subject: Re: [PATCH v2 20/26] drm/xe/pf: Add helper to retrieve VF's LMEM
object
On 10/22/2025 12:41 AM, Michał Winiarski wrote:
> From: Lukasz Laguna <lukasz.laguna@...el.com>
>
> Instead of accessing VF's lmem_obj directly, introduce a helper function
> to make the access more convenient.
>
> Signed-off-by: Lukasz Laguna <lukasz.laguna@...el.com>
> Signed-off-by: Michał Winiarski <michal.winiarski@...el.com>
> ---
> drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 31 ++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h | 1 +
> 2 files changed, 32 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> index c857879e28fe5..28d648c386487 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> @@ -1643,6 +1643,37 @@ int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid,
> "LMEM", n, err);
> }
>
> +static struct xe_bo *pf_get_vf_config_lmem_obj(struct xe_gt *gt, unsigned int vfid)
> +{
> + struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
> +
> + return config->lmem_obj;
> +}
> +
> +/**
> + * xe_gt_sriov_pf_config_get_lmem_obj - Take a reference to the struct &xe_bo backing VF LMEM.
* xe_gt_sriov_pf_config_get_lmem_obj() - Take ...
> + * @gt: the &xe_gt
> + * @vfid: the VF identifier
since you assert vfid below, add "(can't be 0)"
> + *
> + * This function can only be called on PF.
> + * The caller is responsible for calling xe_bo_put() on the returned object.
> + *
> + * Return: pointer to struct &xe_bo backing VF LMEM (if any).
> + */
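btw, since the caller owns the returned reference, the expected usage would be
something like below (just a sketch - the surrounding caller code is
hypothetical, the actual user is presumably the save/restore code mentioned
later):

	struct xe_bo *bo = xe_gt_sriov_pf_config_get_lmem_obj(gt, vfid);

	if (bo) {
		/* ... access the VF LMEM backing object ... */

		/* drop the reference taken by the helper */
		xe_bo_put(bo);
	}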
> +struct xe_bo *xe_gt_sriov_pf_config_get_lmem_obj(struct xe_gt *gt, unsigned int vfid)
> +{
> + struct xe_bo *lmem_obj;
> +
> + xe_gt_assert(gt, vfid);
> +
> + mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
> + lmem_obj = pf_get_vf_config_lmem_obj(gt, vfid);
> + xe_bo_get(lmem_obj);
> + mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
> +
> + return lmem_obj;
or just
{
guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
return xe_bo_get(pf_get_vf_config_lmem_obj(gt, vfid));
}
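(the one-liner above assumes xe_bo_get() returns the bo it was given so the
call can be chained; if it doesn't, a guard()-based variant that keeps the
assert would still let you drop the explicit unlock, roughly:)

{
	struct xe_bo *lmem_obj;

	xe_gt_assert(gt, vfid);

	/* scoped lock from <linux/cleanup.h>, released automatically on return */
	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));

	lmem_obj = pf_get_vf_config_lmem_obj(gt, vfid);
	xe_bo_get(lmem_obj);
	return lmem_obj;
}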
> +}
> +
> static u64 pf_query_free_lmem(struct xe_gt *gt)
> {
> struct xe_tile *tile = gt->tile;
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> index 6916b8f58ebf2..03c5dc0cd5fef 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> @@ -36,6 +36,7 @@ int xe_gt_sriov_pf_config_set_lmem(struct xe_gt *gt, unsigned int vfid, u64 size
> int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs);
> int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs,
> u64 size);
> +struct xe_bo *xe_gt_sriov_pf_config_get_lmem_obj(struct xe_gt *gt, unsigned int vfid);
>
> u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid);
> int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid, u32 exec_quantum);
Probably we should block VF reprovisioning during SAVE/RESTORE,
but that could be done later as a follow-up.
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@...el.com>