Message-ID: <d37d6ca4-9b95-44ab-9147-5c0dff4bedc9@gmail.com>
Date: Wed, 3 Dec 2025 17:33:26 -0800
From: Angela <angelagbtt1@...il.com>
To: Michał Winiarski <michal.winiarski@...el.com>,
Alex Williamson <alex@...zbot.org>,
Lucas De Marchi <lucas.demarchi@...el.com>,
Thomas Hellström <thomas.hellstrom@...ux.intel.com>,
Rodrigo Vivi <rodrigo.vivi@...el.com>, Jason Gunthorpe <jgg@...pe.ca>,
Yishai Hadas <yishaih@...dia.com>, Kevin Tian <kevin.tian@...el.com>,
Shameer Kolothum <skolothumtho@...dia.com>, intel-xe@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Matthew Brost <matthew.brost@...el.com>,
Michal Wajdeczko <michal.wajdeczko@...el.com>
Cc: dri-devel@...ts.freedesktop.org, Jani Nikula
<jani.nikula@...ux.intel.com>,
Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
Tvrtko Ursulin <tursulin@...ulin.net>, David Airlie <airlied@...il.com>,
Simona Vetter <simona@...ll.ch>, Lukasz Laguna <lukasz.laguna@...el.com>,
Christoph Hellwig <hch@...radead.org>
Subject: Re: [PATCH v7 4/4] vfio/xe: Add device specific vfio_pci driver
variant for Intel graphics
On 11/27/25 01:39, Michał Winiarski wrote:
[snip]
> +static void xe_vfio_pci_reset_done(struct pci_dev *pdev)
> +{
> + struct xe_vfio_pci_core_device *xe_vdev = pci_get_drvdata(pdev);
> + int ret;
> +
> + if (!pdev->is_virtfn)
> + return;
> +
> + /*
> + * VF FLR requires additional processing done by PF driver.
> + * The processing is done after FLR is already finished from PCIe
> + * perspective.
> + * In order to avoid a scenario where VF is used while PF processing
> + * is still in progress, additional synchronization point is needed.
> + */
> + ret = xe_sriov_vfio_wait_flr_done(xe_vdev->xe, xe_vdev->vfid);
> + if (ret)
> + dev_err(&pdev->dev, "Failed to wait for FLR: %d\n", ret);
> +
> + if (!xe_vdev->vfid)
> + return;
> +
> + /*
> + * As the higher VFIO layers are holding locks across reset and using
> + * those same locks with the mm_lock we need to prevent ABBA deadlock
> + * with the state_mutex and mm_lock.
> + * In case the state_mutex was taken already we defer the cleanup work
> + * to the unlock flow of the other running context.
> + */
> + spin_lock(&xe_vdev->reset_lock);
> + xe_vdev->deferred_reset = true;
> + if (!mutex_trylock(&xe_vdev->state_mutex)) {
> + spin_unlock(&xe_vdev->reset_lock);
> + return;
> + }
> + spin_unlock(&xe_vdev->reset_lock);
> + xe_vfio_pci_state_mutex_unlock(xe_vdev);
> +
> + xe_vfio_pci_reset(xe_vdev);
> +}
[snip]
My first KVM review :)
I think xe_vfio_pci_reset(xe_vdev) needs to be protected by the
state_mutex. So we should move xe_vfio_pci_state_mutex_unlock(xe_vdev)
to after xe_vfio_pci_reset(xe_vdev). Thoughts?
Thanks,
Angela