Message-ID: <20240324223455.1342824-663-sashal@kernel.org>
Date: Sun, 24 Mar 2024 18:34:01 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Cc: José Roberto de Souza <jose.souza@...el.com>,
Thomas Hellstrom <thomas.hellstrom@...ux.intel.com>,
Matthew Brost <matthew.brost@...el.com>,
Lucas De Marchi <lucas.demarchi@...el.com>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 6.8 662/715] drm/xe: Skip VMAs pin when requesting signal to the last XE_EXEC
From: José Roberto de Souza <jose.souza@...el.com>
[ Upstream commit dd8a07f06dfd946e0eea1a3323d52e7c28a6ed80 ]

An XE_EXEC with num_batch_buffer == 0 causes the syncs passed as
arguments to be signaled when the last real XE_EXEC completes. But to
do that it was first pinning all VMAs in drm_gpuvm_exec_lock(); this
patch removes that pinning, as it is not required.

This change also helps Mesa implement memory over-commit recovery: it
needs to unbind VMAs that are no longer needed when the whole VM can't
fit in GPU memory, but it can only do the unbinding once the last
XE_EXEC has completed.

So with this change Mesa can get the signal it wants without hitting
out-of-memory errors.
Fixes: eb9702ad2986 ("drm/xe: Allow num_batch_buffer / num_binds == 0 in IOCTLs")
Cc: Thomas Hellstrom <thomas.hellstrom@...ux.intel.com>
Co-developed-by: Matthew Brost <matthew.brost@...el.com>
Signed-off-by: José Roberto de Souza <jose.souza@...el.com>
Reviewed-by: Matthew Brost <matthew.brost@...el.com>
Signed-off-by: Matthew Brost <matthew.brost@...el.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240313171318.121066-1-jose.souza@intel.com
(cherry picked from commit 58480c1c912ff8146d067301a0d04cca318b4a66)
Signed-off-by: Lucas De Marchi <lucas.demarchi@...el.com>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
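For reference, the "signal-only" exec described in the commit message looks
roughly like this from userspace. This is a minimal sketch, not Mesa's actual
code: the struct, flag, and ioctl names follow include/uapi/drm/xe_drm.h as of
v6.8 and should be checked against the header you build with, and drm_fd,
exec_queue_id, and syncobj_handle are assumed to come from earlier
device-open, exec-queue-create, and syncobj-create calls.

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/xe_drm.h>

/*
 * Submit no work; just ask for syncobj_handle to be signaled once the
 * last real exec on this queue completes. With the patch below, this
 * path no longer pins any VMAs.
 */
static int xe_exec_signal_last(int drm_fd, uint32_t exec_queue_id,
			       uint32_t syncobj_handle)
{
	struct drm_xe_sync sync = {
		.type = DRM_XE_SYNC_TYPE_SYNCOBJ,
		.flags = DRM_XE_SYNC_FLAG_SIGNAL,	/* signal, don't wait */
		.handle = syncobj_handle,
	};
	struct drm_xe_exec exec = {
		.exec_queue_id = exec_queue_id,
		.num_syncs = 1,
		.syncs = (uintptr_t)&sync,
		.num_batch_buffer = 0,	/* no new work, signal on last exec */
	};

	return ioctl(drm_fd, DRM_IOCTL_XE_EXEC, &exec);
}

A real client would typically go through libdrm's drmIoctl() rather than a
raw ioctl() so the call is retried on EINTR/EAGAIN.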
drivers/gpu/drm/xe/xe_exec.c | 41 ++++++++++++++++++++----------------
1 file changed, 23 insertions(+), 18 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
index 17f26952e6656..222209b0d6904 100644
--- a/drivers/gpu/drm/xe/xe_exec.c
+++ b/drivers/gpu/drm/xe/xe_exec.c
@@ -196,6 +196,29 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		goto err_unlock_list;
 	}
 
+	if (!args->num_batch_buffer) {
+		err = xe_vm_lock(vm, true);
+		if (err)
+			goto err_unlock_list;
+
+		if (!xe_vm_in_lr_mode(vm)) {
+			struct dma_fence *fence;
+
+			fence = xe_sync_in_fence_get(syncs, num_syncs, q, vm);
+			if (IS_ERR(fence)) {
+				err = PTR_ERR(fence);
+				goto err_unlock_list;
+			}
+			for (i = 0; i < num_syncs; i++)
+				xe_sync_entry_signal(&syncs[i], NULL, fence);
+			xe_exec_queue_last_fence_set(q, vm, fence);
+			dma_fence_put(fence);
+		}
+
+		xe_vm_unlock(vm);
+		goto err_unlock_list;
+	}
+
 	vm_exec.vm = &vm->gpuvm;
 	vm_exec.num_fences = 1 + vm->xe->info.tile_count;
 	vm_exec.flags = DRM_EXEC_INTERRUPTIBLE_WAIT;
@@ -216,24 +239,6 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		goto err_exec;
 	}
 
-	if (!args->num_batch_buffer) {
-		if (!xe_vm_in_lr_mode(vm)) {
-			struct dma_fence *fence;
-
-			fence = xe_sync_in_fence_get(syncs, num_syncs, q, vm);
-			if (IS_ERR(fence)) {
-				err = PTR_ERR(fence);
-				goto err_exec;
-			}
-			for (i = 0; i < num_syncs; i++)
-				xe_sync_entry_signal(&syncs[i], NULL, fence);
-			xe_exec_queue_last_fence_set(q, vm, fence);
-			dma_fence_put(fence);
-		}
-
-		goto err_exec;
-	}
-
 	if (xe_exec_queue_is_lr(q) && xe_exec_queue_ring_full(q)) {
 		err = -EWOULDBLOCK;	/* Aliased to -EAGAIN */
 		skip_retry = true;
--
2.43.0