Message-ID: <20241127114638.11216-1-lamikr@gmail.com>
Date: Wed, 27 Nov 2024 03:46:37 -0800
From: Mika Laitio <lamikr@...il.com>
To: christian.koenig@....com,
	Xinhui.Pan@....com,
	airlied@...il.com,
	simona@...ll.ch,
	Hawking.Zhang@....com,
	sunil.khatri@....com,
	lijo.lazar@....com,
	kevinyang.wang@....com,
	amd-gfx@...ts.freedesktop.org,
	dri-devel@...ts.freedesktop.org,
	linux-kernel@...r.kernel.org,
	lamikr@...il.com
Subject: [PATCH 0/1] amdgpu fix for gfx1103 queue evict/restore crash v2

This is the corrected v2 version of the patch that was sent earlier.
Changes in v2:
- added a cover letter
- switched to "goto out_unlock" instead of "goto out" in the
  restore_process_queues_cpsch method after the mutex has been acquired
- fixed a typo in the patch subject line and improved the patch description

The patch fixes the queue evict/restore problem on the AMD
gfx1103 iGPU. The problem has not been seen on the following other AMD GPUs tested:
- gfx1010 (RX 5700)
- gfx1030 (RX 6800)
- gfx1035 (680M iGPU)
- gfx1102 (RX 7700S)

Of these devices, the gfx1102 uses the same code path as the gfx1103 and
calls the evict and restore queue methods, which then call the MES
firmware.

The fix removes the evict/restore calls to MES when the device is an iGPU.
Added queues are still removed normally when the program closes.
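
For illustration only, the standalone C sketch below models the control flow
the fix introduces: the evict and restore paths skip the MES firmware round
trip when the device is an iGPU, while dGPUs keep the existing behavior. All
struct and function names in the sketch are hypothetical stand-ins, not the
actual symbols touched in kfd_device_queue_manager.c.

/*
 * Illustrative sketch only -- not the literal patch.  The struct and
 * function names are hypothetical stand-ins for the corresponding
 * kfd_device_queue_manager.c logic.
 */
#include <stdbool.h>
#include <stdio.h>

struct sketch_queue {
        int id;
        bool igpu;      /* device is an integrated GPU (APU) */
};

/* Stand-ins for the MES firmware round trips. */
static int mes_remove_queue(struct sketch_queue *q)
{
        printf("MES: remove queue %d\n", q->id);
        return 0;
}

static int mes_add_queue(struct sketch_queue *q)
{
        printf("MES: add queue %d\n", q->id);
        return 0;
}

/* Evict path: skip the MES call entirely on iGPUs, as the fix does. */
static int evict_queue(struct sketch_queue *q)
{
        if (q->igpu)
                return 0;
        return mes_remove_queue(q);
}

/* Restore path mirrors the evict path. */
static int restore_queue(struct sketch_queue *q)
{
        if (q->igpu)
                return 0;
        return mes_add_queue(q);
}

int main(void)
{
        struct sketch_queue igpu_q = { .id = 1, .igpu = true };
        struct sketch_queue dgpu_q = { .id = 2, .igpu = false };

        evict_queue(&igpu_q);   /* no MES traffic on the iGPU */
        restore_queue(&igpu_q);
        evict_queue(&dgpu_q);   /* dGPUs still go through MES */
        restore_queue(&dgpu_q);
        return 0;
}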

An easy way to trigger the problem is to build the ML/AI support for the
gfx1103 780M iGPU with the rocm sdk builder and then run the test
application in a loop.

Most of the testing has been done on the 6.13 devel and 6.12 final kernels,
but the same problem can also be triggered at least on the 6.8
and 6.11 kernels.

Adding delays either to the test application between calls
(tested with 1 second) or to the kernel loop which removes the
queues (tested with mdelay(10)) did not help to avoid the crash.

After applying the kernel fix, I and others have executed
the test loop thousands of times without seeing the error happen again.

On multi-GPU systems, the correct gfx1103 device needs to be forced into use
by exporting the environment variable HIP_VISIBLE_DEVICES=<gpu-index>.

The original bug report and test case were made by jrl290 in rocm sdk builder
issue 141. The test app below triggers the problem.

import torch
import numpy as np
from onnx import load
from onnx2pytorch import ConvertModel
import time

if __name__ == "__main__":
    ii = 0
    while True:
        ii = ii + 1
        print("Loop Start")
        model_path = "model.onnx"
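        # PyTorch's ROCm build exposes AMD GPUs through the 'cuda' device string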
        device = 'cuda'
        model_run = ConvertModel(load(model_path))
        model_run.to(device).eval()

        #This code causes the crash. Comment out to remove the crash
        random = np.random.rand(1, 4, 3072, 256)
        tensor = torch.tensor(random, dtype=torch.float32, device=device)

        #This code doesn't cause a crash
        tensor = torch.randn(1, 4, 3072, 256, dtype=torch.float32, device=device)

        print("[" + str(ii) + "], the crash happens here:")
        time.sleep(0.5)
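        # numpy(force=True) copies the result tensor back to host (CPU) memory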
        result = model_run(tensor).numpy(force=True)
        print(result.shape)
Mika Laitio (1):
  amdgpu fix for gfx1103 queue evict/restore crash

 .../drm/amd/amdkfd/kfd_device_queue_manager.c | 24 ++++++++++++-------
 1 file changed, 16 insertions(+), 8 deletions(-)

-- 
2.43.0

