Message-ID: <20250904002924.2bc63b73@minigeek.lan>
Date: Thu, 4 Sep 2025 00:29:24 +0100
From: Andre Przywara <andre.przywara@....com>
To: Lucas Stach <l.stach@...gutronix.de>, Russell King
<linux+etnaviv@...linux.org.uk>
Cc: etnaviv@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, Chen-Yu Tsai <wens@...e.org>, linux-sunxi
<linux-sunxi@...ts.linux.dev>
Subject: drm/etnaviv: detecting disabled Vivante GPU?

Hi,

The Allwinner A523/A527/T527 family of SoCs features a Vivante
"VIP9000"(?) NPU, though it seems to be disabled on many SKUs.
See https://linux-sunxi.org/A523#Family_of_sun55iw3 for a table; the
row labelled "NPU" indicates which models have the IP. We suspect it's
all the same die, with the NPU selectively fused off in some packages.
Board vendors seem to use multiple SKUs of the SoC on the same board,
so it's hard to say whether a particular board has the NPU or not. We
figured that on unsupported SoCs all the NPU registers read as 0,
though, so we were wondering whether that could serve as a bail-out
check for the driver.

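For illustration, below is roughly what I have in mind, as an early
bail-out in etnaviv_gpu_init(), once the clocks are up (untested
sketch; register and helper names taken from my reading of
etnaviv_gpu.c, and whether 0 really is an invalid ID is exactly the
first question below):

	/*
	 * Rough sketch, untested: with the NPU fused off, the whole
	 * register block seems to be RAZ/WI, so the ID registers read
	 * as 0. Assumption: no real Vivante core returns 0 here.
	 */
	if (gpu_read(gpu, VIVS_HI_CHIP_IDENTITY) == 0 &&
	    gpu_read(gpu, VIVS_HI_CHIP_MODEL) == 0) {
		dev_info(gpu->dev, "GPU/NPU appears to be fused off\n");
		return -ENODEV;
	}
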
At the moment I get this, on a SoC with a disabled NPU:
[ 1.677612] etnaviv etnaviv: bound 7122000.npu (ops gpu_ops)
[ 1.683849] etnaviv-gpu 7122000.npu: model: GC0, revision: 0
[ 1.690020] etnaviv-gpu 7122000.npu: Unknown GPU model
[ 1.696145] [drm] Initialized etnaviv 1.4.0 for etnaviv on minor 0
[ 1.953053] etnaviv-gpu 7122000.npu: GPU not yet idle, mask: 0x00000000

Chen-Yu got this on his board featuring the NPU:
etnaviv-gpu 7122000.npu: model: GC9000, revision: 9003

If I read the code correctly, etnaviv_gpu_init() correctly detects
the "unsupported" GPU model and returns -ENXIO, but load_gpu() in
etnaviv_drv.c effectively ignores this: it keeps looking for more
GPUs, and never notices that *none* showed up.
/sys/kernel/debug/dri/etnaviv/gpu is empty in my case.

Quick questions:
- Is reading 0 from VIVS_HI_CHIP_IDENTITY (or any of the other ID
  registers) an invalid ID, so we can use that to detect those
  disabled NPUs? If not, could any other register be used to check
  this? The whole block seems to be RAZ/WI when the NPU is disabled.
- Would it be acceptable to change the logic to error out of the
  driver's init or probe routine when no GPU/NPU at all has been
  found, ideally with a proper error message (see the sketch below)?
  As it stands, the driver loads, but of course nothing is usable,
  which keeps confusing users.
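For the second point, this is a rough, untested sketch of what I mean,
based on my reading of load_gpu() in etnaviv_drv.c (its caller,
etnaviv_bind(), would have to start checking the return value, too):

	/*
	 * Rough sketch, untested: let load_gpu() report whether any
	 * core actually came up, instead of silently dropping them all.
	 */
	static int load_gpu(struct drm_device *dev)
	{
		struct etnaviv_drm_private *priv = dev->dev_private;
		unsigned int i, num_gpus = 0;

		for (i = 0; i < ETNA_MAX_PIPES; i++) {
			struct etnaviv_gpu *g = priv->gpu[i];

			if (g) {
				if (etnaviv_gpu_init(g))
					priv->gpu[i] = NULL;	/* failed, drop it */
				else
					num_gpus++;
			}
		}

		/* Fail the bind if not a single core is usable. */
		if (!num_gpus) {
			dev_err(dev->dev, "no usable GPU/NPU core found\n");
			return -ENODEV;
		}

		return 0;
	}
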
Happy to provide a patch, but just wanted to test the waters.

Cheers,
Andre