Message-ID: <02377290-cb5f-48ca-afe3-0e59b70a43de@linux.intel.com>
Date: Wed, 15 Nov 2023 13:31:46 +0000
From: Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>
To: "Teres Alexis, Alan Previn" <alan.previn.teres.alexis@...el.com>,
"ville.syrjala@...ux.intel.com" <ville.syrjala@...ux.intel.com>,
"Winkler, Tomas" <tomas.winkler@...el.com>
Cc: "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"intel-gfx@...ts.freedesktop.org" <intel-gfx@...ts.freedesktop.org>,
"Usyskin, Alexander" <alexander.usyskin@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Lubart, Vitaly" <vitaly.lubart@...el.com>
Subject: Re: [Intel-gfx] [char-misc-next 3/4] mei: pxp: re-enable client on
errors
On 14/11/2023 15:31, Teres Alexis, Alan Previn wrote:
> On Tue, 2023-11-14 at 16:00 +0200, Ville Syrjälä wrote:
>> On Wed, Oct 11, 2023 at 02:01:56PM +0300, Tomas Winkler wrote:
>>> From: Alexander Usyskin <alexander.usyskin@...el.com>
>>>
>>> Disable and enable mei-pxp client on errors to clean the internal state.
>>
>> This broke i915 on my Alderlake-P laptop.
>>
>
>
> Hi Alex, I just re-looked at the series that got merged, and I noticed
> that in patch #3 of the series you had changed mei_pxp_send_message
> to return the bytes sent instead of zero on success. IIRC, we had
> agreed not to affect the behavior of this component interface (other
> than adding the timeout) - this was the intention of patch #4 that I
> was pushing for in order to spec the interface (which continues
> to say zero on success). We should fix this to stay with the original
> behavior: mei-pxp should NOT send partial packets and should return
> zero only in the success case, where success means the complete packet
> has been sent - so we don't need to get the "bytes sent" back from
> mei_pxp_send_message. So I think this might be causing the problem.
>
>
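To spell out the contract described above, here is a minimal sketch
(not the actual drivers/misc/mei/pxp code; transport_send() is a
made-up stand-in for the underlying mei_cldev_* send call):

static int pxp_send_message_sketch(struct device *dev,
                                   const void *message, size_t size)
{
        ssize_t sent;

        /* The transport either sends the whole packet or fails. */
        sent = transport_send(dev, message, size);
        if (sent < 0)
                return sent;            /* propagate the errno */
        if ((size_t)sent != size)
                return -EPROTO;         /* no partial packets allowed */

        return 0;                       /* callers never see a byte count */
}
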
> Side note to Ville: are you enabling the PXP kernel config by default in
> all Mesa contexts? I recall that the Mesa folks were running some CI
> testing with PXP-enabled contexts, but I didn't realize this was being
> enabled by default in all contexts. Please be aware that enabling PXP
> contexts temporarily disables runtime-pm for the lifetime of that
> context, and that a PXP context is forced to be irrecoverable if it
> ever hangs. The former is a hardware architecture requirement but makes
> no practical difference if display is enabled (which I believe also
> blocks runtime-pm on ADL). The latter was a requirement to comply with
> Vulkan.
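
For anyone following along, the "irrecoverable" part comes from the
context creation uapi itself: a protected context has to be created
with I915_CONTEXT_PARAM_PROTECTED_CONTENT set and
I915_CONTEXT_PARAM_RECOVERABLE cleared, roughly as in the sketch below
(simplified from the i915_drm.h kerneldoc example, error handling
omitted):

struct drm_i915_gem_context_create_ext_setparam p_protected = {
        .base = { .name = I915_CONTEXT_CREATE_EXT_SETPARAM },
        .param = {
                .param = I915_CONTEXT_PARAM_PROTECTED_CONTENT,
                .value = 1,
        },
};
struct drm_i915_gem_context_create_ext_setparam p_norecover = {
        .base = {
                .name = I915_CONTEXT_CREATE_EXT_SETPARAM,
                .next_extension = (uintptr_t)&p_protected,
        },
        .param = {
                .param = I915_CONTEXT_PARAM_RECOVERABLE,
                .value = 0,
        },
};
struct drm_i915_gem_context_create_ext create = {
        .flags = I915_CONTEXT_CREATE_FLAGS_USE_EXTENSIONS,
        .extensions = (uintptr_t)&p_norecover,
};

/* ioctl(drm_fd, DRM_IOCTL_I915_GEM_CONTEXT_CREATE_EXT, &create); */
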
Regardless of mei_pxp_send_message being temporarily broken, don't
Ville's logs suggest the PXP detection is altogether messed up? AFAIR
the plan was precisely to avoid stalls during Mesa init, and new uapi
was added to achieve that. But it doesn't seem to be working?!
commit 3b918f4f0c8b5344af4058f1a12e2023363d0097
Author: Alan Previn <alan.previn.teres.alexis@...el.com>
Date: Wed Aug 2 11:25:50 2023 -0700
drm/i915/pxp: Optimize GET_PARAM:PXP_STATUS
After recent discussions with Mesa folks, it was requested
that we optimize i915's GET_PARAM for the PXP_STATUS without
changing the UAPI spec.
Add these additional optimizations:
- If any PXP initialization flow failed, then ensure that
we catch it so that we can change the returned PXP_STATUS
from "2" (i.e. 'PXP is supported but not yet ready')
to "-ENODEV". This typically should not happen and if it
does, we have a platform configuration issue.
- If a PXP arbitration session creation event failed
due to incorrect firmware version or blocking SOC fusing
or blocking BIOS configuration (platform reasons that won't
change if we retry), then reflect that blockage by also
returning -ENODEV in the GET_PARAM:PXP_STATUS.
- GET_PARAM:PXP_STATUS should not wait at all if PXP is
supported but we are still waiting for non-i915 dependencies
(component-driver / firmware) to complete the init flows.
In this case, just return "2" immediately (i.e. 'PXP is
supported but not yet ready').
AFAIU, if things failed there shouldn't be long waits, and
repeated/constant ones even less so.
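
For reference, the userspace side of that query is just a getparam,
along the lines of this sketch (assuming the standard
I915_PARAM_PXP_STATUS uapi; error handling trimmed):

#include <errno.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static int query_pxp_status(int drm_fd)
{
        int value = 0;
        struct drm_i915_getparam gp = {
                .param = I915_PARAM_PXP_STATUS,
                .value = &value,
        };

        /* A failed ioctl (e.g. -ENODEV) means PXP is not usable here. */
        if (ioctl(drm_fd, DRM_IOCTL_I915_GETPARAM, &gp))
                return -errno;

        /* 1 = ready, 2 = supported but init still pending. */
        return value;
}
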
Regards,
Tvrtko