Message-ID: <3e6075a6-20a9-42ee-8f10-377ba9b0291b@ti.com>
Date: Tue, 20 Aug 2024 15:00:31 +0530
From: Beleswar Prasad Padhi <b-padhi@...com>
To: Jan Kiszka <jan.kiszka@...mens.com>, Bjorn Andersson <andersson@...nel.org>,
	Mathieu Poirier <mathieu.poirier@...aro.org>, <linux-remoteproc@...r.kernel.org>
CC: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>, Apurva Nandan <a-nandan@...com>,
	"stable@...r.kernel.org" <stable@...r.kernel.org>, Nishanth Menon <nm@...com>
Subject: Re: [PATCH] remoteproc: k3-r5: Fix driver shutdown
Hi Jan,
On 19-08-2024 22:17, Jan Kiszka wrote:
> From: Jan Kiszka <jan.kiszka@...mens.com>
>
> When k3_r5_cluster_rproc_exit is run, core 1 is shut down and removed
> first. When core 0 is then stopped before its removal, it finds
> core1->rproc already NULL and crashes. This happens e.g. on rmmod.
Did you check this on top of the -next-20240820 tag? There was a
series[0] merged recently that fixed this condition. I don't see this
issue when trying on top of the -next-20240820 tag.
[0]: https://lore.kernel.org/all/20240808074127.2688131-1-b-padhi@ti.com/
>
> Fixes: 3c8a9066d584 ("remoteproc: k3-r5: Do not allow core1 to power up before core0 via sysfs")
> CC: stable@...r.kernel.org
> Signed-off-by: Jan Kiszka <jan.kiszka@...mens.com>
> ---
>
> There might be one more issue, because I can still make this driver
> crash after an operator error. Were error scenarios tested at all?
Can you point out more specifically what this issue is, and I can take
it up then.
>
> drivers/remoteproc/ti_k3_r5_remoteproc.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c
> index eb09d2e9b32a..9ebd7a34e638 100644
> --- a/drivers/remoteproc/ti_k3_r5_remoteproc.c
> +++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c
> @@ -646,7 +646,8 @@ static int k3_r5_rproc_stop(struct rproc *rproc)
> /* do not allow core 0 to stop before core 1 */
> core1 = list_last_entry(&cluster->cores, struct k3_r5_core,
> elem);
> - if (core != core1 && core1->rproc->state != RPROC_OFFLINE) {
> + if (core != core1 && core1->rproc &&
> + core1->rproc->state != RPROC_OFFLINE) {
> dev_err(dev, "%s: can not stop core 0 before core 1\n",
> __func__);
> ret = -EPERM;