Message-ID: <20230227102808.2cea9705@gandalf.local.home>
Date: Mon, 27 Feb 2023 10:28:08 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Schspa Shi <schspa@...il.com>, linux-kernel@...r.kernel.org,
cocci@...ia.fr, mcgrof@...nel.org,
Julia Lawall <Julia.Lawall@...ia.fr>,
Nicolas Palix <nicolas.palix@...g.fr>,
Matthias Brugger <matthias.bgg@...il.com>,
AngeloGioacchino Del Regno
<angelogioacchino.delregno@...labora.com>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
buytenh@...tstofly.org, johannes.berg@...el.com,
gregkh@...uxfoundation.org, tomba@...nel.org, airlied@...il.com,
daniel@...ll.ch
Subject: Re: [RFC PATCH] cocci: cpi: add complete api check script
On Mon, 27 Feb 2023 12:20:28 +0100
Peter Zijlstra <peterz@...radead.org> wrote:
> On Mon, Feb 27, 2023 at 03:53:47PM +0800, Schspa Shi wrote:
> > When DECLARE_COMPLETION_ONSTACK is used, the user must ensure that no
> > other process still references the completion variable on the stack. For
> > a killable/interruptible version, we need extra code (add locks / use
> > xchg) to ensure this.
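
(For anyone following along, here is a minimal sketch of the pattern being
described; hypothetical driver, all names made up:)

        /* assume: struct foo_dev { struct completion *done; int status; }; */
        int foo_do_request(struct foo_dev *dev)
        {
                DECLARE_COMPLETION_ONSTACK(done);

                dev->done = &done;      /* published to the irq handler */
                foo_submit_to_hw(dev);

                if (wait_for_completion_killable(&done)) {
                        /*
                         * BUG: dev->done still points at this stack frame; a
                         * late complete() from the irq handler scribbles on
                         * memory that no longer holds the completion.
                         */
                        return -ERESTARTSYS;
                }

                return dev->status;
        }
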
> >
> > This patch provides a SmPL script to detect bad
> > DECLARE_COMPLETION_ONSTACK(_MAP) API usage, but it is far from perfect.
>
> Documentation/process/submitting-patches.rst:instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy
>
> But also, wth is SmPL? The actual thing included is a coccinelle script.
>
> > This is a common problem, and a lot of drivers have a similar problem.
> > The following is a list of problems found by this SmPL patch; due to the
> > complex use of the wait_for_complete* API, there will still be some false
> > negatives and false positives. This RFC patch is mainly meant to discuss
> > ways to improve it. If we introduce a wait_for_complete*_onstack API, it
> > will be easier to fix these problems, and the SmPL patch rules will be
> > very simple. Having tried to write these SmPL scripts, I strongly
> > recommend introducing two onstack APIs to handle this.
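
Presumably the thing such an *_onstack API would have to wrap is the usual
xchg ownership hand-off; a rough sketch of what I assume is meant (again
hypothetical, no such helper exists in mainline today):

        /* completer side (e.g. irq handler); dev->done as in the sketch above */
        void foo_irq(struct foo_dev *dev)
        {
                struct completion *done = xchg(&dev->done, NULL);

                if (done)
                        complete(done);
        }

        /* waiter side */
        int foo_wait_killable_onstack(struct foo_dev *dev, struct completion *done)
        {
                int ret = wait_for_completion_killable(done);

                if (!ret)
                        return 0;

                /* Interrupted: try to take the pointer back before leaving
                 * this stack frame. */
                if (xchg(&dev->done, NULL))
                        return ret;     /* completer never saw it, safe to bail */

                /*
                 * The completer already claimed the pointer and will call (or
                 * has already called) complete(); ride that out so it cannot
                 * touch this stack frame after we return.
                 */
                wait_for_completion(done);
                return ret;
        }
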
> >
> > file:/Users/schspa/work/src/linux/drivers/infiniband/ulp/srpt/ib_srpt.c::2962 was suspected to return a variable on stack
>
> What's with this retarded file path? Are you running on Windows or
> something daft like that?
>
> Please, make them relative to srctree.
Also, you want to state what sha1 or tag you ran this on. The one file I
looked at didn't match line numbers.
I looked at:
drivers/ufs/core/ufshcd.c
Which has (what I'm assuming is the issue that was detected?)
        /* wait until the task management command is completed */
        err = wait_for_completion_io_timeout(&wait,
                        msecs_to_jiffies(TM_CMD_TIMEOUT));
        if (!err) {
                ufshcd_add_tm_upiu_trace(hba, task_tag, UFS_TM_ERR);
                dev_err(hba->dev, "%s: task management cmd 0x%.2x timed-out\n",
                        __func__, tm_function);
                if (ufshcd_clear_tm_cmd(hba, task_tag))
                        dev_WARN(hba->dev, "%s: unable to clear tm cmd (slot %d) after timeout\n",
                                 __func__, task_tag);
                err = -ETIMEDOUT;
        } else {
                err = 0;
                memcpy(treq, hba->utmrdl_base_addr + task_tag, sizeof(*treq));
                ufshcd_add_tm_upiu_trace(hba, task_tag, UFS_TM_COMP);
        }

        spin_lock_irqsave(hba->host->host_lock, flags);
        hba->tmf_rqs[req->tag] = NULL;
        __clear_bit(task_tag, &hba->outstanding_tasks);
        spin_unlock_irqrestore(hba->host->host_lock, flags);
IIUC, the above spin lock protection will prevent the use after free of the
completion. The completion still exists between the timed-out wait and the
start of the spinlock, and the spinlock will keep the complete() from being
visible if that were to run after the spinlock.
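
That is, the irq side only looks up and completes the tmf request while
holding the same host_lock, something along these lines (paraphrasing from
memory, so treat the details as an assumption rather than the exact ufshcd
code):

        spin_lock_irqsave(hba->host->host_lock, flags);
        for_each_set_bit(tag, &pending, hba->nutmrs) {
                struct request *req = hba->tmf_rqs[tag];

                /* Cleared by the waiter above, under the same lock */
                if (req)
                        complete(req->end_io_data);
        }
        spin_unlock_irqrestore(hba->host->host_lock, flags);

Once the waiter has cleared tmf_rqs[] and outstanding_tasks under host_lock,
a late complete() can no longer reach the on-stack completion.
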
So what exact race are you trying to catch here?
-- Steve