Message-ID: <ZhalcBcd3w0w2HD_@localhost.localdomain>
Date: Wed, 10 Apr 2024 16:42:56 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
Adrian Hunter <adrian.hunter@...el.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Ian Rogers <irogers@...gle.com>, Ingo Molnar <mingo@...hat.com>,
Jiri Olsa <jolsa@...nel.org>, Marco Elver <elver@...gle.com>,
Mark Rutland <mark.rutland@....com>,
Namhyung Kim <namhyung@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Arnaldo Carvalho de Melo <acme@...hat.com>
Subject: Re: [PATCH v3 2/4] perf: Enqueue SIGTRAP always via task_work.

On Wed, Apr 10, 2024 at 04:06:33PM +0200, Sebastian Andrzej Siewior wrote:
> On 2024-04-10 16:00:17 [+0200], Frederic Weisbecker wrote:
> > On Wed, Apr 10, 2024 at 03:47:02PM +0200, Sebastian Andrzej Siewior wrote:
> > > On 2024-04-10 13:37:05 [+0200], Frederic Weisbecker wrote:
> > > > > Couldn't we either flush _or_ remove the task_work in perf_release()?
> > > >
> > > > Right, so the problem in perf_release() is that we may be dealing
> > > > with task works of tasks other than current. In that case,
> > > > task_work_cancel() is fine if it succeeds. But if it fails, you have
> > > > no guarantee that the task work isn't concurrently running or about
> > > > to run, and no way to know. So then you need some sort of flushing
> > > > indeed.
> > >
> > > Since perf_release() is preemptible, a wait/sleep for completion would
> > > be best (instead of flushing).
> >
> > Like this then?
> >
> > https://lore.kernel.org/all/202403310406.TPrIela8-lkp@intel.com/T/#m63c28147d8ac06b21c64d7784d49f892e06c0e50
>
> Kind of, yes. Do we have more than one waiter? If not, maybe that
> rcuwait would work then.

Indeed there is only one waiter, so that should work. Would that be
something you can call while preemption is disabled?
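
Something like this is the shape I have in mind, as a rough sketch only
(not the linked patch; the pending_work_wait field and the helper name
are made up here, and the exact task_work_cancel() calling convention
may differ):

/*
 * Sketch: synchronize against a SIGTRAP task_work that may be queued
 * on another task before the event is freed.  pending_work_wait would
 * be a (hypothetical) struct rcuwait member of struct perf_event.
 */
static void perf_pending_task_sync(struct perf_event *event,
				   struct task_struct *task)
{
	/* Nothing was ever queued, nothing to wait for. */
	if (!event->pending_work)
		return;

	/*
	 * Try to steal the work back first.  If cancellation succeeds,
	 * the callback will never run and we are done.
	 */
	if (task_work_cancel(task, perf_pending_task)) {
		event->pending_work = 0;
		return;
	}

	/*
	 * Cancellation failed: the callback is running or about to run
	 * on @task.  There is a single waiter, so a bare rcuwait is
	 * enough.  This sleeps, so it needs preemptible context.
	 */
	rcuwait_wait_event(&event->pending_work_wait,
			   !event->pending_work, TASK_UNINTERRUPTIBLE);
}

The task_work callback would then clear ->pending_work and call
rcuwait_wake_up(&event->pending_work_wait) as its very last access to
the event.  If the preemption question is about the wake side:
rcuwait_wake_up() only takes the RCU read lock and does a
wake_up_process(), so that should be usable with preemption disabled;
it's the wait side that sleeps.
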
Thanks.
> Otherwise (>1 waiter) we did establish that we may need a per-task
> counter for recursion handling, so preempt-disable shouldn't be a
> problem then. The pending_work_wq must not be used outside of task
> context (meaning no hardirq or anything like that).
>
> Sebastian
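
As for the per-task recursion counter, I guess it would be something
along these lines (illustration only; the perf_task_recursion field in
task_struct is made up and none of this exists yet).  Once the handler
runs in preemptible task context, a per-CPU recursion count can't be
relied upon anymore, so the guard has to follow the task:

static bool perf_task_work_enter(void)
{
	/* Already inside the handler on this task: refuse to recurse. */
	if (current->perf_task_recursion)
		return false;
	current->perf_task_recursion = 1;
	barrier();
	return true;
}

static void perf_task_work_exit(void)
{
	barrier();
	current->perf_task_recursion = 0;
}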