Message-ID: <20201001135640.GA1748@lothringen>
Date: Thu, 1 Oct 2020 15:56:40 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: Alex Belits <abelits@...vell.com>
Cc: "rostedt@...dmis.org" <rostedt@...dmis.org>,
Prasun Kapoor <pkapoor@...vell.com>,
"mingo@...nel.org" <mingo@...nel.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"catalin.marinas@....com" <catalin.marinas@....com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"will@...nel.org" <will@...nel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH v4 03/13] task_isolation: userspace hard isolation from
kernel

On Wed, Jul 22, 2020 at 02:49:49PM +0000, Alex Belits wrote:
> +/*
> + * Description of the last two tasks that ran isolated on a given CPU.
> + * This is intended only for messages about isolation breaking. We
> + * don't want any references to the actual task while accessing this
> + * from the CPU that caused the isolation breaking -- we know nothing
> + * about the timing and don't want to use locking or RCU.
> + */
> +struct isol_task_desc {
> +	atomic_t curr_index;
> +	atomic_t curr_index_wr;
> +	bool warned[2];
> +	pid_t pid[2];
> +	pid_t tgid[2];
> +	char comm[2][TASK_COMM_LEN];
> +};
> +
> +static DEFINE_PER_CPU(struct isol_task_desc, isol_task_descs);
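
(For readers skimming the quoted code: below is a rough userspace-style
sketch of how a two-slot / two-index record like this is usually filled
and read without locking. The write protocol here is a guess for
illustration only, it is not taken from the patch.)

#include <stdatomic.h>
#include <string.h>

#define TASK_COMM_LEN 16

struct isol_task_desc_demo {
	atomic_int curr_index;     /* last slot that is safe to read */
	atomic_int curr_index_wr;  /* slot currently being written */
	int pid[2];
	char comm[2][TASK_COMM_LEN];
};

/* Writer: claim the other slot, fill it, then publish it for readers. */
static void record_isolated_task(struct isol_task_desc_demo *d,
				 int pid, const char *comm)
{
	int slot = (atomic_load(&d->curr_index_wr) + 1) & 1;

	atomic_store(&d->curr_index_wr, slot);
	d->pid[slot] = pid;
	strncpy(d->comm[slot], comm, TASK_COMM_LEN - 1);
	d->comm[slot][TASK_COMM_LEN - 1] = '\0';
	/* Default seq_cst store orders the writes above before publication. */
	atomic_store(&d->curr_index, slot);
}

/* Reader: only ever looks at the last published slot. */
static int last_isolated_pid(struct isol_task_desc_demo *d)
{
	return d->pid[atomic_load(&d->curr_index) & 1];
}

int main(void)
{
	static struct isol_task_desc_demo d;

	record_isolated_task(&d, 1234, "rt-task");
	return last_isolated_pid(&d) == 1234 ? 0 : 1;
}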

So that's quite a huge patch that would have needed to be split up,
especially this tracing engine.

Speaking of which, I agree with Thomas that it's unnecessary. It's too much
code and complexity. We can use the existing trace events and perform the
analysis from userspace to find the source of the disturbance.
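
Something along these lines is already enough on the userspace side, for
instance (the tracefs mount point and the two events below are just an
example, any event that signals a disturbance works the same way):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TRACEFS "/sys/kernel/tracing"

/* Enable one existing trace event via its tracefs control file. */
static void enable_event(const char *event)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), TRACEFS "/events/%s/enable", event);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		exit(1);
	}
	fputs("1\n", f);
	fclose(f);
}

int main(int argc, char **argv)
{
	/* CPU number as it appears in trace lines, e.g. "003" for CPU 3. */
	const char *cpu = argc > 1 ? argv[1] : "003";
	char tag[16], line[1024];
	FILE *pipe;

	enable_event("sched/sched_switch");
	enable_event("irq/irq_handler_entry");

	snprintf(tag, sizeof(tag), "[%s]", cpu);
	pipe = fopen(TRACEFS "/trace_pipe", "r");
	if (!pipe) {
		perror(TRACEFS "/trace_pipe");
		return 1;
	}
	/* Every line mentioning the isolated CPU is a potential disturbance. */
	while (fgets(line, sizeof(line), pipe))
		if (strstr(line, tag))
			fputs(line, stdout);

	fclose(pipe);
	return 0;
}

Run as root from a housekeeping CPU; whatever shows up on the isolated CPU
is the disturbance we would otherwise need the new reporting machinery for.
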
Thanks.