Message-ID: <20220108000308.GB1337751@lothringen>
Date: Sat, 8 Jan 2022 01:03:08 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Marcelo Tosatti <mtosatti@...hat.com>
Cc: Christoph Lameter <cl@...ux.com>, linux-kernel@...r.kernel.org,
Nitesh Lal <nilal@...hat.com>,
Nicolas Saenz Julienne <nsaenzju@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Alex Belits <abelits@...its.com>, Peter Xu <peterx@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>
Subject: Re: [patch v8 02/10] add prctl task isolation prctl docs and samples
On Fri, Jan 07, 2022 at 08:30:01AM -0300, Marcelo Tosatti wrote:
> On Fri, Jan 07, 2022 at 12:49:56AM +0100, Frederic Weisbecker wrote:
> > On Wed, Dec 08, 2021 at 01:09:08PM -0300, Marcelo Tosatti wrote:
> > > Add documentation and userspace sample code for prctl
> > > task isolation interface.
> > >
> > > Signed-off-by: Marcelo Tosatti <mtosatti@...hat.com>
> >
> > Acked-by: Frederic Weisbecker <frederic@...nel.org>
> >
> > Thanks a lot! Time for me to look at the rest of the series.
> >
> > Would be nice to have Thomas's opinion as well at least on
> > the interface (this patch).
>
> Yes. AFAIK most of his earlier comments on what the
> interface should look like have been addressed (or at
> least I've tried to address them)... including the ability
> for the system admin to configure the isolation options.
>
> The one thing missing is to attempt to enter nohz_full
> on activation (which Christoph asked for).
>
> Christoph, I have a question on that. At
> https://lkml.org/lkml/2021/12/14/346, you wrote:
>
> "Applications running would ideally have no performance penalty and there
> is no issue with kernel activity unless the application is in its special
> low latency loop. NOHZ is currently only activated after spinning in that
> loop for 2 seconds or so. Would be best to be able to trigger that
> manually somehow."
>
> So I was thinking of something similar to what the full task isolation
> patchset does (with the behavior of returning an error as an option...):
>
> +int try_stop_full_tick(void)
> +{
> + int cpu = smp_processor_id();
> + struct tick_sched *ts = this_cpu_ptr(&tick_cpu_sched);
> +
> + /* For an unstable clock, we should return a permanent error code. */
> + if (atomic_read(&tick_dep_mask) & TICK_DEP_MASK_CLOCK_UNSTABLE)
> + return -EINVAL;
> +
> + if (!can_stop_full_tick(cpu, ts))
> + return -EAGAIN;
> +
> + tick_nohz_stop_sched_tick(ts, cpu);
> + return 0;
> +}
>
> Is that sufficient? (Note that it might still be possible
> to fail to enter nohz_full for a number of reasons; see
> tick_nohz_stop_sched_tick.)
Well, I guess we can simply make tick_nohz_full_update_tick() an API; then
it could be a QUIESCE feature.

But keep in mind that we may not only fail to enter nohz_full mode: we may
also enter it and yet, instead of the tick being completely stopped, it may
merely be delayed to some point in the future if a timer callback is still
queued somewhere.
Make sure you test "ts->next_tick == KTIME_MAX" after stopping the tick.
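
IOW, something like this (untested sketch, and the wrapper name is just
illustrative -- tick_nohz_full_update_tick() is currently static in
kernel/time/tick-sched.c, so this would have to live there too):

int tick_nohz_full_try_stop_tick(void)
{
	struct tick_sched *ts;
	unsigned long flags;
	int ret = 0;

	local_irq_save(flags);
	if (!tick_nohz_full_cpu(smp_processor_id())) {
		ret = -EINVAL;
		goto out;
	}

	ts = this_cpu_ptr(&tick_cpu_sched);
	tick_nohz_full_update_tick(ts);

	/*
	 * A queued timer callback may have merely delayed the tick
	 * instead of stopping it; only KTIME_MAX means it's really off.
	 */
	if (ts->next_tick != KTIME_MAX)
		ret = -EBUSY;
out:
	local_irq_restore(flags);

	return ret;
}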
This raises the question: what do we do if quiescing fails? At least if it's a
oneshot, we can return -EBUSY from the prctl(), but otherwise subsequent
kernel entries/exits are a problem.
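
For the oneshot case, userspace could then retry on failure, along these
lines (sketch only; PR_TASK_ISOL_QUIESCE_ONESHOT is a made-up placeholder
name and value, not an actual command from your series):

#include <errno.h>
#include <unistd.h>
#include <sys/prctl.h>

/* Placeholder command number, purely for illustration. */
#define PR_TASK_ISOL_QUIESCE_ONESHOT	1000

static int quiesce_oneshot(int max_tries)
{
	int i;

	for (i = 0; i < max_tries; i++) {
		if (prctl(PR_TASK_ISOL_QUIESCE_ONESHOT, 0, 0, 0, 0) == 0)
			return 0;	/* tick fully stopped */
		if (errno != EBUSY)
			return -1;	/* permanent failure */
		usleep(1000);		/* let queued timers drain */
	}
	return -1;
}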