Message-ID: <CAKfTPtB8OeRzbDjfKoNUGLC1XUXT07G9QedCJ8stSMGvAD5hNg@mail.gmail.com>
Date: Mon, 19 Sep 2022 14:41:19 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: timj <timj@....org>
Cc: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
linux-kernel@...r.kernel.org, parth@...ux.ibm.com,
qais.yousef@....com, chris.hyser@...cle.com,
valentin.schneider@....com, patrick.bellasi@...bug.net,
David.Laight@...lab.com, pjt@...gle.com, pavel@....cz,
tj@...nel.org, qperret@...gle.com, tim.c.chen@...ux.intel.com,
joshdon@...gle.com
Subject: Re: [PATCH v4 4/8] sched/core: Add permission checks for setting the
latency_nice value
Hi,
Thank you for describing your use case in detail.
On Mon, 19 Sept 2022 at 10:52, timj <timj@....org> wrote:
>
> Hi,
>
> great to see work on improving low latency scenarios.
>
> There is an entire class of low latency audio applications that seems
> to be underrepresented in the current low latency use case descriptions.
> Experiencing audio dropouts (clicks or gaps) is generally perceived
> to be much more disruptive than graphics glitches like missing a frame
> deadline. That's why audio programs and devices with huge buffers put
> so much emphasis on avoiding dropouts by whatever means possible.
>
> Now, video players and tools like mpg123 are well served by increased
> audio buffer sizes to avoid dropouts. But the story is different with
> MIDI synthesizers (hit a piano key and expect immediate onset -
> especially drum synthesizers) or audio filters that implement realtime
> auto-tuning or short delay effects. Here, useful applications require
> IO latency guarantees significantly below 10ms. A useful setup would be
> e.g. 48kHz sampling frequency and an audio buffer of two fragments with
> 128 samples each. Simplified, that provides a combined IO latency of
> 2 * 128 / 48000 = 5.33ms if the audio application can rely on scheduling
> deadlines of 1 * 128 / 48000 = 2.67ms.
>
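> A minimal sketch of that arithmetic (the 48kHz / 2 x 128-sample figures
> are the ones from the paragraph above; nothing here queries a real audio
> API):
>
>     /* fragment_latency.c - derive the IO latency figures quoted above */
>     #include <stdio.h>
>
>     int main(void)
>     {
>             const double rate_hz   = 48000.0; /* sampling frequency */
>             const double fragment  = 128.0;   /* samples per fragment */
>             const int    fragments = 2;       /* fragments in the buffer */
>
>             /* one fragment is the scheduling deadline, ~2.67ms */
>             printf("deadline: %.2f ms\n", 1000.0 * fragment / rate_hz);
>             /* the whole buffer is the combined IO latency, ~5.33ms */
>             printf("combined: %.2f ms\n",
>                    1000.0 * fragments * fragment / rate_hz);
>             return 0;
>     }
>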
> So far, low latency applications have to jump through a number of hoops
> to roughly achieve such deadlines: requiring suid installations for
> SCHED_DEADLINE (rarely used in practice), needing CAP_SYS_NICE,
> requiring a Jackd distro, or depending on RtKit with all its
> shortcomings [1].
> I.e. there's a plethora of user space workarounds that have piled up
> over the decades, because regular user applications lack a simple way
> to tell the kernel one of:
>
> +1) I'm interested in throughput and don't care about latency: make -j
> 0) I'm fine with latency handling defaults: bash, X11
> -1) I'm depending on low latency scheduling much more than throughput:
> audio-filter, synthesizer
>
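> For illustration, here is a rough userspace sketch of what requesting
> the third category could look like with this series applied. It assumes
> the sched_attr extension from these patches (the sched_latency_nice
> field and a SCHED_FLAG_LATENCY_NICE flag); the struct layout and the
> flag value are my reading of the series, so treat them as assumptions
> rather than a reference:
>
>     #define _GNU_SOURCE
>     #include <sched.h>          /* SCHED_OTHER */
>     #include <stdint.h>
>     #include <stdio.h>
>     #include <string.h>
>     #include <sys/syscall.h>
>     #include <unistd.h>
>
>     #ifndef SCHED_FLAG_LATENCY_NICE
>     #define SCHED_FLAG_LATENCY_NICE 0x80 /* assumed value from the series */
>     #endif
>
>     /* struct sched_attr as extended by this patchset (assumed layout) */
>     struct sched_attr_lat {
>             uint32_t size;
>             uint32_t sched_policy;
>             uint64_t sched_flags;
>             int32_t  sched_nice;
>             uint32_t sched_priority;
>             uint64_t sched_runtime;
>             uint64_t sched_deadline;
>             uint64_t sched_period;
>             uint32_t sched_util_min;
>             uint32_t sched_util_max;
>             int32_t  sched_latency_nice; /* new field in this series */
>     };
>
>     /* ask for latency-sensitive scheduling, no extra CPU bandwidth */
>     static int request_low_latency(int latency_nice)
>     {
>             struct sched_attr_lat attr;
>
>             memset(&attr, 0, sizeof(attr));
>             attr.size               = sizeof(attr);
>             attr.sched_policy       = SCHED_OTHER;
>             attr.sched_flags        = SCHED_FLAG_LATENCY_NICE;
>             attr.sched_latency_nice = latency_nice; /* negative = latency sensitive */
>
>             /*
>              * With patch 4/8 as posted, lowering the value without
>              * CAP_SYS_NICE fails with EPERM; that is exactly the check
>              * this mail asks to drop.
>              */
>             return syscall(SYS_sched_setattr, 0, &attr, 0);
>     }
>
>     int main(void)
>     {
>             if (request_low_latency(-10))
>                     perror("sched_setattr");
>             return 0;
>     }
>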
> The emphasis here is on *USER* applications, not programs run by root.
> As you write in [PATCH v4 5/8]: "We don't want to provide more CPU
> bandwidth to a thread but reorder the scheduling to run latency
> sensitive task first whenever possible" and as outlined in the
> presentation "Latency_nice - Implementation and Use-case for Scheduler
> Optimization" by Shah et al [2], changing latency nice will not result
> in taking over the CPU or extending the possibility for DoS attacks.
>
> So what I'm getting at with this lengthy use case description is that
> it is vitally important for low latency audio applications without
> elevated privileges to be able to request low latency scheduling.
> I.e. please remove the !capable(CAP_SYS_NICE) check on requesting low
> latency scheduling from the patch set, so audio applications can simply
> use their time slices (and nothing more) at the *right* time and have a
> way to tell the kernel about it without requiring root or daemons to
> relay the message. Conversely, idle tasks or make -j also don't need
> root to enjoy up to 100% of idle CPU time.
Ok, your explanation makes sense to me, especially because we want to
ensure that we do not provide more CPU time with this latency prio. I'm
curious to see the feedback from others about the reasons for wanting
CAP_SYS_NICE other than following the nice priority model.
Side question: have you tried this patchset (minus this patch) with
your use case?
>
> [1] https://github.com/heftig/rtkit/issues/25
> [2] https://static.lwn.net/images/conf/2020/ospm/latency-nice-slides.pdf
>
> On 16.09.22 10:03, Vincent Guittot wrote:
> > From: Parth Shah <parth@...ux.ibm.com>
> >
> > Since latency_nice uses similar infrastructure to NICE, use the
> > already existing CAP_SYS_NICE security checks for latency_nice. This
> > returns -EPERM for a non-root user trying to set the task's
> > latency_nice value to anything lower than the current value.
> >
> > Signed-off-by: Parth Shah <parth@...ux.ibm.com>
> > [rebase]
> > Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> > ---
> > kernel/sched/core.c | 4 ++++
> > 1 file changed, 4 insertions(+)
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 2015e7d1f8b2..3c79c5419d1b 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -7444,6 +7444,10 @@ static int __sched_setscheduler(struct task_struct *p,
> > return -EINVAL;
> > if (attr->sched_latency_nice < MIN_LATENCY_NICE)
> > return -EINVAL;
> > + /* Use the same security checks as NICE */
> > + if (attr->sched_latency_nice < p->latency_nice &&
> > + !capable(CAP_SYS_NICE))
> > + return -EPERM;
> > }
> >
> > if (pi)
>