Message-ID: <CAPNVh5foSzWj3XHoONjvzmLLXOi9u3ojNmdfegohpufx1YYXgg@mail.gmail.com>
Date: Fri, 21 May 2021 15:01:24 -0700
From: Peter Oskolkov <posk@...gle.com>
To: Andy Lutomirski <luto@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
Linux API <linux-api@...r.kernel.org>,
Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>,
Peter Oskolkov <posk@...k.io>,
Joel Fernandes <joel@...lfernandes.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrei Vagin <avagin@...gle.com>,
Jim Newsome <jnewsome@...project.org>
Subject: Re: [RFC PATCH v0.1 4/9] sched/umcg: implement core UMCG API
On Fri, May 21, 2021 at 12:32 PM Andy Lutomirski <luto@...nel.org> wrote:
>
> On Thu, May 20, 2021 at 11:36 AM Peter Oskolkov <posk@...gle.com> wrote:
> >
> > Implement version 1 of core UMCG API (wait/wake/swap).
> >
> > As has been outlined in
> > https://lore.kernel.org/lkml/20200722234538.166697-1-posk@posk.io/,
> > efficient and synchronous on-CPU context switching is key
> > to enabling two broad use cases: in-process M:N userspace scheduling
> > and fast X-process RPCs for security wrappers.
> >
> > High-level design considerations/approaches used:
> > - wait & wake can race with each other;
> > - offload as much work as possible to libumcg in tools/lib/umcg,
> > specifically:
> > - most state changes, e.g. RUNNABLE <=> RUNNING, are done in
> > the userspace (libumcg);
> > - retries are offloaded to the userspace.
>
> Do you have some perf numbers as to how long a UMCG context switch
> takes compared to a normal one?
I'm not sure what a "normal context switch" means in this context. In my
benchmarks, a futex wakeup on a remote idle CPU takes 5-10 usec; an
on-CPU UMCG context switch takes less than 1 usec; a futex wake + futex
wait on the same CPU (taskset ***) takes about 1-1.5 usec.
>
> --Andy