Message-ID: <CANn89i+m7Df_pb6CUMVjnBAcHqayg=4wKQ1VEGFvg3DYTDpetA@mail.gmail.com>
Date: Wed, 14 Apr 2021 22:25:23 +0200
From: Eric Dumazet <edumazet@...gle.com>
To: Arjun Roy <arjunroy@...gle.com>
Cc: David Laight <David.Laight@...lab.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Eric Dumazet <eric.dumazet@...il.com>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
paulmck <paulmck@...nel.org>, Boqun Feng <boqun.feng@...il.com>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 3/3] rseq: optimise rseq_get_rseq_cs() and clear_rseq_cs()
On Wed, Apr 14, 2021 at 10:15 PM Arjun Roy <arjunroy@...gle.com> wrote:
>
> On Wed, Apr 14, 2021 at 10:35 AM Eric Dumazet <edumazet@...gle.com> wrote:
> >
> > On Wed, Apr 14, 2021 at 7:15 PM Arjun Roy <arjunroy@...gle.com> wrote:
> > >
> > > On Wed, Apr 14, 2021 at 9:10 AM Eric Dumazet <edumazet@...gle.com> wrote:
> > > >
> > > > On Wed, Apr 14, 2021 at 6:08 PM David Laight <David.Laight@...lab.com> wrote:
> > > > >
> > > > > From: Eric Dumazet
> > > > > > Sent: 14 April 2021 17:00
> > > > > ...
> > > > > > > Repeated unsafe_get_user() calls are crying out for an optimisation.
> > > > > > > You get something like:
> > > > > > > failed = 0;
> > > > > > > copy();
> > > > > > > if (failed) goto error;
> > > > > > > copy();
> > > > > > > if (failed) goto error;
> > > > > > > Where 'failed' is set by the fault handler.
> > > > > > >
> > > > > > > This could be optimised to:
> > > > > > > failed = 0;
> > > > > > > copy();
> > > > > > > copy();
> > > > > > > if (failed) goto error;
> > > > > > > Even if it faults on every invalid address it probably
> > > > > > > doesn't matter - no one cares about that path.
> > > > > >
> > > > > >
> > > > > > Which arch are you looking at?
> > > > > >
> > > > > > On x86_64 at least, code generation is just perfect.
> > > > > > Not even a conditional jmp; it is all handled by exceptions (if any):
> > > > > >
> > > > > > stac
> > > > > > copy();
> > > > > > copy();
> > > > > > clac
> > > > > >
> > > > > >
> > > > > > <out_of_line>
> > > > > > efault_end: do error recovery.
> > > > >
> > > > > It will be x86_64.
> > > > > I'm definitely seeing repeated tests of (IIRC) %rdx.
> > > > >
> > > > > It may well be because the compiler isn't very new.
> > > > > It will be an Ubuntu build of gcc 9.3.0.
> > > > > Does that support 'asm goto with outputs'? That may be
> > > > > the difference.
> > > > >
> > > >
> > > > Yep, probably. I am using some recent clang version.
> > > >
> > >
> > > On x86-64 I can confirm that, for me, 4 x unsafe_get_user() compiles
> > > down to stac + lfence + 8 x mov + clac as straight-line code, and
> > > results in roughly a 5-10% speedup over copy_from_user().
> > >
> >
> > But rseq_get_rseq_cs() would still need three different copies,
> > with 3 stac+lfence+clac sequences.
> >
> > Maybe we need to enclose all __rseq_handle_notify_resume() operations
> > in a single section.
> >
> >
>
> To provide a bit of further exposition on this point: with 4x
> unsafe_get_user(), recall I mentioned a 5-10% improvement. On the
> other hand, with 4x normal get_user() I saw something like a 100%
> regression (i.e. a doubling of measured sys time).
>
> I assume that's the fault of the multiple stac+clac pairs.
I was suggesting using only unsafe_get_user() and unsafe_put_user(),
with one surrounding stac/clac pair.
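
Roughly something like this for the rseq_get_rseq_cs() read side
(untested sketch; the field names come from the uapi struct rseq_cs,
and the function name is just for illustration):

#include <linux/uaccess.h>
#include <linux/rseq.h>

/* Copy the whole struct rseq_cs under a single stac/clac pair. */
static int rseq_get_cs_sketch(struct rseq_cs __user *urseq_cs,
			      struct rseq_cs *rseq_cs)
{
	if (!user_access_begin(urseq_cs, sizeof(*urseq_cs)))
		return -EFAULT;
	unsafe_get_user(rseq_cs->version, &urseq_cs->version, efault);
	unsafe_get_user(rseq_cs->flags, &urseq_cs->flags, efault);
	unsafe_get_user(rseq_cs->start_ip, &urseq_cs->start_ip, efault);
	unsafe_get_user(rseq_cs->post_commit_offset,
			&urseq_cs->post_commit_offset, efault);
	unsafe_get_user(rseq_cs->abort_ip, &urseq_cs->abort_ip, efault);
	user_access_end();
	return 0;
efault:
	user_access_end();
	return -EFAULT;
}

With asm goto with outputs, each unsafe_get_user() is a plain mov whose
fault path jumps straight to the efault label, so the fast path is
stac + movs + clac with no conditional branches.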
Basically what we had (partially) in our old Google kernels before
commit 8f2817701492 ("rseq: Use get_user/put_user rather than
__get_user/__put_user"), but with all the needed modern stuff.
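
The clear_rseq_cs() side would then be a single unsafe_put_user() in its
own section; something like this (untested sketch, taking the 64-bit
rseq_cs pointer field as a parameter so it does not depend on the exact
struct rseq layout):

static int clear_rseq_cs_sketch(u64 __user *rseq_cs_ptr)
{
	if (!user_access_begin(rseq_cs_ptr, sizeof(*rseq_cs_ptr)))
		return -EFAULT;
	unsafe_put_user(0ULL, rseq_cs_ptr, efault);
	user_access_end();
	return 0;
efault:
	user_access_end();
	return -EFAULT;
}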