Message-Id: <20200511143912.34086-1-maciej.fijalkowski@intel.com>
Date:   Mon, 11 May 2020 16:39:11 +0200
From:   Maciej Fijalkowski <maciej.fijalkowski@...el.com>
To:     ast@...nel.org, daniel@...earbox.net
Cc:     bpf@...r.kernel.org, netdev@...r.kernel.org, bjorn.topel@...el.com,
        magnus.karlsson@...el.com,
        Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Subject: [RFC PATCH bpf-next 0/1] bpf, x64: optimize JIT prologue/epilogue generation

Hi!

Today, the BPF x86-64 JIT preserves all of the callee-saved registers
for each BPF program being JITed, even when none of the R6-R9 registers
is used by the program. Furthermore, the tail call counter is always
pushed to/popped from the stack even when the BPF program being JITed
makes no tail calls. An optimization can be introduced that detects the
usage of R6-R9 and, based on that, pushes/pops only what is needed. The
same goes for the tail call counter.
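
For illustration, the detection part could look roughly like the sketch
below. The function name detect_reg_usage() and the regs_used[] array
are placeholders invented here; see the included patch for the actual
implementation.

/* Sketch only: walk the BPF instruction stream once and record which of
 * R6-R9 the program actually touches, so that the prologue/epilogue
 * emitters can push/pop only the corresponding x86 callee-saved
 * registers (rbx, r13, r14, r15).
 */
static void detect_reg_usage(struct bpf_insn *insn, int insn_cnt,
			     bool *regs_used)
{
	int i;

	for (i = 0; i < insn_cnt; i++, insn++) {
		if (insn->dst_reg >= BPF_REG_6 && insn->dst_reg <= BPF_REG_9)
			regs_used[insn->dst_reg - BPF_REG_6] = true;
		if (insn->src_reg >= BPF_REG_6 && insn->src_reg <= BPF_REG_9)
			regs_used[insn->src_reg - BPF_REG_6] = true;
	}
}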

Results for such an instruction reduction look promising. Below are the
numbers for the xdp1 sample on an FVL 40G NIC receiving traffic from
pktgen:

* With optimization: 22.3 Mpps
* Without:           19.0 Mpps

That is around a 15% performance improvement. Note that xdp1 uses
neither the callee-saved registers nor tail calls, hence the speed-up.

There is one detail that needs to be handled, though.

Currently, the x86-64 JIT tail call implementation skips the prologue
of the target BPF program, which has a constant size. With the mentioned
optimization implemented, each BPF program that might be inserted into
the prog array map, and therefore become the target of a tail call,
could have a prologue of a different size.

Let's look at a pseudo-code example:

func1:
pro
code
epi

func2:
pro
code'
epi

func3:
pro
code''
epi

Today, pro and epi are always the same (9 and 7 instructions,
respectively). So a tail call from func1 to func2 is just:

jump func2 + sizeof pro in bytes (PROLOGUE_SIZE)
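
Written as C for readability, the jump target is computed roughly as in
the sketch below. This is a simplified view of what the JIT emits as
machine code; the bounds, NULL and tail call count checks are omitted
and the function name is made up for illustration.

/* Sketch only: today's tail call can jump to a fixed offset into the
 * target image, because every program's prologue is identical.
 */
static void *tail_call_target_today(struct bpf_array *array, u32 index)
{
	struct bpf_prog *prog = READ_ONCE(array->ptrs[index]);

	if (!prog)
		return NULL;	/* fall through to code after the tail call */

	/* Skip the target's fixed-size prologue; emitted as an add of
	 * PROLOGUE_SIZE to the target address followed by an indirect jmp.
	 */
	return (u8 *)prog->bpf_func + PROLOGUE_SIZE;
}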

With the optimization:

func1:
pro
code
epi

func2:
pro'
code'
epi'

func3:
pro''
code''
epi''

To keep tail calls working with the mentioned optimization in place,
the x86-64 JIT should emit, before the actual jump, pop instructions
for the registers that were pushed in the prologue. The jump offset
should then skip only the instructions that handle rbp/rsp, not the
whole prologue.

A tail call within func1 would then need to be:
epi -> pop what pro pushed, but no leave/ret instructions
jump func2 + 16 // first push insn of pro'; if there is no push, this
                // would be a direct jump to code'

The magic value of 16 is the total byte count (5 + 1 + 3 + 7) of the
instructions that are skipped:
0f 1f 44 00 00          nopl   0x0(%rax,%rax,1)
55                      push   %rbp
48 89 e5                mov    %rsp,%rbp
48 81 ec 08 00 00 00    sub    $0x8,%rsp

This scheme would in many cases add *more* instructions to the tail
call path. If none of the callee-saved registers is used, though, there
would be no overhead with such an optimization in place.
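
The pop-before-jump part could be emitted along the lines of the sketch
below. pop_callee_regs() and regs_used[] are placeholder names that
build on the detection sketch above, and the EMIT* helpers are the
JIT's existing emit macros; see the patch for the real implementation.

/* Sketch only: pop exactly the registers that the prologue pushed, in
 * reverse order, right before the tail call jump.
 */
static void pop_callee_regs(u8 **pprog, bool *regs_used)
{
	u8 *prog = *pprog;
	int cnt = 0;

	if (regs_used[3])
		EMIT2(0x41, 0x5F);	/* pop r15 */
	if (regs_used[2])
		EMIT2(0x41, 0x5E);	/* pop r14 */
	if (regs_used[1])
		EMIT2(0x41, 0x5D);	/* pop r13 */
	if (regs_used[0])
		EMIT1(0x5B);		/* pop rbx */

	*pprog = prog;
}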

I'm not sure how to properly measure the impact on BPF programs that
make use of tail calls. Any suggestions?

Daniel, Alexei, what is your view on this?

For implementation details, see the commit message of the included patch.

Thank you,
Maciej


Maciej Fijalkowski (1):
  bpf, x64: optimize JIT prologue/epilogue generation

 arch/x86/net/bpf_jit_comp.c | 190 ++++++++++++++++++++++++++++--------
 1 file changed, 148 insertions(+), 42 deletions(-)

-- 
2.20.1
