Message-ID: <5a5c07ac-8c11-79d3-46a3-a255d4148f76@gmail.com>
Date:   Sat, 19 Jun 2021 20:21:17 +0900
From:   Akira Tsukamoto <akira.tsukamoto@...il.com>
To:     Paul Walmsley <paul.walmsley@...ive.com>,
        Palmer Dabbelt <palmer@...belt.com>,
        Albert Ou <aou@...s.berkeley.edu>,
        Akira Tsukamoto <akira.tsukamoto@...il.com>,
        linux-kernel@...r.kernel.org, linux-riscv@...ts.infradead.org
Subject: [PATCH v2 0/5] riscv: improving uaccess with logs from network bench

This series optimizes copy_to_user and copy_from_user.

In v2 I rewrote the functions, heavily influenced by Garry's memcpy
function [1].
The functions must be written in assembler so that page faults can be
handled manually inside them.
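
To spell out the constraint: copy_{to,from}_user must return the number
of bytes that could not be copied when a page fault hits mid-copy, and
that is done by attaching an __ex_table fixup to each user-memory access
in the assembler source. Below is a minimal userspace C model of just
that return contract (copy_with_fixup and fake_fault_at are illustrative
names, not kernel API):

  #include <stddef.h>
  #include <stdio.h>

  /* Models the return contract of __asm_copy_to/from_user: when a fault
   * hits partway through, report how many bytes were NOT copied. In the
   * kernel the jump to such a recovery path comes from an __ex_table
   * entry attached to the faulting load/store, which is why the routine
   * is hand-written assembler rather than C. */
  static size_t copy_with_fixup(unsigned char *dst, const unsigned char *src,
                                size_t n, size_t fake_fault_at)
  {
          for (size_t i = 0; i < n; i++) {
                  if (i == fake_fault_at)   /* pretend this access faults */
                          return n - i;     /* fixup: bytes left uncopied */
                  dst[i] = src[i];
          }
          return 0;                         /* full copy succeeded */
  }

  int main(void)
  {
          unsigned char src[8] = "abcdefg", dst[8] = { 0 };
          size_t left = copy_with_fixup(dst, src, 7, 5);

          printf("copied %zu bytes, %zu left\n", 7 - left, left);
          return 0;
  }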

With these changes, the CPU usage percentage of the copy routines
drops and network throughput improves, most visibly for UDP packets.
Only copy_user is patched; the original memcpy is kept unchanged.

All results are from the same base kernel, same rootfs and same
BeagleV beta board.

Comparison with "perf top -Ue task-clock" while running iperf3.

--- TCP recv ---
  * Before
   40.40%  [kernel]  [k] memcpy
   33.09%  [kernel]  [k] __asm_copy_to_user
  * After
   50.35%  [kernel]  [k] memcpy
   13.76%  [kernel]  [k] __asm_copy_to_user

--- TCP send ---
  * Before
   19.96%  [kernel]  [k] memcpy
    9.84%  [kernel]  [k] __asm_copy_to_user
  * After
   14.27%  [kernel]  [k] memcpy
    7.37%  [kernel]  [k] __asm_copy_to_user

--- UDP send ---
  * Before
   25.18%  [kernel]  [k] memcpy
   22.50%  [kernel]  [k] __asm_copy_to_user
  * After
   28.90%  [kernel]  [k] memcpy
    9.49%  [kernel]  [k] __asm_copy_to_user

--- UDP recv ---
  * Before
   44.45%  [kernel]  [k] memcpy
   31.04%  [kernel]  [k] __asm_copy_to_user
  * After
   55.62%  [kernel]  [k] memcpy
   11.22%  [kernel]  [k] __asm_copy_to_user

Processing network packets requires many unaligned accesses for the
packet headers, and the header format cannot be redesigned to make the
fields aligned.
In addition, user applications call send/recv() and sendto/recvfrom()
with large buffers to reduce the number of system calls, so the copy
routines see both misaligned headers and long bulk runs.
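
For reference, here is a minimal userspace C model of the copy strategy
the five patches implement (sketch_copy is an illustrative name; it
assumes little-endian byte order as on RISC-V, and unlike the real code
in arch/riscv/lib/uaccess.S it does not handle page faults or unroll
the word loops):

  #include <assert.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  #define WORD sizeof(uintptr_t)

  static void sketch_copy(void *dst, const void *src, size_t n)
  {
          unsigned char *d = dst;
          const unsigned char *s = src;

          if (n >= 2 * WORD) {
                  /* Patch 3: byte-copy until dst is word-aligned. */
                  while ((uintptr_t)d % WORD) {
                          *d++ = *s++;
                          n--;
                  }

                  if ((uintptr_t)s % WORD == 0) {
                          /* Patch 5: src and dst co-aligned, copy word
                           * by word. */
                          for (; n >= WORD; d += WORD, s += WORD, n -= WORD)
                                  *(uintptr_t *)d = *(const uintptr_t *)s;
                  } else {
                          /* Patch 4: src misaligned relative to dst. Do
                           * only aligned loads and shift-combine two
                           * source words into each aligned store. */
                          size_t off = (uintptr_t)s % WORD;
                          uintptr_t lo = 0;

                          /* Partial first word; only the bytes at
                           * positions >= off are valid, and only those
                           * are consumed below. */
                          for (size_t i = off; i < WORD; i++)
                                  lo |= (uintptr_t)s[i - off] << (8 * i);

                          const uintptr_t *ws =
                                  (const uintptr_t *)(s - off) + 1;

                          /* Keep a one-word margin so the look-ahead
                           * load stays inside the source buffer. */
                          for (; n >= 2 * WORD; d += WORD, s += WORD, n -= WORD) {
                                  uintptr_t hi = *ws++;

                                  *(uintptr_t *)d = (lo >> (8 * off)) |
                                                    (hi << (8 * (WORD - off)));
                                  lo = hi;
                          }
                  }
          }

          /* Patch 2: short buffers and the tail are byte-copied. */
          while (n--)
                  *d++ = *s++;
  }

  int main(void)
  {
          unsigned char src[64], dst[64];

          for (int i = 0; i < 64; i++)
                  src[i] = (unsigned char)i;

          /* Exercise every src/dst misalignment combination. */
          for (size_t so = 0; so < WORD; so++)
                  for (size_t dofs = 0; dofs < WORD; dofs++) {
                          memset(dst, 0xaa, sizeof(dst));
                          sketch_copy(dst + dofs, src + so, 40);
                          assert(!memcmp(dst + dofs, src + so, 40));
                  }
          return 0;
  }

The shift copy is the part that pays off for the network workloads
above: packet headers leave src and dst differently aligned, and
combining two shifted aligned loads per store avoids falling back to a
byte-at-a-time loop.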

v1 -> v2:
- Added shift copy
- Separated the patches for readability of the assembler changes
- Switched to perf results for the measurements

[1] https://lkml.org/lkml/2021/2/16/778

Akira Tsukamoto (5):
   riscv: __asm_to/copy_from_user: delete existing code
   riscv: __asm_to/copy_from_user: Adding byte copy first
   riscv: __asm_to/copy_from_user: Copy until dst is aligned address
   riscv: __asm_to/copy_from_user: Bulk copy while shifting misaligned
     data
   riscv: __asm_to/copy_from_user: Bulk copy when both src dst are
     aligned

  arch/riscv/lib/uaccess.S | 181 +++++++++++++++++++++++++++++++--------
  1 file changed, 146 insertions(+), 35 deletions(-)

-- 
2.17.1
