Date:   Wed, 28 Mar 2018 17:48:24 -0700
From:   Jakub Kicinski <jakub.kicinski@...ronome.com>
To:     alexei.starovoitov@...il.com, daniel@...earbox.net
Cc:     netdev@...r.kernel.org, oss-drivers@...ronome.com,
        Jan Gossens <jan.gossens@...h-aachen.de>,
        Jakub Kicinski <jakub.kicinski@...ronome.com>
Subject: [PATCH bpf-next 00/14] nfp: bpf: add updates, deletes, atomic ops, prandom and packet cache

Hi!

This set adds support for map update and delete calls from the datapath,
as well as XADD instructions (32 and 64 bit) and pseudo-random numbers.
The XADD support depends on the alignment enforcement which Daniel
recently added to the verifier.  XADD uses NFP's atomic engine, which
requires values to be in big endian; we therefore need to keep track of
which parts of the values are used as atomics and byte swap them
accordingly.  Pseudo-random numbers are generated using NFP's HW
pseudo-random number generator.
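
For illustration, here is a minimal BPF C sketch of the kind of datapath
constructs this set enables on the NFP.  The map name, sizes, section
names and the "bpf_helpers.h" include below are made up for the example,
and whether a given program actually offloads still depends on the
checks added in this series:

  /* Illustrative only: counts flows keyed by a HW pseudo-random value,
   * exercising map lookup/update/delete, XADD and bpf_get_prandom_u32()
   * from the datapath.
   */
  #include <linux/bpf.h>
  #include "bpf_helpers.h"

  struct bpf_map_def SEC("maps") flow_cnt = {
          .type = BPF_MAP_TYPE_HASH,
          .key_size = sizeof(__u32),
          .value_size = sizeof(__u64),
          .max_entries = 1024,
  };

  SEC("xdp")
  int sample_prog(struct xdp_md *ctx)
  {
          __u32 key = bpf_get_prandom_u32() & 0x3ff;  /* NFP HW PRNG */
          __u64 one = 1, *val;

          val = bpf_map_lookup_elem(&flow_cnt, &key);
          if (val)
                  __sync_fetch_and_add(val, 1);       /* JITed as XADD */
          else
                  bpf_map_update_elem(&flow_cnt, &key, &one, BPF_ANY);

          if (key == 0)
                  bpf_map_delete_elem(&flow_cnt, &key);

          return XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";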

Jiong tackles the initial implementation of a packet data cache, which
he describes as follows:

Memory reads on the NFP first fetch data from memory into transfer-in
registers, then move it from transfer-in to general registers.

Given that the NFP is rich in transfer-in registers, they can serve as a
memory cache.

These patches identify sequences of packet data reads (BPF_LDX) that are
executed back to back.  The total access range of each sequence is
calculated and attached to every read instruction in it, and the first
instruction of the sequence is marked with a cache-init flag so that
executing it brings in the whole range of packet data for the sequence.

All later packet reads in the sequence then fetch data from the
transfer-in registers directly, with no need to JIT an NFP memory
access.

A function call, a non-packet-data memory read, a packet write or a
memcpy will invalidate the cache and start a new cache range.

Cache invalidation could be improved in the future; for example, a
packet write doesn't need to invalidate the cache if the write
destination won't be read again.
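
To make the access pattern concrete, here is a rough sketch from the
program side (field names, offsets and the XDP section are illustrative;
the actual detection works on the BPF_LDX instructions the compiler
emits, not on the C source):

  #include <linux/bpf.h>
  #include <linux/if_ether.h>
  #include <linux/ip.h>
  #include "bpf_helpers.h"

  SEC("xdp")
  int parse(struct xdp_md *ctx)
  {
          void *data = (void *)(long)ctx->data;
          void *data_end = (void *)(long)ctx->data_end;
          struct ethhdr *eth = data;
          struct iphdr *ip = data + sizeof(*eth);

          if (data + sizeof(*eth) + sizeof(*ip) > data_end)
                  return XDP_DROP;

          /* Adjacent packet loads; with the cache only the first would
           * issue an NFP memory read, the rest would be served from the
           * transfer-in registers.
           */
          __u16 proto = eth->h_proto;
          __u8  ttl   = ip->ttl;
          __u32 saddr = ip->saddr;
          __u32 daddr = ip->daddr;

          return (proto && ttl && saddr != daddr) ? XDP_PASS : XDP_DROP;
  }

  char _license[] SEC("license") = "GPL";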


Jakub Kicinski (11):
  nfp: bpf: rename map_lookup_stack() to map_call_stack_common()
  nfp: bpf: add helper for validating stack pointers
  nfp: bpf: add helper for basic map call checks
  nfp: bpf: add map updates from the datapath
  nfp: bpf: add map deletes from the datapath
  bpf: add parenthesis around argument of BPF_LDST_BYTES()
  nfp: bpf: add basic support for atomic adds
  nfp: bpf: expose command delay slots
  nfp: bpf: add support for atomic add of unknown values
  nfp: bpf: add support for bpf_get_prandom_u32()
  nfp: bpf: improve wrong FW response warnings

Jiong Wang (3):
  nfp: bpf: read from packet data cache for PTR_TO_PACKET
  nfp: bpf: support unaligned read offset
  nfp: bpf: detect packet reads could be cached, enable the optimisation

 drivers/net/ethernet/netronome/nfp/bpf/cmsg.c     |  12 +-
 drivers/net/ethernet/netronome/nfp/bpf/fw.h       |   1 +
 drivers/net/ethernet/netronome/nfp/bpf/jit.c      | 462 ++++++++++++++++++++--
 drivers/net/ethernet/netronome/nfp/bpf/main.c     |  18 +
 drivers/net/ethernet/netronome/nfp/bpf/main.h     |  85 +++-
 drivers/net/ethernet/netronome/nfp/bpf/offload.c  |  45 ++-
 drivers/net/ethernet/netronome/nfp/bpf/verifier.c | 217 ++++++++--
 drivers/net/ethernet/netronome/nfp/nfp_asm.c      |   2 +
 drivers/net/ethernet/netronome/nfp/nfp_asm.h      |   7 +
 include/linux/filter.h                            |   2 +-
 10 files changed, 771 insertions(+), 80 deletions(-)

-- 
2.16.2
