Date:   Tue, 31 Jul 2018 10:24:58 -0700
From:   Roman Gushchin <guro@...com>
To:     Daniel Borkmann <daniel@...earbox.net>
CC:     <netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <kernel-team@...com>, Alexei Starovoitov <ast@...nel.org>
Subject: Re: [PATCH v4 bpf-next 08/14] bpf: introduce the
 bpf_get_local_storage() helper function

On Tue, Jul 31, 2018 at 12:34:06PM +0200, Daniel Borkmann wrote:
> Hi Roman,
> 
> On 07/27/2018 11:52 PM, Roman Gushchin wrote:
> > The bpf_get_local_storage() helper function is used
> > to get a pointer to the bpf local storage from a bpf program.
> > 
> > It takes a pointer to a storage map and flags as arguments.
> > Right now it accepts only cgroup storage maps, and flags
> > argument has to be 0. Further it can be extended to support
> > other types of local storage: e.g. thread local storage etc.
> > 
> > Signed-off-by: Roman Gushchin <guro@...com>
> > Cc: Alexei Starovoitov <ast@...nel.org>
> > Cc: Daniel Borkmann <daniel@...earbox.net>
> > Acked-by: Martin KaFai Lau <kafai@...com>
> > ---
> >  include/linux/bpf.h      |  2 ++
> >  include/uapi/linux/bpf.h | 13 ++++++++++++-
> >  kernel/bpf/cgroup.c      |  2 ++
> >  kernel/bpf/core.c        |  1 +
> >  kernel/bpf/helpers.c     | 20 ++++++++++++++++++++
> >  kernel/bpf/verifier.c    | 18 ++++++++++++++++++
> >  net/core/filter.c        | 23 ++++++++++++++++++++++-
> >  7 files changed, 77 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index ca4ac2a39def..cd8790d2c6ed 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -788,6 +788,8 @@ extern const struct bpf_func_proto bpf_sock_map_update_proto;
> >  extern const struct bpf_func_proto bpf_sock_hash_update_proto;
> >  extern const struct bpf_func_proto bpf_get_current_cgroup_id_proto;
> >  
> > +extern const struct bpf_func_proto bpf_get_local_storage_proto;
> > +
> >  /* Shared helpers among cBPF and eBPF. */
> >  void bpf_user_rnd_init_once(void);
> >  u64 bpf_user_rnd_u32(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index a0aa53148763..495180f229ee 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -2081,6 +2081,16 @@ union bpf_attr {
> >   * 	Return
> >   * 		A 64-bit integer containing the current cgroup id based
> >   * 		on the cgroup within which the current task is running.
> > + *
> > + * void* get_local_storage(void *map, u64 flags)
> > + *	Description
> > + *		Get the pointer to the local storage area.
> > + *		The type and the size of the local storage is defined
> > + *		by the *map* argument.
> > + *		The *flags* meaning is specific for each map type,
> > + *		and has to be 0 for cgroup local storage.
> > + *	Return
> > + *		Pointer to the local storage area.
> >   */
> 
> I think it would be crucial to clarify what underlying assumption the
> program writer can make with regards to concurrent access to this storage.
> 
> E.g. in this context, can _only_ BPF_XADD be used for counters as otherwise
> any other type of access may race in parallel, or are we protected by the
> socket lock and can safely override all data in this buffer via normal stores
> (e.g. for socket related progs)? What about other types?
> 
> Right now nothing is mentioned here, but I think it must be clarified to
> avoid any surprises. E.g. in normal htab case program can at least use the
> map update there for atomic value updates, but those are disallowed from
> the cgroup local storage, hence my question. Btw, no need to resend, I can
> also update the paragraph there.

Fair enough.

Mid- to long-term we want to add a per-cpu version of the cgroup local storage
and, possibly, some locking API. But right now XADD is what should be used.

I think something like this should work here:

--

Depending on the bpf program type, a local storage area can be shared between
multiple instances of the bpf program running simultaneously.
Users have to take care of synchronization themselves: for example,
by using the BPF_STX_XADD instruction to alter the shared data.

--

Please feel free to change or add to the text above.

Also, it might be good to change the example to use STX_XADD:

index 0597943ce34b..dc83fb2d3f27 100644
--- a/tools/testing/selftests/bpf/test_cgroup_storage.c
+++ b/tools/testing/selftests/bpf/test_cgroup_storage.c
@@ -18,9 +18,9 @@ int main(int argc, char **argv)
                BPF_MOV64_IMM(BPF_REG_2, 0), /* flags, not used */
                BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
                             BPF_FUNC_get_local_storage),
+               BPF_MOV64_IMM(BPF_REG_1, 1),
+               BPF_STX_XADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
                BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
-               BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
-               BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
                BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x1),
                BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
                BPF_EXIT_INSN(),


Thank you!

Roman
