Date:   Wed, 6 Oct 2021 09:41:43 -0700
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Joe Burton <jevburton@...gle.com>
Cc:     Joe Burton <jevburton.kernel@...il.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        John Fastabend <john.fastabend@...il.com>,
        KP Singh <kpsingh@...nel.org>,
        Petar Penkov <ppenkov@...gle.com>,
        Stanislav Fomichev <sdf@...gle.com>,
        Hao Luo <haoluo@...gle.com>, netdev@...r.kernel.org,
        bpf@...r.kernel.org
Subject: Re: [RFC PATCH v2 00/13] Introduce BPF map tracing capability

On Tue, Oct 05, 2021 at 02:47:34PM -0700, Joe Burton wrote:
> > It's a neat idea to use verifier powers for this job,
> > but I wonder why a simple tracepoint in map ops was not used instead?
> 
> My concern with tracepoints is that they execute for all map updates,
> not for a particular map. Ideally performing an upgrade of program X
> should not affect the performance characteristics of program Y.

Right, but a single 'if (map == map_ptr_being_traced)' check
won't really affect the update() speed of maps.
For hash maps, update/delete are already heavy operations with a bunch of
checks and spinlocks.
Just to make sure we're on the same page, I'm proposing something like
the patch below...
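
The tracer side would then be a normal fentry prog with that one check,
roughly like this (untested sketch; it attaches to the nop hook added in
the patch below, and compares map->id instead of the raw pointer since
the loader can pass the id in as a rodata constant; names are made up):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* hypothetical name; the loader fills in the id of the map to trace */
const volatile __u32 traced_map_id;

SEC("fentry/bpf_array_map_trace_update")
int BPF_PROG(trace_array_update, struct bpf_map *map, void *key,
	     void *value, u64 map_flags)
{
	/* the single check that gates everything else; updates to any
	 * other map return right away
	 */
	if (map->id != traced_map_id)
		return 0;

	bpf_printk("traced map %u updated", map->id);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";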

> If n programs are opted into this model, then upgrading any of them
> affects the performance characteristics of every other. There's also
> the (very remote) possibility of multiple simultaneous upgrades tracing
> map updates at the same time, causing a greater performance hit.

Also consider that the verifier fixup of update/delete in the code
is permanent, whereas attaching fentry or fmod_ret to a nop function is temporary.
Once tracing of the map is no longer necessary, that fentry program
will be detached and the overhead will go back to zero,
which is not the case for the 'fixup' approach.
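
On the user side attach/detach is just the usual libbpf skeleton flow,
something like (sketch; skeleton and object names made up):

#include "map_trace.skel.h"

int main(void)
{
	struct map_trace_bpf *skel = map_trace_bpf__open();

	if (!skel)
		return 1;
	/* id of the map to trace, as shown by 'bpftool map list' */
	skel->rodata->traced_map_id = 42;
	if (map_trace_bpf__load(skel) || map_trace_bpf__attach(skel))
		goto out;

	/* ... collect trace data for as long as the upgrade needs ... */
out:
	/* detaching the fentry prog brings the overhead back to zero */
	map_trace_bpf__destroy(skel);
	return 0;
}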

With fmod_ret the tracing program can also act as an enforcing program.
It could be used to disallow certain map accesses in a generic way.
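
E.g. something like this (continuing the sketch above; the
ALLOW_ERROR_INJECTION on the hook is what makes fmod_ret attachable,
and EPERM assumes errno.h is pulled in):

/* hypothetical name; set by the loader like traced_map_id above */
const volatile __u32 protected_map_id;

SEC("fmod_ret/bpf_array_map_trace_update")
int BPF_PROG(enforce_array_update, struct bpf_map *map, void *key,
	     void *value, u64 map_flags)
{
	/* a non-zero return is propagated by array_map_update_elem(),
	 * so the update is rejected for the protected map
	 */
	if (map->id == protected_map_id)
		return -EPERM;
	return 0;
}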

> > I don't think the "solution" for lookup operation is worth pursuing.
> > The bpf prog that needs this map tracing is completely in your control.
> > So just don't do writes after lookup.
> 
> I eventually want to support apps that use local storage. Those APIs
> generally only allow updates via a pointer. E.g. bpf_sk_storage_get()
> only allows updates via the returned pointer and via
> bpf_sk_storage_delete().
> 
> Since I eventually have to solve this problem to handle local storage,
> it seems worth solving it for normal maps as well. They seem
> like isomorphic problems.

Especially for local storage... doing the tracing from the bpf program
itself seems to make the most sense.
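
I.e. something like this (sketch, names made up): the program that owns
the sk storage writes through the pointer from bpf_sk_storage_get() and
emits its own trace record right after:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct conn_stats {
	__u64 pkts;
};

struct {
	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, struct conn_stats);
} sk_stats SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 1 << 20);
} trace_events SEC(".maps");

SEC("sockops")
int trace_sk_storage(struct bpf_sock_ops *skops)
{
	struct bpf_sock *sk = skops->sk;
	struct conn_stats *stats;

	if (!sk)
		return 1;

	stats = bpf_sk_storage_get(&sk_stats, sk, NULL,
				   BPF_SK_STORAGE_GET_F_CREATE);
	if (!stats)
		return 1;

	stats->pkts++;		/* update via the returned pointer */

	/* trace from the program itself: shadow copy of what was written */
	bpf_ringbuf_output(&trace_events, stats, sizeof(*stats), 0);
	return 1;
}

char LICENSE[] SEC("license") = "GPL";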

From c7b6ec4488ee50ebbca61c22c6837fd6fe7007bf Mon Sep 17 00:00:00 2001
From: Alexei Starovoitov <ast@...nel.org>
Date: Wed, 6 Oct 2021 09:30:21 -0700
Subject: [PATCH] bpf: trace array map update

Signed-off-by: Alexei Starovoitov <ast@...nel.org>
---
 kernel/bpf/arraymap.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 5e1ccfae916b..89f853b1a217 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -293,6 +293,13 @@ static void check_and_free_timer_in_array(struct bpf_array *arr, void *val)
 		bpf_timer_cancel_and_free(val + arr->map.timer_off);
 }
 
+noinline int bpf_array_map_trace_update(struct bpf_map *map, void *key,
+					void *value, u64 map_flags)
+{
+	return 0;
+}
+ALLOW_ERROR_INJECTION(bpf_array_map_trace_update, ERRNO);
+
 /* Called from syscall or from eBPF program */
 static int array_map_update_elem(struct bpf_map *map, void *key, void *value,
 				 u64 map_flags)
@@ -300,6 +307,7 @@ static int array_map_update_elem(struct bpf_map *map, void *key, void *value,
 	struct bpf_array *array = container_of(map, struct bpf_array, map);
 	u32 index = *(u32 *)key;
 	char *val;
+	int err;
 
 	if (unlikely((map_flags & ~BPF_F_LOCK) > BPF_EXIST))
 		/* unknown flags */
@@ -317,6 +325,10 @@ static int array_map_update_elem(struct bpf_map *map, void *key, void *value,
 		     !map_value_has_spin_lock(map)))
 		return -EINVAL;
 
+	err = bpf_array_map_trace_update(map, key, value, map_flags);
+	if (unlikely(err))
+		return err;
+
 	if (array->map.map_type == BPF_MAP_TYPE_PERCPU_ARRAY) {
 		memcpy(this_cpu_ptr(array->pptrs[index & array->index_mask]),
 		       value, map->value_size);
-- 
2.30.2
