Message-ID: <74985816-3a3a-490e-b8f0-49f795ab2f07@kernel.org>
Date: Thu, 13 Jun 2024 11:32:04 +0200
From: Jesper Dangaard Brouer <hawk@...nel.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
 linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Cc: "David S. Miller" <davem@...emloft.net>,
 Daniel Bristot de Oliveira <bristot@...nel.org>,
 Boqun Feng <boqun.feng@...il.com>, Daniel Borkmann <daniel@...earbox.net>,
 Eric Dumazet <edumazet@...gle.com>, Frederic Weisbecker
 <frederic@...nel.org>, Ingo Molnar <mingo@...hat.com>,
 Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
 Peter Zijlstra <peterz@...radead.org>, Thomas Gleixner <tglx@...utronix.de>,
 Waiman Long <longman@...hat.com>, Will Deacon <will@...nel.org>,
 Alexei Starovoitov <ast@...nel.org>, Andrii Nakryiko <andrii@...nel.org>,
 Eduard Zingerman <eddyz87@...il.com>, Hao Luo <haoluo@...gle.com>,
 Jiri Olsa <jolsa@...nel.org>, John Fastabend <john.fastabend@...il.com>,
 KP Singh <kpsingh@...nel.org>, Martin KaFai Lau <martin.lau@...ux.dev>,
 Song Liu <song@...nel.org>, Stanislav Fomichev <sdf@...gle.com>,
 Toke Høiland-Jørgensen <toke@...hat.com>,
 Yonghong Song <yonghong.song@...ux.dev>, bpf@...r.kernel.org
Subject: Re: [PATCH v6 net-next 14/15] net: Reference bpf_redirect_info via
 task_struct on PREEMPT_RT.



On 12/06/2024 18.44, Sebastian Andrzej Siewior wrote:
> The XDP redirect process is two staged:
> - bpf_prog_run_xdp() is invoked to run an eBPF program which inspects the
>    packet and makes decisions. While doing that, the per-CPU variable
>    bpf_redirect_info is used.
> 
> - Afterwards xdp_do_redirect() is invoked and accesses bpf_redirect_info
>    and it may also access other per-CPU variables like xskmap_flush_list.
> 
> At the very end of the NAPI callback, xdp_do_flush() is invoked which
> does not access bpf_redirect_info but will touch the individual per-CPU
> lists.
> 
> The per-CPU variables are only used in the NAPI callback, hence
> disabling bottom halves is the only protection mechanism. Users in
> preemptible context (like cpu_map_kthread_run()) explicitly disable
> bottom halves for protection reasons.
> Since local_bh_disable() does not act as a lock on PREEMPT_RT, this
> data structure requires explicit locking there.
> 
> PREEMPT_RT has forced-threaded interrupts enabled and every
> NAPI-callback runs in a thread. If each thread has its own data
> structure then locking can be avoided.
> 
> Create a struct bpf_net_context which contains struct bpf_redirect_info.
> Define the variable on the stack and use bpf_net_ctx_set() to save a
> pointer to it; bpf_net_ctx_clear() removes it again.
> bpf_net_ctx_set() may nest. For instance, a function can be used both
> from within NET_RX_SOFTIRQ/net_rx_action, which uses bpf_net_ctx_set(),
> and from NET_TX_SOFTIRQ, which does not. Therefore only the first
> invocation updates the pointer.
> Use bpf_net_ctx_get_ri() as a wrapper to retrieve the current struct
> bpf_redirect_info. The returned data structure is zero initialized on
> first use to ensure nothing is leaked from the stack.
> bpf_net_ctx_set() sets bpf_redirect_info::kern_flags to 0 to note that
> initialisation is required. The first invocation of
> bpf_net_ctx_get_ri() will then memset() the data structure and update
> bpf_redirect_info::kern_flags.
> bpf_redirect_info::nh is excluded from the memset because it is only
> used when BPF_F_NEIGH is set, which also sets the nh member. kern_flags
> is moved past nh to exclude it from the memset as well.
> 
> The pointer to bpf_net_context is saved in the task's task_struct.
> Always using the bpf_net_context approach has the advantage that there
> are almost no differences between PREEMPT_RT and non-PREEMPT_RT builds.
> 
> Cc: Alexei Starovoitov <ast@...nel.org>
> Cc: Andrii Nakryiko <andrii@...nel.org>
> Cc: Eduard Zingerman <eddyz87@...il.com>
> Cc: Hao Luo <haoluo@...gle.com>
> Cc: Jesper Dangaard Brouer <hawk@...nel.org>
> Cc: Jiri Olsa <jolsa@...nel.org>
> Cc: John Fastabend <john.fastabend@...il.com>
> Cc: KP Singh <kpsingh@...nel.org>
> Cc: Martin KaFai Lau <martin.lau@...ux.dev>
> Cc: Song Liu <song@...nel.org>
> Cc: Stanislav Fomichev <sdf@...gle.com>
> Cc: Toke Høiland-Jørgensen <toke@...hat.com>
> Cc: Yonghong Song <yonghong.song@...ux.dev>
> Cc: bpf@...r.kernel.org
> Acked-by: Alexei Starovoitov <ast@...nel.org>
> Reviewed-by: Toke Høiland-Jørgensen <toke@...hat.com>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> ---
>   include/linux/filter.h | 56 ++++++++++++++++++++++++++++++++++--------
>   include/linux/sched.h  |  3 +++
>   kernel/bpf/cpumap.c    |  3 +++
>   kernel/bpf/devmap.c    |  9 ++++++-
>   kernel/fork.c          |  1 +
>   net/bpf/test_run.c     | 11 ++++++++-
>   net/core/dev.c         | 26 +++++++++++++++++++-
>   net/core/filter.c      | 44 +++++++++------------------------
>   net/core/lwt_bpf.c     |  3 +++
>   9 files changed, 111 insertions(+), 45 deletions(-)
> 

I like it :-)

Acked-by: Jesper Dangaard Brouer <hawk@...nel.org>


> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index b02aea291b7e8..0a7f6e4a00b60 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -733,21 +733,59 @@ struct bpf_nh_params {
>   	};
>   };
>   
> +/* flags for bpf_redirect_info kern_flags */
> +#define BPF_RI_F_RF_NO_DIRECT	BIT(0)	/* no napi_direct on return_frame */
> +#define BPF_RI_F_RI_INIT	BIT(1)
> +
>   struct bpf_redirect_info {
>   	u64 tgt_index;
>   	void *tgt_value;
>   	struct bpf_map *map;
>   	u32 flags;
> -	u32 kern_flags;
>   	u32 map_id;
>   	enum bpf_map_type map_type;
>   	struct bpf_nh_params nh;
> +	u32 kern_flags;
>   };
>   
> -DECLARE_PER_CPU(struct bpf_redirect_info, bpf_redirect_info);
> +struct bpf_net_context {
> +	struct bpf_redirect_info ri;
> +};
>   
> -/* flags for bpf_redirect_info kern_flags */
> -#define BPF_RI_F_RF_NO_DIRECT	BIT(0)	/* no napi_direct on return_frame */
> +static inline struct bpf_net_context *bpf_net_ctx_set(struct bpf_net_context *bpf_net_ctx)
> +{
> +	struct task_struct *tsk = current;
> +
> +	if (tsk->bpf_net_context != NULL)
> +		return NULL;
> +	bpf_net_ctx->ri.kern_flags = 0;
> +
> +	tsk->bpf_net_context = bpf_net_ctx;
> +	return bpf_net_ctx;
> +}
> +
> +static inline void bpf_net_ctx_clear(struct bpf_net_context *bpf_net_ctx)
> +{
> +	if (bpf_net_ctx)
> +		current->bpf_net_context = NULL;
> +}
> +
> +static inline struct bpf_net_context *bpf_net_ctx_get(void)
> +{
> +	return current->bpf_net_context;
> +}
> +
> +static inline struct bpf_redirect_info *bpf_net_ctx_get_ri(void)
> +{
> +	struct bpf_net_context *bpf_net_ctx = bpf_net_ctx_get();
> +
> +	if (!(bpf_net_ctx->ri.kern_flags & BPF_RI_F_RI_INIT)) {
> +		memset(&bpf_net_ctx->ri, 0, offsetof(struct bpf_net_context, ri.nh));
> +		bpf_net_ctx->ri.kern_flags |= BPF_RI_F_RI_INIT;
> +	}
> +
> +	return &bpf_net_ctx->ri;
> +}
>   
>   /* Compute the linear packet data range [data, data_end) which
>    * will be accessed by various program types (cls_bpf, act_bpf,
> @@ -1018,25 +1056,23 @@ struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
>   				       const struct bpf_insn *patch, u32 len);
>   int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt);
>   
> -void bpf_clear_redirect_map(struct bpf_map *map);
> -
>   static inline bool xdp_return_frame_no_direct(void)
>   {
> -	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
> +	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
>   
>   	return ri->kern_flags & BPF_RI_F_RF_NO_DIRECT;
>   }
>   
>   static inline void xdp_set_return_frame_no_direct(void)
>   {
> -	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
> +	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
>   
>   	ri->kern_flags |= BPF_RI_F_RF_NO_DIRECT;
>   }
>   
>   static inline void xdp_clear_return_frame_no_direct(void)
>   {
> -	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
> +	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
>   
>   	ri->kern_flags &= ~BPF_RI_F_RF_NO_DIRECT;
>   }
> @@ -1592,7 +1628,7 @@ static __always_inline long __bpf_xdp_redirect_map(struct bpf_map *map, u64 inde
>   						   u64 flags, const u64 flag_mask,
>   						   void *lookup_elem(struct bpf_map *map, u32 key))
>   {
> -	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
> +	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
>   	const u64 action_mask = XDP_ABORTED | XDP_DROP | XDP_PASS | XDP_TX;
