Message-ID: <xunyo9007hk9.fsf@redhat.com>
Date: Wed, 04 Sep 2019 08:32:06 +0300
From: Yauheni Kaliuta <yauheni.kaliuta@...hat.com>
To: Magnus Karlsson <magnus.karlsson@...el.com>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH bpf 2/2] libbpf: remove dependency on barrier.h in xsk.h
Hi, Magnus!
>>>>> On Tue, 9 Apr 2019 08:44:13 +0200, Magnus Karlsson wrote:
> The use of smp_rmb() and smp_wmb() creates a Linux header dependency
> on barrier.h that is unnecessary in most parts. This patch implements
> the two small defines that are needed from barrier.h. As a bonus, the
> new implementations are faster than the default ones, which fall back
> to sfence and lfence on x86, while only a compiler barrier is needed
> in our case, just as when the same ring access code is compiled in
> the kernel.
> Fixes: 1cad07884239 ("libbpf: add support for using AF_XDP sockets")
> Signed-off-by: Magnus Karlsson <magnus.karlsson@...el.com>
> ---
> tools/lib/bpf/xsk.h | 19 +++++++++++++++++--
> 1 file changed, 17 insertions(+), 2 deletions(-)
> diff --git a/tools/lib/bpf/xsk.h b/tools/lib/bpf/xsk.h
> index 3638147..317b44f 100644
> --- a/tools/lib/bpf/xsk.h
> +++ b/tools/lib/bpf/xsk.h
> @@ -39,6 +39,21 @@ DEFINE_XSK_RING(xsk_ring_cons);
> struct xsk_umem;
> struct xsk_socket;
> +#if !defined bpf_smp_rmb && !defined bpf_smp_wmb
> +# if defined(__i386__) || defined(__x86_64__)
> +# define bpf_smp_rmb() asm volatile("" : : : "memory")
> +# define bpf_smp_wmb() asm volatile("" : : : "memory")
> +# elif defined(__aarch64__)
> +# define bpf_smp_rmb() asm volatile("dmb ishld" : : : "memory")
> +# define bpf_smp_wmb() asm volatile("dmb ishst" : : : "memory")
> +# elif defined(__arm__)
> +# define bpf_smp_rmb() asm volatile("dmb ish" : : : "memory")
> +# define bpf_smp_wmb() asm volatile("dmb ishst" : : : "memory")
> +# else
> +# error Architecture not supported by the XDP socket code in libbpf.
> +# endif
> +#endif
> +
What about other architectures then?
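For illustration only (not part of this patch): since the defines above are
guarded by #if !defined, an application built for an architecture that is
not listed could provide its own barriers before including xsk.h, for
example with the GCC/Clang __atomic builtins:

    #define bpf_smp_rmb() __atomic_thread_fence(__ATOMIC_ACQUIRE)
    #define bpf_smp_wmb() __atomic_thread_fence(__ATOMIC_RELEASE)
    #include "xsk.h"

That only helps applications that know to do it, though, so the #error
would still hit everyone else.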
> static inline __u64 *xsk_ring_prod__fill_addr(struct xsk_ring_prod *fill,
> __u32 idx)
> {
> @@ -119,7 +134,7 @@ static inline void xsk_ring_prod__submit(struct xsk_ring_prod *prod, size_t nb)
> /* Make sure everything has been written to the ring before signalling
> * this to the kernel.
> */
> - smp_wmb();
> + bpf_smp_wmb();
> *prod->producer += nb;
> }
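For context, a producer-side usage sketch (umem_fill_ring and frame_addr
are placeholder names, and the ring is assumed to have been set up via
xsk_umem__create()): the caller writes the ring entry first, and
xsk_ring_prod__submit() then issues the write barrier before the new
producer index becomes visible to the kernel:

    __u32 idx;

    if (xsk_ring_prod__reserve(&umem_fill_ring, 1, &idx) == 1) {
            /* fill the slot before publishing it */
            *xsk_ring_prod__fill_addr(&umem_fill_ring, idx) = frame_addr;
            /* submit() runs bpf_smp_wmb() and then bumps *producer */
            xsk_ring_prod__submit(&umem_fill_ring, 1);
    }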
> @@ -133,7 +148,7 @@ static inline size_t xsk_ring_cons__peek(struct xsk_ring_cons *cons,
> /* Make sure we do not speculatively read the data before
> * we have received the packet buffers from the ring.
> */
> - smp_rmb();
> + bpf_smp_rmb();
> *idx = cons->cached_cons;
> cons->cached_cons += entries;
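The consumer side relies on the matching read barrier inside
xsk_ring_cons__peek(): the producer index is read first, the barrier keeps
the descriptor reads from being hoisted above it, and only then are the
descriptors consumed. A sketch (rx_ring is assumed to come from
xsk_socket__create(), and process() is a placeholder):

    __u32 idx;
    size_t rcvd, i;

    rcvd = xsk_ring_cons__peek(&rx_ring, 16, &idx);
    /* bpf_smp_rmb() has already run inside peek(), so the descriptors
     * behind the observed producer index are safe to read here */
    for (i = 0; i < rcvd; i++)
            process(xsk_ring_cons__rx_desc(&rx_ring, idx + i));
    xsk_ring_cons__release(&rx_ring, rcvd);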
> --
> 2.7.4
--
WBR,
Yauheni Kaliuta