Message-ID: <CAH3MdRXwK3=8z=CNK-=vvL5dY4nvT1ZCWXDOUWLR8f65zydS2Q@mail.gmail.com>
Date: Wed, 10 Apr 2019 10:56:09 -0700
From: Y Song <ys114321@...il.com>
To: Magnus Karlsson <magnus.karlsson@...el.com>
Cc: Björn Töpel <bjorn.topel@...el.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
netdev <netdev@...r.kernel.org>, bpf@...r.kernel.org,
bruce.richardson@...el.com, ciara.loftus@...el.com,
ilias.apalodimas@...aro.org, xiaolong.ye@...el.com,
ferruh.yigit@...el.com, qi.z.zhang@...el.com, georgmueller@....net
Subject: Re: [PATCH bpf v2 1/2] libbpf: remove likely/unlikely in xsk.h
On Wed, Apr 10, 2019 at 12:21 AM Magnus Karlsson
<magnus.karlsson@...el.com> wrote:
>
> This patch removes the use of likely and unlikely in xsk.h since they
> create a dependency on Linux headers as reported by several
> users. There have also been reports that the use of these decreases
> performance as the compiler puts the code on two different cache lines
> instead of on a single one. All in all, I think we are better off
> without them.
The change looks good to me.
Acked-by: Yonghong Song <yhs@...com>
The standalone libbpf repo (https://github.com/libbpf/libbpf/) solved this issue by
providing a custom implementation just to satisfy compilation. I guess the users here
do not use the libbpf repo and instead extract the code directly from the kernel
source tree and try to build it?
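
For reference, the workaround there is just a guarded fallback definition along
these lines (a sketch of the usual pattern, not necessarily the exact code in
that repo):

  /* Fallback so xsk.h compiles without the kernel's compiler.h.
   * Sketch of the common pattern; names match the kernel macros.
   */
  #ifndef likely
  # define likely(x)   __builtin_expect(!!(x), 1)
  #endif
  #ifndef unlikely
  # define unlikely(x) __builtin_expect(!!(x), 0)
  #endif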
Just curious: do you have detailed info about which code ends up on two different
cache lines instead of one, and how much performance degradation that causes?
>
> Fixes: 1cad07884239 ("libbpf: add support for using AF_XDP sockets")
> Signed-off-by: Magnus Karlsson <magnus.karlsson@...el.com>
> ---
> tools/lib/bpf/xsk.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/tools/lib/bpf/xsk.h b/tools/lib/bpf/xsk.h
> index a497f00..3638147 100644
> --- a/tools/lib/bpf/xsk.h
> +++ b/tools/lib/bpf/xsk.h
> @@ -105,7 +105,7 @@ static inline __u32 xsk_cons_nb_avail(struct xsk_ring_cons *r, __u32 nb)
> static inline size_t xsk_ring_prod__reserve(struct xsk_ring_prod *prod,
> size_t nb, __u32 *idx)
> {
> - if (unlikely(xsk_prod_nb_free(prod, nb) < nb))
> + if (xsk_prod_nb_free(prod, nb) < nb)
> return 0;
>
> *idx = prod->cached_prod;
> @@ -129,7 +129,7 @@ static inline size_t xsk_ring_cons__peek(struct xsk_ring_cons *cons,
> {
> size_t entries = xsk_cons_nb_avail(cons, nb);
>
> - if (likely(entries > 0)) {
> + if (entries > 0) {
> /* Make sure we do not speculatively read the data before
> * we have received the packet buffers from the ring.
> */
> --
> 2.7.4
>