Message-ID: <CAJ8uoz2M0Xj_maD3jZeZedrUXGNJqvbV_DyC2A8Yh9R6z7gfsg@mail.gmail.com>
Date: Sat, 28 Mar 2020 10:18:45 +0100
From: Magnus Karlsson <magnus.karlsson@...il.com>
To: Fletcher Dunn <fletcherd@...vesoftware.com>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
Andrii Nakryiko <andriin@...com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>,
Brandon Gilmore <bgilmore@...vesoftware.com>,
Steven Noonan <steven@...vesoftware.com>
Subject: Re: [PATCH bpf-next] xsk: Init all ring members in xsk_umem__create
and xsk_socket__create
On Fri, Mar 27, 2020 at 4:40 AM Fletcher Dunn
<fletcherd@...vesoftware.com> wrote:
>
> Fix a sharp edge in xsk_umem__create and xsk_socket__create. Almost all of
> the members of the ring buffer structs are initialized, but not all of the
> "cached_xxx" variables are; the caller is currently required to zero them.
> This is needlessly dangerous: if the caller does not, the results can be
> very bad. For example, xsk_prod_nb_free and xsk_cons_nb_avail can return
> values greater than the size of the queue, and xsk_ring_cons__peek can
> return an index that does not refer to an item that has been queued.
>
> I have confirmed that without this change, my program misbehaves unless I
> memset the ring buffers to zero before calling these functions. With the
> change applied, my program works with or without the memset.
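
For context on the failure mode described above: a rough, simplified sketch
(not the verbatim libbpf source) of how xsk_prod_nb_free() in
tools/lib/bpf/xsk.h derives the number of free entries from the cached
values. If cached_cons or cached_prod start out as uninitialized garbage,
the unsigned subtraction wraps and the result can exceed the ring size.

#include <bpf/xsk.h>	/* struct xsk_ring_prod, as installed by libbpf */

/* Sketch of the free-entry calculation; field names follow struct
 * xsk_ring_prod (cached_prod, cached_cons, consumer, size).
 */
static inline __u32 prod_nb_free_sketch(struct xsk_ring_prod *r, __u32 nb)
{
	__u32 free_entries = r->cached_cons - r->cached_prod;

	if (free_entries >= nb)
		return free_entries;

	/* Refresh from the shared consumer pointer.  cached_cons is kept
	 * r->size ahead of the real consumer index so that the fast path
	 * above avoids the addition -- this is the invariant the patch
	 * below establishes at ring creation time.
	 */
	r->cached_cons = *r->consumer + r->size;

	return r->cached_cons - r->cached_prod;
}
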
Thank you Fletcher for catching this. Appreciated.
/Magnus
Acked-by: Magnus Karlsson <magnus.karlsson@...el.com>
> Signed-off-by: Fletcher Dunn <fletcherd@...vesoftware.com>
>
> ---
>
> diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
> index 9807903f121e..f7f4efb70a4c 100644
> --- a/tools/lib/bpf/xsk.c
> +++ b/tools/lib/bpf/xsk.c
> @@ -280,7 +280,11 @@ int xsk_umem__create_v0_0_4(struct xsk_umem **umem_ptr, void *umem_area,
> fill->consumer = map + off.fr.consumer;
> fill->flags = map + off.fr.flags;
> fill->ring = map + off.fr.desc;
> - fill->cached_cons = umem->config.fill_size;
> + fill->cached_prod = *fill->producer;
> + /* cached_cons is "size" bigger than the real consumer pointer
> + * See xsk_prod_nb_free
> + */
> + fill->cached_cons = *fill->consumer + umem->config.fill_size;
>
> map = mmap(NULL, off.cr.desc + umem->config.comp_size * sizeof(__u64),
> PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, umem->fd,
> @@ -297,6 +301,8 @@ int xsk_umem__create_v0_0_4(struct xsk_umem **umem_ptr, void *umem_area,
> comp->consumer = map + off.cr.consumer;
> comp->flags = map + off.cr.flags;
> comp->ring = map + off.cr.desc;
> + comp->cached_prod = *comp->producer;
> + comp->cached_cons = *comp->consumer;
>
> *umem_ptr = umem;
> return 0;
> @@ -672,6 +678,8 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
> rx->consumer = rx_map + off.rx.consumer;
> rx->flags = rx_map + off.rx.flags;
> rx->ring = rx_map + off.rx.desc;
> + rx->cached_prod = *rx->producer;
> + rx->cached_cons = *rx->consumer;
> }
> xsk->rx = rx;
>
> @@ -691,7 +699,11 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
> tx->consumer = tx_map + off.tx.consumer;
> tx->flags = tx_map + off.tx.flags;
> tx->ring = tx_map + off.tx.desc;
> - tx->cached_cons = xsk->config.tx_size;
> + tx->cached_prod = *tx->producer;
> + /* cached_cons is r->size bigger than the real consumer pointer
> + * See xsk_prod_nb_free
> + */
> + tx->cached_cons = *tx->consumer + xsk->config.tx_size;
> }
> xsk->tx = tx;
>
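
For anyone running a libbpf that does not yet have this patch, a minimal
sketch of the caller-side workaround mentioned in the commit message:
zero the ring structs before handing them to libbpf. The helper name and
its parameters are illustrative only; the API calls are the ordinary
xsk_umem__create()/xsk_socket__create() from tools/lib/bpf/xsk.h, used
here with default (NULL) configs.

#include <string.h>
#include <bpf/xsk.h>

/* umem_area/size/ifname/queue_id and the ring structs are supplied by the
 * caller; the rings must stay alive for as long as the socket is in use.
 */
int create_umem_and_socket(void *umem_area, __u64 size,
			   const char *ifname, __u32 queue_id,
			   struct xsk_ring_prod *fill,
			   struct xsk_ring_cons *comp,
			   struct xsk_ring_cons *rx,
			   struct xsk_ring_prod *tx,
			   struct xsk_umem **umem,
			   struct xsk_socket **xsk)
{
	int err;

	/* The workaround: clear the rings so cached_prod/cached_cons do not
	 * start out as stack garbage on pre-fix libbpf versions.
	 */
	memset(fill, 0, sizeof(*fill));
	memset(comp, 0, sizeof(*comp));
	memset(rx, 0, sizeof(*rx));
	memset(tx, 0, sizeof(*tx));

	err = xsk_umem__create(umem, umem_area, size, fill, comp, NULL);
	if (err)
		return err;

	return xsk_socket__create(xsk, ifname, queue_id, *umem, rx, tx, NULL);
}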