Message-ID: <877cxtgnjh.fsf@toke.dk>
Date:   Wed, 11 Jan 2023 15:21:06 +0100
From:   Toke Høiland-Jørgensen <toke@...hat.com>
To:     Magnus Karlsson <magnus.karlsson@...il.com>,
        Paolo Abeni <pabeni@...hat.com>
Cc:     Shawn Bohrer <sbohrer@...udflare.com>, netdev@...r.kernel.org,
        bpf@...r.kernel.org, bjorn@...nel.org, kernel-team@...udflare.com,
        davem@...emloft.net
Subject: Re: [PATCH] veth: Fix race with AF_XDP exposing old or
 uninitialized descriptors

Magnus Karlsson <magnus.karlsson@...il.com> writes:

> On Thu, Dec 22, 2022 at 11:18 AM Paolo Abeni <pabeni@...hat.com> wrote:
>>
>> On Tue, 2022-12-20 at 12:59 -0600, Shawn Bohrer wrote:
>> > When AF_XDP is used on a veth interface, the RX ring is updated in
>> > two steps.  veth_xdp_rcv() removes packet descriptors from the FILL
>> > ring, fills them, and places them in the RX ring, updating the
>> > cached_prod pointer.  Later, xdp_do_flush() syncs the RX ring prod
>> > pointer with the cached_prod pointer, allowing user-space to see the
>> > recently filled descriptors.  The rings are intended to be SPSC;
>> > however, the existing order in veth_poll() allows xdp_do_flush() to
>> > run concurrently with another CPU, creating a race condition that
>> > allows user-space to see old or uninitialized descriptors in the RX
>> > ring.  This bug has been observed in production systems.
>> >
>> > To summarize, we are expecting this ordering:
>> >
>> > CPU 0 __xsk_rcv_zc()
>> > CPU 0 __xsk_map_flush()
>> > CPU 2 __xsk_rcv_zc()
>> > CPU 2 __xsk_map_flush()
>> >
>> > But we are seeing this order:
>> >
>> > CPU 0 __xsk_rcv_zc()
>> > CPU 2 __xsk_rcv_zc()
>> > CPU 0 __xsk_map_flush()
>> > CPU 2 __xsk_map_flush()
>> >
>> > This occurs because we rely on NAPI to ensure that only one
>> > napi_poll handler is running at a time for the given veth receive
>> > queue.  napi_schedule_prep() will prevent multiple instances from
>> > getting scheduled.  However, calling napi_complete_done() signals
>> > that this napi_poll is complete and allows subsequent calls to
>> > napi_schedule_prep() and __napi_schedule() to succeed in scheduling
>> > a concurrent napi_poll before xdp_do_flush() has been called.  For
>> > the veth driver, a concurrent call to napi_schedule_prep() and
>> > __napi_schedule() can occur on a different CPU because the veth xmit
>> > path can additionally schedule a napi_poll, creating the race.
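
(To make the ordering above concrete: the problematic pattern boils
down to something like the sketch below.  foo_poll()/foo_rcv() are
made-up stand-ins for the driver-specific bits, not the actual veth
code.)

static int foo_poll(struct napi_struct *napi, int budget)
{
	int done;

	/* RX processing; may xdp_do_redirect() packets, queueing work
	 * that only becomes visible once xdp_do_flush() runs.
	 */
	done = foo_rcv(napi, budget);

	if (done < budget)
		napi_complete_done(napi, done);

	/* Too late: after napi_complete_done() another CPU can pass
	 * napi_schedule_prep() and start a second poll for this queue,
	 * racing with this flush of the shared state.
	 */
	xdp_do_flush();

	return done;
}
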
>>
>> The above looks like a generic problem that other drivers could hit.
>> Perhaps it would be worth updating the xdp_do_flush() doc text to
>> explicitly mention that it must be called before napi_complete_done().
>
> Good observation. I took a quick peek at this and it seems there are
> at least 5 more drivers that can call napi_complete_done() before
> xdp_do_flush():
>
> drivers/net/ethernet/qlogic/qede/
> drivers/net/ethernet/freescale/dpaa2
> drivers/net/ethernet/freescale/dpaa
> drivers/net/ethernet/microchip/lan966x
> drivers/net/virtio_net.c
>
> The question is then whether this race can occur in these five
> drivers.  Dpaa2 has AF_XDP zero-copy support implemented, so it can
> suffer from this race, as the Tx zero-copy path is basically just a
> napi_schedule() that can be invoked from multiple processes at the
> same time.  As for the others, I do not know.
>
> Would it be prudent to just switch the order of xdp_do_flush() and
> napi_complete_done() in all these drivers, or would that be too
> defensive?

We also rely on being inside a single NAPI instance through to the
xdp_do_flush() call for RCU protection of all in-kernel data
structures[0].  I'm not sure whether this leads to actual real-world
bugs for the in-kernel path, but conceptually it's wrong at least.  So
yeah, I think we should definitely swap the order everywhere and
document this!
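
I.e., with the swap, every XDP-capable poll handler would end up
looking roughly like the sketch below (again with foo_poll()/foo_rcv()
as made-up stand-ins, not any particular driver):

static int foo_poll(struct napi_struct *napi, int budget)
{
	int done = foo_rcv(napi, budget);

	/* Commit all redirects while this is still guaranteed to be the
	 * only poll instance for the queue (and while still inside the
	 * RCU-protected NAPI context).
	 */
	xdp_do_flush();

	if (done < budget)
		napi_complete_done(napi, done);

	return done;
}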

-Toke

[0] See https://lore.kernel.org/r/20210624160609.292325-1-toke@redhat.com
