Message-ID: <AM6PR05MB5879DF6B2BD7DC426869875ED17B0@AM6PR05MB5879.eurprd05.prod.outlook.com>
Date:   Tue, 26 Feb 2019 14:49:01 +0000
From:   Maxim Mikityanskiy <maximmi@...lanox.com>
To:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Björn Töpel <bjorn.topel@...el.com>,
        Magnus Karlsson <magnus.karlsson@...el.com>,
        "David S. Miller" <davem@...emloft.net>
CC:     Tariq Toukan <tariqt@...lanox.com>,
        Saeed Mahameed <saeedm@...lanox.com>,
        Eran Ben Elisha <eranbe@...lanox.com>
Subject: AF_XDP design flaws

Hi everyone,

I would like to discuss some design flaws of the AF_XDP socket (XSK)
implementation in the kernel. At the moment I don't see a way to work around
them without changing the API, so I would like to make sure that I'm not
missing anything, and to suggest and discuss some possible improvements.

The issues I describe below are caused by the fact that the driver depends on
the application doing some things, and if the application is slow, buggy or
malicious, the driver is forced to busy poll, because there is no notification
mechanism from the application side. I will refer to the i40e driver
implementation a lot, as it is the first implementation of AF_XDP, but the
issues are general and affect any driver. I already considered trying to fix
them at the driver level, but it doesn't seem possible, so it looks like the
behavior and implementation of AF_XDP in the kernel have to be changed.

RX side busy polling
====================

On the RX side, the driver expects the application to put some descriptors in
the Fill Ring. There is no way for the application to notify the driver that
there are more Fill Ring descriptors to take, so the driver is forced to busy
poll the Fill Ring if it gets empty. E.g., the i40e driver does it in NAPI poll:

int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
{
...
                        /* Fails if the Fill Ring has no descriptors to take. */
                        failure = failure ||
                                  !i40e_alloc_rx_buffers_fast_zc(rx_ring,
                                                                 cleaned_count);
...
        /* Returning the full budget keeps NAPI scheduled, i.e. busy polling. */
        return failure ? budget : (int)total_rx_packets;
}

Basically, it means that if there are no descriptors in the Fill Ring, NAPI
will never stop, draining the CPU.

Possible cases when it happens
------------------------------

1. The application is slow: it has received some frames in the RX Ring and is
still handling the data, so it has no free frames to put into the Fill Ring.

2. The application is malicious: it opens an XSK and puts no frames into the
Fill Ring. This can be used as a local DoS attack.

3. The application is buggy and stops filling the Fill Ring for whatever reason
(deadlock, waiting for another blocking operation, other bugs).

Although loading an XDP program requires root access, the DoS attack can be
targeted at setups that already use XDP, i.e. where an XDP program is already
loaded. Even under root, userspace applications should not be able to disrupt
system stability just by calling normal APIs with no intention of harming the
system, yet that is exactly what happens in case 1.

Possible way to solve the issue
-------------------------------

When the driver can't take new Fill Ring frames, it shouldn't busy poll.
Instead, it should signal the failure to the application (e.g., with POLLERR),
and after that it's up to the application to restart polling (e.g., by calling
sendto()) after refilling the Fill Ring. The issue with this approach is that
it changes the API, so we either have to accept that or introduce some API
version field.
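
For illustration, here is a minimal sketch of the application side under this
contract. refill_fill_ring() and process_rx_ring() are hypothetical helpers
standing in for the ring accessors; they are not part of any existing API:

#include <poll.h>
#include <stddef.h>
#include <sys/socket.h>

void refill_fill_ring(void);  /* hypothetical: put free frames into the Fill Ring */
void process_rx_ring(void);   /* hypothetical: consume descriptors from the RX Ring */

static void rx_loop(int xsk_fd)
{
        struct pollfd pfd = { .fd = xsk_fd, .events = POLLIN };

        for (;;) {
                if (poll(&pfd, 1, -1) <= 0)
                        continue;

                if (pfd.revents & POLLERR) {
                        /* The driver ran out of Fill Ring descriptors and
                         * stopped NAPI; refill and restart polling. */
                        refill_fill_ring();
                        sendto(xsk_fd, NULL, 0, MSG_DONTWAIT, NULL, 0);
                }

                if (pfd.revents & POLLIN)
                        process_rx_ring();
        }
}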

TX side getting stuck
=====================

On the TX side, there is the Completion Ring that the application has to
clean. If it doesn't, the i40e driver stops taking descriptors from the TX
Ring. If the application finally completes something, the driver can go on
transmitting, but that would require busy polling the Completion Ring (just
like with the Fill Ring on the RX side). i40e doesn't do that; instead, it
relies on the application to kick the TX by calling sendto(). The issue is
that poll() doesn't return POLLOUT in this case, because the TX Ring is full,
so the application will never call sendto(), and the ring is stuck forever (or
at least until something else triggers NAPI).
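
A minimal sketch of the stuck pattern, assuming the application waits for TX
Ring space with poll():

#include <poll.h>

static void wait_for_tx_space(int xsk_fd)
{
        struct pollfd pfd = { .fd = xsk_fd, .events = POLLOUT };

        /* The TX Ring is full and the Completion Ring hasn't been cleaned:
         * the driver never reports POLLOUT, this call blocks forever, and
         * the sendto() that would kick the TX is never reached. */
        poll(&pfd, 1, -1);
}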

Possible way to solve the issue
-------------------------------

When the driver can't reserve a descriptor in the Completion Ring, it should
signal the failure to the application (e.g., with POLLERR). The application
shouldn't call sendto() every time it sees that the number of outstanding (not
yet completed) frames is greater than zero (as the xdpsock sample does).
Instead, the application should kick the TX only when it wants to flush the
ring, and, in addition, after resolving the cause of the POLLERR, i.e. after
handling Completion Ring entries. The API will also have to change with this
approach.
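
Again for illustration, a sketch of the proposed application-side flow;
clean_completion_ring() and produce_tx_descs() are hypothetical helpers:

#include <poll.h>
#include <stddef.h>
#include <sys/socket.h>

void clean_completion_ring(void); /* hypothetical: reclaim completed frames */
void produce_tx_descs(void);      /* hypothetical: put descriptors into the TX Ring */

static void tx_flush(int xsk_fd)
{
        struct pollfd pfd = { .fd = xsk_fd, .events = POLLOUT };

        produce_tx_descs();
        /* Kick the TX once per batch, not once per outstanding frame. */
        sendto(xsk_fd, NULL, 0, MSG_DONTWAIT, NULL, 0);

        if (poll(&pfd, 1, 0) > 0 && (pfd.revents & POLLERR)) {
                /* The driver couldn't reserve a Completion Ring entry:
                 * handle completions first, then kick the TX again. */
                clean_completion_ring();
                sendto(xsk_fd, NULL, 0, MSG_DONTWAIT, NULL, 0);
        }
}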

Triggering NAPI on a different CPU core
=======================================

.ndo_xsk_async_xmit runs on a random CPU core, so, to preserve CPU affinity,
i40e triggers an interrupt to schedule NAPI, instead of calling napi_schedule
directly. Scheduling NAPI on the correct CPU is what every driver would do, I
guess, but currently it has to be implemented differently in every driver, and
it relies on hardware features (the ability to trigger an IRQ).

I suggest introducing a kernel API that would allow triggering NAPI on a given
CPU. A brief look shows that something like smp_call_function_single_async can
be used (a rough sketch follows the list of advantages below). Advantages:

1. It lifts the hardware requirement to be able to raise an interrupt on demand.

2. It would allow moving common code into the kernel (.ndo_xsk_async_xmit).

3. It is also useful in the situation where CPU affinity changes while NAPI
is polling. Currently, i40e and mlx5e try to stop NAPI polling by returning a
value less than budget if CPU affinity changes. However, there are cases
(e.g., NAPIF_STATE_MISSED) when NAPI will be rescheduled on the wrong CPU.
It's a race between the interrupt, which will move NAPI to the correct CPU,
and __napi_schedule from the wrong CPU. Having an API to schedule NAPI on a
given CPU will benefit both mlx5e and i40e, because when this situation
happens, it kills performance.
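
A minimal sketch of such an API built on smp_call_function_single_async();
the structure and helper names are mine, nothing like this exists in the
kernel today:

#include <linux/smp.h>
#include <linux/netdevice.h>

/* Hypothetical per-NAPI kick context; one instance per napi_struct. */
struct napi_kick {
        call_single_data_t csd;
        struct napi_struct *napi;
};

/* Runs in IPI (hardirq) context on the target CPU. */
static void napi_kick_fn(void *info)
{
        struct napi_kick *kick = info;

        napi_schedule_irqoff(kick->napi);
}

/* Schedule NAPI on @cpu without requiring the HW to be able to raise
 * an interrupt on demand. */
static void napi_schedule_on_cpu(struct napi_kick *kick, int cpu)
{
        kick->csd.func = napi_kick_fn;
        kick->csd.info = kick;
        smp_call_function_single_async(cpu, &kick->csd);
}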

I would be happy to hear your thoughts about these issues.

Thanks,
Max
