Date:   Tue, 8 Sep 2020 10:24:38 -0700
From:   Jakub Kicinski <kuba@...nel.org>
To:     Björn Töpel <bjorn.topel@...el.com>
Cc:     Jesper Dangaard Brouer <brouer@...hat.com>,
        Björn Töpel 
        <bjorn.topel@...il.com>, Eric Dumazet <eric.dumazet@...il.com>,
        ast@...nel.org, daniel@...earbox.net, netdev@...r.kernel.org,
        bpf@...r.kernel.org, magnus.karlsson@...el.com,
        davem@...emloft.net, john.fastabend@...il.com,
        intel-wired-lan@...ts.osuosl.org
Subject: Re: [PATCH bpf-next 0/6] xsk: exit NAPI loop when AF_XDP Rx ring is
 full

On Tue, 8 Sep 2020 08:58:30 +0200 Björn Töpel wrote:
> >> As for this patch set, I think it would make sense to pull it in since
> >> it makes the single-core scenario *much* better, and it is pretty
> >> simple. Then do the application polling as another, potentially,
> >> improvement series.  
> > 
> > Up to you, it's extra code in the driver so mostly your code to
> > maintain.
> > 
> > I think that if we implement what I described above - everyone will
> > use that on a single core setup, so this set would be dead code
> > (assuming RQ is sized appropriately). But again, your call :)
> 
> Now, I agree that the busy-poll you describe above would be the best
> option, but from my perspective it's a much larger set that involves
> experimenting. I will explore that, but I still think this series should
> go in sooner to make the single core scenario usable *today*.
> 
> Ok, back to the busy-poll ideas. I'll call your idea "strict busy-poll",
> i.e. the NAPI loop is *only* driven by userland, and interrupts stay
> disabled. "Syscall driven poll-mode driver". :-)
> 
> On the driver side (again, only talking Intel here, since that's what I
> know the details of), the NAPI context would only cover AF_XDP queues,
> so that other queues are not starved.
> 
> Any ideas how strict busy-poll would look, API/implementation-wise? An
> option only for AF_XDP sockets? Would this make sense for regular
> sockets? If so, maybe extend the existing NAPI busy-poll with a "strict"
> mode?

For AF_XDP and other sockets I think it should be quite straightforward.

For AF_XDP just implement current busy poll.

Then for all socket types add a new sockopt which sets a timeout on how
long IRQs can be suppressed (we don't want an application crash or hang
to knock the system off the network), or one which just enables the
feature, with the timeout taken from a sysctl.

Then make sure that at the end of polling NAPI doesn't get rescheduled,
and set some bit which prevents napi_schedule_prep() from letting normal
IRQ processing schedule it, too. Set a timer for the timeout handling to
undo all this.

What I haven't figured out in my head is how/if this relates to the
ongoing wq/threaded NAPI polling work 🤔 but that shouldn't stop you.

> I'll start playing around a bit, but again, I think this simple series
> should go in just to make AF_XDP single core usable *today*.

No objection from me.
