Date:   Tue, 4 Feb 2020 09:09:22 -0800
From:   Jakub Kicinski <kuba@...nel.org>
To:     Toke Høiland-Jørgensen <toke@...hat.com>
Cc:     Jesper Dangaard Brouer <brouer@...hat.com>,
        David Ahern <dsahern@...il.com>,
        David Ahern <dsahern@...nel.org>, netdev@...r.kernel.org,
        prashantbhole.linux@...il.com, jasowang@...hat.com,
        davem@...emloft.net, mst@...hat.com, toshiaki.makita1@...il.com,
        daniel@...earbox.net, john.fastabend@...il.com, ast@...nel.org,
        kafai@...com, songliubraving@...com, yhs@...com, andriin@...com,
        David Ahern <dahern@...italocean.com>,
        Björn Töpel <bjorn.topel@...el.com>,
        "Karlsson, Magnus" <magnus.karlsson@...el.com>
Subject: Re: [PATCH bpf-next 03/12] net: Add IFLA_XDP_EGRESS for XDP
 programs in the egress path

On Tue, 04 Feb 2020 12:00:40 +0100, Toke Høiland-Jørgensen wrote:
> Jesper Dangaard Brouer <brouer@...hat.com> writes:
> > On Mon, 03 Feb 2020 21:13:24 +0100
> > Toke Høiland-Jørgensen <toke@...hat.com> wrote:
> >  
> >> Oops, I see I forgot to reply to this bit:
> >>   
> >> >> Yeah, but having the low-level details available to the XDP program
> >> >> (such as HW queue occupancy for the egress hook) is one of the benefits
> >> >> of XDP, isn't it?    
> >> >
> >> > I think I glossed over the hope for having access to HW queue occupancy
> >> > - what exactly are you after? 
> >> >
> >> > I don't think one can get anything beyond BQL-type granularity.
> >> > Reading over PCIe is out of the question, and device write-back at
> >> > high granularity would burn through way too much bus throughput.    
> >> 
> >> This was Jesper's idea originally, so maybe he can explain better; but
> >> as I understood it, he basically wanted to expose the same information
> >> that BQL has to eBPF, making it possible for an eBPF program to either
> >> (re-)implement BQL with its own custom policy, or react to HWQ pressure
> >> in some other way, such as by load balancing to another interface.  
> >
> > Yes, and I also have plans that go beyond BQL. But let me start by
> > explaining the BQL part and answering Toke's question below.
> >
> > On Mon, 03 Feb 2020 20:56:03 +0100 Toke wrote:  
> >> [...] Hmm, I wonder if a TX driver hook is enough?  
> >
> > Short answer is no, a TX driver hook is not enough.  The queue state
> > info the TX driver hook has access to needs to be updated once the
> > hardware has "confirmed" that the TX-DMA operation has completed.  For
> > BQL/DQL this update happens during TX-DMA completion/cleanup (see the
> > call sites for netdev_tx_completed_queue()).  (As Jakub wisely points
> > out, we cannot query the device directly due to the performance
> > implications.)  It doesn't need to be a new BPF hook, just something
> > that updates the queue state info (we could piggyback on the
> > netdev_tx_completed_queue() call or give the TX hook access to
> > dev_queue->dql).  

Interesting, that model does make sense to me.
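
(For reference, the state in question already exists per TX queue: it
lives in dev_queue->dql when CONFIG_BQL is set and is refreshed from the
driver's TX completion path via netdev_tx_completed_queue(). A minimal
sketch of what "occupancy" means in those terms -- the helper below is
made up, only the dql fields and update points are real:)

/* Sketch only: txq_inflight_bytes() is a made-up name; the dql fields
 * are the real BQL bookkeeping, updated by netdev_tx_sent_queue() on
 * the send path and netdev_tx_completed_queue() on TX completion. */
#include <linux/netdevice.h>

#ifdef CONFIG_BQL
static inline unsigned int txq_inflight_bytes(const struct netdev_queue *txq)
{
	/* bytes handed to the HW ring minus bytes the HW has completed */
	return txq->dql.num_queued - txq->dql.num_completed;
}
#endif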

> The question is whether this can't simply be done through bpf helpers?
> bpf_get_txq_occupancy(ifindex, txqno)?

Helper vs. dev_queue->dql field access seems like a technicality.
The usual consideration of implementation flexibility vs. performance
and simplicity applies.. I guess?
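
To make the strawman concrete: if a helper along those lines existed
(it does not; the name, the helper id and the egress attach point below
are all made up), an egress program could react to occupancy roughly
like this:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical helper: would return bytes in flight on (ifindex, txqno),
 * i.e. the dql num_queued - num_completed delta.  The helper id is a
 * placeholder; no such helper exists today, so this won't load. */
static long (*bpf_get_txq_occupancy)(__u32 ifindex, __u32 txqno) =
	(void *) 0xffff;

SEC("xdp")	/* stand-in for a not-yet-existing egress attach point */
int xdp_egress_backpressure(struct xdp_md *ctx)
{
	/* a real egress hook would presumably expose the egress ifindex
	 * and TX queue in its context; ingress_ifindex is a placeholder */
	long inflight = bpf_get_txq_occupancy(ctx->ingress_ifindex, 0);

	if (inflight > 64 * 1024)	/* arbitrary byte threshold */
		return XDP_DROP;	/* or mark ECN, redirect elsewhere, ... */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";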

> > Regarding "where is the queue": for me the XDP-TX queue is the NIC
> > hardware queue, which this BPF hook has some visibility into and can do
> > filtering on. (Imagine that my TX queue is bandwidth-limited; then I
> > can shrink the packet size and still send a "congestion" packet to my
> > receiver.)  
> 
> I'm not sure the hardware queues will be enough, though. Unless I'm
> misunderstanding something, hardware queues are (1) fairly short and (2)
> FIFO. So, say we wanted to implement fq_codel for XDP forwarding: we'd
> still need a software queueing layer on top of the hardware queue.

Jesper makes a very interesting point, though. If all the implementation
wants is FIFO queues which are serviced in some simple manner (that is
something that can be offloaded), we should support that.

That means REDIRECT can target multiple TX queues, and we need an API
to control the queue allocation..
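
As a rough sketch of the shape such an API could take (a per-TXQ map
type does not exist; this just reuses the existing devmap +
bpf_redirect_map() pattern and pretends the map key picks a TX queue):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Stand-in: today a devmap value is just an ifindex; the idea above
 * would need a map type whose value identifies (ifindex, txq). */
struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u32);
} txq_map SEC(".maps");

SEC("xdp")
int xdp_pick_txq(struct xdp_md *ctx)
{
	/* trivial queue-selection policy: RX queue -> map slot */
	__u32 slot = ctx->rx_queue_index & 63;

	/* on lookup failure, fall back to XDP_PASS */
	return bpf_redirect_map(&txq_map, slot, XDP_PASS);
}

char _license[] SEC("license") = "GPL";

The interesting part is of course who populates txq_map and how TX
queues get allocated to it -- which is exactly the API question above.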

> If the hardware is EDT-aware this may change, I suppose, but I'm not
> sure if we can design the XDP queueing primitives with this assumption? :)

But I agree with you as well. I think both HW and SW feeding need to
be supported. The HW implementations are always necessarily behind the
ideas people have implemented and tested in SW..

> > The bigger picture is that I envision the XDP-TX/egress hook can open
> > up the ability to take advantage of NIC hardware TX queue features. This
> > also ties into the queue abstraction work by Björn+Magnus. Today NIC
> > hardware can do a million TX queues, and hardware can also do rate
> > limiting per queue. Thus, I also envision that the XDP-TX/egress hook
> > can choose/change the TX queue the packet is queued/sent on (we can
> > likely just overload XDP_REDIRECT and have a new BPF map type for
> > this).  

I wonder what that does to our HW offload model, which is based on
TC Qdisc offload today :S Do we use the TC API to control the
configuration of XDP queues? :S

> Yes, I think I mentioned in another email that putting all the queueing
> smarts into the redirect map was also something I'd considered (well, I
> do think we've discussed this in the past, so maybe not so surprising if
> we're thinking along the same lines) :)
> 
> But the implication of this is also that an actual TX hook in the driver
> need not necessarily incorporate a lot of new functionality, as it can
> control the queueing through a combination of BPF helpers and map
> updates?

True, it's the dequeuing that's on the TX side, so we could go as far
as putting all the enqueuing logic in the RX prog..

To answer your question from the other email, Toke: my basic model was
kind of similar to TC Qdiscs. XDP redirect selects a device, then that
device has enqueue and dequeue programs. The enqueue program can be run
in the XDP_REDIRECT context; dequeue is run every time NAPI has cleaned
up some space on the TX descriptor ring. There is a "queue state", but
the FIFOs etc. are sort of an internal detail that the enqueue and
dequeue programs share only between each other. To be clear, this is not
a suggestion of how things should be, it's just what sprang to my mind
without much thought.
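
In (very) rough code, just to illustrate that model and not to propose
an API -- the attach points below don't exist, and the shared "queue
state" is reduced to a pair of counters rather than a real FIFO of
packets -- it could look something like:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Shared "queue state" visible only to the two (hypothetical) programs.
 * A real implementation would presumably also hold the queued packets
 * themselves; counters keep the sketch small. */
struct qstate {
	__u64 enq_pkts;
	__u64 deq_pkts;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, struct qstate);
} queue_state SEC(".maps");

/* Stand-in for the hypothetical enqueue hook, which per the model above
 * would run in the XDP_REDIRECT context for the chosen egress device. */
SEC("xdp")
int egress_enqueue(struct xdp_md *ctx)
{
	__u32 k = 0;
	struct qstate *q = bpf_map_lookup_elem(&queue_state, &k);

	if (!q)
		return XDP_ABORTED;
	if (q->enq_pkts - q->deq_pkts > 1024)	/* arbitrary FIFO depth */
		return XDP_DROP;		/* tail-drop policy */
	__sync_fetch_and_add(&q->enq_pkts, 1);
	return XDP_PASS;	/* "pass" == let the packet be queued */
}

/* Stand-in for the hypothetical dequeue hook, run once NAPI has freed
 * space on the TX descriptor ring. */
SEC("xdp")
int egress_dequeue(struct xdp_md *ctx)
{
	__u32 k = 0;
	struct qstate *q = bpf_map_lookup_elem(&queue_state, &k);

	if (!q)
		return XDP_ABORTED;
	__sync_fetch_and_add(&q->deq_pkts, 1);
	return XDP_PASS;	/* "pass" == hand the next packet to the TX ring */
}

char _license[] SEC("license") = "GPL";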
