Message-ID: <878slhdpap.fsf@toke.dk>
Date:   Wed, 05 Feb 2020 16:30:54 +0100
From:   Toke Høiland-Jørgensen <toke@...hat.com>
To:     Jakub Kicinski <kuba@...nel.org>
Cc:     Jesper Dangaard Brouer <brouer@...hat.com>,
        David Ahern <dsahern@...il.com>,
        David Ahern <dsahern@...nel.org>, netdev@...r.kernel.org,
        prashantbhole.linux@...il.com, jasowang@...hat.com,
        davem@...emloft.net, mst@...hat.com, toshiaki.makita1@...il.com,
        daniel@...earbox.net, john.fastabend@...il.com, ast@...nel.org,
        kafai@...com, songliubraving@...com, yhs@...com, andriin@...com,
        David Ahern <dahern@...italocean.com>,
        Björn Töpel <bjorn.topel@...el.com>,
        "Karlsson\, Magnus" <magnus.karlsson@...el.com>
Subject: Re: [PATCH bpf-next 03/12] net: Add IFLA_XDP_EGRESS for XDP programs in the egress path

Jakub Kicinski <kuba@...nel.org> writes:

> On Tue, 04 Feb 2020 12:00:40 +0100, Toke Høiland-Jørgensen wrote:
>> Jesper Dangaard Brouer <brouer@...hat.com> writes:
>> > On Mon, 03 Feb 2020 21:13:24 +0100
>> > Toke Høiland-Jørgensen <toke@...hat.com> wrote:
>> >  
>> >> Oops, I see I forgot to reply to this bit:
>> >>   
>> >> >> Yeah, but having the low-level details available to the XDP program
>> >> >> (such as HW queue occupancy for the egress hook) is one of the benefits
>> >> >> of XDP, isn't it?    
>> >> >
>> >> > I think I glossed over the hope for having access to HW queue occupancy
>> >> > - what exactly are you after? 
>> >> >
>> >> > I don't think one can get anything beyond BQL-type granularity.
>> >> > Reading over PCIe is out of the question, and device write-back at
>> >> > high granularity would burn through way too much bus throughput.
>> >> 
>> >> This was Jesper's idea originally, so maybe he can explain better; but
>> >> as I understood it, he basically wanted to expose the same information
>> >> that BQL has to eBPF, making it possible for an eBPF program to either
>> >> (re-)implement BQL with its own custom policy, or react to HWQ pressure
>> >> in some other way, such as by load balancing to another interface.
>> >
>> > Yes, and I also have plans that go beyond BQL. But let me start by
>> > explaining the BQL part, and answer Toke's question below.
>> >
>> > On Mon, 03 Feb 2020 20:56:03 +0100 Toke wrote:  
>> >> [...] Hmm, I wonder if a TX driver hook is enough?  
>> >
>> > Short answer is no, a TX driver hook is not enough.  The queue state
>> > info the TX driver hook has access to needs to be updated once the
>> > hardware has "confirmed" that the TX-DMA operation has completed.  For
>> > BQL/DQL this update happens during TX-DMA completion/cleanup (see the
>> > call sites for netdev_tx_completed_queue()).  (As Jakub wisely points
>> > out, we cannot query the device directly due to performance
>> > implications.)  It doesn't need to be a new BPF hook, just something
>> > that updates the queue state info (we could piggy-back on the
>> > netdev_tx_completed_queue() call or give the TX hook access to
>> > dev_queue->dql).
>
> Interesting, that model does make sense to me.
>
>> The question is whether this can't simply be done through bpf helpers?
>> bpf_get_txq_occupancy(ifindex, txqno)?
>
> Helper vs dev_queue->dql field access seems like a technicality.
> The usual trade-off between flexibility of implementation and
> performance/simplicity applies.. I guess?

Yeah, that was my point: We can do whatever makes the most sense for
that; it doesn't necessarily have to be something that's baked into the
TX hook point.
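
Just to make that a bit more concrete, here's a rough sketch of what an
egress program using such a queue-state helper could look like. To be
clear, this is illustrative only: bpf_get_txq_occupancy() is the
hypothetical helper from my earlier mail (the helper ID below is made
up), the "xdp_egress" section name and the egress_ifindex context field
are assumptions about this series, and the occupancy counter is assumed
to be kept up to date from the netdev_tx_completed_queue() path as
Jesper describes:

/* Illustrative sketch only -- bpf_get_txq_occupancy() does not exist;
 * it stands in for the helper discussed above.  The idea: the egress
 * program reads a BQL-like byte count for its TX queue and sheds load
 * when the queue is congested.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define OCCUPANCY_LIMIT_BYTES   (256 * 1024)
#define BACKUP_IFINDEX          4       /* made-up second interface */

/* Hypothetical helper: bytes currently in flight on (ifindex, txq). */
static long (*bpf_get_txq_occupancy)(__u32 ifindex, __u32 txqno) =
        (void *)200;    /* fake helper ID, for illustration only */

SEC("xdp_egress")       /* assumed section name for the egress hook */
int shed_on_pressure(struct xdp_md *ctx)
{
        long inflight = bpf_get_txq_occupancy(ctx->egress_ifindex, 0);

        if (inflight < 0)
                return XDP_PASS;        /* no queue state available */

        if (inflight > OCCUPANCY_LIMIT_BYTES)
                /* Load-balance away from the congested device. */
                return bpf_redirect(BACKUP_IFINDEX, 0);

        return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Whether the queue state is exposed through a helper like this or
through direct dev_queue->dql access is, as you say, a technicality;
the program would look much the same either way.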

>> > Regarding "where is the queue": For me the XDP-TX queue is the NIC
>> > hardware queue, which this BPF hook has some visibility into and can do
>> > filtering on. (Imagine that my TX queue is bandwidth-limited; then I
>> > can shrink the packet size and still send a "congestion" packet to my
>> > receiver.)
>> 
>> I'm not sure the hardware queues will be enough, though. Unless I'm
>> misunderstanding something, hardware queues are (1) fairly short and (2)
>> FIFO. So, say we wanted to implement fq_codel for XDP forwarding: we'd
>> still need a software queueing layer on top of the hardware queue.
>
> Jesper makes a very interesting point, though. If all the implementation
> wants is FIFO queues which are serviced in some simple manner (that is,
> something that can be offloaded), we should support that.
>
> That means REDIRECT can target multiple TX queues, and we need an API
> to control the queue allocation..

Yes. I think that should be part of the "hwq API" that we presented at
LPC last year[0].
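
For the redirect side, I picture something along these lines (again
just a sketch: BPF_MAP_TYPE_DEVMAP is standing in for whatever new
per-TX-queue map type we end up with, and the actual queue allocation
would come from that hwq API):

/* Sketch: spread redirected traffic across a set of pre-allocated HW
 * TX queues.  BPF_MAP_TYPE_DEVMAP is a placeholder for the "new map
 * type" Jesper mentions, where each entry would name a (device, HW TX
 * queue) pair that XDP_REDIRECT can target.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define NUM_TXQS 4      /* queues requested through the hwq API */

struct {
        __uint(type, BPF_MAP_TYPE_DEVMAP);      /* placeholder map type */
        __uint(max_entries, NUM_TXQS);
        __type(key, __u32);
        __type(value, __u32);
} txq_map SEC(".maps");

SEC("xdp")
int pick_txq(struct xdp_md *ctx)
{
        /* Pick a TX queue based on the RX queue the packet arrived on;
         * any flow-consistent choice would do. */
        __u32 txq = ctx->rx_queue_index % NUM_TXQS;

        return bpf_redirect_map(&txq_map, txq, 0);
}

char _license[] SEC("license") = "GPL";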

>> If the hardware is EDT-aware this may change, I suppose, but I'm not
>> sure if we can design the XDP queueing primitives with this assumption? :)
>
> But I agree with you as well. I think both HW and SW feeding need to
> be supported. The HW implementations are always necessarily behind the
> ideas people have implemented and tested in SW..

Exactly. And even though we mostly think about 10-100G network
interfaces, I believe there is also potential for this to be useful at
the low end (think small embedded routers that are very CPU
constrained). In these sorts of environments software queueing is more
feasible.

>> > The bigger picture is that I envision the XDP-TX/egress hook can
>> > open the door to taking advantage of NIC hardware TX-queue features.
>> > This also ties into the queue abstraction work by Björn+Magnus. Today
>> > NIC hardware can do a million TX queues, and hardware can also do rate
>> > limiting per queue. Thus, I also envision that the XDP-TX/egress hook
>> > can choose/change the TX queue the packet is queued/sent on (we can
>> > likely just overload XDP_REDIRECT and add a new BPF map type for
>> > this).
>
> I wonder what that does to our HW offload model, which is based on
> TC Qdisc offload today :S Do we use the TC API to control the
> configuration of XDP queues? :S

My thought was that it could be baked into the same API[0]. I.e., add a
way to say not just "I want 3 hardware queues for this", but also "I
want 3 hardware queues, with fq_codel (or whatever) on top".
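
Purely as a strawman, the request could carry something like the struct
below; nothing like this exists, and all the names and fields are made
up just to show the "N queues plus a scheduler on top" shape:

#include <linux/types.h>

/* Hypothetical "hwq API" request -- invented for illustration. */
struct hwq_alloc_req {
        __u32 ifindex;          /* device to allocate the queues on */
        __u32 num_queues;       /* e.g. 3 dedicated HW TX queues */
        __u32 rate_kbps;        /* per-queue HW rate limit, 0 = none */
        char  sched[16];        /* SW scheduler on top, e.g. "fq_codel" */
};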

>> Yes, I think I mentioned in another email that putting all the queueing
>> smarts into the redirect map was also something I'd considered (well, I
>> do think we've discussed this in the past, so maybe not so surprising if
>> we're thinking along the same lines) :)
>> 
>> But the implication of this is also that an actual TX hook in the driver
>> need not necessarily incorporate a lot of new functionality, as it can
>> control the queueing through a combination of BPF helpers and map
>> updates?
>
> True, it's the dequeuing that's on the TX side, so we could go as far
> as putting all the enqueuing logic in the RX prog..
>
> To answer your question from the other email, Toke, my basic model was
> kind of similar to TC Qdiscs. XDP redirect selects a device, then that
> device has an enqueue and a dequeue program. The enqueue program can be
> run in the XDP_REDIRECT context; dequeue is run every time NAPI cleans
> up some space on the TX descriptor ring. There is a "queue state", but
> the FIFOs etc. are sort of an internal detail that the enqueue and
> dequeue programs only share between each other. To be clear, this is
> not a suggestion of how things should be; it's what sprang to my mind
> without thinking.

Ah, that's interesting. I do agree that it may be better to bake this
into the existing hooks rather than have an additional set of
(en/de)queue hooks.
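
For my own understanding, here is your sketch written down as C; none
of these hooks, context structs or return conventions exist, they're
invented purely to illustrate the enqueue/dequeue split and the shared
queue state:

#include <linux/types.h>

#define NUM_FIFOS 4

/* "Queue state" shared only between the two programs; in reality this
 * would live in a BPF map or kernel-provided storage. */
struct fifo_state {
        __u32 len[NUM_FIFOS];   /* packets currently queued per FIFO */
        __u32 limit;            /* per-FIFO cap */
};

struct xdp_enqueue_md {         /* hypothetical: seen at XDP_REDIRECT time */
        __u32 egress_ifindex;
        __u32 hash;             /* flow hash used to pick a FIFO */
};

struct xdp_dequeue_md {         /* hypothetical: run when NAPI frees TX slots */
        __u32 egress_ifindex;
        __u32 descs_free;       /* descriptor slots NAPI just reclaimed */
};

/* Enqueue: pick a FIFO by flow hash, drop when it is full. */
static int xdp_enqueue_prog(struct xdp_enqueue_md *ctx, struct fifo_state *st)
{
        __u32 q = ctx->hash % NUM_FIFOS;

        if (st->len[q] >= st->limit)
                return -1;              /* drop */
        st->len[q]++;
        return q;                       /* FIFO to append the frame to */
}

/* Dequeue: simple round-robin over the non-empty FIFOs. */
static int xdp_dequeue_prog(struct xdp_dequeue_md *ctx, struct fifo_state *st)
{
        static __u32 next;
        __u32 i;

        if (!ctx->descs_free)
                return -1;              /* no room on the TX ring yet */

        for (i = 0; i < NUM_FIFOS; i++) {
                __u32 q = (next + i) % NUM_FIFOS;

                if (st->len[q]) {
                        st->len[q]--;
                        next = q + 1;
                        return q;       /* FIFO to pull the next frame from */
                }
        }
        return -1;                      /* nothing queued */
}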

-Toke

[0] https://linuxplumbersconf.org/event/4/contributions/462/
