Message-ID: <20200203231503.24eec7f0@carbon>
Date: Mon, 3 Feb 2020 23:15:03 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Toke Høiland-Jørgensen <toke@...hat.com>
Cc: Jakub Kicinski <kuba@...nel.org>, David Ahern <dsahern@...il.com>,
David Ahern <dsahern@...nel.org>, netdev@...r.kernel.org,
prashantbhole.linux@...il.com, jasowang@...hat.com,
davem@...emloft.net, mst@...hat.com, toshiaki.makita1@...il.com,
daniel@...earbox.net, john.fastabend@...il.com, ast@...nel.org,
kafai@...com, songliubraving@...com, yhs@...com, andriin@...com,
David Ahern <dahern@...italocean.com>, brouer@...hat.com,
Björn Töpel <bjorn.topel@...el.com>,
"Karlsson, Magnus" <magnus.karlsson@...el.com>
Subject: Re: [PATCH bpf-next 03/12] net: Add IFLA_XDP_EGRESS for XDP
programs in the egress path
On Mon, 03 Feb 2020 21:13:24 +0100
Toke Høiland-Jørgensen <toke@...hat.com> wrote:
> Oops, I see I forgot to reply to this bit:
>
> >> Yeah, but having the low-level details available to the XDP program
> >> (such as HW queue occupancy for the egress hook) is one of the benefits
> >> of XDP, isn't it?
> >
> > I think I glossed over the hope for having access to HW queue occupancy
> > - what exactly are you after?
> >
> > I don't think one can get anything beyond a BQL type granularity.
> > Reading over PCIe is out of question, device write back on high
> > granularity would burn through way too much bus throughput.
>
> This was Jesper's idea originally, so maybe he can explain better; but
> as I understood it, he basically wanted to expose the same information
> that BQL has to eBPF. Making it possible for an eBPF program to either
> (re-)implement BQL with its own custom policy, or react to HWQ pressure
> in some other way, such as by load balancing to another interface.
Yes, and I also have plans that go beyond BQL. But let me start by
explaining the BQL part, and answer Toke's question below.
On Mon, 03 Feb 2020 20:56:03 +0100 Toke wrote:
> [...] Hmm, I wonder if a TX driver hook is enough?
Short answer is no, a TX driver hook is not enough. The queue state
info the TX driver hook has access to needs to be updated once the
hardware has "confirmed" that the TX-DMA operation has completed. For
BQL/DQL this update happens during TX-DMA completion/cleanup (in the
code, see the call sites for netdev_tx_completed_queue()). (As Jakub
wisely points out, we cannot query the device directly due to
performance implications.) It doesn't need to be a new BPF hook, just
something that updates the queue state info (we could piggyback on the
netdev_tx_completed_queue() call or give the TX hook access to
dev_queue->dql).
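
To illustrate where that state update lives today, here is a minimal
sketch of a driver TX completion path feeding BQL. Only
netdev_get_tx_queue() and netdev_tx_completed_queue() are real APIs;
the driver/ring names are invented for illustration:

#include <linux/netdevice.h>

/* Hypothetical driver TX ring; names invented for illustration. */
struct ex_tx_ring {
	struct net_device *netdev;
	u16 queue_index;
	/* descriptor ring, DMA state, ... */
};

/* Called from the driver's TX completion/cleanup (NAPI poll) path,
 * after the hardware has confirmed which descriptors completed.
 */
static void ex_clean_tx_ring(struct ex_tx_ring *ring,
			     unsigned int pkts, unsigned int bytes)
{
	struct netdev_queue *txq;

	txq = netdev_get_tx_queue(ring->netdev, ring->queue_index);

	/* This is where BQL/DQL (txq->dql) learns that the TX-DMA
	 * completed; an egress hook could piggyback here, or simply
	 * be given read access to txq->dql.
	 */
	netdev_tx_completed_queue(txq, pkts, bytes);
}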
Regarding "where is the queue": For me the XDP-TX queue is the NIC
hardware queue, that this BPF hook have some visibility into and can do
filtering on. (Imagine that my TX queue is bandwidth limited, then I
can shrink the packet size and still send a "congestion" packet to my
receiver).
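
As a purely hypothetical sketch of that filtering use case: assume the
egress hook exists and that something (e.g. the TX completion path
above) keeps a map updated with the bytes in flight on the HW queue.
Neither the egress attach point nor that feedback channel exists today;
bpf_xdp_adjust_tail() and the map helpers are real:

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* ASSUMPTION: some kernel-side mechanism keeps this updated with the
 * number of bytes currently in flight on the HW TX queue. No such
 * facility exists today; it is here only to illustrate the use case.
 */
struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} queue_occupancy SEC(".maps");

#define ASSUMED_QUEUE_LIMIT	(256 * 1024)	/* made-up threshold */

SEC("xdp")	/* would really be the new egress attach type */
int xdp_egress_congestion(struct xdp_md *ctx)
{
	void *data_end = (void *)(long)ctx->data_end;
	void *data = (void *)(long)ctx->data;
	long pkt_len = data_end - data;
	__u32 key = 0;
	__u64 *inflight = bpf_map_lookup_elem(&queue_occupancy, &key);

	if (inflight && *inflight > ASSUMED_QUEUE_LIMIT && pkt_len > 128) {
		/* Queue is congested: shrink the packet so a small
		 * "congestion" signal still reaches the receiver.
		 */
		if (bpf_xdp_adjust_tail(ctx, 128 - pkt_len) < 0)
			return XDP_DROP;
	}
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";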
The bigger picture is that I envision the XDP-TX/egress hook opening
up the possibility of taking advantage of NIC hardware TX queue
features. This also ties into the queue abstraction work by
Björn+Magnus.
Today NIC hardware can do a million TX queues, and hardware can also
do rate limiting per queue. Thus, I also envision that the
XDP-TX/egress hook can choose/change the TX queue the packet is
queued/sent on (we can likely just overload XDP_REDIRECT and have a new
bpf map type for this).
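
To show what I mean by overloading XDP_REDIRECT: below is the existing
redirect-map pattern using a DEVMAP, which is real and loadable today.
The proposed new map type would slot into the same bpf_redirect_map()
call, with entries naming a specific HW TX queue instead of an ifindex
(that queue-granular map type does not exist yet):

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u32);	/* ifindex; a TX-queue map entry would
				 * instead identify a specific HW queue */
} redirect_map SEC(".maps");

SEC("xdp")
int xdp_pick_target(struct xdp_md *ctx)
{
	/* Example policy: everything goes to slot 0; a real program
	 * would pick the slot (and thus the HW queue) per packet,
	 * e.g. to steer bulk traffic onto a rate-limited queue.
	 */
	return bpf_redirect_map(&redirect_map, 0, 0);
}

char _license[] SEC("license") = "GPL";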
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer