Date:   Sat, 08 Jan 2022 21:19:41 +0100
From:   Toke Høiland-Jørgensen <toke@...hat.com>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:     Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        John Fastabend <john.fastabend@...il.com>,
        KP Singh <kpsingh@...nel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        Network Development <netdev@...r.kernel.org>,
        bpf <bpf@...r.kernel.org>
Subject: Re: [PATCH bpf-next v7 1/3] bpf: Add "live packet" mode for XDP in
 bpf_prog_run()

Alexei Starovoitov <alexei.starovoitov@...il.com> writes:

> On Sat, Jan 8, 2022 at 5:19 AM Toke Høiland-Jørgensen <toke@...hat.com> wrote:
>>
>> Sure, can do. Doesn't look like BPF_PROG_RUN is documented in there at
>> all, so guess I can start such a document :)
>
> prog_run was simple enough.
> This live packet mode is a different level of complexity.
> Just look at the length of this thread.
> We keep finding implementation details that will be relevant
> to anyone trying to use this interface.
> They all will become part of uapi.

Sure, totally fine with documenting it. Just seems to me the most
obvious place to put this is in a new
Documentation/bpf/prog_test_run.rst file with a short introduction about
the general BPF_PROG_RUN mechanism, and then a subsection dedicated to
this facility.

Or would you rather I create something like
Documentation/bpf/xdp_live_packets.rst?

>> > Another question comes to mind:
>> > What happens when a program modifies the packet?
>> > Does it mean that the 2nd frame will see the modified data?
>> > It will not, right?
>> > It's the page pool size of packets that will be inited the same way
>> > at the beginning. Which is NAPI_POLL_WEIGHT * 2 == 128 packets.
>> > Why this number?
>>
>> Yes, you're right: the next run won't see the modified packet data. The
>> 128 pages are there because we run the program loop in batches of 64 (like
>> NAPI does; the fact that TEST_XDP_BATCH and NAPI_POLL_WEIGHT are the same
>> is not a coincidence).
>>
>> We need 2x because we want enough pages so we can keep running without
>> allocating more, and the first batch can still be in flight on a
>> different CPU while we're processing batch 2.
>>
>> I experimented with different values, and 128 was the minimum size that
>> didn't have a significant negative impact on performance; above that I
>> saw diminishing returns.
>
> I guess it's ok-ish to get stuck with 128.
> It will be uapi that we cannot change though.
> Are you comfortable with that?

UAPI in what sense? I'm thinking of documenting it like:

"The packet data being supplied as data_in to BPF_PROG_RUN will be used
 for the initial run of the XDP program. However, when running the
 program multiple times (with repeat > 1), only the packet *bounds*
 (i.e., the data, data_end and data_meta pointers) will be reset on each
 invocation; the packet data itself won't be rewritten. The pages
 backing the packets are recycled, but the order depends on the path the
 packet takes through the kernel, making it hard to predict when a
 particular modified page makes it back to the XDP program. In practice,
 this means that if the XDP program modifies the packet payload before
 sending out the packet, it has to be prepared to deal with subsequent
 invocations seeing either the initial data or the already-modified
 packet, in arbitrary order."

I don't think this makes any promises about any particular size of the
page pool, so how does it constitute UAPI?
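
For reference, this is roughly what a userspace caller would look like
(just a sketch, not taken from the patch; it uses libbpf's
bpf_prog_test_run_opts() and assumes the live-frames flag added by this
series ends up named BPF_F_TEST_XDP_LIVE_FRAMES):

#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* Run a loaded XDP program (prog_fd) in live packet mode, sending
 * 'repeat' packets based on the template in pkt/pkt_len. */
static int run_live(int prog_fd, void *pkt, __u32 pkt_len, int repeat)
{
	DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
		.data_in = pkt,
		.data_size_in = pkt_len,
		.repeat = repeat,
		.flags = BPF_F_TEST_XDP_LIVE_FRAMES,
	);

	return bpf_prog_test_run_opts(prog_fd, &opts);
}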

>> > Should it be configurable?
>> > Then the user can say: init N packets with this one pattern
>> > and the program will know that exactly N invocation will be
>> > with the same data, but N+1 it will see the 1st packet again
>> > that potentially was modified by the program.
>> > Is it accurate?
>>
>> I thought about making it configurable, but the trouble is that it's not
>> quite as straightforward as the first N packets being "pristine": it
>> depends on what happens to the packet afterwards:
>>
>> On XDP_DROP, the page will be recycled immediately, whereas on
>> XDP_{TX,REDIRECT} it will go through the egress driver after sitting in
>> the bulk queue for a little while, so you can get reordering compared to
>> the original execution order.
>>
>> On XDP_PASS the kernel will release the page entirely from the pool when
>> building an skb, so you'll never see that particular page again (and
>> eventually page_pool will allocate a new batch that will be
>> re-initialised to the original value).
>
> That all makes sense. Thanks for explaining.
> Please document it and update the selftest.
> Looks like XDP_DROP is not tested.
> Single packet TX and REDIRECT is imo too weak to give
> confidence that the mechanism will not explode with millions of
> packets.

OK, will do.

>> If we do want to support a "pristine data" mode, I think the least
>> cumbersome way would be to add a flag that would make the kernel
>> re-initialise the packet data before every program invocation. The
>> reason I didn't do this was because I didn't have a use case for it. The
>> traffic generator use case only rewrites a tiny bit of the packet
>> header, and it's just as easy to keep rewriting it without assuming
>> a particular previous value. And there's also the possibility of just
>> calling bpf_prog_run() multiple times from userspace with a lower number
>> of repetitions...
>>
>> I'm not opposed to adding such a flag if you think it would be useful,
>> though. WDYT?
>
> reinit doesn't feel necessary.
> How one would use this interface to send N different packets?
> The api provides an interface for only one.

By having the XDP program react appropriately. E.g., here is the XDP
program used by the trafficgen tool to cycle through UDP ports when
sending out the packets - it just reads the current value and updates it
based on that, so it doesn't matter whether it sees the initial page or
one it has already modified:

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ipv6.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* Configuration set by the userspace loader; ifindex_out (the egress
 * interface for bpf_redirect()) is assumed to be filled in the same way
 * as the port settings. */
const volatile int ifindex_out;
const volatile __u16 port_start;
const volatile __u16 port_range;
volatile __u16 next_port = 0;

SEC("xdp")
int xdp_redirect_update_port(struct xdp_md *ctx)
{
	void *data_end = (void *)(long)ctx->data_end;
	void *data = (void *)(long)ctx->data;
	__u16 cur_port, cksum_diff;
	struct udphdr *hdr;

	/* Packets are Ethernet + IPv6 + UDP; bounds-check the UDP header */
	hdr = data + (sizeof(struct ethhdr) + sizeof(struct ipv6hdr));
	if (hdr + 1 > data_end)
		return XDP_ABORTED;

	/* Rewrite the destination port, patching the checksum incrementally */
	cur_port = bpf_ntohs(hdr->dest);
	cksum_diff = next_port - cur_port;
	if (cksum_diff) {
		hdr->check = bpf_htons(~(~bpf_ntohs(hdr->check) + cksum_diff));
		hdr->dest = bpf_htons(next_port);
	}

	/* Cycle through [port_start, port_start + port_range) */
	if (next_port++ >= port_start + port_range - 1)
		next_port = port_start;

	return bpf_redirect(ifindex_out, 0);
}

You could do something similar with a whole packet header or payload; or
you could even populate a map with the full-size packets and copy that
in based on a counter.
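
A rough sketch of that last variant (names are made up, it assumes all
the stored packets share a fixed size PKT_LEN, and it reuses the
includes and ifindex_out from the program above):

#define PKT_LEN  64
#define NUM_PKTS 16

struct pkt_tmpl {
	unsigned char data[PKT_LEN];
};

/* Array of full-size packet templates, populated from userspace */
struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, NUM_PKTS);
	__type(key, __u32);
	__type(value, struct pkt_tmpl);
} pkt_templates SEC(".maps");

volatile __u32 pkt_idx = 0;

SEC("xdp")
int xdp_copy_from_map(struct xdp_md *ctx)
{
	void *data_end = (void *)(long)ctx->data_end;
	void *data = (void *)(long)ctx->data;
	struct pkt_tmpl *tmpl;
	__u32 key;

	/* The packet supplied as data_in must be at least PKT_LEN bytes */
	if (data + PKT_LEN > data_end)
		return XDP_ABORTED;

	key = pkt_idx++ % NUM_PKTS;
	tmpl = bpf_map_lookup_elem(&pkt_templates, &key);
	if (!tmpl)
		return XDP_ABORTED;

	/* Overwrite the packet with the selected template */
	__builtin_memcpy(data, tmpl->data, PKT_LEN);

	return bpf_redirect(ifindex_out, 0);
}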

> It will be copied 128 times, but the prog_run call with repeat=1
> will invoke bpf prog only once, right?
> So technically doing N prog_run commands with different data
> and repeat=1 will achieve the result, right?
> But it's not efficient, since 128 pages and 128 copies will be
> performed each time.
> May be there is a use case for configurable page_pool size?

Hmm, we could size the page_pool as min(repeat, 128) to avoid the extra
copies when they won't be used?
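
I.e., something like this on the kernel side (just a sketch; the
variable names won't necessarily match the patch):

	pp_params.pool_size = min_t(int, repeat, 2 * NAPI_POLL_WEIGHT);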

Another question, seeing as the merge window is imminent: how do you feel
about merging this before it opens? I can resubmit beforehand with the
updated selftest and documentation, and we can deal with any tweaks
during the -rcs; or would you rather postpone the whole thing until the
next cycle?

-Toke
