Message-ID: <10b5eb96-5200-0ffe-a1ba-6d8a16ac4ebe@meta.com>
Date: Mon, 14 Nov 2022 08:51:41 -0800
From: Yonghong Song <yhs@...a.com>
To: John Fastabend <john.fastabend@...il.com>, hawk@...nel.org,
daniel@...earbox.net, kuba@...nel.org, davem@...emloft.net,
ast@...nel.org
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org, sdf@...gle.com
Subject: Re: [1/2 bpf-next] bpf: expose net_device from xdp for metadata
On 11/13/22 10:27 AM, John Fastabend wrote:
> Yonghong Song wrote:
>>
>>
>> On 11/10/22 3:11 PM, John Fastabend wrote:
>>> John Fastabend wrote:
>>>> Yonghong Song wrote:
>>>>>
>>>>>
>>>>> On 11/9/22 6:17 PM, John Fastabend wrote:
>>>>>> Yonghong Song wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 11/9/22 1:52 PM, John Fastabend wrote:
>>>>>>>> Allow xdp progs to read the net_device structure. It's useful to extract
>>>>>>>> info from the dev itself. Currently, our tracing tooling uses kprobes
>>>>>>>> to capture statistics and information about running net devices. We use
>>>>>>>> kprobes instead of other hooks (tc/XDP) because we need to collect
>>>>>>>> information about the interface not exposed through the xdp_md structure.
>>>>>>>> This has some downsides that we want to avoid by moving these into the
>>>>>>>> XDP hook itself. First, placing the kprobes in a generic kernel function
>>>>>>>> means they run after XDP, so we miss redirects and such done by the
>>>>>>>> XDP networking program. And it's needless overhead: we are
>>>>>>>> already paying the cost of calling the XDP program, so calling yet
>>>>>>>> another prog is a waste. Better to do everything in one hook from a
>>>>>>>> performance standpoint.
>>>>>>>>
>>>>>>>> Of course we could one-off each one of these fields, but that would
>>>>>>>> explode the xdp_md struct and then require writing convert_ctx_access
>>>>>>>> writers for each field. By using BTF we avoid writing field-specific
>>>>>>>> conversion logic: BTF just knows how to read the fields, we don't
>>>>>>>> have to add many fields to xdp_md, and I don't have to get every
>>>>>>>> field we will use in the future correct.
>>>>>>>>
>>>>>>>> For reference, current examples in our code base use the ifindex,
>>>>>>>> ifname, qdisc stats, and net_ns fields, among others. With this
>>>>>>>> patch we can now do the following,
>>>>>>>>
>>>>>>>> dev = ctx->rx_dev;
>>>>>>>> net = dev->nd_net.net;
>>>>>>>>
>>>>>>>> uid.ifindex = dev->ifindex;
>>>>>>>> memcpy(uid.ifname, dev->ifname, NAME);
>>>>>>>> if (net)
>>>>>>>> uid.inum = net->ns.inum;
>>>>>>>>
>>>>>>>> to report the name, index and ns.inum, which together identify an
>>>>>>>> interface in our system.
>>>>>>>
>
> [...]
>
>>>> Yep.
>>>>
>>>> I'm fine doing it with bpf_get_kern_ctx(); did you want me to code it
>>>> the rest of the way up and test it?
>>>>
>>>> .John
>>>
>>> Related I think. We also want to get the kernel variable net_namespace_list,
>>> which points to the network namespace list. Based on the above, should
>>> we do something like,
>>>
>>> void *bpf_get_kern_var(enum var_id);
>>>
>>> then,
>>>
>>> net_ns_list = bpf_get_kern_var(__btf_net_namespace_list);
>>>
>>> would get us a ptr to the list? The other thought was to put it in
>>> xdp_md, but from the above it seems a better idea to get it through a helper.
>>
>> Sounds great. I guess my new proposed bpf_get_kern_btf_id() kfunc could
>> cover such a use case as well.
>
> Yes, I think this should be good. The only catch is that we need to
> get a pointer to the kernel global var net_namespace_list.
Currently, the kernel supports percpu variables, but
not other global vars like net_namespace_list. There is
an ongoing effort to add global vars to BTF:
https://lore.kernel.org/bpf/20221104231103.752040-1-stephen.s.brennan@oracle.com/
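
For contrast, a percpu kernel variable is already reachable today via a
__ksym extern plus the bpf_per_cpu_ptr() helper. A minimal sketch, using
the scheduler's runqueues variable purely as an example:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    /* percpu kernel variable, resolved through BTF at load time */
    extern const struct rq runqueues __ksym;

    SEC("raw_tp/sched_switch")
    int dump_nr_running(void *ctx)
    {
            /* fetch this variable's copy on CPU 0 */
            struct rq *rq = bpf_per_cpu_ptr(&runqueues, 0);

            if (rq)
                    bpf_printk("cpu0 nr_running %u", rq->nr_running);
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";

Nothing comparable exists yet for a plain (non-percpu) global like
net_namespace_list.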
>
> Then we can write iterators on network namespaces and net_devices
> without having to do anything else. The use case is to iterate
> the network namespaces and collect some subset of netdevices. Populate
> a map with these and then keep it in sync from XDP with stats. We
> already hook the create/destroy paths, so we have built up maps that track
> this and have some XDP stats, but not everything we would want.
The net_namespace_list is defined as
   struct list_head net_namespace_list;
so it is still difficult to iterate over with a bpf program. But we
could have a bpf_iter (similar to task, task_file, etc.)
for net namespaces, and it could provide enough context
to the bpf program for each namespace to satisfy your
above need.
You could also use a bounded loop to traverse net_namespace_list
in the bpf program, but it may result in complicated code...
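
Just as an illustration, a rough and untested sketch of such a bounded
loop, assuming your proposed bpf_get_kern_var() kfunc existed and
returned a trusted pointer to the list head; the var_id enum value and
the MAX_NETNS bound below are made up:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    #define MAX_NETNS 64    /* arbitrary bound so the loop stays verifiable */

    /* Both of these are hypothetical, taken from the proposal above:
     * a var-id enum plus a kfunc returning a pointer to the kernel var.
     */
    enum var_id { __btf_net_namespace_list };
    extern void *bpf_get_kern_var(enum var_id id) __ksym;

    SEC("xdp")
    int walk_netns(struct xdp_md *ctx)
    {
            struct list_head *head, *pos;
            int i;

            head = bpf_get_kern_var(__btf_net_namespace_list);
            if (!head)
                    return XDP_PASS;

            pos = head->next;
            for (i = 0; i < MAX_NETNS && pos != head; i++) {
                    /* entries are struct net, linked through net->list */
                    struct net *net = container_of(pos, struct net, list);

                    bpf_printk("netns inum %u", net->ns.inum);
                    pos = pos->next;
            }
            return XDP_PASS;
    }

    char LICENSE[] SEC("license") = "GPL";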
>
> The other piece I would like to get out of the xdp ctx is the
> rx descriptor of the device. I want to use this to pull out info
> about the received buffer, mostly for debugging, but could also grab
> some fields that are useful for us to track. That we could likely
> do with something like this,
>
> ctx->rxdesc
I think it is possible: add rxdesc to xdp_buff as
   unsigned char *rxdesc;
or
   void *rxdesc;
and use bpf_get_kern_btf_id(kctx->rxdesc, expected_btf_id)
to get a btf-id-typed pointer for rxdesc. Here we assume there is
a struct available for rxdesc in vmlinux.h.
Then you can walk through rxdesc with direct memory
access.
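For illustration, a rough sketch of how that could look from the prog
side. The kfunc signatures follow this discussion/RFC rather than
anything merged, kctx->rxdesc is the field proposed above, and
struct my_drv_rx_desc with its "status" field is a made-up stand-in for
whatever descriptor type the driver actually exposes in vmlinux.h:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_core_read.h>

    /* kfuncs as discussed in this thread; not yet in any kernel */
    extern void *bpf_get_kern_ctx(void *ctx) __ksym;
    extern void *bpf_get_kern_btf_id(void *p, __u32 expected_btf_id) __ksym;

    SEC("xdp")
    int dump_rxdesc(struct xdp_md *ctx)
    {
            /* get the kernel-side xdp_buff behind the uapi xdp_md ctx */
            struct xdp_buff *kctx = bpf_get_kern_ctx(ctx);
            struct my_drv_rx_desc *desc;

            if (!kctx)
                    return XDP_PASS;

            /* cast the proposed kctx->rxdesc to the descriptor's btf id */
            desc = bpf_get_kern_btf_id(kctx->rxdesc,
                            bpf_core_type_id_kernel(struct my_drv_rx_desc));
            if (desc)
                    bpf_printk("rx desc status 0x%x", desc->status);

            return XDP_PASS;
    }

    char LICENSE[] SEC("license") = "GPL";
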
I have an RFC patch:
https://lore.kernel.org/bpf/20221114162328.622665-1-yhs@fb.com/
Please help take a look.
>
> Recently had to debug an ugly hardware/driver bug where this would
> have been very useful.
>
> .John