Message-ID: <fbf28086-321a-5508-6688-39b6b6c75bf0@iogearbox.net>
Date: Wed, 19 Dec 2018 18:07:02 +0100
From: Daniel Borkmann <daniel@...earbox.net>
To: Jesper Dangaard Brouer <brouer@...hat.com>,
Daniel Borkmann <borkmann@...earbox.net>
Cc: Stephen Hemminger <stephen@...workplumber.org>,
netdev@...r.kernel.org, Stephen Hemminger <sthemmin@...rosoft.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH net-next] net: add network device notifier trace points
On 12/19/2018 05:40 PM, Jesper Dangaard Brouer wrote:
> On Wed, 19 Dec 2018 16:46:05 +0100
> Daniel Borkmann <borkmann@...earbox.net> wrote:
>
>> Hmm, why not just do something as in your example below with napi_poll(),
>> where you pass in the napi pointer, and then use bpf_probe_read_str() on
>> ctx->dev for fetching the name? At least there this should work and should
>> be okay given it's a rather slow-path event.
>>
>>> [1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/samples/bpf/napi_monitor_kern.c#L34-L130
>
> I didn't try to use bpf_probe_read_str() in [1], but that is also not
> what I want in my use-case. I don't want the name, but the ifindex to
> filter on, as it will be faster. My use-case is allowing my
> napi_monitor program to filter on a specific net_device, inside the
> kernel via BPF.
>
> E.g. this didn't work:
> bpf_probe_read(&ifindex, 4, &ctx->napi->dev->ifindex);
You could try something along these lines:

/* Fetch a value through a potentially unsafe pointer, one
 * bpf_probe_read() per dereference. Statement expression, so it
 * can be chained; __val avoids shadowing names passed in as x.
 */
#define probe_fetch(x) ({                                      \
        typeof(x) __val;                                       \
        bpf_probe_read(&__val, sizeof(__val), &(x));           \
        __val;                                                 \
})

SEC("tracepoint/napi/napi_poll")
int napi_poll(struct napi_poll_ctx *ctx)
{
        struct napi_struct *napi = ctx->napi;
        struct net_device *dev;
        int ifindex;
        [...]
        dev = probe_fetch(napi->dev);
        ifindex = probe_fetch(dev->ifindex);
        [...]
}
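
To then filter on one specific device inside the program, as per your
use-case, a minimal follow-up sketch (TARGET_IFINDEX is just a
placeholder for whatever ifindex you want to match, e.g. hard-coded at
build time or looked up from a config map):

/* Hypothetical: ifindex of the device we want to trace. */
#define TARGET_IFINDEX  4

        [...]
        ifindex = probe_fetch(dev->ifindex);
        if (ifindex != TARGET_IFINDEX)
                return 0;
        /* Device of interest, do the actual accounting here. */
        [...]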
> Perhaps you know how I can do this deref correctly?
>
> My napi_monitor use-case is not a slow-path event: even though in the
> optimal case we handle 64 packets per tracepoint invocation, I'm using
> this for 100G NICs with >20 Mpps. And I mostly use the tool when
> something looks wrong and I don't see 64-packet bulks, which is also
> why I detect when this gets invoked from the idle task or from
> ksoftirqd.
>
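Fair enough. And should you ever want the device name for a slower-path
event after all, a rough sketch of the bpf_probe_read_str() variant I
mentioned, reusing probe_fetch() from above (IFNAMSIZ comes from
linux/if.h):

        char name[IFNAMSIZ];

        dev = probe_fetch(napi->dev);
        /* dev->name is an array member, so this only computes the
         * address; bpf_probe_read_str() does the unsafe read and
         * NUL-terminates the result.
         */
        bpf_probe_read_str(name, sizeof(name), dev->name);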