Message-ID: <1685fbab-e4e1-5116-5148-fa7cd8f5879b@iogearbox.net>
Date: Mon, 6 Dec 2021 16:04:12 +0100
From: Daniel Borkmann <daniel@...earbox.net>
To: Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org
Cc: Alexei Starovoitov <ast@...nel.org>, bpf@...r.kernel.org,
Toke Høiland-Jørgensen <toke@...hat.com>
Subject: Re: [PATCH v3 net-next 2/2] bpf: let bpf_warn_invalid_xdp_action()
report more info
On 12/6/21 11:20 AM, Paolo Abeni wrote:
> On Fri, 2021-12-03 at 23:04 +0100, Daniel Borkmann wrote:
>> Hi Paolo,
>>
>> Changes look good to me as well. We can route the series via bpf-next after the tree
>> resync, or alternatively ask David/Jakub to take it directly into net-next with our
>> Ack, given that in bpf-next there is no drivers/net/ethernet/microsoft/mana/mana_bpf.c yet.
>>
>> On 11/30/21 11:08 AM, Paolo Abeni wrote:
>> [...]
>>> diff --git a/net/core/filter.c b/net/core/filter.c
>>> index 5631acf3f10c..392838fa7652 100644
>>> --- a/net/core/filter.c
>>> +++ b/net/core/filter.c
>>> @@ -8181,13 +8181,13 @@ static bool xdp_is_valid_access(int off, int size,
>>> return __is_valid_xdp_access(off, size);
>>> }
>>>
>>> -void bpf_warn_invalid_xdp_action(u32 act)
>>> +void bpf_warn_invalid_xdp_action(struct net_device *dev, struct bpf_prog *prog, u32 act)
>>> {
>>> const u32 act_max = XDP_REDIRECT;
>>>
>>> - pr_warn_once("%s XDP return value %u, expect packet loss!\n",
>>> + pr_warn_once("%s XDP return value %u on prog %s (id %d) dev %s, expect packet loss!\n",
>>> act > act_max ? "Illegal" : "Driver unsupported",
>>> - act);
>>> + act, prog->aux->name, prog->aux->id, dev ? dev->name : "");
>>
>> One tiny nit I'd have, though we could fix it up while applying: for the !dev case
>> we should probably dump a "<n/a>" or so, just to avoid a kernel log message like
>> "dev , expect packet loss".
>
> Yep, that would probably be better. Please let me know if you prefer a
> formal new version of the patch.
Ok, I think no need, we can take care of it when applying.
Thanks,
Daniel