Message-ID: <ZNEl/hit/c65UmYk@nanopsycho>
Date: Mon, 7 Aug 2023 19:12:30 +0200
From: Jiri Pirko <jiri@...nulli.us>
To: Jakub Kicinski <kuba@...nel.org>
Cc: netdev@...r.kernel.org
Subject: Re: ynl - multiple policies for one nested attr used in multiple cmds
Mon, Aug 07, 2023 at 07:03:13PM CEST, kuba@...nel.org wrote:
>On Sat, 5 Aug 2023 08:33:28 +0200 Jiri Pirko wrote:
>> >I'm not sure if you'll like it but my first choice would be to skip
>> >the selector attribute. Put the attributes directly into the message.
>> >There is no functional purpose the wrapping serves, right?
>>
>> Well, the only reason is backward compatibility.
>> Currently, there is no attr parsing during dump, which is ensured by
>> the GENL_DONT_VALIDATE_DUMP flag. That means if a user passes any
>> attrs, they are ignored.
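
[For reference, a minimal sketch of what that dump-side behavior looks
like in a genl family's ops table; the command and handler names here are
illustrative, not quoted from devlink:]

```c
/* With GENL_DONT_VALIDATE_DUMP set in .validate, the genetlink core
 * skips policy validation for dump requests entirely, so any attributes
 * the user puts into the request are silently ignored.
 */
static const struct genl_small_ops example_nl_ops[] = {
	{
		.cmd		= EXAMPLE_CMD_PORT_GET,
		.validate	= GENL_DONT_VALIDATE_STRICT |
				  GENL_DONT_VALIDATE_DUMP,
		.doit		= example_nl_port_get_doit,
		.dumpit		= example_nl_port_get_dumpit,
	},
};
```
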
>>
>> Now if we allow attrs to do the selection, previously ignored
>> attributes would start being processed. A user that passed garbage
>> with an old kernel could get different results with a new kernel.
>>
>> That is why I decided to add a selector attr and put the attrs inside
>> it, with strict parsing, so if the selector attr is not supported by
>> the kernel, the user gets an error message back.
>>
>> So what do you suggest? A per-dump strict parsing policy for the root
>> attributes used to do the selection?
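
[A rough sketch of the selector-attr idea described above: the selection
attributes live inside a nest that is parsed strictly, so an old kernel
that does not know the selector attr rejects the message instead of
silently ignoring it. Attribute and function names are illustrative:]

```c
/* Strict policy for the attrs nested under the (hypothetical)
 * selector attribute.
 */
static const struct nla_policy example_selector_nl_policy[EXAMPLE_ATTR_MAX + 1] = {
	[EXAMPLE_ATTR_BUS_NAME]	= { .type = NLA_NUL_STRING },
	[EXAMPLE_ATTR_DEV_NAME]	= { .type = NLA_NUL_STRING },
};

static int example_dump_parse_selector(const struct nlattr *selector_attr,
				       struct nlattr **tb,
				       struct netlink_ext_ack *extack)
{
	/* nla_parse_nested() validates strictly: unknown or malformed
	 * attributes inside the nest fail with an extack message rather
	 * than being skipped.
	 */
	return nla_parse_nested(tb, EXAMPLE_ATTR_MAX, selector_attr,
				example_selector_nl_policy, extack);
}
```
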
>
>Even the selector attr comes with a risk, right? Not only have we
Yep, however, the odds are quite low. That's why I went that direction.
>ignored all attributes, previously, we ignored the payload of the
>message. So the payload of a devlink dump request could be entirely
>uninitialized / random and it would work.
Yep.
>
>IOW we are operating on a scale of potential breakage here, unless
>we do something very heavy handed.
True. I can easily imagine an app having one function to create both do
and dump messages, putting garbage into the bus_name/dev_name attrs in
the dump case.
>
>How does the situation look with the known user apps? Is anyone
>that we know of putting attributes into dump requests?
I'm not aware of any.