Date:   Thu, 1 Oct 2020 10:09:33 -0700
From:   Jakub Kicinski <kuba@...nel.org>
To:     Johannes Berg <johannes@...solutions.net>
Cc:     netdev@...r.kernel.org, andrew@...n.ch, jiri@...nulli.us,
        mkubecek@...e.cz, dsahern@...nel.org, pablo@...filter.org
Subject: Re: [RFC net-next 9/9] genetlink: allow dumping command-specific
 policy

On Thu, 01 Oct 2020 18:57:35 +0200 Johannes Berg wrote:
> On Thu, 2020-10-01 at 09:24 -0700, Jakub Kicinski wrote:
> > > I guess the most compact representation, that also preserves the most
> > > data about sharing, would be to do something like
> > > 
> > > [ATTR_FAMILY_ID]
> > > [ATTR_POLICY]
> > >   [policy idx, 0 = main policy]
> > >     [bla]
> > >     ...
> > >   ...
> > > [ATTR_OP_POLICY]
> > >   [op] = [policy idx]
> > >   ...  
> 
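
(So with, say, two ops sharing one do/dump policy, that would look
something like

  [ATTR_FAMILY_ID] = <id>
  [ATTR_POLICY]
    [0]                 <- main policy
      [attr: type, flags, ...]
      ...
    [1]                 <- policy shared by both ops
      ...
  [ATTR_OP_POLICY]
    [OP_GET] = 1
    [OP_SET] = 1

- op names made up just for illustration; the sharing shows up as two
ops referencing the same policy idx.)
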
> > Only comment I have is - can we make sure to put the ATTR_OP_POLICY
> > first? That way user space can parse the stream and pick out the info
> > it needs rather than recording all the policies only to find out later
> > which one is which.  
> 
> Hmm. Yes, that makes sense, and I don't see why not - you could do the
> netlink_policy_dump_start() call, which assigns the indexes, then dump
> out ATTR_OP_POLICY looking up the indexes in the table it created, and
> then dump out all the policies?

Ack.
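i.e. roughly this shape - the helpers other than
netlink_policy_dump_start() are invented names, just to sketch the
ordering:

	/* pass 1: walk all the policies once so the dump code
	 * assigns each one an index
	 */
	err = netlink_policy_dump_start(/* policy, maxtype, state */);
	if (err)
		return err;

	/* then emit the op -> policy idx mapping first, so user
	 * space can decide what it needs before the policies arrive
	 */
	for (i = 0; i < family->n_ops; i++)
		put_op_policy_idx(skb, i, lookup_policy_idx(state, i));

	/* and only then stream out the policies themselves */
	while (policies_remaining(state))
		dump_next_policy(skb, state);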

> > > I guess it's doable. Just seems a bit more complex. OTOH, it may be
> > > that such complexity makes complete sense for non-generic netlink
> > > families anyway; I haven't looked at them much at all.
> > 
> > IDK, doesn't seem crazy hard. We can create some iterator or expand the
> > API with "begin"/"add"/"end" calls. Then once the dumper state is built
> > we can ask it which ids it assigned.
> 
> Yeah. Seems feasible. Maybe I'll take a stab at it (later, when I can).
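
FWIW, the shape I had in mind was roughly this (all names invented,
nothing like it exists today):

	netlink_policy_dump_begin(&state);

	/* each add records a policy and assigns it an idx; adding
	 * the same policy twice returns the existing idx, so
	 * sharing between ops is deduplicated
	 */
	for (i = 0; i < family->n_ops; i++)
		netlink_policy_dump_add(&state, ops[i].policy,
					ops[i].maxattr);

	netlink_policy_dump_end(&state);

	/* afterwards the state can be asked which idx it assigned */
	idx = netlink_policy_dump_get_idx(&state, policy, maxattr);
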
> 
> > OTOH I don't think we have a use for this in ethtool, because user
> > space usually does just one op per execution. So I'm thinking to use
> > your structure for the dump, but leave the actual implementation of
> > "dump all" for "later".
> > 
> > How does that sound?  
> 
> I'm not sure you even need that structure if you have the "filter by
> op"? I mean, then just stick to what you had?

I was adding OP as an attribute to each message. I'll just ditch that,
given that user space should know what it asked for.
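
(I.e. the request already carries the op - something like

  [ATTR_FAMILY_ID] = <id>
  [ATTR_OP] = <op>

with ATTR_OP as a stand-in name here - so echoing the op back in
every reply message adds nothing.)
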

> When I started down this road I had more in mind "sniffer-like" tools
> that want to understand the messages better, etc., without really having
> any domain-specific "knowledge" encoded in them. And then you'd probably
> really want to build the entire policy representation on the tool side
> first.
> 
> Or perhaps even tools you could run on the latest kernel to generate
> code (e.g. python code was discussed) that would be able to build
> messages. You'd want to generate the code once on the latest kernel when
> you need a new feature, and then actually use it instead of redoing it
> at runtime, but still, could be done.
> 
> I suppose you have a completely different use case in mind :-)

I see. Yes, I'm trying to avoid having to probe the kernel for features.
We added new flags to ethtool to include extra info in the output, and
older kernels will return EOPNOTSUPP for the entire operation if those
are set (due to strict checking), while the user would probably expect
the information to simply not be there if the kernel can't provide it.
New kernels can't provide it all the time either (it's extra stats from
the driver).

I'm hoping Michal will accept this as a solution :) Retrying on
EOPNOTSUPP is a little too hairy for my taste.
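
(The hairy version, spelled out - hand-written here for illustration,
names made up, not actual ethtool code:

	ret = request_stats(sk, req, flags | FLAG_EXTRA_STATS);
	if (ret == -EOPNOTSUPP) {
		/* old kernel rejecting the new flag, or operation
		 * not supported at all? the error alone can't tell
		 * us, so retry without the flag and guess from that
		 */
		ret = request_stats(sk, req, flags);
	}

vs. dumping the policy once up front and knowing whether the flag is
understood before ever sending the request.)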

That should have been in the cover letter, I guess.
