Date:   Wed, 19 Feb 2020 04:19:36 +0300
From:   Evgeniy Polyakov <zbr@...emap.net>
To:     "Daniel Walker (danielwa)" <danielwa@...co.com>,
        David Miller <davem@...emloft.net>
Cc:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] drivers: connector: cn_proc: allow limiting certain messages

18.02.2020, 23:55, "Daniel Walker (danielwa)" <danielwa@...co.com>:
>>  > I think I would agree with you if this was unicast, and each listener could tailor
>>  > what messages they want to get. However, this interface isn't that, and it would
>>  > be considerable work to convert to that.
>>
>>  You filter at recvmsg() on the specific socket, multicast or not, I
>>  don't understand what the issue is.
>
> Cisco tried something like this (I don't know if it was exactly what you're referring to),
> and it was messy and fairly complicated for a simple interface. In fact, it was
> the first thing I suggested for Cisco.
>
> I'm not sure why Connector has to supply an exact set of messages; one could
> just make a whole new kernel module hooked into netlink, sending a different
> subset of connector messages. The interface eats up CPU and slows the
> system if it's sending messages you're just going to ignore. I'm sure the
> filtering would also slow down the system.
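
For what it's worth, the recvmsg()-time filtering suggested above is just a
drop in the reader loop. A minimal userspace sketch - assuming the stock proc
connector group CN_IDX_PROC, keeping only fork events and trimming error
handling - might look like this:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/connector.h>
#include <linux/cn_proc.h>

int main(void)
{
	struct sockaddr_nl sa = {
		.nl_family = AF_NETLINK,
		.nl_pid    = getpid(),
		.nl_groups = CN_IDX_PROC,
	};
	char buf[NLMSG_SPACE(sizeof(struct cn_msg) + sizeof(struct proc_event))];
	struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
	struct cn_msg *cn;
	int sk;

	sk = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);
	bind(sk, (struct sockaddr *)&sa, sizeof(sa));

	/* Subscribe: PROC_CN_MCAST_LISTEN inside cn_msg inside nlmsghdr. */
	memset(buf, 0, sizeof(buf));
	nlh->nlmsg_len = NLMSG_LENGTH(sizeof(*cn) + sizeof(enum proc_cn_mcast_op));
	nlh->nlmsg_type = NLMSG_DONE;
	cn = NLMSG_DATA(nlh);
	cn->id.idx = CN_IDX_PROC;
	cn->id.val = CN_VAL_PROC;
	cn->len = sizeof(enum proc_cn_mcast_op);
	*(enum proc_cn_mcast_op *)cn->data = PROC_CN_MCAST_LISTEN;
	send(sk, nlh, nlh->nlmsg_len, 0);

	for (;;) {
		ssize_t len = recv(sk, buf, sizeof(buf), 0);
		struct proc_event *ev;

		if (len <= 0)
			break;

		cn = NLMSG_DATA((struct nlmsghdr *)buf);
		ev = (struct proc_event *)cn->data;

		/* The per-socket "filter": just skip what you don't want. */
		if (ev->what != PROC_EVENT_FORK)
			continue;

		printf("fork: parent %d -> child %d\n",
		       ev->event_data.fork.parent_pid,
		       ev->event_data.fork.child_pid);
	}
	return 0;
}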

Connector has a unicast interface and a multicast-like 'subscription' mechanism, but sending system-wide messages
implies using the broadcast interface, since you cannot hold per-user/per-socket information about a particular
event mask. Instead, connector has channels, each of which could be used for a specific message type,
but that looks like overkill for simple process mask changes.
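
To make the broadcast point concrete, the kernel side of a proc event is, in
shape, a single group send - nothing on this path consults a per-socket event
mask (this is only a sketch; see drivers/connector/cn_proc.c for the real
code):

#include <linux/connector.h>
#include <linux/cn_proc.h>
#include <linux/gfp.h>

/* One cn_msg handed to cn_netlink_send() for the CN_IDX_PROC group is
 * delivered to every socket bound to that group. */
static void broadcast_proc_event(struct cn_msg *msg)
{
	msg->id.idx = CN_IDX_PROC;
	msg->id.val = CN_VAL_PROC;
	cn_netlink_send(msg, 0, CN_IDX_PROC, GFP_KERNEL);
}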

And in fact, now I do not understand your point.
I thought you were concerned about receiving too many messages from a particular connector module because
there are, for example, too many 'fork'/'signal' events. And now you want to limit them to 'fork' events only,
even if there could be other users who want to receive 'signal' and other events.

And you blame connector - basically a network medium, call it TCP if you like - for not filtering this for you?
And after being told to use connector channels - let's call them TCP ports -
which requires quite a bit of work, you do not want to do it (also, it would break backward compatibility for everyone
else, including (!) Cisco (!!)). I'm a little bit lost here.
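
To illustrate the channel ("TCP port") idea, here is a hypothetical sketch -
the CN_IDX_FORK_ONLY/CN_VAL_FORK_ONLY numbers and names are made up - of
registering a new channel and broadcasting only a filtered subset on it:

#include <linux/module.h>
#include <linux/init.h>
#include <linux/connector.h>

#define CN_IDX_FORK_ONLY	0x10	/* hypothetical channel id ("port") */
#define CN_VAL_FORK_ONLY	0x1

static struct cb_id fork_only_id = {
	.idx = CN_IDX_FORK_ONLY,
	.val = CN_VAL_FORK_ONLY,
};

/* Requests arriving on this channel (e.g. subscribe/unsubscribe). */
static void fork_only_callback(struct cn_msg *msg,
			       struct netlink_skb_parms *nsp)
{
	/* parse msg->data here */
}

/* Called from the fork path: broadcast only on this channel's group. */
static void fork_only_send(struct cn_msg *msg)
{
	msg->id = fork_only_id;
	cn_netlink_send(msg, 0, CN_IDX_FORK_ONLY, GFP_KERNEL);
}

static int __init fork_only_init(void)
{
	return cn_add_callback(&fork_only_id, "fork_only", fork_only_callback);
}
module_init(fork_only_init);
MODULE_LICENSE("GPL");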

As a side note, and as a more practical way forward: do we want a global switch for broadcasting particular process state changes?
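
Purely as a sketch of what such a switch might look like - the names and the
module-parameter choice are illustrative, not a proposal for the actual
patch - a bitmask of enabled event types checked before each broadcast:

#include <linux/module.h>
#include <linux/cn_proc.h>

/* All PROC_EVENT_* types enabled by default; writable at runtime through
 * the module parameters directory in sysfs. */
static unsigned int cn_proc_event_mask = ~0U;
module_param(cn_proc_event_mask, uint, 0644);

/* PROC_EVENT_* values are distinct bits, so a mask test is enough, e.g.
 * the fork path would bail out early with
 * if (!cn_proc_event_enabled(PROC_EVENT_FORK)) return; */
static bool cn_proc_event_enabled(unsigned int event)
{
	return cn_proc_event_mask & event;
}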
