Date:	Wed, 11 Feb 2009 13:44:23 +0100
From:	Patrick McHardy <kaber@...sh.net>
To:	Pablo Neira Ayuso <pablo@...filter.org>
CC:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
	netfilter-devel@...r.kernel.org
Subject: Re: [RFC] netlink broadcast return value

Pablo Neira Ayuso wrote:
> Patrick McHardy wrote:
>> I know, but in the meantime I think it's wrong :) The delivery
>> isn't reliable, and what the admin is effectively expressing by
>> setting your sysctl is "I don't have any listeners besides the
>> synchronization daemon running". So it might as well use unicast.
> 
> No :), this setting means "state-changes over ctnetlink will be reliable
> at the cost of dropping packets (if needed)"; it's an optional
> trade-off. You may also have more listeners, like a logging daemon
> (ulogd); similarly, this will be useful to ensure that ulogd doesn't leak
> logging information, which may happen under very heavy load. This option
> is *not* only oriented to state-synchronization.

I'm aware of that. But you're adding a policy knob to control the
behaviour of a one-to-many interface based on what a single listener
(or maybe even two) wants. It's not possible anymore to just listen to
events for debugging, since that might even lock you out. You also
can't use ulogd and say that you *don't* care whether every last state
change was delivered to it.

This seems very wrong to me. And I don't even see a reason to do
this, since it's easy to use unicast and per-listener state.
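
To make concrete what I mean by per-listener state, a rough sketch of
such a debugging subscriber (userspace, error handling trimmed; it
assumes the NFNLGRP_CONNTRACK_* multicast groups from
<linux/netfilter/nfnetlink.h> and is illustrative only):

/* Illustrative only: each listener joins the groups it cares about;
 * the only per-listener state the kernel keeps is that socket's
 * receive queue, so an overflow hurts nobody else. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/netfilter/nfnetlink.h>

#ifndef SOL_NETLINK
#define SOL_NETLINK 270
#endif

int main(void)
{
	char buf[8192];
	struct sockaddr_nl addr = { .nl_family = AF_NETLINK };
	int grp = NFNLGRP_CONNTRACK_UPDATE;
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_NETFILTER);

	if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;

	setsockopt(fd, SOL_NETLINK, NETLINK_ADD_MEMBERSHIP, &grp, sizeof(grp));

	for (;;) {
		ssize_t len = recv(fd, buf, sizeof(buf), 0);

		if (len < 0)	/* ENOBUFS here only affects this socket */
			perror("recv");
		else
			printf("%zd bytes of ctnetlink events\n", len);
	}
}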

> Using unicast would not be any different from broadcast, as you may have
> two listeners receiving state-changes from ctnetlink via unicast, so the
> problem would be basically the same as above if you want reliable
> state-change information at the cost of dropping packets.

With unicast, only the processes that actually care can specify this
behaviour. They're likely to have more CPU time, better adjusted receive
buffers, etc. than, for instance, the conntrack tool when dumping events.
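
For completeness, "better adjusted receive buffers" means little more
than something like this (sketch only; SO_RCVBUFFORCE needs
CAP_NET_ADMIN, plain SO_RCVBUF is capped by net.core.rmem_max):

#include <sys/socket.h>

/* Enlarge a netlink socket's receive queue so bursts of events are
 * less likely to overflow it; illustrative helper, name made up. */
static void bump_rcvbuf(int fd, int bytes)
{
	if (setsockopt(fd, SOL_SOCKET, SO_RCVBUFFORCE, &bytes, sizeof(bytes)) < 0)
		setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));
}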

> BTW, the netlink_broadcast return value looked inconsistent to me before
> the patch. It returned ENOBUFS if it could not clone the skb, but zero
> when at least one message was delivered. How useful can this return
> value be for the callers? I would expect behaviour similar to that of
> netlink_unicast (reporting an EAGAIN error when it could not deliver
> the message), even if the return value for most callers should be
> ignored as it is not of any help.

It's useless since you don't know who received it. It should return
void IMO.
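
To illustrate why (kernel-side sketch, simplified and not real
ctnetlink code; the error codes follow the pre-patch semantics you
describe):

#include <linux/netlink.h>
#include <linux/skbuff.h>
#include <net/netlink.h>

/* Sketch of the caller-side problem: with a one-to-many send, the
 * return value cannot name the listeners that missed the message. */
static void example_notify(struct sock *nlsk, struct sk_buff *skb, u32 group)
{
	int err = netlink_broadcast(nlsk, skb, 0, group, GFP_ATOMIC);

	if (err == -ENOBUFS) {
		/* the skb could not even be cloned: nothing was delivered */
	} else if (err == 0) {
		/* at least one listener got it, but with several
		 * subscribers we can't tell which ones, so there is
		 * little a caller can usefully do with this */
	}
}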

>> So you're dropping the packet if you can't manage to synchronize.
>> Doesn't that defeat the entire purpose of synchronizing, which is
>> *increasing* reliability? :)
> 
> This reduces communications reliability a bit under very heavy load,
> yes, because it may drop some packets, but it adds reliable flow-based
> logging / accounting and state-synchronization in return. Both refer to
> reliability in different contexts. In the end, it's a trade-off world.
> There's some point at which you may want to choose which one you prefer:
> reliable communications if the system is under heavy load, or reliable
> logging (no leaks in the logging) / state-synchronization (the backup
> firewall is able to follow the master's state-changes under heavy load).

Logging, yes, but I can't see the point of perfect synchronization if
it leads to less throughput.
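
For reference, the behaviour under discussion boils down to roughly
this (kernel-side sketch; the knob and the names are placeholders, not
the actual patch):

#include <linux/netlink.h>
#include <linux/skbuff.h>
#include <net/netlink.h>

static struct sock *event_sock;		/* placeholder for the nfnetlink socket */
static int sysctl_reliable_events;	/* hypothetical opt-in knob */

/* If the admin opted in and the event could not be broadcast, report
 * the failure so the caller can drop the packet that triggered the
 * state change: throughput under load is traded for lossless
 * logging / state-synchronization. */
static int example_deliver_event(struct sk_buff *skb, u32 group)
{
	int err = netlink_broadcast(event_sock, skb, 0, group, GFP_ATOMIC);

	if (err < 0 && sysctl_reliable_events)
		return err;
	return 0;
}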
