Message-ID: <CAK-6q+iBhzFVgm5NQaPCZhJ8tEvVVeTt2OAEGH4QkOfHqfYzaA@mail.gmail.com>
Date: Fri, 5 Mar 2021 14:43:05 -0500
From: Alexander Ahring Oder Aring <aahringo@...hat.com>
To: Pablo Neira Ayuso <pablo@...filter.org>
Cc: fw@...len.de, netdev@...r.kernel.org, linux-man@...r.kernel.org,
David Teigland <teigland@...hat.com>
Subject: Re: [PATCH resend] netlink.7: note not reliable if NETLINK_NO_ENOBUFS
Hi Pablo,
I appreciate your very detailed response. Thank you.
On Thu, Mar 4, 2021 at 10:04 PM Pablo Neira Ayuso <pablo@...filter.org> wrote:
>
> Hi Alexander,
>
> On Thu, Mar 04, 2021 at 03:57:28PM -0500, Alexander Aring wrote:
> > This patch adds a note to the netlink manpage that, if NETLINK_NO_ENOBUFS
> > is set, there is no additional handling to make netlink reliable; it
> > just disables the error notification.
>
> A bit more background on this toggle.
>
> NETLINK_NO_ENOBUFS also disables netlink broadcast congestion control,
> which kicks in when the socket buffer gets full. The existing
> congestion control algorithm keeps dropping netlink event messages
> until the queue is emptied. Note that it might take a while for your
> userspace process to fully empty the congested socket queue
> (and during that time _your process is losing every netlink event_).
>
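
For my own understanding (and maybe useful for the manpage text):
enabling the toggle is just a setsockopt() on SOL_NETLINK, along the
lines of this untested sketch (function name is mine):

  #include <sys/socket.h>
  #include <linux/netlink.h>

  #ifndef SOL_NETLINK
  #define SOL_NETLINK 270
  #endif

  static int netlink_no_enobufs(int nlfd)
  {
          int on = 1;

          /* no ENOBUFS errors are queued to this socket and, for
           * broadcast, no congestion control is applied on its behalf */
          return setsockopt(nlfd, SOL_NETLINK, NETLINK_NO_ENOBUFS,
                            &on, sizeof(on));
  }
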
> The usual approach when your process hits ENOBUFS is to resync via
> an NLM_F_DUMP unicast request. However, getting back in sync with the
> kernel subsystem might be expensive if the number of items that are
> exposed via netlink is huge.
>
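
So the recovery path would look roughly like this? (untested sketch,
helper name is mine; the actual NLM_F_DUMP request and the message
parsing are left out since they are application specific):

  #include <errno.h>
  #include <stdbool.h>
  #include <sys/types.h>
  #include <sys/socket.h>

  /* returns true when the caller has to resync via NLM_F_DUMP,
   * i.e. event messages were dropped by the kernel */
  static bool drain_events(int nlfd)
  {
          char buf[8192];

          for (;;) {
                  ssize_t len = recv(nlfd, buf, sizeof(buf), 0);

                  if (len < 0) {
                          if (errno == EINTR)
                                  continue;
                          if (errno == ENOBUFS)
                                  return true;   /* lost events */
                          return false;          /* real socket error */
                  }
                  /* ... walk the nlmsghdr chain in buf here ... */
          }
  }
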
> Note that some people select a very large socket buffer queue for
> netlink sockets when they notice ENOBUFS. This might however make
> things worse because, as I said, congestion control drops every
> netlink message until the queue is emptied. Selecting a large socket
> buffer might help to postpone the ENOBUFS error, but once your process
> hits ENOBUFS, the netlink congestion control kicks in and it will
> make you lose a lot of event messages (until the queue is empty
> again!).
>
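
i.e. people bump the receive buffer with something along these lines
(untested sketch, helper name is mine and the 8 MiB value is an
arbitrary example), which as you say only postpones the problem:

  #include <sys/socket.h>

  static void bump_rcvbuf(int nlfd)
  {
          int sz = 8 * 1024 * 1024;

          /* SO_RCVBUFFORCE needs CAP_NET_ADMIN; plain SO_RCVBUF is
           * capped by net.core.rmem_max */
          if (setsockopt(nlfd, SOL_SOCKET, SO_RCVBUFFORCE,
                         &sz, sizeof(sz)) < 0)
                  setsockopt(nlfd, SOL_SOCKET, SO_RCVBUF,
                             &sz, sizeof(sz));
  }
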
> So NETLINK_NO_ENOBUFS from userspace makes sense if:
>
> 1) You are subscribed to a netlink broadcast group (so it does _not_
> make sense for unicast netlink sockets).
> 2) The kernel subsystem delivers the netlink messages you are
> subscribed to from atomic context (e.g. the network packet path: if
> the netlink event is triggered by network packets, your process
> might get spammed with a lot of netlink messages in a short time,
> depending on your network workload).
> 3) Your process does not want to resync on lost netlink messages.
> Your process assumes that events might get lost but it does not
> care / it does not want to take any specific action in such a case.
> 4) You want to disable the netlink broadcast congestion control.
>
> To give an example of a kernel subsystem, this toggle can be useful
> with the connection tracking system, when monitoring for new
> connection events in a soft real-time fashion.
>
Can we just copy and paste your list above and the connection tracking
example into the netlink manpage? I think it's good to have a
checklist like that to see whether this option fits.
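
For the connection tracking example, something like this untested
sketch is what I have in mind (raw NETLINK_NETFILTER groups here,
function name is mine; in practice libnetfilter_conntrack would
normally do the setup):

  #include <string.h>
  #include <sys/socket.h>
  #include <linux/netlink.h>
  #include <linux/netfilter/nfnetlink.h>

  #ifndef SOL_NETLINK
  #define SOL_NETLINK 270
  #endif

  static int open_ct_new_monitor(void)
  {
          struct sockaddr_nl addr;
          int on = 1;
          int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_NETFILTER);

          if (fd < 0)
                  return -1;

          memset(&addr, 0, sizeof(addr));
          addr.nl_family = AF_NETLINK;
          addr.nl_groups = NF_NETLINK_CONNTRACK_NEW; /* new-conntrack events */
          if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                  return -1;

          /* drop silently under congestion instead of ENOBUFS + throttling */
          setsockopt(fd, SOL_NETLINK, NETLINK_NO_ENOBUFS, &on, sizeof(on));
          return fd;
  }
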
> > The word "avoid" receiving ENOBUFS errors can be interpreted as
> > meaning that netlink does some additional queue handling to prevent
> > such a scenario from occurring at all, e.g. like zerocopy, which
> > tries to avoid memory copies. However, "disable" is not the right
> > word here either, since in some cases ENOBUFS can still be
> > received. This patch makes clear that there will be no additional
> > handling to put netlink in a more reliable mode.
>
> Right, the NETLINK_NO_ENOBUFS toggle by itself does not make
> netlink more reliable for the broadcast scenario, it just changes the
> way netlink broadcast deals with congestion: the userspace process
> gets no reports on lost messages and netlink congestion control is
> disabled.
>
Just out of curiosity:
If I understand correctly, the connection tracking netlink interface
is an exception here because it has its own way of dealing with
congestion ("more reliable"?), so you need to disable the "default
congestion control"?
Does connection tracking always use its own congestion algorithm, so
it's recommended to turn NETLINK_NO_ENOBUFS on when using it?
Thanks.
- Alex