Message-ID: <38ebe747-f65f-3b03-d089-86f454c78584@gmail.com>
Date: Fri, 17 Jun 2022 10:17:04 -0600
From: David Ahern <dsahern@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Ismael Luceno <iluceno@...e.de>,
"David S. Miller" <davem@...emloft.net>,
Paolo Abeni <pabeni@...hat.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: Netlink NLM_F_DUMP_INTR flag lost
On 6/17/22 9:22 AM, Jakub Kicinski wrote:
> On Fri, 17 Jun 2022 08:55:53 -0600 David Ahern wrote:
>>> No, I'm concerned that while in the dumping loop, the table might
>>> change between iterations, and if this results in the loop not finding
>>> more entries, because in most these functions there's no
>>> consistency check after the loop, this will go undetected.
>>
>> Specific example? e.g., fib dump and address dumps have a generation id
>> that gets recorded before the start of the dump and checked at the end
>> of the dump.
>
> FWIW what I think is strange is that we record the gen id before the
> dump and then check if the recorded version was old. Like.. what's the
> point of that? Nothing updates cb->seq while dumping AFAICS, so the
While dumping, no, because the rtnl lock is held.
The genid is used across syscalls when dumping a table that does not fit
within a 64kB message.
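
For reference, the check itself is small; roughly this (paraphrasing from
include/net/netlink.h, so treat it as a sketch rather than the exact
in-tree code):

/* Sketch of nl_dump_check_consistent().  cb->seq holds the generation id
 * sampled at the start of the current dump pass; cb->prev_seq is whatever
 * the previous pass (the previous recvmsg() syscall on this dump) sampled.
 * If they differ, the table changed between passes, so the message being
 * built gets NLM_F_DUMP_INTR and userspace knows to restart the dump.
 */
static inline void nl_dump_check_consistent(struct netlink_callback *cb,
					    struct nlmsghdr *nlh)
{
	if (cb->prev_seq && cb->seq != cb->prev_seq)
		nlh->nlmsg_flags |= NLM_F_DUMP_INTR;
	cb->prev_seq = cb->seq;
}

Because cb->prev_seq lives in the callback and survives from one recvmsg()
to the next, a genid bump between two passes of a large dump is exactly
what this catches, even though nothing can change under the rtnl within a
single pass.
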
> code is functionally equivalent to this right?
>
> diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
> index 92b778e423df..0cd7482dc1f0 100644
> --- a/net/ipv4/devinet.c
> +++ b/net/ipv4/devinet.c
> @@ -2259,6 +2259,7 @@ static int inet_netconf_dump_devconf(struct sk_buff *skb,
> rcu_read_lock();
> cb->seq = atomic_read(&net->ipv4.dev_addr_genid) ^
> net->dev_base_seq;
> + nl_dump_check_consistent(cb, nlmsg_hdr(skb));
> hlist_for_each_entry_rcu(dev, head, index_hlist) {
> if (idx < s_idx)
> goto cont;
> @@ -2276,7 +2277,6 @@ static int inet_netconf_dump_devconf(struct sk_buff *skb,
> rcu_read_unlock();
> goto done;
> }
> - nl_dump_check_consistent(cb, nlmsg_hdr(skb));
> cont:
> idx++;
> }
>
>