Message-ID: <CAEf4BzZ4J-c-ODnBD3C8NJeeLOdCqLWvFadWXR8t9eFKaGZOvw@mail.gmail.com>
Date: Wed, 14 Oct 2020 16:14:12 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Yonghong Song <yhs@...com>
Cc: bpf <bpf@...r.kernel.org>, Networking <netdev@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Kernel Team <kernel-team@...com>,
Andrii Nakryiko <andriin@...com>,
Vasily Averin <vvs@...tuozzo.com>
Subject: Re: [PATCH net v3] net: fix pos incrementment in ipv6_route_seq_next
On Wed, Oct 14, 2020 at 2:53 PM Yonghong Song <yhs@...com> wrote:
>
> Commit 4fc427e05158 ("ipv6_route_seq_next should increase position index")
> tried to fix the issue where seq_file pos is not increased
> if a NULL element is returned with seq_ops->next(). See bug
> https://bugzilla.kernel.org/show_bug.cgi?id=206283
> The commit effectively does:
> - increase pos for all seq_ops->start()
> - increase pos for all seq_ops->next()
>
> For ipv6_route, increasing pos for all seq_ops->next() calls is correct.
> But increasing pos in seq_ops->start() is not correct,
> since pos is used to determine how many items to skip during
> seq_ops->start():
>     iter->skip = *pos;
> seq_ops->start() just fetches the *current* pos item; the item only
> counts as consumed (and thus skippable) after seq_ops->show(), which is
> essentially the beginning of seq_ops->next().
>
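For anyone less familiar with the seq_file iteration contract being
described here, a simplified sketch of the pattern might look like the
following (the demo_* names are made up for illustration; this is not
the actual ip6_fib.c code, where the iterator is struct ipv6_route_iter):

#include <linux/seq_file.h>

/* Illustrative iterator state, standing in for struct ipv6_route_iter */
struct demo_iter {
	loff_t skip;
};

/* Hypothetical helpers standing in for the fib6 tree walk */
void *demo_fetch_first_unskipped(struct demo_iter *iter);
void *demo_fetch_next(struct demo_iter *iter);

static void *demo_seq_start(struct seq_file *seq, loff_t *pos)
{
	struct demo_iter *iter = seq->private;

	iter->skip = *pos;	/* resume point: skip items already shown */
	return demo_fetch_first_unskipped(iter);	/* must not bump *pos */
}

static void *demo_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
	struct demo_iter *iter = seq->private;

	++*pos;			/* v has just been ->show()n, count it now */
	return demo_fetch_next(iter);
}

I.e. ->start() only consumes *pos to know where to resume, and only
->next() advances it, once per item actually shown.
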
> For example, I have 7 ipv6 route entries,
> root@...h-fb-vm1:~/net-next dd if=/proc/net/ipv6_route bs=4096
> 00000000000000000000000000000000 40 00000000000000000000000000000000 00 00000000000000000000000000000000 00000400 00000001 00000000 00000001 eth0
> fe800000000000000000000000000000 40 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000001 00000000 00000001 eth0
> 00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200 lo
> 00000000000000000000000000000001 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000003 00000000 80200001 lo
> fe800000000000002050e3fffebd3be8 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000002 00000000 80200001 eth0
> ff000000000000000000000000000000 08 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000004 00000000 00000001 eth0
> 00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200 lo
> 0+1 records in
> 0+1 records out
> 1050 bytes (1.0 kB, 1.0 KiB) copied, 0.00707908 s, 148 kB/s
> root@...h-fb-vm1:~/net-next
>
> In the above, I specify buffer size 4096, so all records can be returned
> to user space with a single trip to the kernel.
>
> If I use buffer size 128, since each record is 149 bytes, the kernel's
> seq_read() will internally read one 149-byte record into its buffer and
> return the data to user space over two read() syscalls. The next user
> read() syscall then triggers the next seq_ops->start(). Since the current
> implementation increases pos even for seq_ops->start(), it will skip
> records #2, #4 and #6, assuming the first record is #1.
>
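(Side note: the same effect can be reproduced without dd; assuming a
plain read() loop with a 128-byte user buffer, as in the untested sketch
below, each record comes back over two read() calls and the reads that
need fresh data go through seq_ops->start() again:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[128];		/* same size as the dd bs=128 run below */
	ssize_t n;
	int fd = open("/proc/net/ipv6_route", O_RDONLY);

	if (fd < 0)
		return 1;
	/* each read() that has to refill the seq buffer goes through
	 * seq_ops->start(), which is where the extra pos bump bites */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		fwrite(buf, 1, n, stdout);
	close(fd);
	return 0;
}

With the current, unpatched pos handling this prints only every other
record, as described above.)
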
> root@...h-fb-vm1:~/net-next dd if=/proc/net/ipv6_route bs=128
Did you test with a non-zero skip= parameter as well (to force an lseek)?
Just to make sure we don't break the scenario that the original fix was
addressing.
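Something along these lines should exercise that path (an untested
sketch; the 298-byte offset is arbitrary, just two of the 149-byte
records mentioned above):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	ssize_t n;
	int fd = open("/proc/net/ipv6_route", O_RDONLY);

	if (fd < 0)
		return 1;
	/* seek past the first two records so the following read()
	 * reaches seq_ops->start() with a non-zero position, the
	 * lseek/skip= case the original fix was about */
	if (lseek(fd, 298, SEEK_SET) < 0)
		return 1;
	n = read(fd, buf, sizeof(buf));
	if (n > 0)
		fwrite(buf, 1, n, stdout);
	close(fd);
	return 0;
}
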
If that works:
Acked-by: Andrii Nakryiko <andrii@...nel.org>
[...]
> diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
> index 141c0a4c569a..605cdd38a919 100644
> --- a/net/ipv6/ip6_fib.c
> +++ b/net/ipv6/ip6_fib.c
> @@ -2622,8 +2622,10 @@ static void *ipv6_route_seq_start(struct seq_file *seq, loff_t *pos)
> iter->skip = *pos;
>
> if (iter->tbl) {
> + loff_t p = 0;
> +
> ipv6_route_seq_setup_walk(iter, net);
> - return ipv6_route_seq_next(seq, NULL, pos);
> + return ipv6_route_seq_next(seq, NULL, &p);
nit: a comment here wouldn't hurt for the next person stumbling upon this
code and wondering why we ignore p afterwards
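Something like this, perhaps (the wording is only a suggestion):

		ipv6_route_seq_setup_walk(iter, net);
		/* Fetch the first (post-skip) entry with a dummy pos so
		 * that ->start() itself does not advance *pos; *pos is
		 * only bumped by the subsequent ->next() calls.
		 */
		return ipv6_route_seq_next(seq, NULL, &p);
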
> } else {
> return NULL;
> }
> --
> 2.24.1
>