Message-ID: <CANn89i+m9yKkaVLUm9P8+gTSOMtvrJgsvHfKAjXCZ5_9Wf0-9w@mail.gmail.com>
Date: Wed, 26 Feb 2020 09:47:26 -0800
From: Eric Dumazet <edumazet@...gle.com>
To: Kuniyuki Iwashima <kuniyu@...zon.co.jp>
Cc: David Miller <davem@...emloft.net>,
Alexey Kuznetsov <kuznet@....inr.ac.ru>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
kuni1840@...il.com, netdev <netdev@...r.kernel.org>,
osa-contribution-log@...zon.com
Subject: Re: [PATCH v2 net-next 3/3] tcp: Prevent port hijacking when ports
are exhausted.
On Tue, Feb 25, 2020 at 11:46 PM Kuniyuki Iwashima <kuniyu@...zon.co.jp> wrote:
>
> If all of the sockets bound to the same port have SO_REUSEADDR and
> SO_REUSEPORT enabled, any other user can hijack the port by exhausting all
> ephemeral ports, binding sockets to (addr, 0) and calling listen().
>
Yes, a user (application) can steal all ports by opening many
sockets, binding them to (addr, 0) and calling listen().
This changelog is rather confusing, and your patch does not solve this
precise problem.
Patch titles are important: you are claiming something, but I fail to
see how the patch solves the problem stated in the title.
Please be more specific, and officially add tests in tools/testing/selftests/.
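Something along these lines (a rough sketch only: arbitrary port, helper
name made up, both sockets owned by the same uid, and the ephemeral-port
exhaustion step left out for brevity) is the kind of reproducer I would
expect to see turned into a selftest:

/* Rough sketch, not a selftest from this series: it only illustrates the
 * scenario the changelog describes. In the described attack the second
 * socket belongs to another user and ephemeral ports have been exhausted
 * beforehand (e.g. via net.ipv4.ip_local_port_range); both steps are
 * omitted here.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static int bound_socket(uint16_t port)
{
        struct sockaddr_in addr;
        int fd, one = 1;

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
                return -1;

        /* Both options enabled, as stated in the changelog. */
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);    /* 0 lets the kernel pick the port */

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                close(fd);
                return -1;
        }
        return fd;
}

int main(void)
{
        int victim = bound_socket(12345);       /* arbitrary example port */
        int attacker;

        if (victim < 0 || listen(victim, 8) < 0)
                return 1;

        /* In the described attack, ephemeral ports are exhausted first,
         * so this bind to (addr, 0) can only land on the victim's port;
         * the exhaustion step is not performed in this sketch. */
        attacker = bound_socket(0);

        /* The question under discussion: should this listen() succeed? */
        if (attacker >= 0 && listen(attacker, 8) == 0)
                printf("second listen() succeeded\n");
        else
                perror("second listen()");

        close(attacker);
        close(victim);
        return 0;
}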
> If both SO_REUSEADDR and SO_REUSEPORT are enabled, the restriction of
> SO_REUSEPORT should be taken into account so that only one socket can
> be in TCP_LISTEN.
Sorry, I do not understand this. If I do not understand the sentence,
I will not read the patch, which changes a piece of code that has very
often been broken in the past.
Please spend time on the changelog to give the exact outcome and goals.
Thanks.
>
> Signed-off-by: Kuniyuki Iwashima <kuniyu@...zon.co.jp>
> ---
> net/ipv4/inet_connection_sock.c | 12 +++++++++---
> 1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> index cddeab240ea6..d27ed5fe7147 100644
> --- a/net/ipv4/inet_connection_sock.c
> +++ b/net/ipv4/inet_connection_sock.c
> @@ -131,7 +131,7 @@ static int inet_csk_bind_conflict(const struct sock *sk,
> {
> struct sock *sk2;
> bool reuse = sk->sk_reuse;
> - bool reuseport = !!sk->sk_reuseport && reuseport_ok;
> + bool reuseport = !!sk->sk_reuseport;
> kuid_t uid = sock_i_uid((struct sock *)sk);
>
> /*
> @@ -148,10 +148,16 @@ static int inet_csk_bind_conflict(const struct sock *sk,
> sk->sk_bound_dev_if == sk2->sk_bound_dev_if)) {
> if (reuse && sk2->sk_reuse &&
> sk2->sk_state != TCP_LISTEN) {
> - if (!relax &&
> + if ((!relax ||
> + (!reuseport_ok &&
> + reuseport && sk2->sk_reuseport &&
> + !rcu_access_pointer(sk->sk_reuseport_cb) &&
> + (sk2->sk_state == TCP_TIME_WAIT ||
> + uid_eq(uid, sock_i_uid(sk2))))) &&
> inet_rcv_saddr_equal(sk, sk2, true))
> break;
> - } else if (!reuseport || !sk2->sk_reuseport ||
> + } else if (!reuseport_ok ||
> + !reuseport || !sk2->sk_reuseport ||
> rcu_access_pointer(sk->sk_reuseport_cb) ||
> (sk2->sk_state != TCP_TIME_WAIT &&
> !uid_eq(uid, sock_i_uid(sk2)))) {
> --
> 2.17.2 (Apple Git-113)
>