Date:   Thu, 10 Dec 2020 11:33:40 -0800
From:   Martin KaFai Lau <kafai@...com>
To:     Kuniyuki Iwashima <kuniyu@...zon.co.jp>
CC:     <ast@...nel.org>, <benh@...zon.com>, <bpf@...r.kernel.org>,
        <daniel@...earbox.net>, <davem@...emloft.net>,
        <edumazet@...gle.com>, <eric.dumazet@...il.com>, <kuba@...nel.org>,
        <kuni1840@...il.com>, <linux-kernel@...r.kernel.org>,
        <netdev@...r.kernel.org>
Subject: Re: [PATCH v1 bpf-next 03/11] tcp: Migrate
 TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

On Thu, Dec 10, 2020 at 02:58:10PM +0900, Kuniyuki Iwashima wrote:

[ ... ]

> > > I've implemented one-by-one migration only for the accept queue for now.
> > > In addition to the concern about the TFO queue,
> > You meant this queue:  queue->fastopenq.rskq_rst_head?
> 
> Yes.
> 
> 
> > Can "req" be passed?
> > I did not look up the lock/race in detail for that though.
> 
> I think if we rewrite the part that frees TFO requests to work like the
> accept queue handling, using reqsk_queue_remove(), we can also migrate them.
> 
> In this patchset, when selecting a listener for the accept queue, the TFO
> queue of the same listener is also migrated to another listener in order to
> prevent a TFO spoofing attack.
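
Just to make the idea concrete, here is a rough sketch of what moving the
whole rst queue wholesale could look like.  This is not from the patchset:
the TCP_LISTEN check on nsk, the fastopenq->qlen accounting, and the
rsk_listener update ordering (the patch 5 question) are all glossed over.

---8<---
/* Hypothetical helper, simplified for illustration only. */
static void tfo_rst_queue_migrate(struct sock *sk, struct sock *nsk)
{
	struct fastopen_queue *oldq = &inet_csk(sk)->icsk_accept_queue.fastopenq;
	struct fastopen_queue *newq = &inet_csk(nsk)->icsk_accept_queue.fastopenq;
	struct request_sock *head, *tail, *req;

	/* Detach the whole rst list from the old listener. */
	spin_lock_bh(&oldq->lock);
	head = oldq->rskq_rst_head;
	tail = oldq->rskq_rst_tail;
	oldq->rskq_rst_head = NULL;
	oldq->rskq_rst_tail = NULL;
	spin_unlock_bh(&oldq->lock);

	if (!head)
		return;

	/* Repoint each req's listener reference to the new listener. */
	for (req = head; req; req = req->dl_next) {
		sock_hold(nsk);
		req->rsk_listener = nsk;
		sock_put(sk);
	}

	/* Append the detached list to the new listener's rst queue. */
	spin_lock_bh(&newq->lock);
	if (newq->rskq_rst_head)
		newq->rskq_rst_tail->dl_next = head;
	else
		newq->rskq_rst_head = head;
	newq->rskq_rst_tail = tail;
	spin_unlock_bh(&newq->lock);
}
---8<---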
> 
> If the requests in the accept queue are migrated one by one, I am wondering
> which listener the requests in the TFO queue should be migrated to in order
> to prevent the attack, or whether they should simply be freed.
> 
> I think users need not know that the kernel keeps such requests around to
> prevent attacks, so passing them to the eBPF prog would be confusing. But
> redistributing them randomly, without the user's intention, can make some
> irrelevant listeners unnecessarily drop new TFO requests, so this is also
> bad. Moreover, freeing such requests does not seem good from a security
> point of view.
The current behavior (during process restart) also does not carry over this
security queue.  Would not carrying it in this patch make it
less secure than the current behavior during process restart?
Do you need it now, or is it something that can be considered later
without changing uapi bpf.h?

> > > ---8<---
> > > diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> > > index a82fd4c912be..d0ddd3cb988b 100644
> > > --- a/net/ipv4/inet_connection_sock.c
> > > +++ b/net/ipv4/inet_connection_sock.c
> > > @@ -1001,6 +1001,29 @@ struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
> > >  }
> > >  EXPORT_SYMBOL(inet_csk_reqsk_queue_add);
> > >  
> > > +static bool inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk, struct request_sock *req)
> > > +{
> > > +       struct request_sock_queue *queue = &inet_csk(nsk)->icsk_accept_queue;
> > > +       bool migrated = false;
> > > +
> > > +       spin_lock(&queue->rskq_lock);
> > > +       if (likely(nsk->sk_state == TCP_LISTEN)) {
> > > +               migrated = true;
> > > +
> > > +               req->dl_next = NULL;
> > > +               if (queue->rskq_accept_head == NULL)
> > > +                       WRITE_ONCE(queue->rskq_accept_head, req);
> > > +               else
> > > +                       queue->rskq_accept_tail->dl_next = req;
> > > +               queue->rskq_accept_tail = req;
> > > +               sk_acceptq_added(nsk);
> > > +               inet_csk_reqsk_queue_migrated(sk, nsk, req);
> > need to first resolve the question raised in patch 5 regarding
> > the update on req->rsk_listener though.
> 
> In the unhash path, it is also safe to call sock_put() for the old listener.
> 
> In inet_csk_listen_stop(), the sk_refcnt of the listener is >= 1. If the
> listener does not have immature requests, sk_refcnt is 1 and the listener is
> freed in __tcp_close().
> 
>   sock_hold(sk) in __tcp_close()
>   sock_put(sk) in inet_csk_destroy_sock()
>   sock_put(sk) in __tcp_close()
I don't see how it is different here than in patch 5.
I could be missing something.

Let's continue the discussion on the other thread (patch 5) first.

> 
> 
> > > +       }
> > > +       spin_unlock(&queue->rskq_lock);
> > > +
> > > +       return migrated;
> > > +}
> > > +
> > >  struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock *child,
> > >                                          struct request_sock *req, bool own_req)
> > >  {
> > > @@ -1023,9 +1046,11 @@ EXPORT_SYMBOL(inet_csk_complete_hashdance);
> > >   */
> > >  void inet_csk_listen_stop(struct sock *sk)
> > >  {
> > > +       struct sock_reuseport *reuseport_cb = rcu_access_pointer(sk->sk_reuseport_cb);
> > >         struct inet_connection_sock *icsk = inet_csk(sk);
> > >         struct request_sock_queue *queue = &icsk->icsk_accept_queue;
> > >         struct request_sock *next, *req;
> > > +       struct sock *nsk;
> > >  
> > >         /* Following specs, it would be better either to send FIN
> > >          * (and enter FIN-WAIT-1, it is normal close)
> > > @@ -1043,8 +1068,19 @@ void inet_csk_listen_stop(struct sock *sk)
> > >                 WARN_ON(sock_owned_by_user(child));
> > >                 sock_hold(child);
> > >  
> > > +               if (reuseport_cb) {
> > > +                       nsk = reuseport_select_migrated_sock(sk, req_to_sk(req)->sk_hash, NULL);
> > > +                       if (nsk) {
> > > +                               if (inet_csk_reqsk_queue_migrate(sk, nsk, req))
> > > +                                       goto unlock_sock;
> > > +                               else
> > > +                                       sock_put(nsk);
> > > +                       }
> > > +               }
> > > +
> > >                 inet_child_forget(sk, req, child);
> > >                 reqsk_put(req);
> > > +unlock_sock:
> > >                 bh_unlock_sock(child);
> > >                 local_bh_enable();
> > >                 sock_put(child);
> > > ---8<---
> > > 
> > > 
> > > > > >   5. lock the accept queue of the new listener
> > > > > >   6. splice requests and increment refcount
> > > > > >   7. unlock
> > > > > > 
> > > > > > Also, I think splicing is better for keeping the order of requests. Adding
> > > > > > them one by one reverses it.
> > > > > It can keep the order but I think it is orthogonal here.
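
Fwiw, to make the splicing point concrete, a whole-queue splice along the
lines of steps 5-7 above could look like the sketch below.  This is a
hypothetical helper, not from the patchset; lock ordering between the two
listeners, sk_ack_backlog accounting, and the per-req rsk_listener update
are all elided.

---8<---
/* Hypothetical helper, simplified for illustration only. */
static void reqsk_queue_splice(struct sock *sk, struct sock *nsk)
{
	struct request_sock_queue *oldq = &inet_csk(sk)->icsk_accept_queue;
	struct request_sock_queue *newq = &inet_csk(nsk)->icsk_accept_queue;

	spin_lock(&oldq->rskq_lock);
	spin_lock_nested(&newq->rskq_lock, SINGLE_DEPTH_NESTING);

	if (oldq->rskq_accept_head) {
		/* Append the old list after the new listener's tail so
		 * the arrival order within each queue is preserved.
		 */
		if (newq->rskq_accept_head)
			newq->rskq_accept_tail->dl_next = oldq->rskq_accept_head;
		else
			WRITE_ONCE(newq->rskq_accept_head, oldq->rskq_accept_head);
		newq->rskq_accept_tail = oldq->rskq_accept_tail;

		WRITE_ONCE(oldq->rskq_accept_head, NULL);
	}

	spin_unlock(&newq->rskq_lock);
	spin_unlock(&oldq->rskq_lock);
}
---8<---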
