Message-ID: <20231120192913.28629-1-kuniyu@amazon.com>
Date:   Mon, 20 Nov 2023 11:29:13 -0800
From:   Kuniyuki Iwashima <kuniyu@...zon.com>
To:     <ivan@...udflare.com>
CC:     <edumazet@...gle.com>, <hdanton@...a.com>,
        <kernel-team@...udflare.com>, <kuniyu@...zon.com>,
        <linux-kernel@...r.kernel.org>, <netdev@...r.kernel.org>,
        <pabeni@...hat.com>
Subject: Re: wait_for_unix_gc can cause CPU overload for well behaved programs

From: Ivan Babrou <ivan@...udflare.com>
Date: Fri, 17 Nov 2023 15:38:42 -0800
> On Mon, Oct 23, 2023 at 4:46 PM Kuniyuki Iwashima <kuniyu@...zon.com> wrote:
> >
> > From: Ivan Babrou <ivan@...udflare.com>
> > Date: Mon, 23 Oct 2023 16:22:35 -0700
> > > On Fri, Oct 20, 2023 at 6:23 PM Hillf Danton <hdanton@...a.com> wrote:
> > > >
> > > > On Fri, 20 Oct 2023 10:25:25 -0700 Ivan Babrou <ivan@...udflare.com>
> > > > >
> > > > > This could solve wait_for_unix_gc spinning, but it wouldn't affect
> > > > > unix_gc itself, from what I understand. There would always be one
> > > > > socket writer or destroyer punished by running the gc still.
> > > >
> > > > See what you want. The innocents are rescued by kicking a worker off.
> > > > Only for thoughts.
> > > >
> > > > --- x/net/unix/garbage.c
> > > > +++ y/net/unix/garbage.c
> > > > @@ -86,7 +86,6 @@
> > > >  /* Internal data structures and random procedures: */
> > > >
> > > >  static LIST_HEAD(gc_candidates);
> > > > -static DECLARE_WAIT_QUEUE_HEAD(unix_gc_wait);
> > > >
> > > >  static void scan_inflight(struct sock *x, void (*func)(struct unix_sock *),
> > > >                           struct sk_buff_head *hitlist)
> > > > @@ -185,24 +184,25 @@ static void inc_inflight_move_tail(struc
> > > >                 list_move_tail(&u->link, &gc_candidates);
> > > >  }
> > > >
> > > > -static bool gc_in_progress;
> > > > +static void __unix_gc(struct work_struct *w);
> > > > +static DECLARE_WORK(unix_gc_work, __unix_gc);
> > > > +
> > > >  #define UNIX_INFLIGHT_TRIGGER_GC 16000
> > > >
> > > >  void wait_for_unix_gc(void)
> > > >  {
> > > >         /* If number of inflight sockets is insane,
> > > > -        * force a garbage collect right now.
> > > > -        * Paired with the WRITE_ONCE() in unix_inflight(),
> > > > -        * unix_notinflight() and gc_in_progress().
> > > > -        */
> > > > -       if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC &&
> > > > -           !READ_ONCE(gc_in_progress))
> > > > -               unix_gc();
> > > > -       wait_event(unix_gc_wait, gc_in_progress == false);
> > > > +        * kick a garbage collect right now.
> > > > +        *
> > > > +        * todo s/wait_for_unix_gc/kick_unix_gc/
> > > > +        */
> > > > +       if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC /2)
> > > > +               queue_work(system_unbound_wq, &unix_gc_work);
> > > >  }
> > > >
> > > > -/* The external entry point: unix_gc() */
> > > > -void unix_gc(void)
> > > > +static DEFINE_MUTEX(unix_gc_mutex);
> > > > +
> > > > +static void __unix_gc(struct work_struct *w)
> > > >  {
> > > >         struct sk_buff *next_skb, *skb;
> > > >         struct unix_sock *u;
> > > > @@ -211,15 +211,10 @@ void unix_gc(void)
> > > >         struct list_head cursor;
> > > >         LIST_HEAD(not_cycle_list);
> > > >
> > > > +       if (!mutex_trylock(&unix_gc_mutex))
> > > > +               return;
> > > >         spin_lock(&unix_gc_lock);
> > > >
> > > > -       /* Avoid a recursive GC. */
> > > > -       if (gc_in_progress)
> > > > -               goto out;
> > > > -
> > > > -       /* Paired with READ_ONCE() in wait_for_unix_gc(). */
> > > > -       WRITE_ONCE(gc_in_progress, true);
> > > > -
> > > >         /* First, select candidates for garbage collection.  Only
> > > >          * in-flight sockets are considered, and from those only ones
> > > >          * which don't have any external reference.
> > > > @@ -325,11 +320,12 @@ void unix_gc(void)
> > > >         /* All candidates should have been detached by now. */
> > > >         BUG_ON(!list_empty(&gc_candidates));
> > > >
> > > > -       /* Paired with READ_ONCE() in wait_for_unix_gc(). */
> > > > -       WRITE_ONCE(gc_in_progress, false);
> > > > -
> > > > -       wake_up(&unix_gc_wait);
> > > > -
> > > > - out:
> > > >         spin_unlock(&unix_gc_lock);
> > > > +       mutex_unlock(&unix_gc_mutex);
> > > > +}
> > > > +
> > > > +/* The external entry point: unix_gc() */
> > > > +void unix_gc(void)
> > > > +{
> > > > +       __unix_gc(NULL);
> > > >  }
> > > > --
> > >
> > > This one results in less overall load than Kuniyuki's proposed patch
> > > with my repro:
> > >
> > > * https://lore.kernel.org/netdev/20231020220511.45854-1-kuniyu@amazon.com/
> > >
> > > My guess is that's because my repro is the one that is getting penalized there.
> >
> > Thanks for testing, and yes.
> >
> > It would be good to split the repro to one offender and one normal
> > process, run them on different users, and measure load on the normal
> > process.
> >
> >
> > > There's still a lot work done in unix_release_sock here, where GC runs
> > > as long as you have any fds inflight:
> > >
> > > * https://elixir.bootlin.com/linux/v6.1/source/net/unix/af_unix.c#L670
> > >
> > > Perhaps it can be improved.
> >
> > Yes, it also can be done async by worker as done in my first patch.
> > I replaced schedule_work() with queue_work() to avoid using system_wq
> > as gc could take long.
> >
> > Could you try this ?
> 
> Apologies for the long wait, I was OOO.
> 
> A bit of a problem here is that unix_gc is called unconditionally in
> unix_release_sock.

unix_release_sock() calls gc only when there is an inflight socket.
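
For reference, the tail of unix_release_sock() does roughly this (paraphrased
from net/unix/af_unix.c in the v6.1 link above, not a verbatim quote):

	/* unix_release_sock(), simplified: gc is kicked only while any
	 * fds are still in flight, not unconditionally on every close().
	 */
	if (unix_tot_inflight)
		unix_gc();		/* Garbage collect fds */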


> It's done asynchronously now and it can only
> consume a single CPU with your changes, which is a lot better than
> before. I am wondering if we can still do better to only trigger gc
> when it touches unix sockets that have inflight fds.
> 
> Commit 3c32da19a858 ("unix: Show number of pending scm files of
> receive queue in fdinfo") added "struct scm_stat" to "struct
> unix_sock", which can be used to check for the number of inflight fds
> in the unix socket. Can we use that to drive the GC?

I don't think the stat is useful for triggering gc.  Unless the receiver
itself is accessible via sendmsg(), it's not a gc candidate, so we need
not care about its stat; it only counts valid FDs that are ready to be
processed by the receiver and are never cleaned up by gc until close().
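
For context, the stat added by that commit is only a per-socket counter of
fds sitting in the receive queue, roughly (sketch of include/net/af_unix.h
after commit 3c32da19a858):

	struct scm_stat {
		atomic_t nr_fds;	/* fds queued on this socket, not yet received */
	};

	struct unix_sock {
		/* ... */
		struct scm_stat	scm_stat;
	};

so it says nothing about whether the socket itself is still reachable via
sendmsg(), which is what makes it a gc candidate.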


> I think we can:
> 
> * Trigger unix_gc from unix_release_sock if there's a non-zero number
> of inflight fds in the socket being destroyed.

This is already the case now.


> * Trigger wait_for_unix_gc from the write path only if the write
> contains fds or if the socket contains inflight fds. Alternatively, we
> can run gc at the end of the write path and only check for inflight
> fds on the socket there, which is probably simpler.

I don't think calling gc at the end of sendmsg() is better; compared to
running gc at the beginning of sendmsg(), there would be a small chance
of consuming more memory.
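
For reference, both sendmsg paths currently call the throttle hook before
doing anything else, roughly (sketch based on net/unix/af_unix.c, details
trimmed):

	static int unix_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
				      size_t len)
	{
		/* ... */
		wait_for_unix_gc();
		/* only afterwards: build the scm, allocate the skb,
		 * attach fds, and queue it to the receiver.
		 */
	}

so in-flight fds are reclaimed before the new allocation, not after it.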


> GC only applies to unix sockets inflight of other unix sockets, so GC
> can still affect sockets passing regular fds around, but it wouldn't
> affect non-fd-passing unix sockets, which are usually in the data
> path.

I think we can run gc only when the scm contains AF_UNIX sockets by
tweaking the scm processing a little bit.
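
A minimal sketch of that idea (hypothetical helper, nothing like this
exists yet): when fds are attached to an scm, check whether any of them
is an AF_UNIX socket with the existing unix_get_socket() helper, and only
arm the gc trigger in that case.

	/* Hypothetical: true if the scm carries at least one AF_UNIX socket.
	 * unix_get_socket() already exists and returns the unix struct sock
	 * for a file, or NULL for anything else.
	 */
	static bool scm_has_unix_fds(const struct scm_cookie *scm)
	{
		int i;

		if (!scm->fp)
			return false;

		for (i = 0; i < scm->fp->count; i++)
			if (unix_get_socket(scm->fp->fp[i]))
				return true;

		return false;
	}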


> This way we don't have to check for per-user inflight fds, which means
> that services running as "nobody" wouldn't be punished for other
> services running as "nobody" screwing up.

If we do not check the user, one user could send 16000 AF_UNIX fds and
other innocent users sending fds would have to wait for gc.

I think isolating users would make more sense so you can sandbox
a suspicious process.

Probably we can move flush_work() into the preceding if.  Then the
number of gc invocations in the "nobody" case stays the same as before.

---8<---
diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 51f30f89bacb..74fc208c8858 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -198,12 +198,13 @@ void wait_for_unix_gc(void)
 	 * Paired with the WRITE_ONCE() in unix_inflight(),
 	 * unix_notinflight().
 	 */
-	if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC)
+	if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC) {
 		queue_work(system_unbound_wq, &unix_gc_work);
 
-	/* Penalise senders of not-yet-received-fd */
-	if (READ_ONCE(user->unix_inflight))
-		flush_work(&unix_gc_work);
+		/* Penalise senders of not-yet-received-fd */
+		if (READ_ONCE(user->unix_inflight))
+			flush_work(&unix_gc_work);
+	}
 }
 
 static void __unix_gc(struct work_struct *work)
---8<---


> 
> Does this sound like a reasonable approach?
> 
[...]
> > -static bool gc_in_progress;
> > -#define UNIX_INFLIGHT_TRIGGER_GC 16000
> > +#define UNIX_INFLIGHT_TRIGGER_GC 16
> 
> It's probably best to keep it at 16k.

Oops, that was just to make testing easier on my local machine :p

Anyway, I'll post a formal patch this week.

Thanks!
