Message-ID: <CAGXJAmx7ojpBmR7RiKm3umZ7QDaA8r-hgBTnxay11UCv42xWdA@mail.gmail.com>
Date: Mon, 3 Feb 2025 09:33:59 -0800
From: John Ousterhout <ouster@...stanford.edu>
To: Paolo Abeni <pabeni@...hat.com>
Cc: netdev@...r.kernel.org, edumazet@...gle.com, horms@...nel.org,
kuba@...nel.org
Subject: Re: [PATCH net-next v6 08/12] net: homa: create homa_incoming.c
On Mon, Feb 3, 2025 at 1:17 AM Paolo Abeni <pabeni@...hat.com> wrote:
>
> On 1/31/25 11:35 PM, John Ousterhout wrote:
> > On Thu, Jan 30, 2025 at 1:57 AM Paolo Abeni <pabeni@...hat.com> wrote:
> >> On 1/30/25 1:48 AM, John Ousterhout wrote:
> >>> On Mon, Jan 27, 2025 at 2:19 AM Paolo Abeni <pabeni@...hat.com> wrote:
> >>>>
> >>>> On 1/15/25 7:59 PM, John Ousterhout wrote:
> >>>>> + /* Each iteration through the following loop processes one
> >>>>> + * packet. */
> >>>>> + for (; skb; skb = next) {
> >>>>> + h = (struct homa_data_hdr *)skb->data;
> >>>>> + next = skb->next;
> >>>>> +
> >>>>> + /* Relinquish the RPC lock temporarily if it's needed
> >>>>> + * elsewhere.
> >>>>> + */
> >>>>> + if (rpc) {
> >>>>> + int flags = atomic_read(&rpc->flags);
> >>>>> +
> >>>>> + if (flags & APP_NEEDS_LOCK) {
> >>>>> + homa_rpc_unlock(rpc);
> >>>>> + homa_spin(200);
> >>>>
> >>>> Why spinning on the current CPU here? This is completely unexpected, and
> >>>> usually tolerated only to deal with H/W imposed delay while programming
> >>>> some device registers.
> >>>
> >>> This is done to pass the RPC lock off to another thread (the
> >>> application); the spin is there to allow the other thread to acquire
> >>> the lock before this thread tries to acquire it again (almost
> >>> immediately). There's no performance impact from the spin because this
> >>> thread is going to turn around and try to acquire the RPC lock again
> >>> (at which point it will spin until the other thread releases the
> >>> lock). Thus it's either spin here or spin there. I've added a comment
> >>> to explain this.
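[For concreteness, the pattern I was describing is roughly the
following -- a simplified sketch, not the exact patch code. It assumes
homa_spin(ns) busy-waits for about ns nanoseconds, and homa_rpc_lock(rpc)
stands in for however the RPC lock is reacquired:

	if (atomic_read(&rpc->flags) & APP_NEEDS_LOCK) {
		/* Drop the lock so the waiting application can grab it. */
		homa_rpc_unlock(rpc);

		/* Pause briefly so the application has a real chance to
		 * acquire the lock before this thread takes it back.
		 */
		homa_spin(200);

		/* Reacquire; this spins until the application releases
		 * the lock anyway, so the homa_spin above adds no net
		 * delay.
		 */
		homa_rpc_lock(rpc);
	}
]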
> >>
> >> What if another process is spinning on the RPC lock without setting
> >> APP_NEEDS_LOCK? AFAICS incoming packets targeting the same RPC could
> >> land on different RX queues.
> >>
> >
> > If that happens then it could grab the lock instead of the desired
> > application, which would defeat the performance optimization and delay the
> > application a bit. This would be no worse than if the APP_NEEDS_LOCK
> > mechanism were not present.
>
> Then I suggest using plain unlock/lock() with no additional spinning in
> between.
My concern here is that the unlock/lock sequence will happen so fast
that the other thread never actually has a chance to get the lock. I
will do some measurements to see what actually happens; if lock
ownership is successfully transferred in the common case without a
spin, then I'll remove it.
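
The measurement I have in mind is roughly the following (hypothetical
instrumentation, not part of the patch). It assumes the application
clears APP_NEEDS_LOCK once it has acquired the lock, so checking the
flag after relocking tells us whether the handoff succeeded; the
handoff_ok/handoff_missed counters are made up for this sketch:

	static atomic64_t handoff_ok = ATOMIC64_INIT(0);
	static atomic64_t handoff_missed = ATOMIC64_INIT(0);

	homa_rpc_unlock(rpc);
	/* No homa_spin here: test whether the application can win the
	 * lock race on its own.
	 */
	homa_rpc_lock(rpc);
	if (atomic_read(&rpc->flags) & APP_NEEDS_LOCK)
		/* Flag still set: the application never got the lock. */
		atomic64_inc(&handoff_missed);
	else
		atomic64_inc(&handoff_ok);

If handoff_missed stays near zero in realistic workloads, the spin
goes away.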
-John-