Message-ID: <e7cdcca6-d0b2-4b59-a2ef-17834a8ffca3@redhat.com>
Date: Mon, 3 Feb 2025 10:17:30 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: John Ousterhout <ouster@...stanford.edu>
Cc: netdev@...r.kernel.org, edumazet@...gle.com, horms@...nel.org,
 kuba@...nel.org
Subject: Re: [PATCH net-next v6 08/12] net: homa: create homa_incoming.c

On 1/31/25 11:35 PM, John Ousterhout wrote:
> On Thu, Jan 30, 2025 at 1:57 AM Paolo Abeni <pabeni@...hat.com> wrote:
>> On 1/30/25 1:48 AM, John Ousterhout wrote:
>>> On Mon, Jan 27, 2025 at 2:19 AM Paolo Abeni <pabeni@...hat.com> wrote:
>>>>
>>>> On 1/15/25 7:59 PM, John Ousterhout wrote:
>>>>> +     /* Each iteration through the following loop processes one packet. */
>>>>> +     for (; skb; skb = next) {
>>>>> +             h = (struct homa_data_hdr *)skb->data;
>>>>> +             next = skb->next;
>>>>> +
>>>>> +             /* Relinquish the RPC lock temporarily if it's needed
>>>>> +              * elsewhere.
>>>>> +              */
>>>>> +             if (rpc) {
>>>>> +                     int flags = atomic_read(&rpc->flags);
>>>>> +
>>>>> +                     if (flags & APP_NEEDS_LOCK) {
>>>>> +                             homa_rpc_unlock(rpc);
>>>>> +                             homa_spin(200);
>>>>
>>>> Why spin on the current CPU here? This is completely unexpected, and
>>>> usually tolerated only to deal with H/W-imposed delays while
>>>> programming device registers.
>>>
>>> This is done to pass the RPC lock off to another thread (the
>>> application); the spin is there to allow the other thread to acquire
>>> the lock before this thread tries to acquire it again (almost
>>> immediately). There's no performance impact from the spin because this
>>> thread is going to turn around and try to acquire the RPC lock again
>>> (at which point it will spin until the other thread releases the
>>> lock). Thus it's either spin here or spin there. I've added a comment
>>> to explain this.
>>
>> What if another process is spinning on the RPC lock without setting
>> APP_NEEDS_LOCK? AFAICS incoming packets targeting the same RPC could
>> land on different RX queues.
>>
> 
> If that happens, the other process could grab the lock instead of the
> desired application. That would defeat the performance optimization and
> delay the application a bit, but it would be no worse than if the
> APP_NEEDS_LOCK mechanism were not present.

Then I suggest using a plain unlock()/lock() pair, with no additional
spinning in between.

/P
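
For readers following the thread: below is a minimal userspace sketch of
the lock-handoff pattern under discussion. The names struct homa_rpc,
APP_NEEDS_LOCK, and the 200-iteration pause mirror the quoted patch; the
pthread spinlock, the init helper, and both function bodies are
illustrative assumptions, not Homa's actual implementation.

/* Userspace sketch of the APP_NEEDS_LOCK handoff discussed above.
 * homa_rpc and APP_NEEDS_LOCK come from the quoted patch; everything
 * else is assumed for illustration only.
 */
#include <pthread.h>
#include <stdatomic.h>

#define APP_NEEDS_LOCK 1

struct homa_rpc {
	pthread_spinlock_t lock;
	atomic_int flags;
};

static void rpc_init(struct homa_rpc *rpc)
{
	pthread_spin_init(&rpc->lock, PTHREAD_PROCESS_PRIVATE);
	atomic_init(&rpc->flags, 0);
}

/* Packet-processing path: before reacquiring the RPC lock for the
 * next packet, check whether the application has asked for it; if so,
 * pause briefly so the application (rather than an arbitrary
 * contender) is likely to win the lock next.
 */
static void maybe_yield_rpc_lock(struct homa_rpc *rpc)
{
	if (atomic_load(&rpc->flags) & APP_NEEDS_LOCK) {
		pthread_spin_unlock(&rpc->lock);
		for (volatile int i = 0; i < 200; i++)
			;	/* stand-in for homa_spin(200) */
		pthread_spin_lock(&rpc->lock);
	}
}

/* Application path: advertise interest in the lock, acquire it, then
 * clear the flag so the packet path stops yielding.
 */
static void app_lock_rpc(struct homa_rpc *rpc)
{
	atomic_fetch_or(&rpc->flags, APP_NEEDS_LOCK);
	pthread_spin_lock(&rpc->lock);
	atomic_fetch_and(&rpc->flags, ~APP_NEEDS_LOCK);
}

Paolo's suggestion amounts to deleting the busy-wait loop in
maybe_yield_rpc_lock(): unlock and immediately relock, letting ordinary
spinlock contention decide which waiter wins.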

