lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <93b294fc-c4e8-4f1f-8abb-ebcea5b8c3a1@gmail.com>
Date: Fri, 2 Aug 2024 16:22:37 +0100
From: Pavel Begunkov <asml.silence@...il.com>
To: Olivier Langlois <olivier@...llion01.com>, io-uring@...r.kernel.org
Cc: netdev@...r.kernel.org
Subject: Re: io_uring NAPI busy poll RCU is causing 50 context switches/second
 to my sqpoll thread

On 8/1/24 23:02, Olivier Langlois wrote:
> On Wed, 2024-07-31 at 01:33 +0100, Pavel Begunkov wrote:
>>
>> You're seeing something that doesn't make much sense to me, and we
>> need
>> to understand what that is. There might be a bug _somewhere_, that's
>> always a possibility, but before saying that let's get a bit more
>> data.
>>
>> While the app is working, can you grab a profile and run mpstat for
>> the
>> CPU on which you have the SQPOLL task?
>>
>> perf record -g -C <CPU number> --all-kernel &
>> mpstat -u -P <CPU number> 5 10 &
>>
>> And then as usual, time it so that you have some activity going on,
>> mpstat interval may need adjustments, and perf report it as before.
>>
> First thing first.
> 
> The other day, I did put my foot in my mouth by saying the NAPI busy
> poll was adding 50 context switches/second.
> 
> I was responsible for that behavior with the rcu_nocb_poll boot kernel
> param. I have removed the option and the context switches went away...
> 
> I am clearly outside my comfort zone with this project, I am trying
> things without fully understand what I am doing and I am making errors
> and stuff that is incorrect.
> 
> On top of that, before mentioning io_uring RCU usage, I did not realize
> that net/core was already massively using RCU, including in
> napi_busy_poll, therefore, that io_uring is using rcu before calling
> napi_busy_poll, the point does seem very moot.
> 
> this is what I did the other day and I wanted to apologize to have said
> something incorrect.

No worries at all, you're pushing your configuration to extremes,
anyone would get lost in the options, and I'm getting curious what
you can squeeze from it. That's true that the current tracking
scheme might be an overkill but not because of mild RCU use.

> that being said, it does not remove the possible merit of what I did
> propose.
> 
> I really think that the current io_uring implemention of the napi
> device tracking strategy is overkill for a lot of scenarios...
> 
> if some sort of abstract interface like a mini struct net_device_ops
> with 3-4 function pointers where the user could select between the
> standard dynamic tracking or a manual lightweight tracking was present,
> that would be very cool... so cool...
> 
> I am definitely interested in running the profiler tools that you are
> proposing... Most of my problems are resolved...
> 
> - I got rid of 99.9% if the NET_RX_SOFTIRQ
> - I have reduced significantly the number of NET_TX_SOFTIRQ
>    https://github.com/amzn/amzn-drivers/issues/316
> - No more rcu context switches
> - CPU2 is now nohz_full all the time
> - CPU1 local timer interrupt is raised once every 2-3 seconds for an
> unknown origin. Paul E. McKenney did offer me his assistance on this
> issue
> https://lore.kernel.org/rcu/367dc07b740637f2ce0298c8f19f8aec0bdec123.camel@trillion01.com/t/#u

And I was just going to propose to ask Paul, but great to
see you beat me on that

> I am going to give perf record a second chance... but just keep in
> mind, that it is not because it is not recording much, it is not
> because nothing is happening. if perf relies on interrupts to properly
> operate, there is close to 0 on my nohz_full CPU...
> 
> thx a lot for your help Pavel!
> 

-- 
Pavel Begunkov

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ