Message-ID: <qvqwedmuv6mu.fsf@devbig1114.prn1.facebook.com>
Date: Thu, 01 Jun 2023 21:12:10 -0700
From: Stefan Roesch <shr@...kernel.io>
To: Jakub Kicinski <kuba@...nel.org>
Cc: io-uring@...r.kernel.org, kernel-team@...com, axboe@...nel.dk,
 ammarfaizi2@...weeb.org, netdev@...r.kernel.org, olivier@...llion01.com
Subject: Re: [PATCH v13 1/7] net: split off __napi_busy_poll from
 napi_busy_poll


Jakub Kicinski <kuba@...nel.org> writes:

> On Wed, 31 May 2023 12:16:50 -0700 Stefan Roesch wrote:
>> > This will conflict with:
>> >
>> >     https://git.kernel.org/netdev/net-next/c/c857946a4e26
>> >
>> > :( Not sure what to do about it..
>> >
>> > Maybe we can merge a simpler version to unblock io-uring (just add
>> > need_resched() to your loop_end callback and you'll get the same
>> > behavior). Refactor in net-next in parallel. Then once trees converge
>> > do a simple cleanup and call the _rcu version?
>>
>> Jakub, I can certainly call need_resched() in the loop_end callback, but
>> isn't there a potential race? need_resched() in the loop_end callback
>> might not return true, but the need_resched() call in napi_busy_poll
>> does?
>
> need_resched() is best effort. It gets added to potentially long
> execution paths and loops. Extra single round thru the loop won't
> make a difference.

I might be missing something, but here is what can happen at a high level:

io_napi_blocking_busy_loop()
  rcu_read_lock()
  __io_napi_do_busy_loop()
  rcu_read_unlock()

In __io_napi_do_busy_loop() we do:

__io_napi_do_busy_loop()
  list_for_each_entry_rcu()
    napi_busy_loop()
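
Spelled out a bit more, the io_uring side is roughly the following
(simplified sketch, not the exact patch code; struct io_napi_entry, the
napi_list/napi_prefer_busy_poll fields and the io_napi_loop_end callback
name are approximations):

static void __io_napi_do_busy_loop(struct io_ring_ctx *ctx,
                                   void *loop_end_arg)
{
        struct io_napi_entry *e;

        /* runs under the rcu_read_lock() taken by the caller */
        list_for_each_entry_rcu(e, &ctx->napi_list, list)
                napi_busy_loop(e->napi_id, io_napi_loop_end, loop_end_arg,
                               ctx->napi_prefer_busy_poll, BUSY_POLL_BUDGET);
}

static void io_napi_blocking_busy_loop(struct io_ring_ctx *ctx,
                                       void *loop_end_arg)
{
        rcu_read_lock();
        __io_napi_do_busy_loop(ctx, loop_end_arg);
        rcu_read_unlock();
}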


and in napi_busy_loop():

napi_busy_loop()
  rcu_read_lock()
  __napi_busy_poll()
  loop_end()
  if (need_resched()) {
    rcu_read_unlock()
    schedule()
  }


The problem with checking need_resched() in loop_end is that need_resched()
can be false when loop_end runs, while the subsequent need_resched() check
in napi_busy_loop succeeds. In that case napi_busy_loop unlocks the RCU read
lock and calls schedule(), yet the code in io_napi_blocking_busy_loop still
believes we hold the read lock.
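
For concreteness, I read the suggestion as a loop_end callback along these
lines (hypothetical sketch; the callback name and the
io_napi_busy_loop_timeout() helper are placeholders, not code from the
series):

/* hypothetical sketch; the end-of-loop condition is a placeholder */
static bool io_napi_loop_end(void *data, unsigned long start_time)
{
        struct io_wait_queue *iowq = data;

        if (need_resched())
                return true;

        /* whatever timeout/wakeup condition the callback already checks */
        return io_napi_busy_loop_timeout(start_time, iowq);
}

Even then, the need_resched() read here and the one inside napi_busy_loop()
are two separate checks, so the inner one can fire right after this callback
returned false, which is the window described above.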
