Message-Id: <123C6650-490B-4D08-96B4-39B118AD0054@joelfernandes.org>
Date:   Mon, 24 Jul 2023 19:04:14 -0400
From:   Joel Fernandes <joel@...lfernandes.org>
To:     paulmck@...nel.org
Cc:     linux-kernel@...r.kernel.org, stable@...r.kernel.org,
        rcu@...r.kernel.org, Greg KH <gregkh@...uxfoundation.org>
Subject: Re: [BUG] Re: Linux 6.4.4



> On Jul 24, 2023, at 12:00 PM, Paul E. McKenney <paulmck@...nel.org> wrote:
> 
> On Mon, Jul 24, 2023 at 09:36:02AM -0400, Joel Fernandes wrote:
>>> On Sun, Jul 23, 2023 at 11:35 PM Paul E. McKenney <paulmck@...nel.org> wrote:
>>> 
>>> On Mon, Jul 24, 2023 at 12:32:57AM +0000, Joel Fernandes wrote:
>>>> On Sun, Jul 23, 2023 at 10:19:27AM -0700, Paul E. McKenney wrote:
>>>>> On Sun, Jul 23, 2023 at 10:50:26AM -0400, Joel Fernandes wrote:
>>>>>> 
>>>>>> 
>>>>>> On 7/22/23 13:27, Paul E. McKenney wrote:
>>>>>> [..]
>>>>>>> 
>>>>>>> OK, if this kernel is non-preemptible, you are not running TREE03,
>>>>>>> correct?
>>>>>>> 
>>>>>>>> Next plan of action is to get sched_waking stack traces since I have a
>>>>>>>> very reliable repro of this now.
>>>>>>> 
>>>>>>> Too much fun!  ;-)
>>>>>> 
>>>>>> For the TREE07 issue, it is actually the schedule_timeout_interruptible(1)
>>>>>> in stutter_wait() that is beating up CPU0 for 4 seconds.
>>>>>> 
>>>>>> This is very similar to the issue I fixed around New Year in d52d3a2bf408
>>>>>> ("torture: Fix hang during kthread shutdown phase").
>>>>> 
>>>>> Agreed, if there are enough kthreads, and all the kthreads are on a
>>>>> single CPU, this could consume that CPU.
>>>>> 
>>>>>> Adding a cond_resched() there also did not help.
>>>>>> 
>>>>>> I think the issue is that the stutter thread fails to move spt forward
>>>>>> because it does not get CPU time. But spt == 1 should be very brief
>>>>>> AFAIU. I was wondering if we could give that thread RT priority.
>>>>> 
>>>>> Or just use a single hrtimer-based wait for each kthread?
>>>> 
>>>> [Joel]
>>>> Yes, this might be better, but there's still the issue that spt may not
>>>> be set back to 0 in some future release where the thread gets starved.
>>> 
>>> But if each thread knows the absolute time at which the current stutter
>>> period is supposed to end, there should not be any need for the spt
>>> variable, correct?
>> 
>> Yes.
>> 
>>>>>> But maybe the following will also cure it, like the analogous change
>>>>>> did for the shutdown issue, giving the stutter thread just enough CPU
>>>>>> time to move spt forward.
>>>>>> 
>>>>>> Now I am trying the following and will let it run while I go do other
>>>>>> family related things. ;)
>>>>> 
>>>>> Good point, if this avoids the problem, that gives a strong indication
>>>>> that your hypothesis on the root cause is correct.
>>>> 
>>>> [Joel]
>>>> And the TREE07 issue is gone with that change!
>> [...]
>>>> Let me know what you think, thanks!
>>> 
>>> If we can make the stutter kthread set an absolute time for the current
>>> stutter period to end, then we should be able to simplify the code quite
>>> a bit and get rid of the CPU consumption entirely.  (Give or take the
>>> possible need for a given thread to check whether it was erroneously
>>> awakened early.)
>>> 
>>> But what specifically did you have in mind?
>> 
>> I was thinking of a two-counter approach storing the absolute time,
>> alternating between the counters across stuttering sessions. But yes,
>> generally I agree with the absolute-time idea. What do you think, Paul?
>> 
>> Do we want to just do the simpler schedule_timeout at HZ / 20 to keep stable
>> green, and do the absolute-time approach for mainline? That might be better
>> from a process PoV. But I think stable requires patches to be upstream. Greg?
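
For reference, the stable-friendly option would essentially be the same kind
of change as d52d3a2bf408: sleep in HZ / 20 chunks instead of one jiffy at a
time while stuttering. Roughly like the sketch below (illustrative only, with
a placeholder flag name rather than the actual kernel/torture.c code):

	/* Illustrative sketch only (placeholder flag name, not the actual
	 * stutter_wait() code): while the stutter flag is set, sleep in
	 * HZ / 20 chunks rather than a single jiffy per iteration, so the
	 * pile of torture kthreads parked on CPU0 stops churning it with
	 * back-to-back 1-jiffy wakeups. */
	while (READ_ONCE(stutter_flag))
		schedule_timeout_interruptible(HZ / 20);
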
>> 
>> I will try to send out patches this week to discuss this, thanks,
> 
> Heh!!!
> 
> Me, I was just thinking of mainline.  ;-)

Turns out it is simple enough for both mainline and stable :-).
Will test more and send it out soon.
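
For the archive, the per-kthread side of the absolute-time approach would
look roughly like the sketch below (placeholder names, not the actual
patch): the stutter thread publishes the absolute end time of the current
stutter period, and each kthread does a single hrtimer-based sleep until
then, re-sleeping only if it wakes early, so the spt polling and the
per-jiffy wakeups go away entirely.

	/* Illustrative sketch only -- placeholder variable name, not the
	 * actual patch.  stutter_end_abs is assumed to be the absolute
	 * CLOCK_MONOTONIC time, written by the stutter thread, at which
	 * the current stutter period ends. */
	ktime_t end = READ_ONCE(stutter_end_abs);

	/* One sleep to the absolute deadline; loop again only if we were
	 * woken early (signal, spurious wakeup). */
	while (ktime_before(ktime_get(), end)) {
		ktime_t abs = end;

		set_current_state(TASK_INTERRUPTIBLE);
		schedule_hrtimeout(&abs, HRTIMER_MODE_ABS);
	}
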

Thanks,

- Joel


> 
>                            Thanx, Paul
