Message-ID: <m21qtdqocv.fsf@gmail.com>
Date:   Fri, 19 Aug 2022 03:56:29 +0800
From:   Schspa Shi <schspa@...il.com>
To:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc:     rostedt@...dmis.org, linux-kernel@...r.kernel.org,
        linux-rt-users@...r.kernel.org
Subject: Re: [RFC PATCH] irq_work: wakeup irq_workd when queued first
 rt_lazy work


Sebastian Andrzej Siewior <bigeasy@...utronix.de> writes:

> On 2022-07-12 01:23:15 [+0800], Schspa Shi wrote:
>> I want to know if this difference is by design.
>
> Yes. type1 (LAZY) does not need immediate action but can't be scheduled
> regularly like a workqueue.
>
>> If this is by design, we have a problem: a type2 irq_work will not
>> execute as quickly as expected; it may be delayed by a type1
>> irq_work.
>> 
>> Please consider the following scenarios:
>> 
>> Suppose the CPU queues a type1 irq_work A, and then a type2 irq_work B.
>> B will not be executed quickly, because we won't issue the IPI to wake
>> up irq_workd (the llist_add() call will return false).
>
> But those two are different lists. So adding type1 to list1 does not
> affect type2 on list2.
>

No, they will be added to the same list (lazy_list). All irq_work items
without the IRQ_WORK_HARD_IRQ flag are added to lazy_list. Maybe my
description of type2 wasn't clear: a type2 irq_work is one with neither
the IRQ_WORK_LAZY flag nor the IRQ_WORK_HARD_IRQ flag set.
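
For reference, the list selection in mainline __irq_work_queue_local()
after b4c6f86ec2f6 looks roughly like this (trimmed and paraphrased
from my reading of kernel/irq_work.c, so treat it as a sketch, not the
exact source):

static void __irq_work_queue_local(struct irq_work *work)
{
        struct llist_head *list;
        bool rt_lazy_work = false;
        bool lazy_work = false;
        int work_flags;

        work_flags = atomic_read(&work->node.a_flags);
        if (work_flags & IRQ_WORK_LAZY)
                lazy_work = true;                       /* type1 */
        else if (IS_ENABLED(CONFIG_PREEMPT_RT) &&
                 !(work_flags & IRQ_WORK_HARD_IRQ))
                rt_lazy_work = true;                    /* type2 */

        /* On PREEMPT_RT both type1 and type2 land on lazy_list */
        if (lazy_work || rt_lazy_work)
                list = this_cpu_ptr(&lazy_list);
        else
                list = this_cpu_ptr(&raised_list);
        ...
}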

>> This patch will issue an IPI_IRQ_WORK to make B execute quickly.
>> 
>> One thing that needs to be optimized: lazy_list.node.llist and
>> lazy_work_raised need to be updated atomically; for now I disable the
>> local CPU's IRQs to achieve that. There should be a better way to make
>> these two variables atomic, and I can dig deeper if this problem is
>> not by design and needs to be fixed.
>> 
>> If these two types of irq_work should have the same priority, maybe
>> we should change:
>> 
>> if (!lazy_work || tick_nohz_tick_stopped()) {
>> 	arch_irq_work_raise();
>> }
>> 
>> to
>> 
>> if (!(lazy_work || rt_lazy_work) || tick_nohz_tick_stopped()) {
>> 	arch_irq_work_raise();
>> }
>
> but we wait for the timer for the lazy-work. RT has more LAZY items
> compared to !RT. So if there is an error then it should be visible
> there, too.
>

Since type2 work and type1 work are both added to lazy_list, type2 work
can be delayed and effectively has the same priority as type1.
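
Concretely, the tail of __irq_work_queue_local() returns before the
raise when the per-CPU list is already non-empty (again a trimmed
sketch from my reading of mainline):

        if (!llist_add(&work->node.llist, list))
                return; /* B takes this path when A is already on
                         * lazy_list: no IPI, no irq_workd wakeup */

        /* If the work is "lazy", handle it from next tick if any */
        if (!lazy_work || tick_nohz_tick_stopped())
                arch_irq_work_raise();  /* and A skipped this, since
                                         * lazy_work was true */

So once a type1 item sits on lazy_list, a following type2 item waits
for the next tick exactly like lazy work does.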

> Is there a problem with this? Adding (as you call it) a type1 item does
> not affect type2 items. They will be processed asap.
>

I noticed this because there was a BUG before commit
b4c6f86ec2f6 ("irq_work: Handle some irq_work in a per-CPU thread on PREEMPT_RT")
was applied, which makes a task hang during CPU hotplug.

On some RT branches, lazy work is queued to ksoftirqd via commit
https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git/commit/kernel/irq_work.c?h=linux-5.10.y-rt&id=c1ecdc62c514c2d541490026c312ec614ebd35aa
c1ecdc62c5 ("irqwork: push most work into softirq context"),

which means the irq_work won't be executed, because we don't call
arch_irq_work_raise(), and raise_softirq(TIMER_SOFTIRQ) isn't executed
in this case either.

If no timer is pending on the current CPU, it will hang forever.
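
For context, on those branches the only thing that drains lazy_list on
RT is the timer softirq; sketched from my reading of the RT patch
(simplified, function name as in c1ecdc62c5):

        /* Called from run_timer_softirq(), i.e. only when
         * TIMER_SOFTIRQ is actually raised on this CPU. */
        void irq_work_tick_soft(void)
        {
                irq_work_run_list(this_cpu_ptr(&lazy_list));
        }

So if the queueing path never called arch_irq_work_raise() and no timer
is armed on the CPU, lazy_list is never drained, and the waiter (here
the CPU-hotplug completion in the log below) never completes.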

Log as follows:

[32987.846092] INFO: task core_ctl:749 blocked for more than 120 seconds.
[32987.846106]       Tainted: G           O      5.10.59-rt52-g19228cd9c280-dirty #24
[32987.846117] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[32987.846125] task:core_ctl        state:D stack:    0 pid:  749 ppid:     2 flags:0x00000028
[32987.846149] Call trace:
[32987.846155]  __switch_to+0x164/0x17c
[32987.846175]  __schedule+0x4cc/0x5c0
[32987.846190]  schedule+0x7c/0xcc
[32987.846205]  schedule_timeout+0x34/0xdc
[32987.846338]  do_wait_for_common+0xa0/0x12c
[32987.846360]  wait_for_common+0x44/0x68
[32987.846376]  wait_for_completion+0x18/0x24
[32987.846391]  __cpuhp_kick_ap+0x58/0x68
[32987.846408]  cpuhp_kick_ap+0x38/0x94
[32987.846423]  _cpu_down+0xbc/0x1f8
[32987.846443]  cpu_down_maps_locked+0x20/0x34
[32987.846461]  cpu_device_down+0x24/0x40
[32987.846477]  cpu_subsys_offline+0x10/0x1c
[32987.846496]  device_offline+0x6c/0xbc
[32987.846514]  remove_cpu+0x24/0x40
[32987.846530]  do_core_ctl+0x44/0x88 [cpuhp_qos]
[32987.846563]  try_core_ctl+0x90/0xb0 [cpuhp_qos]
[32987.846587]  kthread+0x114/0x124
[32987.846604]  ret_from_fork+0x10/0x30


Please note this patch is only meant to explain the problem; don't try
to compile it.

> Sebastian

-- 
BRs
Schspa Shi
