Message-ID: <CABeCy1YcELDiFB0rdSCKGgPm7RE5MkQL9v-9xOHfwn5SP3iVeA@mail.gmail.com>
Date:	Thu, 9 Feb 2012 18:17:06 -0800
From:	Venki Pallipadi <venki@...gle.com>
To:	Yong Zhang <yong.zhang0@...il.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>,
	Suresh Siddha <suresh.b.siddha@...el.com>,
	Aaron Durbin <adurbin@...gle.com>,
	Paul Turner <pjt@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: [RFC] Extend mwait idle to optimize away IPIs when possible

On Wed, Feb 8, 2012 at 6:18 PM, Yong Zhang <yong.zhang0@...il.com> wrote:
> On Wed, Feb 08, 2012 at 03:28:45PM -0800, Venki Pallipadi wrote:
>> On Tue, Feb 7, 2012 at 10:51 PM, Yong Zhang <yong.zhang0@...il.com> wrote:
>> > On Mon, Feb 06, 2012 at 12:42:13PM -0800, Venkatesh Pallipadi wrote:
>> >> smp_call_function_single and ttwu_queue_remote send an unconditional IPI
>> >> to the target CPU. However, if the target CPU is in mwait-based idle, we can
>> >> do IPI-less wakeups using the magical powers of monitor-mwait.
>> >> Doing this has certain advantages:
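
For context, the monitor/mwait mechanism being invoked here works roughly as
in the sketch below. The per-cpu word and helper names are illustrative, not
code from the RFC: the idle CPU arms a hardware monitor on a memory word, and
a plain store to that word from another CPU breaks it out of mwait without
any interrupt.

static DEFINE_PER_CPU(unsigned int, ipi_pending_word);	/* hypothetical */

static void mwait_idle_wait(void)
{
        unsigned int *flag = this_cpu_ptr(&ipi_pending_word);

        while (!need_resched() && !*flag) {
                __monitor(flag, 0, 0);          /* arm monitor on the word */
                smp_mb();                       /* re-check after arming */
                if (!need_resched() && !*flag)
                        __mwait(0, 0);          /* any store to *flag wakes us */
        }
}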
>> >
>> > Actually I'm trying to do a similar thing on MIPS.
>> >
>> > The difference is that I want task_is_polling() to do something. The basic
>> > idea is:
>> >
>> >> +                       if (ipi_pending()) {
>> >> +                               clear_ipi_pending();
>> >> +                               local_bh_disable();
>> >> +                               local_irq_disable();
>> >> +                               generic_smp_call_function_single_interrupt();
>> >> +                               scheduler_wakeup_self_check();
>> >> +                               local_irq_enable();
>> >> +                               local_bh_enable();
>> >
>> > I let cpu_idle() check if there is anything to do, as in your code above.
>> >
>> > And task_is_polling() handles the others with the patch below:
>> > ---
>> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> > index 5255c9d..09f633d 100644
>> > --- a/kernel/sched/core.c
>> > +++ b/kernel/sched/core.c
>> > @@ -527,15 +527,16 @@ void resched_task(struct task_struct *p)
>> >          smp_send_reschedule(cpu);
>> >  }
>> >
>> > -void resched_cpu(int cpu)
>> > +int resched_cpu(int cpu)
>> >  {
>> >          struct rq *rq = cpu_rq(cpu);
>> >          unsigned long flags;
>> >
>> >          if (!raw_spin_trylock_irqsave(&rq->lock, flags))
>> > -                return;
>> > +                return 0;
>> >          resched_task(cpu_curr(cpu));
>> >          raw_spin_unlock_irqrestore(&rq->lock, flags);
>> > +        return 1;
>> >  }
>>
>
> I assume we are talking about 'return from idle', but it seems I didn't
> make that clear.
>
>> Two points -
>> rq->lock: I tried something similar first. One hurdle with checking
>> task_is_polling() is that you need rq->lock to check it. And adding
>> lock+unlock (without wait) in the wakeup path ended up being no net gain
>> compared to the IPI. And when we actually end up spinning on that lock,
>> that's going to add overhead in the common path. That is the reason I
>> switched to an atomic compare-exchange, moving any wait onto the
>> target CPU coming out of idle.
>
> I see. But actually we will not be spinning on that lock, because we
> use 'trylock' in resched_cpu().

Ahh. Sorry, I missed the trylock in there...

> And you are right, there is indeed a
> little overhead (resched_task()) if we hold the lock, but it can be
> tolerated IMHO.

One advantage I got by using atomics instead of rq->lock was, as I
mentioned in the patch description, handling the case where 2 CPUs try
to send an IPI to the same target CPU at around the same time (a
50-100 us window if the CPU is in a deep C-state on x86).
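
The shape of that compare-exchange handoff is roughly the sketch below;
the state names and try_ipiless_wakeup() are invented for illustration,
not taken from the actual patch:

enum { CPU_STATE_RUNNING, CPU_STATE_MWAIT, CPU_STATE_WAKE_PENDING };

static DEFINE_PER_CPU(atomic_t, wakeup_state);

/* Returns true if the IPI can be skipped. */
static bool try_ipiless_wakeup(int cpu)
{
        atomic_t *st = &per_cpu(wakeup_state, cpu);
        int old;

        /*
         * Only one sender wins the MWAIT -> WAKE_PENDING transition;
         * the winning store also dirties the monitored line, waking
         * the target out of mwait.
         */
        old = atomic_cmpxchg(st, CPU_STATE_MWAIT, CPU_STATE_WAKE_PENDING);

        /*
         * A second, near-simultaneous sender sees WAKE_PENDING and can
         * skip the IPI too: the target is already on its way out.
         */
        return old == CPU_STATE_MWAIT || old == CPU_STATE_WAKE_PENDING;
}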

>
> BTW, mind showing your test case so we can collect some common data?

The test case was a silly clock measurement around
__smp_call_function_single(), with the optimization I had in
generic_exec_single(). Attaching the patch I had..
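
In outline, such a measurement could look like the sketch below. This is
only a guess at the shape, assuming the __smp_call_function_single(cpu,
csd, wait) signature of that era; the real test is in the attached patch:

static void noop_func(void *info)
{
        /* Empty payload: we only want the cross-call round-trip cost. */
}

/* Time one synchronous cross-call to 'cpu', in nanoseconds. */
static s64 time_one_call(int cpu)
{
        struct call_single_data csd = { .func = noop_func };
        ktime_t t0 = ktime_get();

        __smp_call_function_single(cpu, &csd, 1);       /* wait for handler */

        return ktime_to_ns(ktime_sub(ktime_get(), t0));
}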

>>
>> resched_task: ttwu_queue_remote() does not imply that the remote CPU
>> will do a resched. Today there is an IPI, and the IPI handler calls into
>> check_preempt_wakeup(); if the current task has higher precedence
>> than the waking task, then there will be just an activation of the new
>> task and no resched. Using resched_task() above breaks
>> check_preempt_wakeup() and always forces a resched on the remote CPU
>> after the IPI, which would be a change in behavior.
>
> Yeah, if the remote cpu is not idle, mine will change the behavior; but
> if the remote cpu is idle, it will always be rescheduled, right?
>
> So maybe we could introduce resched_idle_cpu() to make things more clear:
>
> int resched_idle_cpu(int cpu)
> {
>        struct rq *rq = cpu_rq(cpu);
>        unsigned long flags;
>        int ret = 0;
>
>        if (!raw_spin_trylock_irqsave(&rq->lock, flags))
>                goto out;
>        if (!idle_cpu(cpu))
>                goto out_unlock;
>        resched_task(cpu_curr(cpu));
>        ret = 1;
> out_unlock:
>        raw_spin_unlock_irqrestore(&rq->lock, flags);
> out:
>        return ret;
> }
>

This should likely work. But if you do want to use similar logic in
smp_call_function() or the idle load balance kick etc., you need an
additional bit other than need_resched(), as there we only need the
irq+softirq handling and not necessarily a resched.
At this time I am not sure how the poll-wakeup logic works on MIPS. But
if it is something similar to x86 mwait and we can wake up on a bit
other than TIF_NEED_RESCHED, we can generalize most of the changes in
my RFC and share them across archs.
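
One plausible shape for that generalization is a per-cpu word with one
bit per wakeup reason, so idle-exit runs only the work that was actually
requested. The WAKE_* flags and process_ipiless_wakeups() below are
invented for illustration:

#define WAKE_RESCHED    0x1     /* remote task wakeup, resched check */
#define WAKE_SMP_CALL   0x2     /* pending smp_call_function work */
#define WAKE_NOHZ_KICK  0x4     /* idle load balance kick */

static DEFINE_PER_CPU(unsigned long, wakeup_reasons);

/* Runs on the target CPU as it comes out of mwait/polling idle. */
static void process_ipiless_wakeups(void)
{
        unsigned long reasons = xchg(this_cpu_ptr(&wakeup_reasons), 0);

        if (reasons & WAKE_SMP_CALL)
                generic_smp_call_function_single_interrupt();
        if (reasons & WAKE_RESCHED)
                set_tsk_need_resched(current);
        /* WAKE_NOHZ_KICK would kick the nohz idle balancer here. */
}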

-Venki

>>
>> >
>> >  #ifdef CONFIG_NO_HZ
>> > @@ -1484,7 +1485,8 @@ void scheduler_ipi(void)
>> >
>> >  static void ttwu_queue_remote(struct task_struct *p, int cpu)
>> >  {
>> > -        if (llist_add(&p->wake_entry, &cpu_rq(cpu)->wake_list))
>> > +        if (llist_add(&p->wake_entry, &cpu_rq(cpu)->wake_list) &&
>> > +            !resched_cpu(cpu))
>> >                  smp_send_reschedule(cpu);
>> >  }
>> >
>> > Thoughts?
>> >
>> > Thanks,
>> > Yong
>
> --
> Only stand for myself

View attachment "0003-test-ipicost-test-routine.patch" of type "text/x-patch" (3377 bytes)
