Message-ID: <522F3ACC.9000701@linux.intel.com>
Date:	Tue, 10 Sep 2013 08:29:16 -0700
From:	Arjan van de Ven <arjan@...ux.intel.com>
To:	Ingo Molnar <mingo@...nel.org>
CC:	Peter Zijlstra <peterz@...radead.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andi Kleen <ak@...ux.intel.com>, Peter Anvin <hpa@...or.com>,
	Mike Galbraith <bitbucket@...ine.de>,
	Thomas Gleixner <tglx@...utronix.de>,
	Frederic Weisbecker <fweisbec@...il.com>,
	linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org
Subject: Re: [PATCH 0/7] preempt_count rework -v2

On 9/10/2013 6:56 AM, Ingo Molnar wrote:
>
> * Ingo Molnar <mingo@...nel.org> wrote:
>
>> So what we do in kick_process() is:
>>
>>          preempt_disable();
>>          cpu = task_cpu(p);
>>          if ((cpu != smp_processor_id()) && task_curr(p))
>>                  smp_send_reschedule(cpu);
>>          preempt_enable();
>>
>> The preempt_disable() looks sweet:
>>
>>>    ffffffff8106f3f1:       65 ff 04 25 e0 b7 00    incl   %gs:0xb7e0
>>>    ffffffff8106f3f8:       00
>>
>> and the '*' you marked is the preempt_enable() portion, which, with your
>> new code, looks like this:
>>
>>   #define preempt_check_resched() \
>>   do { \
>>          if (unlikely(!*preempt_count_ptr())) \
>>                  preempt_schedule(); \
>>   } while (0)
>>
>> Which GCC translates to:
>>
>>> * ffffffff8106f42a:       65 ff 0c 25 e0 b7 00    decl   %gs:0xb7e0
>>>    ffffffff8106f431:       00
>>> * ffffffff8106f432:       0f 94 c0                sete   %al
>>> * ffffffff8106f435:       84 c0                   test   %al,%al
>>> * ffffffff8106f437:       75 02                   jne    ffffffff8106f43b <kick_process+0x4b>
>
> Correction, so this comes from the new x86-specific optimization:
>
> +static __always_inline bool __preempt_count_dec_and_test(void)
> +{
> +       unsigned char c;
> +
> +       asm ("decl " __percpu_arg(0) "; sete %1"
> +                       : "+m" (__preempt_count), "=qm" (c));
> +
> +       return c != 0;
> +}
>
> And that's where the sete and test originate from.
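
As an aside, and not something from the quoted series: the sete + test pair only exists to turn ZF into a byte the compiler can re-test. A minimal sketch of the same primitive using GCC's x86 flag-output constraint "=@ccz" (a GCC 6+ feature, so newer than the toolchains in this thread; same __preempt_count and __percpu_arg() assumed as above) hands ZF straight to the compiler so it can branch on it directly:

    static __always_inline bool __preempt_count_dec_and_test(void)
    {
            bool c;

            /* let the compiler consume ZF directly instead of via sete */
            asm ("decl " __percpu_arg(0)
                            : "+m" (__preempt_count), "=@ccz" (c));

            return c;
    }

With that, the decl/sete/test/jne sequence above should collapse to a decl followed by a single conditional jump.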
>
> Couldn't it be improved by merging the preempt_schedule() call into a new
> primitive, keeping the call in the regular flow, or using section tricks
> to move it out of line? The scheduling case is a slowpath in most cases.
>
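In the same spirit, a sketch (my illustration, not code from this series) of one way to fold the preempt_schedule() call into the primitive: with asm goto the decrement can branch straight to a cold C label, so the fast path is a decl plus a not-taken jz and the call stays off the hot path. The name is made up; __preempt_count and __percpu_arg() are the same as in the quoted patch:

    static __always_inline void __preempt_count_dec_and_maybe_resched(void)
    {
            /*
             * asm goto cannot have output operands, so __preempt_count is
             * passed as an input and the write is covered by the "memory"
             * clobber.
             */
            asm goto ("decl " __percpu_arg(0) "\n\t"
                      "jz %l[do_resched]"
                      : : "m" (__preempt_count)
                      : "memory" : do_resched);
            return;
    do_resched:
            preempt_schedule();
    }

GCC is then free to lay the do_resched block out of line, which is more or less the "section tricks" variant of the suggestion.
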
Also, yuck on using "dec".
"dec" sucks, please use "sub foo, 1" instead.
(dec sucks because of its broken flags behavior: it only does a partial flags update, leaving CF untouched, which basically creates a bubble in the pipeline)
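
For completeness, a hedged sketch of that applied to the quoted primitive (again, not code from the series): subl writes all of the arithmetic flags, CF included, so the ZF consumed by sete never has to be merged with stale flag bits:

    static __always_inline bool __preempt_count_dec_and_test(void)
    {
            unsigned char c;

            /* same as the decl version, but with a full flags update */
            asm ("subl $1, " __percpu_arg(0) "; sete %1"
                            : "+m" (__preempt_count), "=qm" (c));

            return c != 0;
    }

The encoding is one byte longer than decl, but there is no partial flags update for the pipeline to resolve.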


