Date:	Wed, 15 Jun 2016 08:53:42 -0500
From:	Gary R Hook <ghook@....com>
To:	Steffen Klassert <steffen.klassert@...unet.com>,
	"Jason A. Donenfeld" <Jason@...c4.com>
CC:	<linux-crypto@...r.kernel.org>, Netdev <netdev@...r.kernel.org>
Subject: Re: padata - is serial actually serial?

On 06/15/2016 06:52 AM, Steffen Klassert wrote:
> Hi Jason.
>
> On Tue, Jun 14, 2016 at 11:00:54PM +0200, Jason A. Donenfeld wrote:
>> Hi Steffen & Folks,
>>
>> I submit a job to padata_do_parallel(). When the parallel() function
>> triggers, I do some things, and then call padata_do_serial(). Finally
>> the serial() function triggers, where I complete the job (check a
>> nonce, etc).
>>
>> The padata API is very appealing because not only does it allow for
>> parallel computation, but it claims that the serial() functions will
>> execute in the order that jobs were originally submitted to
>> padata_do_parallel().
>>
>> Unfortunately, in practice, I'm pretty sure I'm seeing deviations from
>> this. When I submit tons and tons of tasks at rapid speed to
>> padata_do_parallel(), it seems like the serial() function isn't being
>> called in exactly the same order that tasks were submitted to
>> padata_do_parallel().
>>
>> Is this known (expected) behavior? Or have I stumbled upon a potential
>> bug that's worthwhile for me to investigate more?
>
> It should return in the same order as the jobs were submitted,
> given that the submitting cpu and the callback cpu are fixed
> for all the jobs whose order you want to preserve.  If you
> submit jobs from more than one cpu, we cannot know in which
> order they are enqueued.  The cpu that gets the lock first
> has its job in front.
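
(For my own reference, the flow Jason describes seems to map onto the API
roughly as below. This is only a sketch, assuming the current interface
where the job is a struct padata_priv embedded in the caller's own request
and padata_do_parallel() takes the callback cpu as its third argument;
my_request, my_parallel and my_serial are made-up names.)

#include <linux/kernel.h>
#include <linux/padata.h>

/* Hypothetical per-job state; padata_priv is embedded so the
 * callbacks can recover the request with container_of(). */
struct my_request {
	struct padata_priv padata;
	u64 seq;			/* e.g. the nonce checked at completion */
};

static void my_parallel(struct padata_priv *padata)
{
	/* the expensive work (bulk crypto etc.) runs here, possibly in
	 * parallel with other jobs on other cpus */

	padata_do_serial(padata);	/* hand the job to the serial path */
}

static void my_serial(struct padata_priv *padata)
{
	struct my_request *req = container_of(padata, struct my_request, padata);

	/* completion work that relies on submission order, e.g. the
	 * nonce / anti-replay check */
	pr_debug("completing job seq=%llu\n", (unsigned long long)req->seq);
}

/* pinst is a padata instance set up elsewhere (padata_alloc() etc.),
 * cb_cpu is the cpu the serial() callback should run on. */
static int my_submit(struct padata_instance *pinst, struct my_request *req,
		     int cb_cpu)
{
	req->padata.parallel = my_parallel;
	req->padata.serial   = my_serial;

	return padata_do_parallel(pinst, &req->padata, cb_cpu);
}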

Isn't there an element of indeterminacy at the application thread level
(i.e. user space) too? We don't know how the jobs are being submitted, but
unless that is being handled by a single thread in a single process, I
think all bets are off with respect to ordering.

Then again, perhaps I'm not grokking the details here.
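
To make sure I'm reading the constraint right: to get ordered serial()
calls one would need something like the below, where all jobs are queued
from a single context and every call names the same callback cpu?
(Again just an untested sketch building on the names above;
padata_do_parallel() can also fail, and nothing is ordered for jobs it
rejects.)

/* Submit a batch so that serial() sees the requests in array order:
 * one submitting context, one fixed callback cpu for every job. */
static int submit_in_order(struct padata_instance *pinst,
			   struct my_request *reqs, int nr, int cb_cpu)
{
	int i, err;

	for (i = 0; i < nr; i++) {
		reqs[i].seq = i;
		reqs[i].padata.parallel = my_parallel;
		reqs[i].padata.serial   = my_serial;

		/* cb_cpu must be the same value for every request (and
		 * has to be part of the instance's serial cpumask) */
		err = padata_do_parallel(pinst, &reqs[i].padata, cb_cpu);
		if (err)
			return err;
	}

	return 0;
}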

> The same applies if you use more than one callback cpu: we
> can't know in which order they are dequeued, because the
> serial workers are scheduled independently on each cpu.
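
If it would help Jason narrow this down: a cheap way to check whether
serial() really does run out of order (with a single submitter and a
single callback cpu) would be a variant of the serial callback from the
sketch above that just verifies the sequence numbers, e.g.:

#include <linux/atomic.h>

/* Drop-in variant of my_serial() above: complain if completions don't
 * arrive in the order the requests were numbered.  Only meaningful when
 * there is one submitting context and one callback cpu, per the
 * conditions above. */
static atomic64_t next_expected = ATOMIC64_INIT(0);

static void my_serial_checked(struct padata_priv *padata)
{
	struct my_request *req = container_of(padata, struct my_request, padata);
	u64 expected = atomic64_inc_return(&next_expected) - 1;

	if (req->seq != expected)
		pr_warn("padata reorder: seq %llu completed, expected %llu\n",
			(unsigned long long)req->seq,
			(unsigned long long)expected);

	/* ... normal completion work ... */
}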
