Message-ID: <20140411190014.GL14815@wotan.suse.de>
Date:	Fri, 11 Apr 2014 21:00:14 +0200
From:	"Luis R. Rodriguez" <mcgrof@...e.com>
To:	Julia Lawall <julia.lawall@...6.fr>
Cc:	SF Markus Elfring <elfring@...rs.sourceforge.net>,
	Johannes Berg <johannes@...solutions.net>,
	linux-kernel@...r.kernel.org, backports@...r.kernel.org,
	cocci@...teme.lip6.fr
Subject: Re: [Cocci] [PATCH] coccinelle: add pycocci wrapper for
	multithreaded support

On Fri, Apr 11, 2014 at 08:01:04AM +0200, Julia Lawall wrote:
> 
> 
> On Fri, 11 Apr 2014, SF Markus Elfring wrote:
> 
> > > I checked the profile results, the reason the jobs finish is some threads
> > > had no work or little work.
> > 
> > Could you find out during the data processing which parts or files
> > result in a special application behaviour you would like to point out here?
> 
> I don't understand the question at all, but since the various files have 
> different properties, it is hard to determine automatically in advance how 
> much work Coccinelle will need to do on each one.

For the person who might work on enhancing multithreading support, I wonder
whether there could be gains from first putting in an effort to evaluate
which files have at least one rule hit, and then adding those files to an
active file list to later be spread between the threads. As you note, though,
it is hard to determine this in advance given that each rule can express any
kind of change.
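As a rough illustration of that pre-evaluation step, here is a minimal Python
sketch, assuming a crude textual heuristic: keep only the files that mention
at least one identifier appearing in the semantic patch. This is not how
Coccinelle itself selects files, and the helper names are hypothetical; it is
just one cheap way such an active file list could be built before the work is
handed to the threads.

	import re

	def rule_identifiers(cocci_file):
	    """Collect plausible C identifiers from the .cocci rule body."""
	    with open(cocci_file) as f:
	        text = f.read()
	    # Keep identifiers of length >= 4 to cut down on noise.
	    return set(re.findall(r"\b[a-zA-Z_][a-zA-Z0-9_]{3,}\b", text))

	def active_file_list(files, cocci_file):
	    """Return only the files that contain at least one rule identifier."""
	    idents = rule_identifiers(cocci_file)
	    active = []
	    for path in files:
	        with open(path, errors="ignore") as f:
	            data = f.read()
	        if any(ident in data for ident in idents):
	            active.append(path)
	    return active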

I think one small change which could help, and which likely would not incur a
drastic immediate change to the architecture, would be to not let threads take
a files / jobs share of the list of files, but instead just take, say:

	work_task_n = (files / jobs) / 10

The list of files needing work could then be kept on a list protected against
concurrent access by the threads, and each thread would only exit once all the
files have been worked on. This would make it possible to keep only number_cpu
threads, as each CPU would indeed stay busy the whole time.
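To make the above concrete, here is a minimal Python sketch of that shared,
protected work list, assuming a plain wrapper around spatch (run_spatch() and
its flags are illustrative, not pycocci's actual API). Workers pull chunks of
roughly (files / jobs) / 10 entries from one queue and exit only when it is
drained, so number_cpu threads stay busy for the whole run.

	import queue
	import subprocess
	import threading

	def chunked(seq, size):
	    """Yield successive chunks of at most `size` items."""
	    for i in range(0, len(seq), size):
	        yield seq[i:i + size]

	def run_spatch(cocci_file, target):
	    # Illustrative invocation only; the exact spatch flags depend on
	    # the Coccinelle version in use.
	    subprocess.call(["spatch", "--sp-file", cocci_file, target])

	def worker(work_queue, cocci_file):
	    while True:
	        try:
	            chunk = work_queue.get_nowait()
	        except queue.Empty:
	            return  # all the files have been worked on already
	        for target in chunk:
	            run_spatch(cocci_file, target)
	        work_queue.task_done()

	def run(files, cocci_file, jobs):
	    # work_task_n = (files / jobs) / 10, as suggested above
	    work_task_n = max(1, (len(files) // jobs) // 10)
	    work_queue = queue.Queue()
	    for chunk in chunked(files, work_task_n):
	        work_queue.put(chunk)
	    threads = [threading.Thread(target=worker,
	                                args=(work_queue, cocci_file))
	               for _ in range(jobs)]
	    for t in threads:
	        t.start()
	    for t in threads:
	        t.join()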

BTW, is the patch Acked-by Julia? Can we commit it? :)

  Luis

