Message-Id: <201109241838.06941.kernel@kolivas.org>
Date:	Sat, 24 Sep 2011 18:38:06 +1000
From:	Con Kolivas <kernel@...ivas.org>
To:	Andi Kleen <andi@...stfloor.org>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: BFS cpu scheduler and skip list implementation

On Sat, 24 Sep 2011 17:35:22 Andi Kleen wrote:
> On Sat, Sep 24, 2011 at 12:14:21PM +1000, Con Kolivas wrote:
> > On Sat, 24 Sep 2011 11:21:06 Andi Kleen wrote:
> > > Con Kolivas <kernel@...ivas.org> writes:
> > > > +struct nodeStructure {
> > > > +	int level;	/* Levels in this structure */
> > > > +	keyType key;
> > > > +	valueType value;
> > > > +	skiplist_node *next[16];
> > > > +	skiplist_node *prev[16];
> > > > +};
> > > 
> > > That's 128 byte / 2 cache lines, not too bad, but it limits
> > > the maximum number of tasks that can be efficiently handled
> > > (my guess to around 64k with maxlevel == 16, but someone may
> > > correct me on that)
> > 
> > Thanks very much for your informed comments. Do you mean once 64k of
> > tasks are queued concurrently, or after 64k of entries have gone in +/-
> > been removed?
> 
> queued concurrently I believe.

That's great then. I'm sure we'd explode in other weird and wonderful ways 
before the CPU load ever got to 64k. Even then, all that would happen is that 
insertion would gradually degrade from O(log n) towards O(n) as the number of 
queued tasks went well past 64k. The value of 16 for the maximum level was 
simply the one used by William Pugh in his original sample code, but it seems 
to be ample for this application.
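
For illustration only, here is a minimal sketch of a Pugh-style random level 
generator, not the actual BFS code; the p = 1/2 choice, the function name and 
the standalone use of rand() are assumptions for the example. With p = 1/2 the 
expected number of levels needed for n nodes is about log2(n), so a cap of 16 
matches roughly 2^16 (64k) concurrently queued entries before search cost 
starts drifting towards O(n):

#include <stdlib.h>

#define MAXLEVEL 16

static int random_level(void)
{
	int level = 1;

	/* Each additional level is taken with probability 1/2, capped at 16. */
	while (level < MAXLEVEL && (rand() & 1))
		level++;

	return level;
}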


Being very unimaginative with my benchmarking, I ran a quick make -j 
(allnoconfig kbuild) on my quad core, all with the performance governor, and 
saw a significant improvement over the previous BFS code:


3.0.0:
Elapsed Time 28.7

3.0.0-bfs406:
Elapsed Time 28.5

3.0.0-bfs406-sl:
Elapsed Time 27.0


For the convenience of those interested in testing:

Here's the original 3.0 -bfs 406 patch:
http://ck.kolivas.org/patches/bfs/3.0.0/3.0-sched-bfs-406.patch

And here's a combined bfs406 + skiplists patch:
http://ck.kolivas.org/patches/bfs/test/3.0-sched-bfs-406-skiplists.patch


-- 
-ck
