lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20090827184939.GK12579@kernel.dk>
Date:	Thu, 27 Aug 2009 20:49:39 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	linux-kernel@...r.kernel.org, linux-ide@...r.kernel.org,
	alan@...rguk.ukuu.org.uk, jeff@...zik.org, dhowells@...hat.com
Subject: Re: [PATCH 0/3] Convert libata pio task to slow-work

On Thu, Aug 27 2009, Tejun Heo wrote:
> Hello, Jens.
> 
> Jens Axboe wrote:
> >> It would be nice if merging of this series and the lazy work can be
> >> held a bit but there's no harm in merging either.  If the concurrency
> >> managed workqueue turns out to be a good idea, we can replace it then.
> > 
> > It can wait, what you describe above sounds really cool and would
> > hopefully allow us to get rid of all workqueues (provided it scales well
> > and doesn't fall down on cache line contention with many different
> > instances pounding on it).
> 
> Almost all operations are per-cpu so cache lines shouldn't bounce too
> much.  The only part I worry about is the part which checks whether a
> work is currently executing on the current cpu which currently is
> implemeted as a hash table.  The hash table is only 16 pointers long
> and will be mostly empty so hopefully it doesn't add any significant
> overhead.

OK, we'll let time and experimentation be the judge.

> > Care to post it? I know you don't think it's perfect yet, but it would
> > make a lot more sense to throw effort into this rather than waste time
> > on partial solutions.
> 
> I have this printed out code with full of red markings from proof
> reading and flush implementation is mostly broken.  Please give me a
> couple of days.  I'll post a rough unsplit version which at least
> compiles with the planned changes applied by the end of the week.  :-)

Alright, fair enough.

One question - do the 'exposed' workqueues (the ones that drivers
allocate/create) sitting in front of the global cpu queue allow more
than one thread per cpu, or is that property retained for the global cpu
queue (where it is a necessity)?

-- 
Jens Axboe

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ