Message-ID: <1348511077.618.19.camel@maxim-laptop>
Date:	Mon, 24 Sep 2012 20:24:37 +0200
From:	Maxim Levitsky <maximlevitsky@...il.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Alex Dubov <oakad@...oo.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] memstick: add support for legacy memorysticks

On Mon, 2012-09-24 at 20:19 +0200, Maxim Levitsky wrote: 
> On Mon, 2012-09-24 at 11:05 -0700, Tejun Heo wrote: 
> > Hello,
> > 
> > On Mon, Sep 24, 2012 at 05:09:23PM +0200, Maxim Levitsky wrote:
> > > > Now that my exams are done....
> > > > Can you spare me from using a workqueue?
> > 
> > I'd much prefer if you convert to workqueue.
> > 
> > > > The point is that, using the current model, I wake the worker thread as
> > > > often as I want to, and I know that it will be woken once and will do all
> > > > the work until the request queue is empty.
> > 
> > You can do exactly the same thing by scheduling the same work item
> > multiple times.  "Waking up" just becomes "scheduling the work item".
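
A minimal sketch of the conversion Tejun describes, assuming a hypothetical
msb_data structure and a stand-in msb_issue_request() helper (names are
illustrative, not the actual patch): the work function drains the block
request queue exactly as the kthread loop did, and every former wake-up site
simply queues the same work item.

#include <linux/workqueue.h>
#include <linux/blkdev.h>

struct msb_data {
	struct request_queue	*queue;
	spinlock_t		q_lock;		/* lock handed to blk_init_queue() */
	struct work_struct	io_work;	/* INIT_WORK()ed at probe time */
};

/* Stand-in for the real hardware I/O path. */
static int msb_issue_request(struct msb_data *msb, struct request *req);

/* Runs in workqueue context and plays the role of the old kthread loop. */
static void msb_io_work(struct work_struct *work)
{
	struct msb_data *msb = container_of(work, struct msb_data, io_work);
	struct request *req;
	int error;

	spin_lock_irq(&msb->q_lock);
	while ((req = blk_fetch_request(msb->queue)) != NULL) {
		spin_unlock_irq(&msb->q_lock);
		error = msb_issue_request(msb, req);
		spin_lock_irq(&msb->q_lock);
		__blk_end_request_all(req, error);
	}
	spin_unlock_irq(&msb->q_lock);
}

/* The block layer's request_fn: "waking up" is now queue_work(). */
static void msb_request_fn(struct request_queue *q)
{
	struct msb_data *msb = q->queuedata;

	queue_work(system_wq, &msb->io_work);
}
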
> I don't believe that will work this way.
> I will dig through the source, and see how to do that.
> 
> 
> 
> > > > With workqueues, it doesn't work this way. I have to pass the request as
> > > > a work item or something like that.
> > > > Any pointers?
> > 
> > No, there's no reason to change the structure of the code in any way.
> > Just use a work item as you would use a kthread.
> Except that if I schedule the same work item a few times, these work items
> will be 'processed' in parallel, although there is just one piece of work to
> do: pulling requests from the block queue for as long as it has them and
> dispatching them through my code.
> Or can I get a guarantee that work items won't be processed in parallel?
> Still, even with that, only the first work item will do the actual work;
> the others will wake the workqueue for nothing, but I am OK with that.
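
An aside on the guarantee being asked about here, continuing the hypothetical
sketch above rather than quoting the thread: queue_work() test-and-sets a
pending bit on the work item, so repeated wakes issued before the work runs
collapse into a single execution, and a wake that arrives while the function
is already running schedules exactly one more pass.

/*
 * Hypothetical "kick" helper illustrating those queue_work() semantics:
 * if the work item is still pending, queue_work() returns false and does
 * not queue it a second time.  (On current kernels workqueues are also
 * non-reentrant, so one work item never runs on two workers at once.)
 */
static void msb_kick(struct msb_data *msb)
{
	if (!queue_work(system_wq, &msb->io_work))
		pr_debug("msb: io_work already pending\n");
}
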
I should have looked through the source. I understand now.
Just one quick question: should I create my own workqueue or use
schedule_work? If I use the latter and my work function sleeps, will it
harmfully affect other users of this function?
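
A sketch of the dedicated-workqueue option being asked about, again with
hypothetical names (it assumes an extra struct workqueue_struct *io_wq member
in the msb_data above): with concurrency-managed workqueues a sleeping work
function on the system workqueue does not stall other users, because the pool
spawns another worker when one blocks, but a driver on the block I/O path
usually wants its own WQ_MEM_RECLAIM queue so that a rescuer thread keeps it
making progress under memory pressure.

/* Probe-time setup for a private workqueue (hypothetical names). */
static int msb_init_workqueue(struct msb_data *msb)
{
	/*
	 * WQ_MEM_RECLAIM keeps a rescuer thread around so the work item
	 * can still run when new workers cannot be created under memory
	 * pressure; max_active = 1 is enough for a single work item.
	 */
	msb->io_wq = alloc_workqueue("msb_io", WQ_MEM_RECLAIM, 1);
	if (!msb->io_wq)
		return -ENOMEM;

	INIT_WORK(&msb->io_work, msb_io_work);
	return 0;
}

/* Teardown on remove: finish any pending work, then drop the queue. */
static void msb_destroy_workqueue(struct msb_data *msb)
{
	cancel_work_sync(&msb->io_work);
	destroy_workqueue(msb->io_wq);
}

Wake-up sites would then call queue_work(msb->io_wq, &msb->io_work) instead
of using system_wq.
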


-- 
Best regards,
        Maxim Levitsky


