Date:	Thu, 12 Jun 2008 19:01:14 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Oleg Nesterov <oleg@...sign.ru>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Jarek Poplawski <jarkao2@...pl>,
	Max Krasnyansky <maxk@...lcomm.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] workqueues: insert_work: use "list_head *" instead of
	"int tail"

On Thu, 2008-06-12 at 20:55 +0400, Oleg Nesterov wrote:
> On 06/12, Oleg Nesterov wrote:
> >
> > insert_work() inserts the new work_struct before or after cwq->worklist,
> > depending on the "int tail" parameter. Change it to accept a "list_head *"
> > instead; this shrinks .text a bit and allows us to insert the barrier
> > after a specific work_struct.
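
(Just for reference, with that change insert_work presumably ends up looking
roughly like the below. This is only a sketch, not the exact patch;
set_wq_data() and cwq->more_work are assumed from the existing workqueue code.)

	static void insert_work(struct cpu_workqueue_struct *cwq,
				struct work_struct *work, struct list_head *head)
	{
		/* associate the work with this cwq before it becomes visible */
		set_wq_data(work, cwq);
		/*
		 * Make sure the cwq association is visible before the work
		 * shows up on the list.
		 */
		smp_wmb();
		/* insert in front of *head instead of head-or-tail of worklist */
		list_add_tail(&work->entry, head);
		wake_up(&cwq->more_work);
	}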
> 
> This allows us to implement
> 
> 	int flush_work(struct work_struct *work)
> 	{
> 		struct cpu_workqueue_struct *cwq;
> 		struct list_head *head;
> 		struct wq_barrier barr;
> 
> 		cwq = get_wq_data(work);
> 		if (!cwq)
> 			return 0;
> 
> 		head = NULL;
> 		spin_lock_irq(&cwq->lock);
> 		if (!list_empty(&work->entry)) {
> 			smp_rmb();
> 			/*
> 			 * ---- FAT COMMENT ----
> 			 */
> 			if (cwq == get_wq_data(work))
> 				head = work->entry.next;
> 		} else if (cwq->current_work == work) {
> 			head = cwq->worklist.next;
> 		}
> 
> 		if (head)
> 			insert_wq_barrier(cwq, &barr, head);
> 		spin_unlock_irq(&cwq->lock);
> 
> 		if (!head)
> 			return 0;
> 		wait_for_completion(&barr.done);
> 		return 1;
> 	}
> 
> suggested by Peter. It waits only for the selected work_struct.
> 
> I doubt it will have a lot of users though. In most cases we need
> cancel_work_sync() and nothing more.

Are there cases where we dynamically allocate work structs and queue
them and then forget about them? In such cases we'd need something a
little more complex as we don't have work pointers to flush or cancel.

Hence that idea of flush context and completions.
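
A rough sketch of that idea (all of the names below are made up for
illustration, none of this exists):

	/* hypothetical flush context: counts outstanding works, signals when all are done */
	struct flush_context {
		atomic_t		count;
		struct completion	done;
	};

	static void init_flush_context(struct flush_context *ctx)
	{
		atomic_set(&ctx->count, 1);	/* initial ref held by the flusher */
		init_completion(&ctx->done);
	}

	/*
	 * Queueing a work under the context would do atomic_inc(&ctx->count),
	 * and the work's completion path would then do:
	 */
	static void context_work_done(struct flush_context *ctx)
	{
		if (atomic_dec_and_test(&ctx->count))
			complete(&ctx->done);
	}

	/* wait until every work queued under the context has finished */
	static void flush_context(struct flush_context *ctx)
	{
		if (!atomic_dec_and_test(&ctx->count))	/* drop the initial ref */
			wait_for_completion(&ctx->done);
	}

That way we don't need to keep pointers to the individual work structs around
at all, only to the context they were queued under.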

Aside from that, this seems like a fine idea. :-)

