Message-ID: <20090126223703.GA5508@redhat.com>
Date:	Mon, 26 Jan 2009 23:37:03 +0100
From:	Oleg Nesterov <oleg@...hat.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Andrew Morton <akpm@...ux-foundation.org>, a.p.zijlstra@...llo.nl,
	rusty@...tcorp.com.au, travis@....com, mingo@...hat.com,
	davej@...hat.com, cpufreq@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] work_on_cpu: Use our own workqueue.

On 01/26, Ingo Molnar wrote:
>
> Andrew's suggestion does make sense though: for any not-in-progress
> worklet we can dequeue that worklet and execute it in the flushing
> context. [ And if that worklet cannot be dequeued because it's being
> processed then that's fine and we can wait on that single worklet, without
> waiting on any other 'unrelated' worklets. ]

Yes, sure. This is easy, and I am not sure we need the special handler.
If the caller wants this behaviour, it can do:

	if (cancel_work_sync(work))
		work->func(work);

But flush_work() was introduced specifically for the case when we can't
do the above.
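
For illustration only, a helper with that behaviour could look like the
sketch below. flush_or_run() is just an illustrative name, not an existing
API; it relies on cancel_work_sync() returning nonzero iff the work was
still pending and has been dequeued, and on cancel_work_sync() waiting for
any in-flight execution otherwise:

	#include <linux/workqueue.h>

	/* illustrative sketch, not in-tree code */
	static void flush_or_run(struct work_struct *work)
	{
		if (cancel_work_sync(work)) {
			/* was still queued: execute it in the caller's context */
			work->func(work);
		}
		/*
		 * otherwise the work was either idle, or cancel_work_sync()
		 * already waited for the in-flight execution to finish
		 */
	}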

> That does not help work_on_cpu() though: that facility really uses the
> fact that workqueues are implemented via per CPU threads - hence we cannot
> remove the worklet from the queue and execute it in the flushing context.

Yes.
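
For reference, a simplified sketch of what work_on_cpu() does is below.
This is not the exact in-tree code; work_on_cpu_sketch() is an illustrative
name, and the schedule_work_on() call stands in for whatever queue the real
implementation uses (the patch under discussion switches it to a private
workqueue). It shows why the dequeue-and-run-in-the-flusher trick cannot
apply here: the whole point is that fn() runs on the workqueue thread bound
to the target CPU, not in the flusher's context.

	#include <linux/kernel.h>
	#include <linux/workqueue.h>

	struct work_for_cpu {
		struct work_struct work;
		long (*fn)(void *);
		void *arg;
		long ret;
	};

	static void do_work_for_cpu(struct work_struct *w)
	{
		struct work_for_cpu *wfc = container_of(w, struct work_for_cpu, work);

		/* executes on the workqueue thread bound to the chosen CPU */
		wfc->ret = wfc->fn(wfc->arg);
	}

	long work_on_cpu_sketch(unsigned int cpu, long (*fn)(void *), void *arg)
	{
		struct work_for_cpu wfc = { .fn = fn, .arg = arg };

		INIT_WORK(&wfc.work, do_work_for_cpu);
		schedule_work_on(cpu, &wfc.work);
		/*
		 * Must wait for the per-CPU thread. We cannot simply dequeue
		 * the work and call fn() here, that would run it on the
		 * wrong CPU.
		 */
		flush_work(&wfc.work);
		return wfc.ret;
	}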

Oleg.

