Message-ID: <20090428134209.GA5978@nowhere>
Date:	Tue, 28 Apr 2009 15:42:11 +0200
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Oleg Nesterov <oleg@...hat.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>, zhaolei@...fujitsu.com,
	kosaki.motohiro@...fujitsu.com, rostedt@...dmis.org,
	tzanussi@...il.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/4] workqueue_tracepoint: Add worklet tracepoints for
	worklet lifecycle tracing

On Mon, Apr 27, 2009 at 05:02:41PM +0200, Oleg Nesterov wrote:
> On 04/26, Andrew Morton wrote:
> >
> > Most workqueue work lately has come from Oleg.  I'm unaware that he has
> > expressed an interest in this feature?  Oleg, would it have been useful
> > in any of the work you've done?
> 
> Well. Probably not. But I don't think this matters. Other people (and
> not only kernel developers) can find this useful.
> 
> I _think_ that if you are going to hack workqueue.c itself, it is
> easier to just add some printks; maybe I am wrong. But tracepoints
> can probably help to write/debug, say, device drivers. Or admins
> can use debugfs to see what's going on, or to provide more info for
> the bug reports.
> 
> I try to avoid "do we need this feature" discussions as much as
> possible. Because I never know. My usage of the kernel is very, very
> limited; I never do anything "interesting" on my machine. This reminds
> me of the discussion about the ability to trace /sbin/init. Some
> developers were unhappy with the trivial patch I sent. They said it is
> trivial to change your kernel if you need this. But no, it was not
> trivial for me when I was an admin. So I just can't judge.
> 
> > > And the thing is, the workqueue code has been pretty problematic
> > > lately - with lockups and other regressions.
> 
> Hmm. Perhaps I missed some bug reports... But I don't remember any
> recent problems with workqueues except the "usual" bugs like "flush
> shares the lock with work->func".
> 
> 
> As for the patches, I can't review them now. They are on top of
> some other changes which I didn't see (or perhaps I lost the patches
> I was cc'ed on? sorry in that case).


I'm currently gathering Zhaolei's patches and I will push them all to
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing.git
 tracing/workqueue

Once it's done, I will also push the fixes that address some of your
comments from your last review.

 
> But at least the change in workqueue.c looks very simple, and does
> not complicate the code for readers. And, if we add any tracing
> to workqueues, then it is very natural to add entry/exit handlers.
> 
> 
> I must admit, I don't really understand why trace_workqueue.c uses
> cwq->thread as a "primary key". I have the feeling we can simplify
> this code if we pass "struct workqueue_struct *" instead, but I am
> not sure.



Indeed, I wanted to use it as the identifier at first. The problem
is that this structure is privately defined inside kernel/workqueue.c.
And because struct cpu_workqueue and struct cpu_workqueue::thread
are in a 1:1 relation, I used the thread as the real identifier, so
that I can access the thread_comm and other things.

But actually it's not really a 1:1 mapping in the CONFIG_HOTPLUG_CPU
case, because the thread can be destroyed and the cpu_workqueue
assigned a new one later.

I handled that by also removing the workqueue entry from the stat
when its thread is destroyed.
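
To make that concrete, here is a rough sketch of my own (an
illustration, not the code from the series) of a worklet tracepoint
keyed on the workqueue kthread: since the cpu_workqueue struct is
private to kernel/workqueue.c, the probe records comm/pid from the
task_struct instead. Event and field names here are assumptions.

/*
 * Illustration only -- would live in a trace header built with
 * CREATE_TRACE_POINTS, like other TRACE_EVENTs.
 */
TRACE_EVENT(worklet_execute,

	TP_PROTO(struct task_struct *wq_thread, struct work_struct *work),

	TP_ARGS(wq_thread, work),

	TP_STRUCT__entry(
		__array(char,		thread_comm,	TASK_COMM_LEN)
		__field(pid_t,		thread_pid)
		__field(work_func_t,	func)
	),

	TP_fast_assign(
		memcpy(__entry->thread_comm, wq_thread->comm, TASK_COMM_LEN);
		__entry->thread_pid	= wq_thread->pid;
		__entry->func		= work->func;
	),

	/* keying on the thread lets the stat output show comm:pid
	 * for each cpu workqueue */
	TP_printk("thread=%s:%d func=%pF",
		  __entry->thread_comm, __entry->thread_pid,
		  __entry->func)
);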


> In particular, trace_workqueue_flush(cwq->thread) looks a bit strange
> to me. I can't imagine how it can be useful per-thread and without
> trace_workqueue_flush_end() or something. I mean, if we need to trace
> flushes, then imho it makes much more sense to add flush_start/flush_end
> handlers into flush_workqueue().


True. The flush tracing would make more sense if it were bracketed by
a begin/end pair. The result would be:

wq_name flush_begin
 cpu_workqueue1
	worklet1 exec
	worklet2 exec
 cpu_workqueue2
	worklet3 exec
	worklet4 exec
 ...
wq_name flush_end

So that we can more easily eye-parse this section and get the
execution time of the whole wait for completion.
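
A minimal sketch of that begin/end pair, assuming tracepoints keyed
on the workqueue itself as you suggested (names and placement are my
assumption, not code from the series):

DECLARE_TRACE(workqueue_flush_begin,
	TP_PROTO(struct workqueue_struct *wq),
	TP_ARGS(wq));

DECLARE_TRACE(workqueue_flush_end,
	TP_PROTO(struct workqueue_struct *wq),
	TP_ARGS(wq));

/* plus the matching DEFINE_TRACE()s, then in kernel/workqueue.c: */
void flush_workqueue(struct workqueue_struct *wq)
{
	trace_workqueue_flush_begin(wq);

	/* ... existing per-cpu flush loop ... */

	trace_workqueue_flush_end(wq);
}

Passing the workqueue rather than a single cwq->thread also matches
your point above: the flush spans every cpu_workqueue of the
workqueue, so a per-thread key doesn't really fit there.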


> Oleg.
> 
