Date:	Tue, 13 Nov 2007 13:33:55 -0500
From:	William Cohen <wcohen@...hat.com>
To:	eranian@....hp.com
CC:	akpm@...l.org, Robert Richter <robert.richter@....com>,
	gregkh@...e.de, linux-kernel@...r.kernel.org,
	perfmon@...ali.hpl.hp.com, Andi Kleen <andi@...stfloor.org>,
	perfmon2-devel@...ts.sourceforge.net
Subject: Re: [perfmon] Re: [perfmon2] perfmon2 merge news

Stephane Eranian wrote:
> Hello,
> 
> On Tue, Nov 13, 2007 at 10:35:11AM -0500, William Cohen wrote:
>> Robert Richter wrote:
>>> On 10.11.07 21:32:39, Andi Kleen wrote:
>>>> It would be really good to extract a core perfmon and start with
>>>> that and then add stuff as it makes sense.
>>>>
>>>> e.g. core perfmon could be something simple, like just support for
>>>> context-switching the state and initializing counters in a basic way,
>>>> and perhaps getting counter numbers for RDPMC in ring 3 on x86[1]
>>> Perhaps a core could also provide enough functionality that Perfmon
>>> could be used with an *unpatched* kernel via loadable modules?
>>> One drawback of today's Perfmon is that it cannot be used with a
>>> vanilla kernel. But maybe such a core is far too complex for a
>>> first merge.
>>>
>>> -Robert
>>>
>> Hi Robert,
>>
>> In the past I suggested that it might be useful to have a version of perfmon2
>> that only sets up perfmon on a global (system-wide) basis. That would allow the
>> patches for context switching to be added as a separate step, splitting the
>> work up into a set of smaller patches.
>>
>> Perfmon2 uses a set of system calls to control the performance monitoring
>> hardware. This would make it difficult to use an unpatched kernel unless
>> perfmon changed the mechanism it uses to control that hardware.
>>
> Yes, that would be a possibility but as you pointed out there are some problems:
> 
> 	- perfmon2 uses system calls. So unless you can dynamically patch the
> 	  syscall table, we would have to go back to the ioctl() and driver model.
> 	  I was under the impression that people did not much like multiplexing
> 	  syscalls such as ioctl(). I also prefer the multi-syscall approach.
> 
> 	- perfmon2 needs to install a PMU interrupt handler. On x86, this is not just
> 	  an external device interrupt; there needs to be some APIC and interrupt
> 	  gate setup. There may be other constraints on other architectures as well.
> 	  Not sure if all the functions/structures necessary for this are available
> 	  to modules.
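
(On the ioctl()/driver-model point above: for illustration only, and NOT
perfmon2's real interface -- the command numbers, structure, and device name
below are made up -- a module-only control path would typically multiplex its
operations over ioctl() on a character device, very roughly along these lines.)

/* Hypothetical sketch of an ioctl-multiplexed PMU control device.
 * Everything here (pmx_*, command numbers) is invented for illustration. */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/ioctl.h>
#include <linux/types.h>

struct pmx_reg {
	__u16 num;	/* which config/data register */
	__u64 value;
};

#define PMX_IOC_MAGIC	'p'
#define PMX_WRITE_PMC	_IOW(PMX_IOC_MAGIC, 1, struct pmx_reg)
#define PMX_READ_PMD	_IOR(PMX_IOC_MAGIC, 2, struct pmx_reg)

static long pmx_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	switch (cmd) {
	case PMX_WRITE_PMC:
		/* copy_from_user() the config register, program the PMU ... */
		return 0;
	case PMX_READ_PMD:
		/* read the data register, copy_to_user() the value ... */
		return 0;
	default:
		return -ENOTTY;
	}
}

static const struct file_operations pmx_fops = {
	.owner		= THIS_MODULE,
	.unlocked_ioctl	= pmx_ioctl,
};

static struct miscdevice pmx_dev = {
	.minor	= MISC_DYNAMIC_MINOR,
	.name	= "pmx",
	.fops	= &pmx_fops,
};

static int __init pmx_init(void)
{
	return misc_register(&pmx_dev);
}

static void __exit pmx_exit(void)
{
	misc_deregister(&pmx_dev);
}

module_init(pmx_init);
module_exit(pmx_exit);
MODULE_LICENSE("GPL");

(The obvious downside, as noted above, is that a single ioctl() entry point
ends up demultiplexing everything that dedicated syscalls would express
directly.)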

The oprofile module can set up a handler for PMU interrupts. This is done in
arch/x86/oprofile/nmi_int.c:nmi_cpu_setup(). Other modules could do the same.
However, oprofile bumps whatever was using the NMI/PMU off, then restores it
when oprofile is shut down. Maybe the PMU/NMI resource reservation mechanism
should be another self-contained patch.
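
(A rough sketch of that route, for illustration only -- this is not oprofile's
actual code and the names are invented: a module can see PMU NMIs by
registering a die notifier, which is essentially what nmi_int.c did circa
2.6.23. The per-CPU APIC LVTPC and counter programming Stephane mentions would
still be needed on top of this.)

/* Hypothetical sketch: hooking PMU NMIs from a module via a die notifier,
 * in the spirit of oprofile's nmi_int.c of that era. Names are made up. */
#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/kdebug.h>

static int pmu_nmi_notify(struct notifier_block *self,
			  unsigned long val, void *data)
{
	if (val != DIE_NMI)
		return NOTIFY_DONE;

	/* ... check the PMU overflow status here and record a sample ... */

	return NOTIFY_STOP;	/* claim the NMI so nothing else handles it */
}

static struct notifier_block pmu_nmi_nb = {
	.notifier_call = pmu_nmi_notify,
};

static int __init pmu_sketch_init(void)
{
	/* the APIC LVTPC and the counter enable bits would still have to be
	 * programmed on each CPU, as noted above */
	return register_die_notifier(&pmu_nmi_nb);
}

static void __exit pmu_sketch_exit(void)
{
	unregister_die_notifier(&pmu_nmi_nb);
}

module_init(pmu_sketch_init);
module_exit(pmu_sketch_exit);
MODULE_LICENSE("GPL");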

> 	- we could not support per-thread mode with the kernel-module approach due to
> 	  the link to the context-switch code. I do believe per-thread is a key
> 	  value-add for performance monitoring.

Per-thread monitoring is useful and many people want it. The thought was how to
break the large perfmon patch into a set of smaller incremental patches. So the
question isn't whether to have per-thread PMU virtualization, but rather when
and how to get it in.
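
(To make the dependency concrete -- a hypothetical sketch, not perfmon2 code,
and the MSR number is just an example: per-thread mode boils down to a
save/restore pair that has to be called from the scheduler's context-switch
path, which is exactly why it cannot come from an unpatched kernel plus a
module.)

/* Hypothetical sketch of per-thread PMU "virtualization": save the outgoing
 * task's counter and restore the incoming task's around a context switch.
 * MSR 0xc1 (IA32_PMC0 on Intel) is used purely for illustration. */
#include <linux/types.h>
#include <asm/msr.h>

struct pmu_thread_state {
	u64 pmc0;	/* saved value of one counter for this task */
};

/* would be called from the context-switch path for the task scheduled out */
static void pmu_save(struct pmu_thread_state *st)
{
	rdmsrl(0xc1, st->pmc0);
}

/* would be called from the context-switch path for the task scheduled in */
static void pmu_restore(struct pmu_thread_state *st)
{
	wrmsrl(0xc1, st->pmc0);
}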

-Will
