Message-Id: <200806121431.53596.arnd@arndb.de>
Date:	Thu, 12 Jun 2008 14:31:52 +0200
From:	Arnd Bergmann <arnd@...db.de>
To:	Maynard Johnson <maynardj@...ibm.com>
Cc:	Carl Love <cel@...ibm.com>, acjohnso <acjohnso@...ibm.com>,
	oprofile-list <oprofile-list@...ts.sf.net>,
	cbe-oss-dev@...abs.org, linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] Version 3: Reworked Cell OProfile: SPU mutex lock fix

On Wednesday 11 June 2008, Maynard Johnson wrote:

> >
> > I like the approach in general, but would prefer to have it more generic,
> > so that other subsystems can use the same infrastructure if necessary:
> >   
> I believe Carl's thoughts on implementation would result in
> model-specific buffer create/destroy routines, to be invoked by the
> generic routine.  This would give the flexibility for architectures to
> implement their own unique buffering mechanism.  I don't think it would
> be worthwhile to go beyond this level in trying to build some generic
> mechanism since the generic mechanism already exists and has been
> sufficient for all other architectures.

Well, the amount of code should be the same either way. We currently
have an interface between the cpu_buffer and its users, while the
interface between the event_buffer and the cpu_buffer is private
to the oprofile layer.

The logical extension of this would be to have another buffer in the
common oprofile code, with its users in the architecture code, rather
than interfacing the architecture code directly with the private
event_buffer.
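
To make that concrete, here is a rough sketch of the split (all names
invented here for illustration, nothing from an actual patch):

/* drivers/oprofile/aux_buffer.h -- sketch only, this interface
 * does not exist today */

/* producer side, exported to the architecture code: */
void aux_buffer_add(unsigned long *data, int count);

/* consumer side, private to the oprofile layer, the same way the
 * event_buffer is private today: */
int aux_buffer_alloc(unsigned long entries);
void aux_buffer_release(void);
void aux_buffer_sync(void);	/* drain into the event_buffer */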

> > I think the dependency on the size of the per cpu buffers is a bit
> > artificial. How about making this independently tunable with
> > another oprofilefs file and a reasonable default?
> > You could either create the extra files in oprofile_create_files
> > or in oprofile_ops.create_files.
> >   
> With the right scaling factor, we should be able to avoid the buffer
> overflows for the most part.  Adding a new file in oprofilefs for an
> arch-specific buffer size would have to be documented on the user side,
> as well as providing a new option on the opcontrol command to adjust the
> value.  Granted, the system-wide buffer that Carl is proposing for Cell
> clashes with the "cpu buffer" construct, but some other architecture in
> the future may want to make use of these new model-specific buffer
> management routines, and perhaps their buffers *would* be per-cpu.  I'm
> not sure there would be much added value in exposing this implementation
> detail to the user as long as we can effectively use the existing user
> interface mechanism for managing buffer size.

But we're already exposing the implementation detail for the cpu buffers,
which are similar enough, just not the same. Since the file and opcontrol
option are called cpu-buffer-size, it's rather unobvious how this should
control the size of a global buffer. As I mentioned, with a reasonable
default, you should not need to tune this anyway.
Having to change opcontrol is a real disadvantage, but if a user needs to tune
this without a new opcontrol tool, it's always possible to directly
write to the file system.
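
For illustration, creating the file from the model-specific
create_files hook only takes something like the following; the
spu-buffer-size name and the default value are made up here, and
oprofilefs_create_ulong is the same helper the existing buffer size
files use:

#include <linux/oprofile.h>

static unsigned long spu_buffer_size = 64 * 1024;	/* made-up default */

static int cell_create_files(struct super_block *sb, struct dentry *root)
{
	/* appears as /dev/oprofile/spu-buffer-size */
	return oprofilefs_create_ulong(sb, root, "spu-buffer-size",
				       &spu_buffer_size);
}

A user who needs to tune it without a new opcontrol can then just do

	echo 131072 > /dev/oprofile/spu-buffer-size

with oprofilefs mounted in the usual place.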

> > As a general design principle, I'd always try to have locks close
> > to the data. What this means is that I think the SPU_BUFFER should
> > be in drivers/oprofile, similar to the cpu_buffer. In that case,
> > it may be better to change the name to something more generic, e.g.
> > aux_buffer.
> >   
> I agree -- in principle -- however, I really don't think a system-wide
> buffer is something we want to put into generic code.  It is not
> generally useful.  Also, putting this new code in drivers/oprofile would
> not fit well with the general technique of adding model-specific
> function pointers to 'struct oprofile_operations'.

> This assumes we can get the oprofile maintainer to review and comment on
> the new patch, which has been difficult to do lately.  I agree that we
> should not avoid making changes to common code when there's a good
> justification for it, but I would say that this is not such a case.

You have to change the oprofile code either way, because the existing
interface is not sufficient. However, there is no point in trying to do
a minimal change to the existing code when a larger change clearly gives
us a better result.
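
Just to illustrate what I mean by keeping the lock close to the data,
using the same invented aux_buffer names as above (a sketch, not a
proposed implementation):

/* drivers/oprofile/aux_buffer.c -- sketch only */
#include <linux/spinlock.h>

static struct {
	spinlock_t lock;
	unsigned long *data;	/* allocated in aux_buffer_alloc() */
	unsigned long head;	/* next slot to fill */
	unsigned long entries;	/* capacity */
} aux_buffer = {
	.lock = __SPIN_LOCK_UNLOCKED(aux_buffer.lock),
};

/* called from architecture code; samples that do not fit are
 * dropped, like the cpu_buffer does when it overflows */
void aux_buffer_add(unsigned long *samples, int count)
{
	unsigned long flags;
	int i;

	spin_lock_irqsave(&aux_buffer.lock, flags);
	for (i = 0; i < count && aux_buffer.head < aux_buffer.entries; i++)
		aux_buffer.data[aux_buffer.head++] = samples[i];
	spin_unlock_irqrestore(&aux_buffer.lock, flags);
}

Since everything outside the oprofile layer has to go through
aux_buffer_add(), nobody can touch the data without holding the lock.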

	Arnd <><
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/