Date:	Mon, 13 Oct 2008 16:53:28 +0200
From:	Arnd Bergmann <arnd@...db.de>
To:	linuxppc-dev@...abs.org
Cc:	Paul Mackerras <paulus@...ba.org>,
	Robert Richter <robert.richter@....com>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	oprofile-list@...ts.sourceforge.net, cel <cel@...ux.vnet.ibm.com>,
	cbe-oss-dev@...abs.org, benh@...nel.crashing.org
Subject: Re: [Cbe-oss-dev] powerpc/cell/oprofile: fix mutex locking for spu-oprofile

On Monday 25 August 2008, Arnd Bergmann wrote:
> On Monday 25 August 2008, Paul Mackerras wrote:
> > 
> > > Since rc4 is out now, I understand if you feel more comfortable with
> > > putting the patch into -next instead of -merge.
> > 
> > Linus has been getting stricter about only putting in fixes for
> > regressions and serious bugs (see his recent email to Dave Airlie on
> > LKML for instance).  I assume that the corruption is just in the data
> > that is supplied to userspace and doesn't extend to any kernel data
> > structures.
> 
> That's right, please queue it for -next then.

I just realized that this patch never made it into powerpc-next after
all, in neither benh's nor paulus's version. Whoever is handling the
tree today, could you please pull

 master.kernel.org:/pub/scm/linux/kernel/git/arnd/cell-2.6.git merge

to get the commit below. I have rebased it on top of the current
benh/powerpc/next branch.

Thanks,

	Arnd <><

---

commit aa5810fa545515c9f383e3e649bd120bef9c7f29
Author: Carl Love <cel@...ibm.com>
Date:   Fri Aug 8 15:38:36 2008 -0700

    powerpc/cell/oprofile: fix mutex locking for spu-oprofile

    The issue is that the SPU code does not hold the kernel mutex lock
    while adding samples to the kernel buffer.

    This patch creates per-SPU buffers to hold the data.  Data
    is added to the buffers in interrupt context.  The data
    is periodically pushed to the kernel buffer via a new OProfile
    function, oprofile_put_buff().  The oprofile_put_buff() function
    is called via a work queue, enabling the function to acquire the
    mutex lock.
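
    A minimal sketch of that flow, under assumed names (spu_buf,
    spu_buf_lock, flush_spu_buffers() and record_spu_sample() are
    illustrative, not the actual symbols from the patch): samples are
    staged under a spinlock in interrupt context, and a work item
    drains them later in process context, where taking the mutex
    is legal.

    #include <linux/kernel.h>
    #include <linux/mutex.h>
    #include <linux/spinlock.h>
    #include <linux/workqueue.h>

    static DEFINE_MUTEX(buffer_mutex);    /* guards the kernel event buffer */
    static DEFINE_SPINLOCK(spu_buf_lock); /* guards the per-SPU staging buffer */
    static unsigned long spu_buf[256];
    static int spu_buf_len;

    /* Runs from the work queue in process context, so the mutex
     * may be acquired here, unlike in the interrupt handler. */
    static void flush_spu_buffers(struct work_struct *work)
    {
            unsigned long flags;

            mutex_lock(&buffer_mutex);
            spin_lock_irqsave(&spu_buf_lock, flags);
            /* ... copy spu_buf[0..spu_buf_len) into the kernel event buffer ... */
            spu_buf_len = 0;
            spin_unlock_irqrestore(&spu_buf_lock, flags);
            mutex_unlock(&buffer_mutex);
    }

    static DECLARE_WORK(spu_flush_work, flush_spu_buffers);

    /* Called from interrupt context: stash the sample, defer the flush. */
    static void record_spu_sample(unsigned long sample)
    {
            unsigned long flags;

            spin_lock_irqsave(&spu_buf_lock, flags);
            if (spu_buf_len < ARRAY_SIZE(spu_buf))
                    spu_buf[spu_buf_len++] = sample;
            spin_unlock_irqrestore(&spu_buf_lock, flags);

            schedule_work(&spu_flush_work);
    }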

    The existing user control for adjusting the per-CPU buffer
    size is reused to control the size of the per-SPU buffers.
    Similarly, overflows of the SPU buffers are reported by
    incrementing the per-CPU buffer stats.  This eliminates the
    need for architecture-specific controls for the per-SPU
    buffers, which would not be acceptable to the OProfile user
    tool maintainer.

    The export of the OProfile add_event_entry() function is removed,
    as it is no longer needed after this patch.

    Note, this patch does not address the issue of indexing arrays
    by the SPU number.  This still needs to be fixed, as the SPU
    numbering is not guaranteed to be 0 to max_num_spus-1.
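
    For illustration only (slot_for_spu(), spu_slot[], and MAX_NUM_SPUS
    are hypothetical names, not part of the patch), one possible fix is
    to map each raw SPU number to a dense slot on first use and index
    the per-SPU state by that slot instead of by the raw number:

    #define MAX_NUM_SPUS 16

    /* raw SPU number -> dense slot + 1; 0 means "not assigned yet".
     * Callers must serialize access, e.g. under the buffer mutex. */
    static int spu_slot[4 * MAX_NUM_SPUS];
    static int next_slot;

    static int slot_for_spu(int spu_number)
    {
            if (spu_number < 0 || spu_number >= 4 * MAX_NUM_SPUS)
                    return -1;
            if (!spu_slot[spu_number]) {
                    if (next_slot >= MAX_NUM_SPUS)
                            return -1;              /* out of slots */
                    spu_slot[spu_number] = ++next_slot;
            }
            return spu_slot[spu_number] - 1;        /* 0..MAX_NUM_SPUS-1 */
    }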

    Signed-off-by: Carl Love <carll@...ibm.com>
    Signed-off-by: Maynard Johnson <maynardj@...ibm.com>
    Signed-off-by: Arnd Bergmann <arnd@...db.de>
    Acked-by: Robert Richter <robert.richter@....com>

 arch/powerpc/oprofile/cell/pr_util.h       |   13 +
 arch/powerpc/oprofile/cell/spu_profiler.c  |    4
 arch/powerpc/oprofile/cell/spu_task_sync.c |  236 ++++++++++++++++++++++++---
 drivers/oprofile/buffer_sync.c             |   24 ++
 drivers/oprofile/cpu_buffer.c              |   19 +
 drivers/oprofile/event_buffer.c            |    2
 drivers/oprofile/event_buffer.h            |    7
 include/linux/oprofile.h                   |   16 +
 8 files changed, 284 insertions(+), 37 deletions(-)
