Message-ID: <20180521104008.z6ei5zjve7u5iwho@lakrids.cambridge.arm.com>
Date:   Mon, 21 May 2018 11:40:08 +0100
From:   Mark Rutland <mark.rutland@....com>
To:     Ganapatrao Kulkarni <gklkml16@...il.com>
Cc:     Ganapatrao Kulkarni <ganapatrao.kulkarni@...ium.com>,
        linux-doc@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
        linux-arm-kernel@...ts.infradead.org,
        Will Deacon <Will.Deacon@....com>, jnair@...iumnetworks.com,
        Robert Richter <Robert.Richter@...ium.com>,
        Vadim.Lomovtsev@...ium.com, Jan.Glauber@...ium.com
Subject: Re: [PATCH v4 2/2] ThunderX2: Add Cavium ThunderX2 SoC UNCORE PMU
 driver

On Mon, May 21, 2018 at 11:37:12AM +0100, Mark Rutland wrote:
> Hi Ganapat,
> 
> 
> Sorry for the delay in replying; I was away most of last week.
> 
> On Tue, May 15, 2018 at 04:03:19PM +0530, Ganapatrao Kulkarni wrote:
> > On Sat, May 5, 2018 at 12:16 AM, Ganapatrao Kulkarni <gklkml16@...il.com> wrote:
> > > On Thu, Apr 26, 2018 at 4:29 PM, Mark Rutland <mark.rutland@....com> wrote:
> > >> On Wed, Apr 25, 2018 at 02:30:47PM +0530, Ganapatrao Kulkarni wrote:
> 
> > >>> +static int alloc_counter(struct thunderx2_pmu_uncore_channel *pmu_uncore)
> > >>> +{
> > >>> +     int counter;
> > >>> +
> > >>> +     raw_spin_lock(&pmu_uncore->lock);
> > >>> +     counter = find_first_zero_bit(pmu_uncore->counter_mask,
> > >>> +                             pmu_uncore->uncore_dev->max_counters);
> > >>> +     if (counter == pmu_uncore->uncore_dev->max_counters) {
> > >>> +             raw_spin_unlock(&pmu_uncore->lock);
> > >>> +             return -ENOSPC;
> > >>> +     }
> > >>> +     set_bit(counter, pmu_uncore->counter_mask);
> > >>> +     raw_spin_unlock(&pmu_uncore->lock);
> > >>> +     return counter;
> > >>> +}
> > >>> +
> > >>> +static void free_counter(struct thunderx2_pmu_uncore_channel *pmu_uncore,
> > >>> +                                     int counter)
> > >>> +{
> > >>> +     raw_spin_lock(&pmu_uncore->lock);
> > >>> +     clear_bit(counter, pmu_uncore->counter_mask);
> > >>> +     raw_spin_unlock(&pmu_uncore->lock);
> > >>> +}
> > >>
> > >> I don't believe that locking is required in either of these, as the perf
> > >> core serializes pmu::add() and pmu::del(), where these get called.
> > 
> > Without this locking, I see "BUG: scheduling while atomic" when I run
> > perf with more events than the maximum number of counters supported.
> 
> Did you manage to get to the bottom of this?
> 
> Do you have a backtrace?
> 
> It looks like in your latest posting you reserve counters through the
> userspace ABI, which doesn't seem right to me, and I'd like to
> understand the problem.

Looks like I misunderstood -- those are still allocated kernel-side.

I'll follow that up in the v5 posting.
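
FWIW, what I had in mind for the locking was simply dropping it: the perf
core serializes pmu::add() and pmu::del() for a given PMU, and those are
the only paths that touch the counter bitmap. A rough (untested) sketch,
reusing the field names from your patch:

static int alloc_counter(struct thunderx2_pmu_uncore_channel *pmu_uncore)
{
	int counter;

	/*
	 * No locking needed: the perf core serializes pmu::add() and
	 * pmu::del(), which are the only callers manipulating counter_mask.
	 */
	counter = find_first_zero_bit(pmu_uncore->counter_mask,
				      pmu_uncore->uncore_dev->max_counters);
	if (counter == pmu_uncore->uncore_dev->max_counters)
		return -ENOSPC;

	set_bit(counter, pmu_uncore->counter_mask);
	return counter;
}

static void free_counter(struct thunderx2_pmu_uncore_channel *pmu_uncore,
			 int counter)
{
	clear_bit(counter, pmu_uncore->counter_mask);
}

If something outside of pmu::add()/pmu::del() also needs to update
counter_mask, that would change the picture, but I don't see such a path
in the patch.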

Thanks,
Mark.
