Message-ID: <CAKMK7uH8MU5awyVDLNi8h-+yrrSRW8JjV20aMZwXZ9Yw9XDZgA@mail.gmail.com>
Date:	Sat, 9 Aug 2014 10:53:03 +0200
From:	Daniel Vetter <daniel@...ll.ch>
To:	Alan Stern <stern@...land.harvard.edu>
Cc:	"Rafael J. Wysocki" <rjw@...ysocki.net>,
	Chris Wilson <chris@...is-wilson.co.uk>,
	"Mateo Lozano, Oscar" <oscar.mateo@...el.com>,
	intel-gfx <intel-gfx@...ts.freedesktop.org>,
	Greg KH <gregkh@...uxfoundation.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [Intel-gfx] [PATCH 45/53] drm/i915/bdw: Do not call
 intel_runtime_pm_get() in an interrupt

On Sat, Aug 9, 2014 at 3:21 AM, Alan Stern <stern@...land.harvard.edu> wrote:
> On Sat, 9 Aug 2014, Rafael J. Wysocki wrote:
>
>> > > > Well it works currently. So where do you see the problem?
>> > >
>> > > Sampling registers from a timer - in particular, we really do not want
>> > > to disable runtime pm whilst trying to monitor the impact of runtime pm.
>> >
>> > In that case you can grab a runtime pm reference iff the device is powered
>> > on already. Which won't call anything scary, just amounts to an
>> > atomic_add_unless or so, and then drop it again.
>> >
>> > Unfortunately there doesn't seem to be such a thing around already, so
>> > we'd need to add it first. Greg, how much would you freak out if we
>> > added something like
>> >
>> > /**
>> >  * pm_runtime_get_unless_suspended - grab an rpm ref if the device is on
>> >  *
>> >  * Returns true if an rpm ref has been acquired, false otherwise. Can be
>> >  * called from atomic context to e.g. sample performance counters (where we
>> >  * obviously don't want to disturb system state if everything is off atm).
>> >  */
>> > static inline bool pm_runtime_get_unless_suspended(struct device *dev)
>> > {
>> >     return atomic_add_unless(&dev->power.usage_count, 1, 0);
>> > }
>>
>> I don't think it'll work universally.
>>
>> That'd need to be synchronized with the other work done under the spinlock,
>> and in fact what you're interested in is runtime_status (and that being
>> RPM_ACTIVE), not just the usage count.
>
> That's right.  You'd need to acquire the spinlock, test runtime_status,
> do the register sampling if the status is RPM_ACTIVE, and then drop the
> spinlock.
>
> I suppose wrapper routines for acquiring and releasing the spinlock
> could be added to the runtime-PM API.  Something like this:
>
> #define pm_runtime_lock(dev, flags)                     \
>                 spin_lock_irqsave(&(dev)->power.lock, flags)
> #define pm_runtime_unlock(dev, flags)                   \
>                 spin_unlock_irqrestore(&(dev)->power.lock, flags)
>
> It looks a little silly but it would work.
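[For illustration, a minimal sketch of how such wrappers might be used by a
driver: take the lock, sample only if the status is RPM_ACTIVE, then unlock.
The pm_runtime_lock()/pm_runtime_unlock() wrappers are only proposed above,
not an existing API, and sample_counters_if_active()/read_hw_counters() are
made-up names.]

static void sample_counters_if_active(struct device *dev)
{
    unsigned long flags;

    pm_runtime_lock(dev, flags);
    if (dev->power.runtime_status == RPM_ACTIVE) {
        /* runtime_status cannot change while power.lock is held. */
        read_hw_counters(dev);    /* placeholder for the register reads */
    }
    pm_runtime_unlock(dev, flags);
}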

Oh right, I've totally ignored all the async resuming/suspending
stuff. Anyway, what we want to do is sample a performance monitoring
unit on the gpu from an hrtimer and then expose that as a perf PMU.
But we don't want to wake up the gpu for the sampling or hold a
special reference, since that disturbs the sampling and also tends to
upset the gpu.

Note that those registers are just status indicator registers, so not
counters that get reset when the device is suspended. For the latter
kind of counter (we have those too) we're not sure yet how to handle
them, especially since how much the gpu saves/restores into its own
context storage depends upon the platform.
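
[A rough sketch, not code from this thread, of the hrtimer sampling path
described above. It uses the lock/check/unlock pattern from Alan's proposal
rather than the get_unless_suspended helper, since the latter would need to
look at runtime_status as Rafael pointed out. The pmu_timer field,
i915_sample_pmu_regs() and SAMPLE_PERIOD_NS are invented names, and the
dev_priv->dev->dev chain assumes the 2014-era i915/drm struct layout.]

static enum hrtimer_restart i915_pmu_timer_fn(struct hrtimer *timer)
{
    struct drm_i915_private *dev_priv =
        container_of(timer, struct drm_i915_private, pmu_timer);
    struct device *dev = dev_priv->dev->dev;    /* the GPU's struct device */
    unsigned long flags;

    pm_runtime_lock(dev, flags);
    if (dev->power.runtime_status == RPM_ACTIVE)
        i915_sample_pmu_regs(dev_priv);    /* status registers only */
    pm_runtime_unlock(dev, flags);

    hrtimer_forward_now(timer, ns_to_ktime(SAMPLE_PERIOD_NS));
    return HRTIMER_RESTART;
}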
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
