Message-ID: <20090507164334.0d3890ef@jbarnes-g45>
Date:	Thu, 7 May 2009 16:43:34 -0700
From:	Jesse Barnes <jbarnes@...tuousgeek.org>
To:	nigel@...onice.net
Cc:	"Rafael J. Wysocki" <rjw@...k.pl>,
	Fabio Comolli <fabio.comolli@...il.com>,
	linux-kernel@...r.kernel.org, Pavel Machek <pavel@....cz>,
	linux-pm@...ts.linux-foundation.org,
	tuxonice-devel@...ts.tuxonice.net
Subject: Re: [TuxOnIce-devel] [RFC] TuxOnIce

On Fri, 08 May 2009 09:32:34 +1000
Nigel Cunningham <nigel@...onice.net> wrote:

> Hi.
> 
> On Thu, 2009-05-07 at 16:14 -0700, Jesse Barnes wrote:
> > On Fri, 08 May 2009 06:41:00 +1000
> > Nigel Cunningham <nigel@...onice.net> wrote:
> > 
> > > Hi.
> > > 
> > > On Thu, 2009-05-07 at 21:27 +0200, Rafael J. Wysocki wrote:
> > > > In fact I agree, but there's a catch.  The way in which TuxOnIce
> > > > operates LRU pages is based on some assumptions that may or may
> > > > not be satisfied in future, so if we decide to merge it, then
> > > > we'll have to make sure these assumptions will be satisfied.
> > > > That in turn is going to require quite some discussion I guess.
> > > 
> > > Agreed. That's why I've got that GEMS patch - it's putting pages
> > > on the LRU that don't satisfy the former assumptions: they are
> > > used during hibernating and need to be atomically copied. If
> > > there are further developments in that area, I would hope we
> > > could just extend what's been done with GEMS.
> > 
> > Another option here would be to suspend all DRM operations earlier.
> > The suspend hook for i915 already does this, but maybe it needs to
> > happen sooner?  We'll probably want a generic DRM suspend hook soon
> > too (as the radeon memory manager lands) to shut down GPU activity
> > in the suspend and hibernate cases.
> > 
> > All that assumes I understand what's going on here though. :)  It
> > appears you delay saving the GEM (just GEM by the way, for
> > Graphics Execution Manager) backing store until late to avoid
> > having the pages move around out from under you?
> 
> Yeah. TuxOnIce saves some pages without doing an atomic copy of them.
> Up 'til now, that set has been the LRU pages minus the pages used by
> TuxOnIce's userspace helpers. With GEM, we also need to make sure GEM
> pages are atomically copied, and so also 'subtract' them from the set
> of pages that aren't atomically copied.
> 
> It's no great problem to do this, so I wouldn't ask you to change GEM
> to suspend DRM operations earlier. It's more important that GEM
> doesn't allocate extra pages unexpectedly - and I don't think that's
> likely anyway since we've switched away from X. This is important
> because TuxOnIce depends (for reliability) on having memory usage
> being predictable much more than swsusp and uswsusp do. (Larger
> images, less free RAM to begin with).

Yeah, X is typically the one doing GEM allocations and submitting GPU
execution, but there are other possibilities too.  E.g. Wayland, a
non-X display system, may be running instead, or there may be an EGL
or GPGPU program running in the background.

So I think it's best if we suspend DRM fairly early; otherwise you
*may* get extra allocations and will probably see all sorts of GPU
memory-mapping activity and execution while you're trying to hibernate
things.  On the plus side, I don't think this is a radical redesign or
anything; it's mostly something we can do in our suspend and hibernate
callbacks.

Thanks,
Jesse
