Message-ID: <alpine.LNX.2.00.1403101540140.3690@pobox.suse.cz>
Date: Mon, 10 Mar 2014 15:41:59 +0100 (CET)
From: Jiri Kosina <jkosina@...e.cz>
To: Daniel Vetter <daniel.vetter@...ll.ch>
cc: Jani Nikula <jani.nikula@...ux.intel.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
intel-gfx <intel-gfx@...ts.freedesktop.org>
Subject: Re: i915 resume-from-hibernation problems on resume with current
Linus' tree
On Mon, 10 Mar 2014, Daniel Vetter wrote:
> >> > *ERROR* render ring initialization failed ctl 0001f001 head 00003004 tail 00000000 start 00003000
> >> >
> >> > as it doesn't seem to be there in case of resumption that works properly.
> >> >
> >> > Please see the dmesg from the broken case below.
> >>
> >> I encountered this again with -rc5.
> >>
> >> If there is anything I can do to help debug this, please let me know.
> >
> > I hate to be doing this, but ... ping? :)
>
> gm45 and rendering ring init failures. We've seen this occasionally
> crop up due to rather unrelated changes. Some even hit stable
> backports and had to be backed out again. We essentially have no clue
> what's amiss, but it seems to /mostly/ work. Thus far I've only heard
> reports of this for gm45 and not yet really for upstream. Until
> someone digs up more evidence I think we need to classify this as a
> rare heisenbug and not really a regression :(
Thanks for getting back to me with this, Daniel.
Well, this started happening rather regularly sometime during the 3.14
development cycle on my system.

I am able to reproduce it after ~20 suspend-resume cycles, which is rather
inconvenient for bisecting, but frequent enough to make the kernel unusable
on that system.
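
For anyone wanting to automate those cycles, a loop like the following
untested sketch is one way to do it. Writing "disk" to /sys/power/state is
the kernel's standard hibernation trigger (the write blocks until resume);
"mem" would exercise suspend-to-RAM instead. It needs root, and the cycle
count of 25 is an arbitrary choice based on the ~20 figure above:

/*
 * Untested repro sketch: hibernate/resume in a loop, then check dmesg
 * after each cycle for the ring init error.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	for (int i = 0; i < 25; i++) {
		int fd = open("/sys/power/state", O_WRONLY);

		if (fd < 0) {
			perror("open /sys/power/state");
			return 1;
		}
		/* blocks here until the machine resumes */
		if (write(fd, "disk", 4) != 4)
			perror("write");
		close(fd);
		/* back from resume; grep dmesg for the ring init error */
	}
	return 0;
}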
> One thing we could try is to simply repeat the ring init setup, maybe
> after a gpu reset or something like that.
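
For illustration, that retry could look something like the untested sketch
below; init_ring() and gpu_reset() are hypothetical placeholders standing
in for the driver's real ring-init and reset entry points, not actual i915
function names (DRM_ERROR() is the usual DRM logging macro):

/*
 * Untested sketch of "repeat the ring init, maybe after a GPU reset".
 * init_ring() and gpu_reset() are hypothetical placeholders.
 */
#define RING_INIT_ATTEMPTS 2

static int init_ring_with_retry(struct drm_device *dev)
{
	int ret, i;

	for (i = 0; i < RING_INIT_ATTEMPTS; i++) {
		ret = init_ring(dev);		/* hypothetical */
		if (ret == 0)
			return 0;

		DRM_ERROR("ring init attempt %d failed, trying GPU reset\n",
			  i + 1);
		gpu_reset(dev);			/* hypothetical */
	}

	return ret;
}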
Will be happy to test any patches.
Thanks,
--
Jiri Kosina
SUSE Labs