Message-ID: <alpine.LFD.2.00.0906141514120.2800@localhost.localdomain>
Date: Sun, 14 Jun 2009 15:17:25 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: "Pallipadi, Venkatesh" <venkatesh.pallipadi@...el.com>
cc: "Benjamin S." <sbenni@....de>, "Rafael J. Wysocki" <rjw@...k.pl>,
Ingo Molnar <mingo@...e.hu>,
LKML <linux-kernel@...r.kernel.org>,
"js@...21.net" <js@...21.net>,
Jesse Barnes <jbarnes@...tuousgeek.org>,
pm list <linux-pm@...ts.linux-foundation.org>,
Linux PCI <linux-pci@...r.kernel.org>,
Matthew Wilcox <matthew@....cx>
Subject: RE: 2.6.30 enabling cpu1 on resume fails after suspend to memory
On Sun, 14 Jun 2009, Pallipadi, Venkatesh wrote:
> >On Sun, 14 Jun 2009, Benjamin S. wrote:
> >
> >This is odd as well:
> >>            CPU0       CPU1
> >>   0:         42          1   IO-APIC-edge      timer
> >>  24:       4830          0   HPET_MSI-edge     hpet2
> >> LOC:         42       5070   Local timer interrupts
> >
> >So we set up only one hpet channel for CPU0 and CPU1 uses the local
> >timer interrupt. Need to look at that as well.
> >
>
> The logic in the per-cpu HPET code is something like this:
>
> - Number of per-cpu HPET channels = total number of HPET channels -
>   1 (global HPET) - 1 (legacy RTC replacement) - 1 (reserved for
>   /dev/hpet).
>
> - These channels are assigned one per CPU, and the remaining CPUs
>   use the local APIC timer + broadcast logic.
>
> Looks like there is a slight problem with the above, though: when
> the number of HPET channels is less than the number of CPUs, we
> should start the per-cpu assignment from CPU 1 instead of CPU 0. I
> will send a patch for that. But this suspend/resume problem should
> not be due to the per-cpu HPET logic. It would be good to try with
> hpet=disable to make sure...
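For reference, the channel accounting described above works out to
something like the following sketch, including the proposed
start-at-CPU-1 assignment. The names here are hypothetical, not the
actual arch/x86/kernel/hpet.c identifiers:

	/*
	 * Sketch of the per-cpu HPET channel accounting described
	 * above. All names are hypothetical.
	 */
	static unsigned int hpet_percpu_channels(unsigned int total)
	{
		/*
		 * One channel stays global, one replaces the legacy
		 * RTC, one is reserved for /dev/hpet.
		 */
		return total > 3 ? total - 3 : 0;
	}

	static void hpet_assign_channels(unsigned int channels,
					 unsigned int nr_cpus)
	{
		/*
		 * Proposed fix: when there are fewer channels than
		 * CPUs, start the assignment at CPU 1 rather than
		 * CPU 0.
		 */
		unsigned int cpu = (channels < nr_cpus) ? 1 : 0;

		for (; cpu < nr_cpus && channels; cpu++, channels--)
			/* bind the next free HPET comparator to cpu */;

		/*
		 * CPUs left without a channel fall back to the local
		 * APIC timer plus broadcast logic.
		 */
	}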
Benjamin just confirmed that. The logic in disable_device_interrupts()
already skips interrupts marked with IRQF_TIMER, but I suspect that
the hpet/MSI interrupts are not marked that way.
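If they are indeed not marked, the fix would amount to passing
IRQF_TIMER when the per-cpu channels request their interrupts. A
minimal sketch, assuming the standard request_irq() API and a
hypothetical channel descriptor; the real setup lives in
arch/x86/kernel/hpet.c and may differ:

	#include <linux/interrupt.h>

	/* Hypothetical per-cpu HPET channel descriptor */
	struct hpet_dev_sketch {
		unsigned int irq;
		char name[16];
	};

	static irqreturn_t hpet_msi_sketch_handler(int irq, void *data)
	{
		/* kick the clockevent bound to this CPU's channel */
		return IRQ_HANDLED;
	}

	static int hpet_msi_setup_irq_sketch(struct hpet_dev_sketch *hdev)
	{
		/*
		 * IRQF_TIMER is the key: the suspend path leaves
		 * interrupts carrying this flag enabled, so the
		 * channel keeps ticking across suspend just like the
		 * legacy timer interrupt.
		 */
		return request_irq(hdev->irq, hpet_msi_sketch_handler,
				   IRQF_TIMER | IRQF_NOBALANCING,
				   hdev->name, hdev);
	}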
Thanks,
tglx