Date:	Thu, 13 May 2010 15:16:29 -0700
From:	"Pan, Jacob jun" <jacob.jun.pan@...el.com>
To:	Thomas Gleixner <tglx@...utronix.de>,
	Jacob Pan <jacob.jun.pan@...ux.intel.com>
CC:	"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
	"Du, Alek" <alek.du@...el.com>,
	Arjan van de Ven <arjan@...ux.intel.com>,
	"Tang, Feng" <feng.tang@...el.com>,
	LKML <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH 4/8] x86/mrst: change clock selection logic to support
 medfield

Sorry for the late reply; we are working on the fixes. Just to give some answers to your comments.

> 
>  apbt sucks performance wise, so why do you consider it a better
>  choice than lapic + broadcast ?
> 
The APBT is always-on. Depending on the load, it can be better than
broadcast timers, e.g. if there are frequent transitions between C0
and deep C states. If we are always in C0, I can easily see a negative
performance impact with per-CPU APBT. I don't have power numbers to
back up either case. ftrace shows that programming the LAPIC timer is
quite expensive; I don't understand why:


 1)               |          clockevents_program_event() {
 1)               |            lapic_next_event() {
 1)   2.947 us    |              native_apic_mem_write();
 1)   8.682 us    |            }
 1) + 14.676 us   |          }

 0)               |                  clockevents_program_event() {
 0)   4.146 us    |                    apbt_next_event();
 0)   9.910 us    |                  }


> > + * lapic (always-on,ARAT) ------ 150
> > + */
> > +
> > +int mrst_timer_options __cpuinitdata;
> > +
> >  static u32 sfi_mtimer_usage[SFI_MTMR_MAX_NUM];
> >  static struct sfi_timer_table_entry
> sfi_mtimer_array[SFI_MTMR_MAX_NUM];
> > +static u32 mrst_cpu_chip;
> >  int sfi_mtimer_num;
> >
> >  struct sfi_rtc_table_entry sfi_mrtc_array[SFI_MRTC_MAX];
> > @@ -167,15 +191,16 @@ int __init sfi_parse_mrtc(struct
> sfi_table_header *table)
> >  	return 0;
> >  }
> >
> > -/*
> > - * the secondary clock in Moorestown can be APBT or LAPIC clock,
> default to
> > - * APBT but cmdline option can also override it.
> > - */
> >  static void __cpuinit mrst_setup_secondary_clock(void)
> >  {
> > -	/* restore default lapic clock if disabled by cmdline */
> > -	if (disable_apbt_percpu)
> > -		return setup_secondary_APIC_clock();
> > +	if ((mrst_timer_options == MRST_TIMER_APBT_ONLY))
> > +		return apbt_setup_secondary_clock();
> > +	if (cpu_has(&current_cpu_data, X86_FEATURE_ARAT)
> > +		||  (mrst_timer_options == MRST_TIMER_LAPIC_APBT)) {
> > +		pr_info("using lapic timers for secondary clock\n");
> > +		setup_secondary_APIC_clock();
> > +		return;
> 
>   The logic is confusing.
> 
It can be more straightforward if we don't allow the user to override
it on the cmdline.
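
For illustration, here is a userspace sketch of what a more straightforward selection could look like: the cmdline option wins when given, otherwise ARAT decides. The enum values and function names mirror the patch but are hypothetical; this is not the actual mrst.c code.

```c
/* Sketch only: names follow the patch (MRST_TIMER_*), but this is a
 * standalone userspace model of the decision, not kernel code. */

enum mrst_timer_option {
	MRST_TIMER_DEFAULT,	/* no cmdline override */
	MRST_TIMER_APBT_ONLY,
	MRST_TIMER_LAPIC_APBT,
};

enum secondary_clock { CLOCK_APBT, CLOCK_LAPIC };

/*
 * One pass, top to bottom: an explicit cmdline choice always wins;
 * otherwise prefer the LAPIC when it keeps ticking in deep C states
 * (ARAT), and fall back to the always-on APBT.
 */
static enum secondary_clock pick_secondary_clock(enum mrst_timer_option opt,
						 int cpu_has_arat)
{
	if (opt == MRST_TIMER_APBT_ONLY)
		return CLOCK_APBT;
	if (opt == MRST_TIMER_LAPIC_APBT || cpu_has_arat)
		return CLOCK_LAPIC;
	return CLOCK_APBT;
}
```

With the override cases hoisted to the top, the default path reduces to a single ARAT check.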

>  I guess the 111 is Penwell/MRST specific, right ?
> 
>  According to SDM we have anyway different results for the various CPU
>  families, but we should utilize this in a generic way and have the
>  translation tables for the various CPUs in one place.
Agreed. 111b is Penwell-specific: 83 MHz, spread spectrum.
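
A one-place translation table, as suggested, could look like the sketch below. Only the Penwell row (111b -> 83 MHz) comes from this thread; the struct and function names are hypothetical, and other CPU families would get their own tables per the SDM.

```c
/* Sketch of a per-CPU translation table for the 3-bit bus-frequency
 * code. The 111b entry is from this thread; everything else here
 * (names, layout) is illustrative. */
#include <stddef.h>

struct fsb_freq_entry {
	unsigned char code;	/* 3-bit frequency code */
	unsigned int freq_khz;	/* decoded bus frequency */
};

static const struct fsb_freq_entry penwell_fsb_table[] = {
	{ 0x7 /* 111b */, 83000 },	/* 83 MHz, spread spectrum */
};

static unsigned int fsb_freq_lookup(const struct fsb_freq_entry *tbl,
				    size_t n, unsigned char code)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (tbl[i].code == code)
			return tbl[i].freq_khz;
	return 0;	/* unknown code */
}
```

Keeping one table per CPU family in a single file would avoid scattering family-specific decode logic across callers.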


> 
>   Yikes. From which Intel cookbook is this ?
> 
>   Aside of that we have that info in boot_cpu_info already, don't we ?
>   So there is neither a requirement for the extra cpuid call nor for
>   the extra mrst_cpu_chip id magic.
> 
Initially I thought we needed this before boot_cpu_data is initialized,
but we actually don't need it that early. I will fix that.
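
Once boot_cpu_data is available, chip identification reduces to reading the cached family/model fields instead of issuing a fresh cpuid. A sketch, with a stand-in struct and placeholder model numbers (not confirmed Lincroft/Penwell values):

```c
/* Userspace stand-in for the few boot_cpu_data fields needed here;
 * the model numbers below are placeholders for illustration only. */

struct cpuinfo_sketch {
	unsigned char x86;	/* CPU family */
	unsigned char x86_model;
};

enum mrst_chip { CHIP_UNKNOWN, CHIP_LINCROFT, CHIP_PENWELL };

static enum mrst_chip identify_chip(const struct cpuinfo_sketch *c)
{
	if (c->x86 != 6)
		return CHIP_UNKNOWN;
	switch (c->x86_model) {
	case 0x26:	/* placeholder model numbers */
		return CHIP_LINCROFT;
	case 0x27:
		return CHIP_PENWELL;
	default:
		return CHIP_UNKNOWN;
	}
}
```

No extra cpuid call and no separate chip-id variable: the cached CPU data already carries everything the lookup needs.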

