Message-ID: <20090904142212.GA3093@redhat.com>
Date: Fri, 4 Sep 2009 10:22:12 -0400
From: Dave Jones <davej@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Renninger <trenn@...e.de>, Ingo Molnar <mingo@...e.hu>,
linux-kernel@...r.kernel.org, Gautham R Shenoy <ego@...ibm.com>,
Andreas Herrmann <andreas.herrmann3@....com>,
Balbir Singh <balbir@...ibm.com>,
"H. Peter Anvin" <hpa@...or.com>,
Venkatesh Pallipadi <venkatesh.pallipadi@...el.com>,
Yanmin <yanmin_zhang@...ux.intel.com>,
Len Brown <len.brown@...el.com>,
Yinghai Lu <yhlu.kernel@...il.com>, cpufreq@...r.kernel.org
Subject: Re: [RFC][PATCH 10/14] x86: generic aperf/mperf code.
On Fri, Sep 04, 2009 at 11:27:19AM +0200, Peter Zijlstra wrote:
> On Fri, 2009-09-04 at 11:25 +0200, Peter Zijlstra wrote:
> > On Fri, 2009-09-04 at 11:19 +0200, Thomas Renninger wrote:
> > > You still use the struct perf_pair split/hi/lo members, which you
> > > deleted above, in the #ifdef __i386__ case.
> >
> > > > shift_count = fls(h);
> > > >
> > > > - cur.aperf.whole >>= shift_count;
> > > > - cur.mperf.whole >>= shift_count;
> > > > + cur.aperf >>= shift_count;
> > > > + cur.mperf >>= shift_count;
> > > > }
> > > >
> > > > if (((unsigned long)(-1) / 100) < cur.aperf.split.lo) {
> > > Same here, and possibly elsewhere.
> > > Was this only compile-tested on x86_64?
> >
> > Of course, who still has 32-bit-only hardware anyway ;-)
> >
> > Will fix, thanks for spotting that.
>
> Hrmm, on that, does it really make sense to maintain the i386 code path?
>
> How frequently is that code called, and which i386-only chips support
> aperf/mperf? Atom?
Any 64-bit CPU that supports it can have a 32-bit kernel installed on it
(and a significant number of users actually do this).
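
For reference, the 32-bit path in question boils down to roughly the
standalone sketch below (the names and the userspace harness are
illustrative, not the exact acpi-cpufreq code):

#include <stdint.h>
#include <stdio.h>

/* Like the kernel's fls(): position of the highest set bit, 0 if none. */
static unsigned int fls32(uint32_t x)
{
	unsigned int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;
}

static unsigned int perf_percent(uint64_t aperf, uint64_t mperf)
{
	/* Shift both counter deltas down until they fit in 32 bits,
	 * so the ratio needs no 64-bit division on a 32-bit kernel. */
	uint32_t h = (uint32_t)(aperf >> 32) | (uint32_t)(mperf >> 32);
	unsigned int shift = fls32(h);

	aperf >>= shift;
	mperf >>= shift;

	/* Make room for the multiply by 100 below. */
	if ((uint32_t)aperf > (uint32_t)-1 / 100) {
		aperf >>= 7;
		mperf >>= 7;
	}

	if (!(uint32_t)mperf)
		return 0;

	return ((uint32_t)aperf * 100) / (uint32_t)mperf;
}

int main(void)
{
	/* e.g. 3G aperf ticks over 4G mperf ticks -> running at 75% */
	printf("%u%%\n", perf_percent(3000000000ULL, 4000000000ULL));
	return 0;
}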
Dave