[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20141008111927.GG4750@worktop.programming.kicks-ass.net>
Date: Wed, 8 Oct 2014 13:19:27 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Matt Fleming <matt@...sole-pimps.org>
Cc: Ingo Molnar <mingo@...nel.org>, Jiri Olsa <jolsa@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, "H. Peter Anvin" <hpa@...or.com>,
Matt Fleming <matt.fleming@...el.com>
Subject: Re: [PATCH 11/11] perf/x86/intel: Perform rotation on Intel CQM RMIDs
On Wed, Sep 24, 2014 at 03:04:15PM +0100, Matt Fleming wrote:
> This scheme reserves one RMID at all times for rotation. When we need to
> schedule a new event we give it the reserved RMID, pick a victim event
> from the front of the global CQM list and wait for the victim's RMID to
> drop to zero occupancy, before it becomes the new reserved RMID.
> +/*
> + * If we fail to assign a new RMID for intel_cqm_rotation_rmid because
> + * cachelines are still tagged with RMIDs in limbo, we progressively
> + * increment the threshold until we find an RMID in limbo with <=
> + * __intel_cqm_threshold lines tagged. This is designed to mitigate the
> + * problem where cachelines tagged with an RMID are not steadily being
> + * evicted.
> + *
> + * On successful rotations we decrease the threshold back towards zero.
> + */
> +static unsigned int __intel_cqm_threshold;
Ah, so I was about to tell you there is the possibiliy we'll never quite
reach 0. But it appears you've cured that with this adaptive threshold
thing?
Is there an upper bound on the threshold after which we'll just wait, or
will you keep increasing it until something matches?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists