Message-ID: <8072D878-BBF2-47E4-B4C9-190F379F6221@vmware.com>
Date: Tue, 18 Jun 2019 05:40:11 +0000
From: Nadav Amit <namit@...are.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: LKML <linux-kernel@...r.kernel.org>, Linux-MM <linux-mm@...ck.org>,
Borislav Petkov <bp@...e.de>, Toshi Kani <toshi.kani@....com>,
Peter Zijlstra <peterz@...radead.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Dan Williams <dan.j.williams@...el.com>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH 3/3] resource: Introduce resource cache
> On Jun 17, 2019, at 10:33 PM, Nadav Amit <namit@...are.com> wrote:
>
>> On Jun 17, 2019, at 9:57 PM, Andrew Morton <akpm@...ux-foundation.org> wrote:
>>
>> On Wed, 12 Jun 2019 21:59:03 -0700 Nadav Amit <namit@...are.com> wrote:
>>
>>> For efficient search of resources, as needed to determine the memory
>>> type for dax page-faults, introduce a cache of the most recently used
>>> top-level resource. Caching the top-level should be safe as ranges in
>>> that level do not overlap (unlike those of lower levels).
>>>
>>> Keep the cache per-cpu to avoid possible contention. Whenever a resource
>>> is added, removed or changed, invalidate the cache on all CPUs. The
>>> invalidation takes place when the resource_lock is taken for write,
>>> preventing possible races.
>>>
>>> This patch provides relatively small performance improvements over the
>>> previous patch (~0.5% on sysbench), but can benefit systems with many
>>> resources.
>>
>>> --- a/kernel/resource.c
>>> +++ b/kernel/resource.c
>>> @@ -53,6 +53,12 @@ struct resource_constraint {
>>>
>>> static DEFINE_RWLOCK(resource_lock);
>>>
>>> +/*
>>> + * Cache of the top-level resource that was most recently used by
>>> + * find_next_iomem_res().
>>> + */
>>> +static DEFINE_PER_CPU(struct resource *, resource_cache);
>>
>> A per-cpu cache which is accessed under a kernel-wide read_lock looks a
>> bit odd - the latency of acquiring that rwlock will swamp the benefit of
>> isolating the CPUs from each other when accessing resource_cache.
>>
>> On the other hand, if we have multiple CPUs running
>> find_next_iomem_res() concurrently then yes, I see the benefit. Has
>> the benefit of using a per-cpu cache (rather than a kernel-wide one)
>> been quantified?
>
> No. I am not sure how easy it would be to measure. On the other hand,
> the lock is not supposed to be contended (in most cases). At the time, I saw
> numbers showing that stores to "exclusive" cache lines can be as
> expensive as atomic operations [1]. I am not sure how up to date these
> numbers are, though. In the benchmark I ran, multiple CPUs ran
> find_next_iomem_res() concurrently.
>
> [1] http://sigops.org/s/conferences/sosp/2013/papers/p33-david.pdf
Just to clarify - the main motivation behind the per-cpu variable is not
contention, but the fact that different processes/threads running
concurrently might use different resources, so a single shared cache
entry would keep being evicted. A minimal sketch of the intended
pattern follows.
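
To make that concrete, here is a rough sketch of the pattern the patch
aims for (the helper names find_cached_top_level() and
invalidate_resource_cache() are made up for illustration; see the
actual patch for the real code):

	/* One cached top-level resource per CPU. */
	static DEFINE_PER_CPU(struct resource *, resource_cache);

	/* Lookup; called with resource_lock held for read. */
	static struct resource *find_cached_top_level(resource_size_t start)
	{
		struct resource *res = this_cpu_read(resource_cache);

		/* Hit only if the cached top-level range still covers @start. */
		if (res && res->start <= start && start <= res->end)
			return res;

		return NULL;	/* Miss: fall back to walking the tree. */
	}

	/*
	 * Invalidation; called whenever a resource is added, removed or
	 * changed, with resource_lock already held for write, so no
	 * lookup can race with the clearing below.
	 */
	static void invalidate_resource_cache(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			per_cpu(resource_cache, cpu) = NULL;
	}

The write-lock requirement is what makes the plain (non-atomic)
per-CPU stores safe.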