Message-ID: <98464609-8F5A-47B9-A64E-2F67809737AD@vmware.com>
Date: Tue, 18 Jun 2019 05:33:12 +0000
From: Nadav Amit <namit@...are.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: LKML <linux-kernel@...r.kernel.org>, Linux-MM <linux-mm@...ck.org>,
Borislav Petkov <bp@...e.de>, Toshi Kani <toshi.kani@....com>,
Peter Zijlstra <peterz@...radead.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Dan Williams <dan.j.williams@...el.com>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH 3/3] resource: Introduce resource cache
> On Jun 17, 2019, at 9:57 PM, Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> On Wed, 12 Jun 2019 21:59:03 -0700 Nadav Amit <namit@...are.com> wrote:
>
>> For efficient search of resources, as needed to determine the memory
>> type for dax page-faults, introduce a cache of the most recently used
>> top-level resource. Caching the top-level should be safe as ranges in
>> that level do not overlap (unlike those of lower levels).
>>
>> Keep the cache per-cpu to avoid possible contention. Whenever a resource
>> is added, removed or changed, invalidate all the resources. The
>> invalidation takes place when the resource_lock is taken for write,
>> preventing possible races.
>>
>> This patch provides relatively small performance improvements over the
>> previous patch (~0.5% on sysbench), but can benefit systems with many
>> resources.
>
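[ For context, this is roughly how the cache is meant to be consulted by
find_next_iomem_res() -- a simplified sketch rather than the actual hunk;
the helper name and the exact bounds check are made up for illustration: ]

static struct resource *
cached_top_level_lookup(resource_size_t start, resource_size_t end)
{
	struct resource *p;

	/* The caller holds resource_lock for read. */
	p = this_cpu_read(resource_cache);
	if (p && p->start <= start && end <= p->end)
		return p;			/* fast path: cache hit */

	/* Top-level ranges do not overlap, so a linear walk suffices. */
	for (p = iomem_resource.child; p; p = p->sibling) {
		if (p->start <= start && end <= p->end) {
			this_cpu_write(resource_cache, p);
			return p;
		}
	}
	return NULL;
}
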
>> --- a/kernel/resource.c
>> +++ b/kernel/resource.c
>> @@ -53,6 +53,12 @@ struct resource_constraint {
>>
>> static DEFINE_RWLOCK(resource_lock);
>>
>> +/*
>> + * Cache of the top-level resource that was most recently used by
>> + * find_next_iomem_res().
>> + */
>> +static DEFINE_PER_CPU(struct resource *, resource_cache);
>
> A per-cpu cache which is accessed under a kernel-wide read_lock looks a
> bit odd - the latency getting at that rwlock will swamp the benefit of
> isolating the CPUs from each other when accessing resource_cache.
>
> On the other hand, if we have multiple CPUs running
> find_next_iomem_res() concurrently then yes, I see the benefit. Has
> the benefit of using a per-cpu cache (rather than a kernel-wide one)
> been quantified?
No. I am not sure how easy it would be to measure. On the other hand, the
lock is not supposed to be contended (in most cases). At the time, I saw
numbers showing that stores to "exclusive" cache lines can be as expensive
as atomic operations [1]. I am not sure how up to date those numbers are,
though. In the benchmark I ran, multiple CPUs did run
find_next_iomem_res() concurrently.
[1] http://sigops.org/s/conferences/sosp/2013/papers/p33-david.pdf
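
To make the comparison concrete, the kernel-wide alternative would look
roughly like this (hypothetical, just for comparison; resource_cache_global
and resource_cache_refresh are made-up names):

/* Hypothetical kernel-wide cache, for comparison with the per-CPU one. */
static struct resource *resource_cache_global;

static void resource_cache_refresh(struct resource *p)
{
	/*
	 * Every CPU that refreshes the cache stores to the same cache
	 * line, so the line bounces between cores even though
	 * resource_lock is only held for read; the per-CPU variant only
	 * ever stores to a CPU-local line.
	 */
	WRITE_ONCE(resource_cache_global, p);
}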
>
>
>> @@ -262,9 +268,20 @@ static void __release_child_resources(struct resource *r)
>> }
>> }
>>
>> +static void invalidate_resource_cache(void)
>> +{
>> + int cpu;
>> +
>> + lockdep_assert_held_exclusive(&resource_lock);
>> +
>> + for_each_possible_cpu(cpu)
>> + per_cpu(resource_cache, cpu) = NULL;
>> +}
>
> All the calls to invalidate_resource_cache() are rather a
> maintainability issue - easy to miss one as the code evolves.
>
> Can't we just make find_next_iomem_res() smarter? For example, start
> the lookup from the cached point and if that failed, do a full sweep?
I may be able to do something like that to reduce the number of locations
that need to be updated, but you always need to invalidate if a resource is
removed. This might make the code more prone to bugs, since the logic of
when to invalidate becomes more complicated.
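
If I understand the suggestion correctly, it would look roughly like this
(a sketch only; the open question is the removal case, since the cached
pointer may already be freed by then):

	struct resource *p = this_cpu_read(resource_cache);

	/*
	 * Start from the cached top-level entry if it still covers the
	 * range, otherwise fall back to a full sweep from the root.
	 * This only helps if the cached pointer is guaranteed not to be
	 * stale, which is why removal still has to invalidate.
	 */
	if (!p || start < p->start || end > p->end)
		p = iomem_resource.child;	/* full sweep */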
>> + invalidate_resource_cache();
>> + invalidate_resource_cache();
>> + invalidate_resource_cache();
>> + invalidate_resource_cache();
>> + invalidate_resource_cache();
>> + invalidate_resource_cache();
>> + invalidate_resource_cache();
>> + invalidate_resource_cache();
>> + invalidate_resource_cache();
>> + invalidate_resource_cache();
>> + invalidate_resource_cache();
>> + invalidate_resource_cache();
>> + invalidate_resource_cache();
>> + invalidate_resource_cache();
>
> Ow. I guess the maintainability situation can be improved by renaming
> resource_lock to something else (to avoid mishaps) then adding wrapper
> functions. But still. I can't say this is a super-exciting patch :(
I considered doing so, but I was not sure it would be better. If you want,
I’ll implement it using wrapper functions (but both lock and unlock would
need to be wrapped for consistency).
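
Something like this is what I have in mind (a sketch; the names are
obviously up for discussion):

/* Rename the lock so writers have to go through the wrappers. */
static DEFINE_RWLOCK(__resource_lock);

static void resource_write_lock(void)
{
	write_lock(&__resource_lock);
}

static void resource_write_unlock(void)
{
	/* Invalidate while the write lock is still held, then release. */
	invalidate_resource_cache();
	write_unlock(&__resource_lock);
}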
The benefit of this patch over the previous one is not huge, and I don’t
know how to implement it any better (excluding wrapper functions, if you
consider those better). If you want, you can just drop this patch.