Message-ID: <9c114cb4-cd93-41b5-f123-13815871d659@redhat.com>
Date: Fri, 8 Nov 2019 13:44:31 -0500
From: Waiman Long <longman@...hat.com>
To: Mike Kravetz <mike.kravetz@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Will Deacon <will.deacon@....com>,
Matthew Wilcox <willy@...radead.org>
Subject: Re: [PATCH v2] hugetlbfs: Take read_lock on i_mmap for PMD sharing
On 11/7/19 9:03 PM, Davidlohr Bueso wrote:
> On Thu, 07 Nov 2019, Waiman Long wrote:
>> With this patch applied, the customer is seeing significant performance
>> improvement over the unpatched kernel.
>
> Could you give more details here?
Red Hat has a customer that is running a transactional database
workload. In this particular case, roughly 500-1500GB of static hugepages
are allocated. The database then allocates a single large shared memory
segment in those hugepages to use primarily as a database buffer for 8kB
blocks from disk (there are also other database structures in that
shared memory, but it is mostly buffer). Then thousands of separate
processes reference and load data into that buffer. They were seeing
multi-second pauses when starting up the database.
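For illustration only -- this is not the customer's code -- the mapping
pattern is essentially one large SysV shared memory segment created with
SHM_HUGETLB and attached by many processes, along these lines (assuming
enough hugepages have been reserved via /proc/sys/vm/nr_hugepages and the
shm limits allow a segment this large):

/*
 * Illustration only -- not the customer's code.  One hugepage-backed
 * shared segment, attached and faulted in by many processes, like the
 * database buffer described above.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

#define SEG_SIZE	(1UL << 30)	/* 1GB segment for the example */
#define NPROC		64		/* stand-in for "thousands" of processes */

int main(void)
{
	int shmid = shmget(IPC_PRIVATE, SEG_SIZE,
			   IPC_CREAT | SHM_HUGETLB | 0600);
	if (shmid < 0) {
		perror("shmget");
		return 1;
	}

	for (int i = 0; i < NPROC; i++) {
		if (fork() == 0) {
			/*
			 * Each child attaches the same segment and faults
			 * it in, like the database processes loading the
			 * shared buffer at startup.
			 */
			char *buf = shmat(shmid, NULL, 0);

			if (buf == (void *)-1) {
				perror("shmat");
				_exit(1);
			}
			memset(buf, 0, SEG_SIZE);
			shmdt(buf);
			_exit(0);
		}
	}
	while (wait(NULL) > 0)
		;
	shmctl(shmid, IPC_RMID, NULL);
	return 0;
}

A sufficiently large, properly aligned shared segment like this is the
case where hugetlbfs PMD sharing kicks in, which is why the i_mmap lock
taken in huge_pmd_share() shows up during startup.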
I first gave them a patched kernel that disabled PMD sharing. That fixed
their problem. After that, I gave them another test kernel that
contained this patch. They said there was a significant improvement compared
with the unpatched kernel. There is still some degradation compared to
the kernel with huge shared pmd disabled entirely, but they're pretty
close in performance.
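As a toy userspace analogy -- not the kernel code itself -- of why taking
the lock in read mode helps: with a reader-writer lock, readers run in
parallel while a writer serializes everyone behind it. The faulting
processes that only look up a shareable PMD are readers of the i_mmap
tree in that sense:

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static long shared_state;

/* Many faulting processes only need to look -- take the lock shared. */
static void *reader(void *arg)
{
	(void)arg;
	pthread_rwlock_rdlock(&lock);	/* cf. i_mmap_lock_read()  */
	long v = shared_state;		/* look-up only, no change */
	(void)v;
	pthread_rwlock_unlock(&lock);
	return NULL;
}

/* A structural change must exclude everyone -- take it exclusive. */
static void *writer(void *arg)
{
	(void)arg;
	pthread_rwlock_wrlock(&lock);	/* cf. i_mmap_lock_write() */
	shared_state++;
	pthread_rwlock_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t[8];

	for (int i = 0; i < 8; i++)
		pthread_create(&t[i], NULL, i ? reader : writer, NULL);
	for (int i = 0; i < 8; i++)
		pthread_join(t[i], NULL);
	printf("shared_state = %ld\n", shared_state);
	return 0;
}

That parallelism among the readers is presumably where most of the
startup-time win comes from; disabling PMD sharing skips the i_mmap
lookup entirely, which would explain why it remains marginally faster.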
Cheers,
Longman