Message-ID: <9f4a4f90-a7b1-b1dc-6e7a-042f26254681@oracle.com>
Date: Mon, 22 May 2017 16:43:57 -0700
From: Qing Huang <qing.huang@...cle.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org,
dledford@...hat.com, sean.hefty@...el.com, artemyko@...lanox.com,
linux-mm@...ck.org
Subject: Re: [PATCH] ib/core: not to set page dirty bit if it's already set.
On 5/19/2017 6:05 AM, Christoph Hellwig wrote:
> On Thu, May 18, 2017 at 04:33:53PM -0700, Qing Huang wrote:
>> This change optimizes kernel memory deregistration. __ib_umem_release()
>> used to call set_page_dirty_lock() on every writable page in its memory
>> region, to keep data synced between the CPU and the DMA device if
>> swapping happens after the deregistration. Now we skip setting the page
>> dirty bit when the kernel has already set it before __ib_umem_release()
>> is called. This cuts memory deregistration time by half or more in our
>> application simulation test program.
> As far as I can tell this code doesn't even need set_page_dirty_lock
> and could just use set_page_dirty.
It seems that set_page_dirty_lock has been used here for more than 10
years. I don't know the original purpose; maybe it was meant to prevent
races between setting the dirty bit and swapping out the page?
Perhaps we could call set_page_dirty before calling ib_dma_unmap_sg?
>> Signed-off-by: Qing Huang<qing.huang@...cle.com>
>> ---
>> drivers/infiniband/core/umem.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
>> index 3dbf811..21e60b1 100644
>> --- a/drivers/infiniband/core/umem.c
>> +++ b/drivers/infiniband/core/umem.c
>> @@ -58,7 +58,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
>> for_each_sg(umem->sg_head.sgl, sg, umem->npages, i) {
>>
>> page = sg_page(sg);
>> - if (umem->writable && dirty)
>> + if (!PageDirty(page) && umem->writable && dirty)
>> set_page_dirty_lock(page);
>> put_page(page);
>> }
>> --
>> 2.9.3
>>