Message-ID: <660515dd-eb8a-36cc-5fac-a7814bb3ef69@ozlabs.ru>
Date: Wed, 13 Feb 2019 11:37:00 +1100
From: Alexey Kardashevskiy <aik@...abs.ru>
To: Daniel Jordan <daniel.m.jordan@...cle.com>,
Christopher Lameter <cl@...ux.com>
Cc: jgg@...pe.ca, akpm@...ux-foundation.org, dave@...olabs.net,
jack@...e.cz, linux-mm@...ck.org, kvm@...r.kernel.org,
kvm-ppc@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
linux-fpga@...r.kernel.org, linux-kernel@...r.kernel.org,
alex.williamson@...hat.com, paulus@...abs.org,
benh@...nel.crashing.org, mpe@...erman.id.au, hao.wu@...el.com,
atull@...nel.org, mdf@...nel.org
Subject: Re: [PATCH 2/5] vfio/spapr_tce: use pinned_vm instead of locked_vm to
account pinned pages
On 13/02/2019 04:18, Daniel Jordan wrote:
> On Tue, Feb 12, 2019 at 04:50:11PM +0000, Christopher Lameter wrote:
>> On Tue, 12 Feb 2019, Alexey Kardashevskiy wrote:
>>
>>> Now there are 3 independent accesses (actually 4, but the last one
>>> is diagnostic) with no locking around them. Why exactly do we not
>>> need a lock anymore? Thanks,
>>
>> Updating a regular counter is racy and requires a lock. It was
>> converted to an atomic, which can be incremented without a race.
>
> Yes, though Alexey may have meant that the multiple reads of the atomic in
> decrement_pinned_vm are racy.
Yes, I meant this race; thanks for clarifying.
> It only matters when there's a bug that would
> make the counter go negative, but it's there.
>
> And FWIW the debug print in try_increment_pinned_vm is also racy.
>
> This fixes all of that. It doesn't try to correct a negative pinned_vm the
> way the old code did, because a negative count is already a bug, and
> adjusting the value back up only makes debugging harder.
>
> If it's OK, I'll respin the whole series this way (another point in
> favor of a common helper).
This looks good, thanks for fixing this.
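
For anyone reading along, here is a minimal userspace sketch of the
difference (my own illustration, not part of the patch), with C11
atomics standing in for atomic64_t; atomic_fetch_sub() returns the old
value, so subtracting npages again gives the same post-decrement value
that atomic64_sub_return() would:

/*
 * Userspace sketch only, not kernel code.  Names are illustrative.
 */
#include <stdatomic.h>
#include <stdio.h>

static _Atomic long pinned_vm;          /* stands in for mm->pinned_vm */

/* Old pattern: read, check, read again, subtract -- three separate
 * accesses, so another thread can change the counter in between. */
static void decrement_racy(long npages)
{
        if (npages > atomic_load(&pinned_vm))        /* read #1 */
                npages = atomic_load(&pinned_vm);    /* read #2 */
        atomic_fetch_sub(&pinned_vm, npages);        /* RMW  #3 */
}

/* New pattern: a single atomic read-modify-write; the value this
 * thread computed is the one it actually produced, so the negative
 * check is not racy. */
static void decrement_atomic(long npages)
{
        long pinned = atomic_fetch_sub(&pinned_vm, npages) - npages;

        if (pinned < 0)
                fprintf(stderr, "pinned_vm went negative: %ld\n", pinned);
}

int main(void)
{
        atomic_store(&pinned_vm, 4);
        decrement_racy(2);
        decrement_atomic(2);
        printf("pinned_vm = %ld\n", atomic_load(&pinned_vm));
        return 0;
}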
>
> diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
> index f47e020dc5e4..b79257304de6 100644
> --- a/drivers/vfio/vfio_iommu_spapr_tce.c
> +++ b/drivers/vfio/vfio_iommu_spapr_tce.c
> @@ -53,25 +53,24 @@ static long try_increment_pinned_vm(struct mm_struct *mm, long npages)
> atomic64_sub(npages, &mm->pinned_vm);
> }
>
> - pr_debug("[%d] RLIMIT_MEMLOCK +%ld %ld/%lu%s\n", current->pid,
> - npages << PAGE_SHIFT,
> - atomic64_read(&mm->pinned_vm) << PAGE_SHIFT,
> - rlimit(RLIMIT_MEMLOCK), ret ? " - exceeded" : "");
> + pr_debug("[%d] RLIMIT_MEMLOCK +%ld %lld/%lu%s\n", current->pid,
> + npages << PAGE_SHIFT, pinned << PAGE_SHIFT,
> + lock_limit, ret ? " - exceeded" : "");
>
> return ret;
> }
>
> static void decrement_pinned_vm(struct mm_struct *mm, long npages)
> {
> + s64 pinned;
> +
> if (!mm || !npages)
> return;
>
> - if (WARN_ON_ONCE(npages > atomic64_read(&mm->pinned_vm)))
> - npages = atomic64_read(&mm->pinned_vm);
> - atomic64_sub(npages, &mm->pinned_vm);
> - pr_debug("[%d] RLIMIT_MEMLOCK -%ld %ld/%lu\n", current->pid,
> - npages << PAGE_SHIFT,
> - atomic64_read(&mm->pinned_vm) << PAGE_SHIFT,
> + pinned = atomic64_sub_return(npages, &mm->pinned_vm);
> + WARN_ON_ONCE(pinned < 0);
> + pr_debug("[%d] RLIMIT_MEMLOCK -%ld %lld/%lu\n", current->pid,
> + npages << PAGE_SHIFT, pinned << PAGE_SHIFT,
> rlimit(RLIMIT_MEMLOCK));
> }
>
>
--
Alexey