Message-ID: <ZO5uGKzGsaliQ1fF@Asurada-Nvidia>
Date:   Tue, 29 Aug 2023 15:15:52 -0700
From:   Nicolin Chen <nicolinc@...dia.com>
To:     <will@...nel.org>, Robin Murphy <robin.murphy@....com>
CC:     <jgg@...dia.com>, <joro@...tes.org>, <jean-philippe@...aro.org>,
        <apopple@...dia.com>, <linux-kernel@...r.kernel.org>,
        <linux-arm-kernel@...ts.infradead.org>, <iommu@...ts.linux.dev>
Subject: Re: [PATCH 1/3] iommu/io-pgtable-arm: Add nents_per_pgtable in
 struct io_pgtable_cfg

On Tue, Aug 29, 2023 at 10:25:10PM +0100, Robin Murphy wrote:

> > > Bleh, apologies, I always confuse myself trying to remember the fiddly
> > > design of io-pgtable data. However, I think this then ends up proving
> > > the opposite point - the number of pages per table only happens to be a
> > > fixed constant for certain formats like LPAE, but does not necessarily
> > > generalise. For instance for a single v7s config it would be 1024 or 256
> > > or 16 depending on what has actually been unmapped.
> > > 
> > > The mechanism as proposed implicitly assumes LPAE format, so I still
> > > think we're better off making that assumption explicit. And at that
> > > point arm-smmu-v3 can then freely admit it already knows the number is
> > > simply 1/8th of the domain page size.
> > 
> > Hmm, I am not getting that "1/8th" part, would you mind elaborating?
> 
> If we know the format is LPAE, then we already know that nearly all
> pagetable levels are one full page, and the PTEs are 64 bits long. No
> magic data conduit required.

Oh, I see!
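
(If I follow, a rough sketch of that, using my own naming rather
than actual driver code, would be:

    /*
     * For LPAE, every table level is one full granule of 64-bit
     * descriptors, so the entry count is simply granule / 8.
     */
    static inline unsigned long lpae_nents_per_pgtable(unsigned long granule)
    {
            return granule / 8;     /* e.g. 4096 / 8 == 512 */
    }

i.e. arm-smmu-v3 could compute it locally without any help from
io-pgtable.)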

> > Also, what we need is actually an arbitrary number for max_tlbi_ops.
> > And I think it could be irrelevant to the page size, i.e. either a
> > 4K pgsize or a 64K pgsize could use the same max_tlbi_ops number,
> > because what eventually impacts the latency is the number of loops
> > of building/issuing commands.
> 
> Although there is perhaps a counter-argument for selective invalidation,
> that if you're using 64K pages to improve TLB hit rates vs. 4K, then you
> have more to lose from overinvalidation, so maybe a single threshold
> isn't so appropriate for both.
> 
> Yes, ultimately it all comes down to picking an arbitrary number, but
> the challenge is that we want to pick a *good* one, and ideally have
> some reasoning behind it. As Will suggested, copying what the mm layer
> does gives us an easy line of reasoning, even if it's just in the form
> of passing the buck. And that's actually quite attractive, since
> otherwise we'd then have to get into the question of what really is the
> latency of building and issuing commands, since that clearly depends on
> how fast the CPU is, and how fast the SMMU is, and how busy the SMMU is,
> and how large the command queue is, and how many other CPUs are also
> contending for the command queue... and very quickly it becomes hard to
> believe that any simple constant can be good for all possible systems.

Yeah, I had trouble deciding the number in the first place, which
is why the previous solution ended up with a SYSFS node. I do agree
that copying the mm layer's solution gives a strong justification
for picking an arbitrary number. My concern here is whether it will
end up triggering a full address-space invalidation too often.

Meanwhile, re-reading Will's commit log:
    arm64: tlbi: Set MAX_TLBI_OPS to PTRS_PER_PTE

    In order to reduce the possibility of soft lock-ups, we bound the
    maximum number of TLBI operations performed by a single call to
    flush_tlb_range() to an arbitrary constant of 1024.

    Whilst this does the job of avoiding lock-ups, we can actually be a bit
    smarter by defining this as PTRS_PER_PTE. Due to the structure of our
    page tables, using PTRS_PER_PTE means that an outer loop calling
    flush_tlb_range() for entire table entries will end up performing just a
    single TLBI operation for each entry. As an example, mremap()ing a 1GB
    range mapped using 4k pages now requires only 512 TLBI operations when
    moving the page tables as opposed to 262144 operations (512*512) when
    using the current threshold of 1024.

I find that I'm not quite getting the calculation at the end, i.e.
the comparison between the 512 and 262144 operations.

For a 4K pgsize setup, MAX_TLBI_OPS is set to 512, calculated as
4096 / 8. Then any VA range >= 2MB will trigger a flush_tlb_all().
Setting the threshold to 1024 instead bumps that 2MB up to 4MB,
i.e. the condition becomes range >= 4MB.
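
To spell out my arithmetic (assuming a 4K granule throughout):

    MAX_TLBI_OPS = PTRS_PER_PTE = 4096 / 8 = 512 entries
    512  entries * 4KB = 2MB   <- full-invalidation threshold with PTRS_PER_PTE
    1024 entries * 4KB = 4MB   <- threshold with the old constant of 1024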

So it seems to me that requesting a 1GB invalidation would trigger
a flush_tlb_all() with either a 2MB or a 4MB threshold?

I can see that 262144 is the number of 4K pages in a 1GB range, so
there would be 262144 per-page invalidations if there were no
threshold at which to fall back to a full address-space
invalidation. Yet that wasn't the case, since we already had a 4MB
threshold from the arbitrary 1024 for MAX_TLBI_OPS?
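
Laying out the numbers from the commit message as I read them
(again assuming 4K pages):

    1GB / 4KB              = 262144 4K pages
    1GB / 2MB              = 512 PMD-level table entries
    512 entries * 512 PTEs = 262144, i.e. the "512*512" in the log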

> > So, combining your narrative above that nents_per_pgtable isn't so
> > general as we have in the tlbflush for MMU,
> 
> FWIW I meant it doesn't generalise well enough to be a common io-pgtable
> interface; I have no issue with it forming the basis of an
> SMMUv3-specific heuristic when it *is* a relevant concept to all the
> pagetable formats SMMUv3 can possibly support.

OK.

Thanks
Nicolin
