Message-ID: <1eeeec76-5271-f915-e3fd-f15095efb981@arm.com>
Date:   Fri, 1 Jul 2022 13:01:31 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     John Garry <john.garry@...wei.com>, Feng Tang <feng.tang@...el.com>
Cc:     Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
        iommu@...ts.linux-foundation.org, iommu@...ts.linux.dev,
        Andrew Morton <akpm@...ux-foundation.org>,
        Christoph Lameter <cl@...ux.com>,
        Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Paul Menzel <pmenzel@...gen.mpg.de>
Subject: Re: [PATCH] iommu/iova: change IOVA_MAG_SIZE to 127 to save memory

On 2022-07-01 12:33, John Garry wrote:
> On 01/07/2022 04:56, Feng Tang wrote:
>>>> inclination.
>>>>
>>> ok, what you are saying sounds reasonable. I just remember that when we
>>> analyzed the longterm aging issue we concluded that the FQ size and its
>>> relation to the magazine size was a factor, and this change makes me a
>>> little worried about new issues. Better the devil you know and all that...
>>>
>>> Anyway, if I get some time I might do some testing to see if this
>>> change has any influence.
>>>
>>> Another thought is whether we even need to store the size in the
>>> iova_magazine? Mags in the depot are always full. As such, we only need
>>> to worry about mags loaded in the cpu rcache and their sizes, so maybe
>>> we could have something like this:
>>>
>>> struct iova_magazine {
>>> -       unsigned long size;
>>>         unsigned long pfns[IOVA_MAG_SIZE];
>>> };
>>>
>>> @@ -631,6 +630,8 @@ struct iova_cpu_rcache {
>>>         spinlock_t lock;
>>>         struct iova_magazine *loaded;
>>>         struct iova_magazine *prev;
>>> +       int loaded_size;
>>> +       int prev_size;
>>> };
>>>
>>> I haven't tried to implement it though..
>> I have very little knowledge of iova, so you can choose whichever
>> solution is better. I just wanted to raise the problem and will be happy
>> to see it solved. :)
> 
> I quickly tested your patch for performance and saw no noticeable 
> difference, which is no surprise.
> 
> But I'll defer to Robin if he thinks that your patch is a better 
> solution - I would guess that he does. Personally, I would prefer that 
> this value not be changed, as I mentioned before.

This idea is interesting, but it would mean a bit more fiddly work to 
keep things in sync when magazines are allocated, freed and swapped 
around. It seems like the kind of non-obvious thing that might make 
sense if it gave a significant improvement in cache locality or 
something like that, but for simply fixing an allocation size it feels a 
bit too wacky.
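
As a rough illustration of the bookkeeping involved (a standalone userspace
sketch with made-up names, not the actual iova.c code): once the count lives
outside the magazine, every operation that moves a magazine pointer - swapping
loaded/prev, handing a full magazine to the depot, pulling one back - has to
move or reset the matching *_size field as well.

/*
 * Hypothetical sketch only; identifiers and layout are illustrative and
 * assume the struct shapes from the diff above.
 */
#include <stdio.h>

#define IOVA_MAG_SIZE 127

struct iova_magazine {
	unsigned long pfns[IOVA_MAG_SIZE];	/* no embedded size field */
};

struct iova_cpu_rcache {
	struct iova_magazine *loaded;
	struct iova_magazine *prev;
	int loaded_size;			/* entries in *loaded */
	int prev_size;				/* entries in *prev  */
};

/* Swapping the magazines now means swapping the counters too. */
static void rcache_swap_loaded_prev(struct iova_cpu_rcache *cr)
{
	struct iova_magazine *m = cr->loaded;
	int s = cr->loaded_size;

	cr->loaded = cr->prev;
	cr->loaded_size = cr->prev_size;
	cr->prev = m;
	cr->prev_size = s;
}

/* Replacing 'loaded' (e.g. after pushing it to the depot) resets the count. */
static void rcache_replace_loaded(struct iova_cpu_rcache *cr,
				  struct iova_magazine *fresh, int fresh_size)
{
	cr->loaded = fresh;
	cr->loaded_size = fresh_size;	/* 0 if empty, IOVA_MAG_SIZE if full */
}

int main(void)
{
	static struct iova_magazine a, b;
	struct iova_cpu_rcache cr = { .loaded = &a, .prev = &b,
				      .loaded_size = IOVA_MAG_SIZE,
				      .prev_size = 0 };

	rcache_swap_loaded_prev(&cr);
	printf("loaded_size=%d prev_size=%d\n", cr.loaded_size, cr.prev_size);
	rcache_replace_loaded(&cr, &a, 0);
	return 0;
}

Every call site that touches the magazine pointers would need the equivalent
extra line, which is the "fiddly work to keep things in sync" above.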

From my perspective, indeed I'd rather do the simple thing for now to 
address the memory wastage issue directly; then we can do the deeper 
performance analysis on top to see whether further tweaking of the 
magazine sizes and/or design is justified.
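
For reference, the wastage in question comes down to simple sizing
arithmetic. A standalone sketch, assuming a 64-bit unsigned long and
power-of-two kmalloc size buckets (the struct shapes are taken from the
discussion above, not from any particular tree):

#include <stdio.h>

struct mag_128 {		/* original: size word + 128 pfns */
	unsigned long size;
	unsigned long pfns[128];
};

struct mag_127 {		/* patched: size word + 127 pfns */
	unsigned long size;
	unsigned long pfns[127];
};

static size_t kmalloc_bucket(size_t n)	/* next power of two >= n */
{
	size_t b = 32;

	while (b < n)
		b <<= 1;
	return b;
}

int main(void)
{
	printf("128 pfns: sizeof=%zu -> bucket %zu (waste %zu)\n",
	       sizeof(struct mag_128),
	       kmalloc_bucket(sizeof(struct mag_128)),
	       kmalloc_bucket(sizeof(struct mag_128)) - sizeof(struct mag_128));
	printf("127 pfns: sizeof=%zu -> bucket %zu (waste %zu)\n",
	       sizeof(struct mag_127),
	       kmalloc_bucket(sizeof(struct mag_127)),
	       kmalloc_bucket(sizeof(struct mag_127)) - sizeof(struct mag_127));
	return 0;
}

With 128 pfns the struct is 1032 bytes and gets rounded up to a 2048-byte
allocation; with 127 it fits a 1024-byte allocation exactly, which is the
saving the patch is after.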

Cheers,
Robin.
