Date:   Tue, 18 Apr 2017 22:35:02 +0530
From:   Sunil Kovvuri <sunil.kovvuri@...il.com>
To:     Catalin Marinas <catalin.marinas@....com>
Cc:     Imran Khan <kimran@...eaurora.org>,
        Ganesh Mahendran <opensource.ganesh@...il.com>,
        open list <linux-kernel@...r.kernel.org>,
        "Chalamarla, Tirumalesh" <Tirumalesh.Chalamarla@...ium.com>,
        "open list:ARM/QUALCOMM SUPPORT" <linux-arm-msm@...r.kernel.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH] Revert "arm64: Increase the max granular size"

On Tue, Apr 18, 2017 at 8:18 PM, Catalin Marinas
<catalin.marinas@....com> wrote:
> On Mon, Apr 17, 2017 at 04:08:52PM +0530, Sunil Kovvuri wrote:
>> >>     >> Do you have an explanation on the performance variation when
>> >>     >> L1_CACHE_BYTES is changed? We'd need to understand how the network stack
>> >>     >> is affected by L1_CACHE_BYTES, in which context it uses it (is it for
>> >>     >> non-coherent DMA?).
>> >>     >
>> >>     > The network stack uses SKB_DATA_ALIGN to align.
>> >>     > ---
>> >>     > #define SKB_DATA_ALIGN(X) (((X) + (SMP_CACHE_BYTES - 1)) & \
>> >>     > ~(SMP_CACHE_BYTES - 1))
>> >>     >
>> >>     > #define SMP_CACHE_BYTES L1_CACHE_BYTES
>> >>     > ---
>> >>     > I think this is the reason for the performance regression.
>> >>     >
>> >>
>> >>     Yes, this is the reason for the performance regression. Due to the increased L1 cache
>> >>     alignment, the object comes from the next kmalloc slab and skb->truesize changes from
>> >>     2304 bytes to 4352 bytes. This in turn increases sk_wmem_alloc, which causes fewer send buffers to be queued.
>>
>> With what traffic did you check 'skb->truesize'?
>> An increase from 2304 to 4352 bytes doesn't seem real. I checked with
>> ICMP packets of the maximum size possible with a 1500-byte MTU and I
>> don't see such a bump. If the bump is observed with iperf sending TCP
>> packets, then I suggest checking whether TSO is playing a part here.
>
> I haven't checked truesize but I added some printks to __alloc_skb() (on
> a Juno platform) and the size argument to this function is 1720 on many
> occasions. With sizeof(struct skb_shared_info) of 320, the actual data
> allocation is exactly 2048 when using 64 byte L1_CACHE_SIZE. With a
> 128 byte cache size, it goes slightly over 2K, hence the 4K slab
> allocation.

Understood, but in my opinion this '4K slab allocation' still cannot be
considered an issue with the cache line size; there are many network
drivers out there which do receive-buffer or page recycling to minimize
(sometimes almost to zero) the cost of buffer allocation.

> The 1720 figure surprised me a bit as well since I was
> expecting something close to 1500.
>
> The thing that worries me is that skb->data may be used as a buffer to
> DMA into. If that's the case, skb_shared_info is wrongly aligned based
> on SMP_CACHE_BYTES only and can lead to corruption on a non-DMA-coherent
> platform. It should really be ARCH_DMA_MINALIGN.

I didn't get this; if you look at __alloc_skb():

229         size = SKB_DATA_ALIGN(size);
230         size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));

both the DMA buffer and skb_shared_info are aligned to a cache line
separately. Considering that 128-byte alignment guarantees 64-byte
alignment as well, how will this lead to corruption?

And if the platform is non-DMA-coherent, then again it's the driver
which should take care of coherency by using the appropriate map/unmap
APIs, and it should avoid any cache line sharing between the DMA buffer
and skb_shared_info.

>
> IIUC, the Cavium platform has coherent DMA, so it shouldn't be an issue
> if we go back to 64 byte cache lines.

Yes, the Cavium platform is DMA-coherent and there is no issue with
reverting back to 64-byte cache lines. But do we want to do this just
because some platform has a performance issue and this is an easy way
to solve it? IMHO there seem to be many ways to solve the performance
degradation within the driver itself, and if those don't work then it
probably makes sense to revert this.

>However, we don't really have an
> easy way to check (maybe taint the kernel if CWG is different from
> ARCH_DMA_MINALIGN *and* the non-coherent DMA API is called).
>
> --
> Catalin
