Message-ID: <0d1a308b-4d0e-d91a-52a7-6456ec6713f8@linux.alibaba.com>
Date:   Tue, 8 Jun 2021 12:04:03 +0800
From:   Yaohui Wang <yaohuiwang@...ux.alibaba.com>
To:     Dave Hansen <dave.hansen@...el.com>, dave.hansen@...ux.intel.com
Cc:     luto@...nel.org, peterz@...radead.org,
        linux-kernel@...r.kernel.org, yaohuiwang@...ux.alibaba.com,
        luoben@...ux.alibaba.com
Subject: Re: [PATCH] mm: fix pfn calculation mistake in __ioremap_check_ram



On 2021/6/7 21:55, Dave Hansen wrote:
> On 6/7/21 2:19 AM, Yaohui Wang wrote:
>> In arch/x86/mm/ioremap.c:__ioremap_caller, after __ioremap_check_mem,
>> if the memory range is IORES_MAP_SYSTEM_RAM, then __ioremap_caller
>> should fail. But because of a pfn calculation mistake,
>> __ioremap_caller can succeed on an IORES_MAP_SYSTEM_RAM region when
>> the @size parameter is less than PAGE_SIZE. This invites misuse of
>> ioremap and raises the risk of performance issues. For example,
>> ioremap(phys, PAGE_SIZE-1) may cause the direct memory mapping of
>> @phys to become uncached, and iounmap won't revert this change. This
>> patch fixes the issue.
>>
>> In arch/x86/mm/ioremap.c:__ioremap_check_ram, start_pfn should round
>> down the res->start address, and end_pfn should round up the res->end
>> address. This makes the check stricter and more reasonable.
> 
> Was this found by inspection, or was there a real-world bug which this
> patch addresses?
>

I did performance testing of the Linux kernel in many aspects. One of 
my scripts tests the performance impact of ioremap, and I found that 
applying ioremap to normal RAM can cause severe performance problems.

To avoid cache attribute aliasing, ioremap calls memtype_kernel_map_sync 
to synchronize the cache attribute into the direct mapping, which causes:

1. If the physical address lies in a huge page in the direct mapping, 
ioremap splits that huge page into 4K pages.
2. The PCD bit is set in the corresponding direct-mapping pte.

Both behaviors degrade the machine's performance, especially when 
frequently accessed code or data lives in the affected region. What's 
worse, iounmap won't clear the PCD bit in the direct-mapping pte; I 
have to call ioremap_cache to clear it. All of this should be avoided.
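
To make this concrete, here is a minimal sketch of the kind of test I 
used (not my actual script; the kmalloc()'d buffer, the helper name and 
the sub-page size are just for illustration):

#include <linux/io.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Hypothetical repro: remap a sub-page range of normal RAM. */
static void ioremap_ram_demo(void)
{
	void *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
	phys_addr_t phys;
	void __iomem *va;

	if (!buf)
		return;
	phys = __pa(buf);

	/* size < PAGE_SIZE slips past __ioremap_check_ram() */
	va = ioremap(phys, PAGE_SIZE - 1);
	if (va) {
		/*
		 * At this point the direct mapping of @phys has been
		 * split out of its huge page and marked uncached (PCD).
		 */
		iounmap(va);
		/* iounmap() does NOT clear PCD in the direct mapping */
	}

	/* the only way I found to restore the cached attribute */
	va = ioremap_cache(phys, PAGE_SIZE - 1);
	if (va)
		iounmap(va);

	kfree(buf);
}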

Another thing that confuses me:

From __ioremap_caller, we can see that __ioremap_caller does not allow 
us to remap normal RAM. In my understanding, the direct mapping only 
covers normal RAM. So if remapping normal RAM is not allowed, it should 
be unnecessary to call memtype_kernel_map_sync at all. Is that right?
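
For reference, the rounding change described in the patch would look 
roughly like this in __ioremap_check_ram() (a sketch only: PFN_DOWN() 
and PFN_UP() are the generic helpers from <linux/pfn.h>, and the 
unchanged flag check and System RAM walk are elided):

static unsigned int __ioremap_check_ram(struct resource *res)
{
	unsigned long start_pfn, stop_pfn;

	/* ... existing IORESOURCE_SYSTEM_RAM flag check ... */

	/*
	 * Rounding start up and end down lets a sub-page request
	 * (size < PAGE_SIZE) collapse into an empty pfn range, so the
	 * System RAM walk below is skipped.  Round outwards instead,
	 * so every pfn overlapped by [res->start, res->end] is
	 * inspected.
	 */
	start_pfn = PFN_DOWN(res->start);	/* round res->start down */
	stop_pfn  = PFN_UP(res->end + 1);	/* round res->end up */

	/* ... existing walk over [start_pfn, stop_pfn) flagging System RAM ... */

	return 0;
}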
