Date:   Tue, 5 Sep 2023 19:02:57 +0800
From:   bibo mao <maobibo@...ngson.cn>
To:     Huacai Chen <chenhuacai@...nel.org>
Cc:     WANG Xuerui <kernel@...0n.name>,
        Andrew Morton <akpm@...ux-foundation.org>,
        David Hildenbrand <david@...hat.com>,
        loongarch@...ts.linux.dev, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] LoongArch: Remove zero_page_mask symbol



On 2023/9/5 18:58, Huacai Chen wrote:
> Hi, Bibo,
> 
> On Tue, Sep 5, 2023 at 4:01 PM Bibo Mao <maobibo@...ngson.cn> wrote:
>>
>> On LoongArch systems there is only one zero page and no COLOR_ZERO_PAGE,
>> so zero_page_mask is useless and the macro __HAVE_COLOR_ZERO_PAGE is
>> unnecessary. This patch removes zero_page_mask and the macro
>> __HAVE_COLOR_ZERO_PAGE.
>>
>> Signed-off-by: Bibo Mao <maobibo@...ngson.cn>
>> ---
>>  arch/loongarch/include/asm/pgtable.h | 4 +---
>>  arch/loongarch/mm/init.c             | 9 +--------
>>  2 files changed, 2 insertions(+), 11 deletions(-)
>>
>> diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
>> index 06963a172319..342c5f9c25d2 100644
>> --- a/arch/loongarch/include/asm/pgtable.h
>> +++ b/arch/loongarch/include/asm/pgtable.h
>> @@ -71,11 +71,9 @@ struct vm_area_struct;
>>   */
>>
>>  extern unsigned long empty_zero_page;
>> -extern unsigned long zero_page_mask;
>>
>>  #define ZERO_PAGE(vaddr) \
>> -       (virt_to_page((void *)(empty_zero_page + (((unsigned long)(vaddr)) & zero_page_mask))))
>> -#define __HAVE_COLOR_ZERO_PAGE
>> +       (virt_to_page((void *)(empty_zero_page)))
>>
>>  /*
>>   * TLB refill handlers may also map the vmalloc area into xkvrange.
>> diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
>> index 3b7d8129570b..8ec668f97b00 100644
>> --- a/arch/loongarch/mm/init.c
>> +++ b/arch/loongarch/mm/init.c
>> @@ -35,14 +35,8 @@
>>  #include <asm/pgalloc.h>
>>  #include <asm/tlb.h>
>>
>> -/*
>> - * We have up to 8 empty zeroed pages so we can map one of the right colour
>> - * when needed.         Since page is never written to after the initialization we
>> - * don't have to care about aliases on other CPUs.
>> - */
>> -unsigned long empty_zero_page, zero_page_mask;
>> +unsigned long empty_zero_page;
>>  EXPORT_SYMBOL(empty_zero_page);
>> -EXPORT_SYMBOL(zero_page_mask);
>>
>>  void setup_zero_pages(void)
>>  {
>> @@ -60,7 +54,6 @@ void setup_zero_pages(void)
>>         for (i = 0; i < (1 << order); i++, page++)
>>                 mark_page_reserved(page);
>>
>> -       zero_page_mask = ((PAGE_SIZE << order) - 1) & PAGE_MASK;
> In my opinion it is better to combine the two patches into one, because
> this patch can only work *accidentally* when 'order' is zero.
Sure, will do.
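
For reference, a minimal sketch (not part of the patch) of why dropping the
mask is only a no-op when 'order' is zero:

    /*
     * Pre-patch, setup_zero_pages() allocates (1 << order) zero pages and
     * computes the mask that ZERO_PAGE() uses to pick the page of the
     * right cache colour:
     */
    zero_page_mask = ((PAGE_SIZE << order) - 1) & PAGE_MASK;

    /*
     * order == 0  =>  zero_page_mask == 0, so
     *   empty_zero_page + (vaddr & zero_page_mask) == empty_zero_page
     * and the simplified ZERO_PAGE() returns the same page.
     *
     * order > 0   =>  zero_page_mask != 0, and the masked form can pick a
     * page other than plain empty_zero_page, so removing only the mask
     * would be correct merely by accident -- hence the suggestion to
     * squash the two patches.
     */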

Regards
Bibo Mao

> 
> Huacai
>>  }
>>
>>  void copy_user_highpage(struct page *to, struct page *from,
>> --
>> 2.27.0
>>
>>
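
For context, __HAVE_COLOR_ZERO_PAGE and zero_page_mask exist because generic
mm code keys off them when testing whether a pfn is a zero page. Roughly
(paraphrased from memory; the exact code lives in include/linux/pgtable.h):

    #ifdef __HAVE_COLOR_ZERO_PAGE
    /* Several coloured zero pages: any pfn within the masked range counts. */
    static inline int is_zero_pfn(unsigned long pfn)
    {
            extern unsigned long zero_pfn;
            unsigned long offset_from_zero_pfn = pfn - zero_pfn;

            return offset_from_zero_pfn <= (zero_page_mask >> PAGE_SHIFT);
    }
    #else
    /* Single zero page: a plain pfn comparison is enough. */
    static inline int is_zero_pfn(unsigned long pfn)
    {
            extern unsigned long zero_pfn;

            return pfn == zero_pfn;
    }
    #endif

With the macro gone, LoongArch falls back to the simpler single-page variant,
which matches the new single-page ZERO_PAGE() definition.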
