Date:   Fri, 20 Nov 2020 13:40:05 -0700
From:   Yu Zhao <yuzhao@...gle.com>
To:     Will Deacon <will@...nel.org>
Cc:     linux-kernel@...r.kernel.org, kernel-team@...roid.com,
        Catalin Marinas <catalin.marinas@....com>,
        Minchan Kim <minchan@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Anshuman Khandual <anshuman.khandual@....com>,
        linux-mm@...ck.org, linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH 6/6] mm: proc: Avoid fullmm flush for young/dirty bit
 toggling

On Fri, Nov 20, 2020 at 02:35:57PM +0000, Will Deacon wrote:
> clear_refs_write() uses the 'fullmm' API for invalidating TLBs after
> updating the page-tables for the current mm. However, since the mm is not
> being freed, this can result in stale TLB entries on architectures which
> elide 'fullmm' invalidation.
> 
> Ensure that TLB invalidation is performed after updating soft-dirty
> entries via clear_refs_write() by using the non-fullmm MMU gather API.
> 
> Signed-off-by: Will Deacon <will@...nel.org>
> ---
>  fs/proc/task_mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index a76d339b5754..316af047f1aa 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1238,7 +1238,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
>  			count = -EINTR;
>  			goto out_mm;
>  		}
> -		tlb_gather_mmu_fullmm(&tlb, mm);
> +		tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE);

Let's assume my reply to patch 4 is wrong, and that we therefore still
need tlb_gather/finish_mmu() here. But then wouldn't this change deprive
architectures other than ARM of the opportunity to optimize based on the
fact that it's a full-mm flush?

It seems to me ARM's interpretation of tlb->fullmm is a special case,
not the other way around.
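
Roughly, my understanding of the arm64 behaviour (a paraphrased sketch
of arch/arm64/include/asm/tlb.h, not the literal code):

static inline void tlb_flush(struct mmu_gather *tlb)
{
	if (tlb->fullmm) {
		/*
		 * arm64 reads fullmm as "the mm is being torn down":
		 * the ASID allocator will invalidate the TLB before
		 * the ASID can be reused, so no TLBI is needed here
		 * beyond cleaning the walk cache when page tables
		 * were freed.
		 */
		if (tlb->freed_tables)
			flush_tlb_mm(tlb->mm);
		return;
	}

	/* Otherwise: ranged invalidation of [tlb->start, tlb->end). */
}

which is exactly why clear_refs_write(), where the mm lives on, gets
stale entries out of the fullmm path.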
