Message-ID: <20190122054400.GB6445@brain-police>
Date:   Tue, 22 Jan 2019 05:44:02 +0000
From:   Will Deacon <will.deacon@....com>
To:     Catalin Marinas <catalin.marinas@....com>
Cc:     chenwandun <chenwandun@...wei.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "Wangkefeng (Kevin)" <wangkefeng.wang@...wei.com>,
        anshuman.khandual@....com
Subject: Re: [Question] Softlockup when sending IPIs to other CPUs

Hi Catalin,

On Mon, Jan 21, 2019 at 02:21:28PM +0000, Catalin Marinas wrote:
> On Sat, Jan 19, 2019 at 11:58:27PM +0000, Will Deacon wrote:
> > On Thu, Jan 17, 2019 at 07:42:44AM +0000, chenwandun wrote:
> > > Recently, I ran some tests on linux-4.19 and hit a softlockup issue.
> > > 
> > > I found that some CPUs take the spinlock in the __split_huge_pmd
> > > function and then send an IPI to the other CPUs and wait for their
> > > response, while several other CPUs have entered the __split_huge_pmd
> > > function, want to take the spinlock, and are stuck in
> > > queued_spin_lock_slowpath.
> > > 
> > > Because there is no response to the IPI for a long time, the result is
> > > a softlockup.
> > > 
> > > The IPI was introduced by commit
> > > 3b8c9f1cdfc506e94e992ae66b68bbe416f89610.  The patch is meant to send
> > > an IPI to each CPU after invalidating the I-cache for kernel mappings.
> > > In this case, after modifying the pmd, it sends an IPI to the other
> > > CPUs to sync the memory mappings.
> > > 
> > > I have no stable test case to reproduce this; it is hard to repeat the
> > > test procedure.
> > > 
> > > The environment is arm64 with 64 CPUs.  Excluding idle CPUs, there are
> > > 6 kinds of callstacks in total.
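
(To make the reported pattern concrete, here's a rough sketch of the two
code paths involved. This is an illustration only, with made-up function
names, not the actual kernel code:)

#include <linux/spinlock.h>
#include <linux/smp.h>

static DEFINE_SPINLOCK(ptl);		/* stand-in for the page-table lock */

/* CPU A: takes the lock, then IPIs every other CPU and waits for acks. */
static void cpu_a_split_huge_pmd(void)
{
	spin_lock(&ptl);		/* pmd lock in __split_huge_pmd() */
	/* ... modify the pmd, then sync the I-cache for the mapping ... */
	kick_all_cpus_sync();		/* IPI all CPUs, wait for a response */
	spin_unlock(&ptl);
}

/*
 * CPUs B..N: contend on the same lock in queued_spin_lock_slowpath(), so
 * CPU A's wait for the IPI acks drags on until the softlockup detector
 * fires.
 */
static void cpu_bn_split_huge_pmd(void)
{
	spin_lock(&ptl);
	spin_unlock(&ptl);
}
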
> > 
> > This looks like another lockup that would be solved if we deferred our
> > I-cache invalidation when mapping user-executable pages, and instead
> > performed the invalidation off the back of a UXN permission fault, where we
> > could avoid holding any locks.
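
(To spell that out: the idea, very roughly, is to leave UXN set when the
executable pte is first installed and only do the maintenance from the
resulting permission fault. A hypothetical sketch, where handle_uxn_fault()
is a made-up name and none of this is actual patch code:)

#include <linux/mm.h>
#include <asm/cacheflush.h>

static int handle_uxn_fault(struct page *page)
{
	void *kaddr = page_address(page);

	/* No page-table lock is held here, so no IPI-under-lock problem. */
	sync_icache_aliases(kaddr, PAGE_SIZE);
	/* ... then clear UXN on the pte and retry the instruction ... */
	return 0;
}
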
> 
> Looking back at commit 3b8c9f1cdfc5 ("arm64: IPI each CPU after
> invalidating the I-cache for kernel mappings"), the text implies that it
> should only do this for kernel mappings. I don't think we need this for
> user mappings. We have a few scenarios where we invoke set_pte_at() with
> exec permission:

Yes, I think you're right. I got confused because in this case we are
invalidating lines written by the kernel, but actually it's not about who
writes the data; it's about whether or not the page table is being changed.

> 1. Page faulted in - the pte was not previously accessible and the CPU
>    should not have stale decoded instructions (my interpretation of the
>    ARM ARM).
> 
> 2. huge pmd splitting - there is no change to the underlying data. I
>    have a suspicion here that we shouldn't even call
>    sync_icache_aliases(), but probably the PG_arch_1 bit isn't carried
>    over from the head compound page to the small pages (needs checking).
> 
> 3. page migration - there is no change to the underlying data and
>    instructions, so I don't think we need the all-CPUs sync.
> 
> 4. mprotect() setting exec permission - here it is normally the user's
>    responsibility to ensure cache maintenance.
> 
> (I can add more text to the patch below but need to get to the bottom of
> this first)
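
Regarding the PG_arch_1 suspicion in (2): for reference, the path in
question is roughly this (quoting from memory, so worth double-checking) in
arch/arm64/mm/flush.c, where PG_dcache_clean is an alias for PG_arch_1:

void __sync_icache_dcache(pte_t pte)
{
	struct page *page = pte_page(pte);

	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
		sync_icache_aliases(page_address(page),
				    PAGE_SIZE << compound_order(page));
}

If the bit isn't propagated to the tail pages on split, each small page
would take this path again even though the data hasn't changed.
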
> 
> ---------8<-------------------------------------------------
> arm64: Do not issue IPIs for user executable ptes
> 
> From: Catalin Marinas <catalin.marinas@....com>
> 
> Commit 3b8c9f1cdfc5 ("arm64: IPI each CPU after invalidating the I-cache
> for kernel mappings") was aimed at fixing the I-cache invalidation for
> kernel mappings. However, it inadvertently caused all cache maintenance
> for user mappings via set_pte_at() -> __sync_icache_dcache() to call
> kick_all_cpus_sync().
> 
> Fixes: 3b8c9f1cdfc5 ("arm64: IPI each CPU after invalidating the I-cache for kernel mappings")
> Cc: <stable@...r.kernel.org> # 4.19.x-
> Signed-off-by: Catalin Marinas <catalin.marinas@....com>
> ---
>  arch/arm64/include/asm/cacheflush.h |    2 +-
>  arch/arm64/kernel/probes/uprobes.c  |    2 +-
>  arch/arm64/mm/flush.c               |   14 ++++++++++----
>  3 files changed, 12 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 19844211a4e6..18e92d9dacd4 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -80,7 +80,7 @@ extern void __clean_dcache_area_poc(void *addr, size_t len);
>  extern void __clean_dcache_area_pop(void *addr, size_t len);
>  extern void __clean_dcache_area_pou(void *addr, size_t len);
>  extern long __flush_cache_user_range(unsigned long start, unsigned long end);
> -extern void sync_icache_aliases(void *kaddr, unsigned long len);
> +extern void sync_icache_aliases(void *kaddr, unsigned long len, bool sync);

I'd much prefer just adding something like sync_user_icache_aliases(), which
would avoid the IPI, since bool arguments tend to make the callsites
unreadable imo.
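
Concretely, I'm thinking of something like this (an untested sketch that
mirrors what sync_icache_aliases() does today, not a real patch):

/* User mappings: D-side maintenance plus I-cache invalidation, no IPI. */
void sync_user_icache_aliases(void *kaddr, unsigned long len)
{
	unsigned long addr = (unsigned long)kaddr;

	if (icache_is_aliasing()) {
		__clean_dcache_area_pou(kaddr, len);
		__flush_icache_all();
	} else {
		/*
		 * __flush_icache_range() does the maintenance without the
		 * kick_all_cpus_sync() that 3b8c9f1cdfc5 added to
		 * flush_icache_range().
		 */
		__flush_icache_range(addr, addr + len);
	}
}
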

With that:

Acked-by: Will Deacon <will.deacon@....com>

Will
