Message-ID: <20190123102104.GF15019@brain-police>
Date:   Wed, 23 Jan 2019 10:21:06 +0000
From:   Will Deacon <will.deacon@....com>
To:     Mark Rutland <mark.rutland@....com>
Cc:     Catalin Marinas <catalin.marinas@....com>,
        chenwandun <chenwandun@...wei.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "Wangkefeng (Kevin)" <wangkefeng.wang@...wei.com>,
        anshuman.khandual@....com
Subject: Re: [Question] Softlockup when sending IPIs to other CPUs

On Tue, Jan 22, 2019 at 02:55:22PM +0000, Mark Rutland wrote:
> On Tue, Jan 22, 2019 at 05:44:02AM +0000, Will Deacon wrote:
> > On Mon, Jan 21, 2019 at 02:21:28PM +0000, Catalin Marinas wrote:
> > > On Sat, Jan 19, 2019 at 11:58:27PM +0000, Will Deacon wrote:
> > > > On Thu, Jan 17, 2019 at 07:42:44AM +0000, chenwandun wrote:
> > > > > Recently, I ran some tests on linux-4.19 and hit a softlockup issue.
> > > > > 
> > > > > I found that some CPUs take the spinlock in the __split_huge_pmd
> > > > > function and then send an IPI to the other CPUs and wait for their
> > > > > response, while several other CPUs have entered __split_huge_pmd and
> > > > > want the same spinlock but are stuck in queued_spin_lock_slowpath.
> > > > > 
> > > > > Because there is no response to the IPI for a long time, the result
> > > > > is a softlockup.
> > > > > 
> > > > > The IPI is sent because of commit
> > > > > 3b8c9f1cdfc506e94e992ae66b68bbe416f89610.  That patch is meant to
> > > > > send an IPI to each CPU after invalidating the I-cache for kernel
> > > > > mappings.  In this case, after modifying the pmd, it sends an IPI
> > > > > to the other CPUs to synchronise the memory mappings.
> > > > > 
> > > > > There is no stable test case that reproduces the issue; the test
> > > > > procedure is hard to repeat.
> > > > > 
> > > > > The environment is arm64 with 64 CPUs.  Excluding the idle CPUs,
> > > > > there are 6 kinds of callstacks in total.
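
To make the shape of the problem concrete, here is a heavily simplified
sketch of the call pattern described above (pseudo-C based on the 4.19
sources; abridged, not a verbatim excerpt):

    /* CPU A: takes the page-table lock, then IPIs everyone. */
    void __split_huge_pmd(...)
    {
        spinlock_t *ptl = pmd_lock(vma->vm_mm, pmd);

        /*
         * Rewriting an executable mapping eventually reaches
         * __sync_icache_dcache(), which since commit 3b8c9f1cdfc5
         * ends in an IPI broadcast via sync_icache_aliases():
         */
        kick_all_cpus_sync();   /* waits for every other CPU to ack,
                                 * with ptl still held */
        spin_unlock(ptl);
    }

    /* CPUs B..N meanwhile contend on the same lock and sit in
     * queued_spin_lock_slowpath() until CPU A releases it; if any of
     * them cannot service the IPI, neither side makes progress and
     * the softlockup detector fires. */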
> > > > 
> > > > This looks like another lockup that would be solved if we deferred our
> > > > I-cache invalidation when mapping user-executable pages, and instead
> > > > performed the invalidation off the back of a UXN permission fault, where we
> > > > could avoid holding any locks.
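
A minimal sketch of that idea (hypothetical, not mainline code): skip
the broadcast when writing the pte and instead do the maintenance from
the instruction-abort handler, where no page-table locks are held.  The
faulting_page() helper below is made up for illustration:

    /* Hypothetical arm64 exec-fault hook, for illustration only. */
    static int handle_uxn_exec_fault(unsigned long addr,
                                     struct pt_regs *regs)
    {
        struct page *page = faulting_page(addr);  /* made-up helper */

        /*
         * Safe to do here: no pmd/pte spinlock is held, so the other
         * CPUs keep taking IPIs and making progress.
         */
        sync_icache_aliases(page_address(page), PAGE_SIZE);

        /*
         * The exception return back to userspace is itself a context
         * synchronisation event, so no IPI broadcast is needed.
         */
        return 0;
    }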
> > > 
> > > Looking back at commit 3b8c9f1cdfc5 ("arm64: IPI each CPU after
> > > invalidating the I-cache for kernel mappings"), the text implies that it
> > > should only do this for kernel mappings. I don't think we need this for
> > > user mappings. We have a few scenarios where we invoke set_pte_at() with
> > > exec permission:
> > 
> > Yes, I think you're right. I got confused because in this case we are
> > invalidating lines written by the kernel, but actually it's not about who
> > writes the data, but about whether or not the page table is being changed.
> 
> IIUC we may have a userspace problem analogous to the kernel modules
> problem, if userspace uses dlopen/dlclose to dynamically load/unload
> shared objects.
> 
> If userspace unloads an object, then loads another, the new object might
> get placed at the same VA. A PE could have started speculating
> instructions from the old object, and IIUC the TLB invalidation and
> I-cache maintenance don't cause those instructions to be re-fetched from
> the I-cache unless there's a context synchronization event.
> 
> Do we require the use of membarrier when loading or unloading objects?
> If so, when does that happen relative to the unmap or map?
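
For reference, the core-serialising flavour of membarrier(2),
MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE (added in Linux 4.16), is
intended for exactly this case.  A minimal userspace sketch, with
"libfoo.so" as a made-up example object:

    #include <linux/membarrier.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <dlfcn.h>
    #include <stdio.h>

    /* No glibc wrapper exists, so go through syscall(2) directly. */
    static int membarrier(int cmd, unsigned int flags)
    {
        return syscall(__NR_membarrier, cmd, flags);
    }

    int main(void)
    {
        /* Register once before the expedited command may be used. */
        if (membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE, 0))
            perror("membarrier register");

        void *h = dlopen("./libfoo.so", RTLD_NOW);
        if (!h)
            return 1;

        /*
         * Force a context synchronisation event (an ISB on arm64) on
         * every thread of this process before any of them may execute
         * from the freshly-mapped object.
         */
        if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, 0))
            perror("membarrier sync-core");

        /* ... resolve and call symbols from h ... */
        dlclose(h);
        return 0;
    }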

membarrier seems a bit OTT for this. Presumably userspace is already having
to synchronise threads in this case so that (a) nobody is executing the old
object when it is unloaded and (b) nobody tries to execute the new
object until it has been successfully loaded. Squeezing in an ISB shouldn't
be too tricky, although I don't know whether it's actually done (especially
since the chance of this going wrong is so tiny).
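
A sketch of that ISB approach, assuming each thread that might run code
from the reused VA range can be made to call a small hook before
jumping into the new object:

    /*
     * arm64 only: force a context synchronisation event on the calling
     * thread, discarding any instructions speculatively fetched from a
     * previous mapping of the same VA range.
     */
    static inline void context_sync(void)
    {
    #ifdef __aarch64__
        __asm__ volatile("isb" ::: "memory");
    #endif
    }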

Will
