Message-ID: <20230221124355.GA1437176@quicinc.com>
Date:   Tue, 21 Feb 2023 18:13:55 +0530
From:   Srivatsa Vaddagiri <quic_svaddagi@...cinc.com>
To:     Srinivas Kandagatla <srinivas.kandagatla@...aro.org>
CC:     Elliot Berman <quic_eberman@...cinc.com>,
        Alex Elder <elder@...aro.org>,
        Prakruthi Deepak Heragu <quic_pheragu@...cinc.com>,
        Murali Nalajala <quic_mnalajal@...cinc.com>,
        Trilok Soni <quic_tsoni@...cinc.com>,
        "Carl van Schaik" <quic_cvanscha@...cinc.com>,
        Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
        Bjorn Andersson <andersson@...nel.org>,
        "Konrad Dybcio" <konrad.dybcio@...aro.org>,
        Arnd Bergmann <arnd@...db.de>,
        "Greg Kroah-Hartman" <gregkh@...uxfoundation.org>,
        Rob Herring <robh+dt@...nel.org>,
        Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>,
        Jonathan Corbet <corbet@....net>,
        Bagas Sanjaya <bagasdotme@...il.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Jassi Brar <jassisinghbrar@...il.com>,
        <linux-arm-msm@...r.kernel.org>, <devicetree@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <linux-doc@...r.kernel.org>,
        <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH v10 12/26] gunyah: vm_mgr: Add/remove user memory regions

* Srinivas Kandagatla <srinivas.kandagatla@...aro.org> [2023-02-21 12:28:53]:

> > +void gh_vm_mem_reclaim(struct gh_vm *ghvm, struct gh_vm_mem *mapping)
> > +	__must_hold(&ghvm->mm_lock)
> > +{
> > +	int i, ret = 0;
> > +
> > +	if (mapping->parcel.mem_handle != GH_MEM_HANDLE_INVAL) {
> > +		ret = gh_rm_mem_reclaim(ghvm->rm, &mapping->parcel);
> > +		if (ret)
> > +			pr_warn("Failed to reclaim memory parcel for label %d: %d\n",
> > +				mapping->parcel.label, ret);
> 
> What is the behaviour of the hypervisor if we fail to reclaim the pages?
> 
> > +	}
> > +
> > +	if (!ret)
> So we will leave the user pages pinned if the hypervisor call fails, but further
> down we free the mapping altogether.

I think we should clean up and bail out here rather than continue past the
error. For example, imagine userspace were to reclaim while the VM is still
running. AFAICS we would leave the pages pinned (even after the VM terminates
later) and also not return any error to userspace indicating the failure to
reclaim.
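Roughly something like the sketch below, i.e. change the return type to int and
propagate the gh_rm_mem_reclaim() error instead of falling through to the unpin
and free. This is only an illustration based on the snippet quoted above; the
unpin/free step shown in the comment stands in for whatever the patch actually
does further down, which I have not reproduced here.

int gh_vm_mem_reclaim(struct gh_vm *ghvm, struct gh_vm_mem *mapping)
	__must_hold(&ghvm->mm_lock)
{
	int ret;

	if (mapping->parcel.mem_handle != GH_MEM_HANDLE_INVAL) {
		ret = gh_rm_mem_reclaim(ghvm->rm, &mapping->parcel);
		if (ret) {
			pr_warn("Failed to reclaim memory parcel for label %d: %d\n",
				mapping->parcel.label, ret);
			/*
			 * Bail out: the parcel is still owned by the VM, so
			 * don't unpin the user pages or free the mapping.
			 * Let the caller report the error to userspace.
			 */
			return ret;
		}
	}

	/* ... unpin pages and free the mapping only after reclaim succeeded ... */

	return 0;
}

The caller (the ioctl path) would then have a chance to surface the error
rather than silently dropping the mapping with the pages still pinned.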
