Message-ID: <7d73182d-9662-196a-2831-6ab50fb08040@quicinc.com>
Date:   Mon, 6 Feb 2023 13:30:30 -0800
From:   Elliot Berman <quic_eberman@...cinc.com>
To:     Srivatsa Vaddagiri <quic_svaddagi@...cinc.com>
CC:     Bjorn Andersson <quic_bjorande@...cinc.com>,
        Alex Elder <elder@...aro.org>,
        Murali Nalajala <quic_mnalajal@...cinc.com>,
        Trilok Soni <quic_tsoni@...cinc.com>,
        Carl van Schaik <quic_cvanscha@...cinc.com>,
        Prakruthi Deepak Heragu <quic_pheragu@...cinc.com>,
        Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
        Arnd Bergmann <arnd@...db.de>,
        "Greg Kroah-Hartman" <gregkh@...uxfoundation.org>,
        Rob Herring <robh+dt@...nel.org>,
        Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>,
        Jonathan Corbet <corbet@....net>,
        Bagas Sanjaya <bagasdotme@...il.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>, Marc Zyngier <maz@...nel.org>,
        Jassi Brar <jassisinghbrar@...il.com>,
        Sudeep Holla <sudeep.holla@....com>,
        <linux-arm-msm@...r.kernel.org>, <devicetree@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <linux-doc@...r.kernel.org>,
        <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH v9 22/27] virt: gunyah: Add resource tickets



On 2/6/2023 1:50 AM, Srivatsa Vaddagiri wrote:
> * Elliot Berman <quic_eberman@...cinc.com> [2023-01-20 14:46:21]:
> 
>> +int ghvm_add_resource_ticket(struct gunyah_vm *ghvm, struct gunyah_vm_resource_ticket *ticket)
>> +{
>> +	struct gunyah_vm_resource_ticket *iter;
>> +	struct gunyah_resource *ghrsc;
>> +	int ret = 0;
>> +
>> +	mutex_lock(&ghvm->resources_lock);
>> +	list_for_each_entry(iter, &ghvm->resource_tickets, list) {
>> +		if (iter->resource_type == ticket->resource_type && iter->label == ticket->label) {
>> +			ret = -EEXIST;
>> +			goto out;
>> +		}
>> +	}
>> +
>> +	if (!try_module_get(ticket->owner)) {
>> +		ret = -ENODEV;
>> +		goto out;
>> +	}
>> +
>> +	list_add(&ticket->list, &ghvm->resource_tickets);
>> +	INIT_LIST_HEAD(&ticket->resources);
>> +
>> +	list_for_each_entry(ghrsc, &ghvm->resources, list) {
>> +		if (ghrsc->type == ticket->resource_type && ghrsc->rm_label == ticket->label) {
>> +			if (!ticket->populate(ticket, ghrsc))
>> +				list_move(&ghrsc->list, &ticket->resources);
> 
> Do we need the search to continue in case of a hit? 'gh_vm_add_resource' seems to
> break loop on first occurrence.
> 
> Also do we have examples of more than one 'gunyah_resource' being associated
> with same 'gunyah_vm_resource_ticket'?  Both vcpu and irqfd tickets seem to deal
> with just one resource?
> 

I'll mention this in the commit text as well.

Resources are created by Gunyah according to the VM's devicetree 
configuration. Gunyah doesn't process the label, which makes it 
possible for userspace to create multiple resources with the same 
label. The kernel needs to be prepared for that to happen. IMO, this 
isn't a framework issue, so I've chosen the policy to be 
"many-to-one": a resource ticket can bind to many resources, but a 
resource is bound to only one ticket. If a resource ticket handler 
isn't designed to accept multiple resources, it can skip/ignore any 
further populate callbacks.

>>   static int gh_vm_start(struct gunyah_vm *ghvm)
>>   {
>>   	struct gunyah_vm_memory_mapping *mapping;
>>   	u64 dtb_offset;
>>   	u32 mem_handle;
>> -	int ret;
>> +	struct gunyah_resource *ghrsc;
>> +	struct gh_rm_hyp_resources *resources;
>> +	int ret, i;
>>   
>>   	down_write(&ghvm->status_lock);
>>   	if (ghvm->vm_status != GH_RM_VM_STATUS_NO_STATE) {
>> @@ -241,6 +314,22 @@ static int gh_vm_start(struct gunyah_vm *ghvm)
>>   		goto err;
>>   	}
>>   
>> +	ret = gh_rm_get_hyp_resources(ghvm->rm, ghvm->vmid, &resources);
>> +	if (ret) {
>> +		pr_warn("Failed to get hypervisor resources for VM: %d\n", ret);
>> +		goto err;
>> +	}
>> +
>> +	for (i = 0; i < le32_to_cpu(resources->n_entries); i++) {
> 
> minor nit: not sure if we can rely on compiler to optimize this, but it would
> be better if we run le32_to_cpu once and use the result in loop.
> 

Done.

>> +		ghrsc = gh_rm_alloc_resource(ghvm->rm, &resources->entries[i]);
>> +		if (!ghrsc) {
>> +			ret = -ENOMEM;
>> +			goto err;
>> +		}
>> +
>> +		gh_vm_add_resource(ghvm, ghrsc);
> 
> Shouldn't we have gh_vm_add_resource()->  ticket->populate() return a result and
> in case of failure we should bail out from this loop?
> 

I'm hesitant to treat a resource ticket rejecting a resource as a 
bail condition.

Userspace can already detect when functions didn't get set up, and I 
wanted to avoid adding further complexity to kernel drivers.

>> +	}
>> +
>>   	ret = gh_rm_vm_start(ghvm->rm, ghvm->vmid);
>>   	if (ret) {
>>   		pr_warn("Failed to start VM: %d\n", ret);
