Message-ID: <ALIAKgDoCKmzQ6k5LYOILapX.3.1587608811084.Hmail.wenhu.wang@vivo.com>
Date:   Thu, 23 Apr 2020 10:26:51 +0800 (GMT+08:00)
From:   王文虎 <wenhu.wang@...o.com>
To:     王文虎 <wenhu.wang@...o.com>
Cc:     Scott Wood <oss@...error.net>, gregkh@...uxfoundation.org,
        arnd@...db.de, linux-kernel@...r.kernel.org,
        linuxppc-dev@...ts.ozlabs.org, kernel@...o.com, robh@...nel.org,
        Christophe Leroy <christophe.leroy@....fr>,
        Michael Ellerman <mpe@...erman.id.au>,
        Randy Dunlap <rdunlap@...radead.org>
Subject: Re:Re: [PATCH v2,RESEND] misc: new driver sram_uapi for user level SRAM access

>Hi, Scott, Greg,
>
>Thank you for your helpful comments.
>Greg mentioned that the patch (or patch series) via UIO should work, so I want to
>make sure whether it will go upstream. (And if so, when? No push, just asking.)
>
>Also, I have been wondering how patches with components in different subsystems
>get upstream to the mainline. For example, patches 1-3 belong to linuxppc-dev and
>patch 4 to the UIO subsystem; if they are acceptable, how would you handle them?
>
>Back to the devicetree issue: I detached it from the hardware compatible strings, which
>belong to the hardware-level driver, and used a module parameter for the of_id definition,
>since dt-bindings are not allowed for UIO now. As far as I can see, this should work without
>harming anything, so I hope you (Scott) will reconsider.
>

What I mean is that I have done some new work based on the comments from Arnd, Scott,
and Greg, along with a lot of testing. So it would be better to clarify whether I should
keep that work going, or whether the UIO version will be accepted upstream in the near future.

Thanks & regards,
Wenhu
>
>>On Sun, 2020-04-19 at 20:05 -0700, Wang Wenhu wrote:
>>> +static void sram_uapi_res_insert(struct sram_uapi *uapi,
>>> +				 struct sram_resource *res)
>>> +{
>>> +	struct sram_resource *cur, *tmp;
>>> +	struct list_head *head = &uapi->res_list;
>>> +
>>> +	list_for_each_entry_safe(cur, tmp, head, list) {
>>> +		if (&tmp->list != head &&
>>> +		    (cur->info.offset + cur->info.size + res->info.size <=
>>> +		    tmp->info.offset)) {
>>> +			res->info.offset = cur->info.offset + cur->info.size;
>>> +			res->parent = uapi;
>>> +			list_add(&res->list, &cur->list);
>>> +			return;
>>> +		}
>>> +	}
>>
>>We don't need yet another open coded allocator.  If you really need to do this
>>then use include/linux/genalloc.h, but maybe keep it simple and just have one
>>allocaton per file descriptor so you don't need to manage fd offsets?
>>
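For reference, a rough and untested sketch of what the genalloc route might
look like; sram_virt, sram_phys and sram_size are hypothetical names for the
backing region:

#include <linux/genalloc.h>

static struct gen_pool *sram_pool_init(void *sram_virt, phys_addr_t sram_phys,
				       size_t sram_size)
{
	struct gen_pool *pool;

	/* page-sized allocation granularity, no NUMA node preference */
	pool = gen_pool_create(PAGE_SHIFT, -1);
	if (!pool)
		return NULL;

	/* register the backing SRAM region with the pool */
	if (gen_pool_add_virt(pool, (unsigned long)sram_virt, sram_phys,
			      sram_size, -1)) {
		gen_pool_destroy(pool);
		return NULL;
	}

	return pool;
}

Allocation and free then reduce to gen_pool_alloc(pool, size) (which returns
0 on failure), gen_pool_virt_to_phys(pool, addr) for the mmap side, and
gen_pool_free(pool, addr, size), with no fd offset bookkeeping of our own.
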
>>> +static struct sram_resource *sram_uapi_find_res(struct sram_uapi *uapi,
>>> +						__u32 offset)
>>> +{
>>> +	struct sram_resource *res;
>>> +
>>> +	list_for_each_entry(res, &uapi->res_list, list) {
>>> +		if (res->info.offset == offset)
>>> +			return res;
>>> +	}
>>> +
>>> +	return NULL;
>>> +}
>>
>>What if the allocation is more than one page, and the user mmaps starting
>>somewhere other than the first page?
>>
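If mapping at a non-zero page offset into an allocation should be allowed,
the lookup would have to match a range instead of an exact offset. A rough
sketch, assuming info.offset is in bytes while vm_pgoff is in pages:

static struct sram_resource *sram_uapi_find_res(struct sram_uapi *uapi,
						unsigned long pgoff)
{
	struct sram_resource *res;

	list_for_each_entry(res, &uapi->res_list, list) {
		unsigned long first = res->info.offset >> PAGE_SHIFT;
		unsigned long last = (res->info.offset + res->info.size - 1)
				     >> PAGE_SHIFT;

		/* match any page that falls inside this allocation */
		if (pgoff >= first && pgoff <= last)
			return res;
	}

	return NULL;
}
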
>>> +	switch (cmd) {
>>> +	case SRAM_UAPI_IOC_SET_SRAM_TYPE:
>>> +		if (uapi->sa)
>>> +			return -EEXIST;
>>> +
>>> +		get_user(type, (const __u32 __user *)arg);
>>> +		uapi->sa = get_sram_api_from_type(type);
>>> +		if (uapi->sa)
>>> +			ret = 0;
>>> +		else
>>> +			ret = -ENODEV;
>>> +
>>> +		break;
>>> +
>>
>>Just expose one device per backing SRAM, especially if the user has any reason
>>to care about where the SRAM is coming from (correlating sysfs nodes is much
>>more expressive than some vague notion of "type").
>>
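One possible shape for that, sketched and untested; the miscdev and name
fields on struct sram_api are assumptions here:

#include <linux/miscdevice.h>

/* register one character device per backing SRAM, e.g. /dev/sram-mpc85xx,
 * instead of multiplexing backends behind a "set type" ioctl
 */
static int sram_backend_register(struct sram_api *sa)
{
	sa->miscdev.minor = MISC_DYNAMIC_MINOR;
	sa->miscdev.name  = sa->name;
	sa->miscdev.fops  = &sram_uapi_fops;

	return misc_register(&sa->miscdev);
}
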
>>> +	case SRAM_UAPI_IOC_ALLOC:
>>> +		if (!uapi->sa)
>>> +			return -EINVAL;
>>> +
>>> +		res = kzalloc(sizeof(*res), GFP_KERNEL);
>>> +		if (!res)
>>> +			return -ENOMEM;
>>> +
>>> +		size = copy_from_user((void *)&res->info,
>>> +				      (const void __user *)arg,
>>> +				      sizeof(res->info));
>>> +		if (!PAGE_ALIGNED(res->info.size) || !res->info.size)
>>> +			return -EINVAL;
>>
>>Missing EFAULT test (here and elsewhere), and res leaks on error.
>>
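For illustration, the checks here could look like this: copy_from_user()
returns the number of bytes it could not copy, so any nonzero result means
-EFAULT, and res must be freed on every early exit:

	if (copy_from_user(&res->info, (const void __user *)arg,
			   sizeof(res->info))) {
		kfree(res);
		return -EFAULT;
	}

	if (!PAGE_ALIGNED(res->info.size) || !res->info.size) {
		kfree(res);
		return -EINVAL;
	}
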
>>> +
>>> +		res->virt = (void *)uapi->sa->sram_alloc(res->info.size,
>>> +							 &res->phys,
>>> +							 PAGE_SIZE);
>>
>>Do we really need multiple allocators, or could the backend be limited to just
>>adding regions to a generic allocator (with that allocator also serving in-
>>kernel users)?
>>
>>If sram_alloc is supposed to return a virtual address, why isn't that the
>>return type?
>>
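If the op returned the virtual address directly, the cast would disappear;
a hypothetical signature and call site:

	/* backend op returning the mapped virtual address directly */
	void *(*sram_alloc)(size_t size, phys_addr_t *phys, size_t align);

	res->virt = uapi->sa->sram_alloc(res->info.size, &res->phys,
					 PAGE_SIZE);
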
>>> +		if (!res->virt) {
>>> +			kfree(res);
>>> +			return -ENOMEM;
>>> +		}
>>
>>ENOSPC might be more appropriate, as this isn't general-purpose RAM.
>>
>>> +
>>> +		sram_uapi_res_insert(uapi, res);
>>> +		size = copy_to_user((void __user *)arg,
>>> +				    (const void *)&res->info,
>>> +				    sizeof(res->info));
>>> +
>>> +		ret = 0;
>>> +		break;
>>> +
>>> +	case SRAM_UAPI_IOC_FREE:
>>> +		if (!uapi->sa)
>>> +			return -EINVAL;
>>> +
>>> +		size = copy_from_user((void *)&info, (const void __user *)arg,
>>> +				      sizeof(info));
>>> +
>>> +		res = sram_uapi_res_delete(uapi, &info);
>>> +		if (!res) {
>>> +			pr_err("error no sram resource found\n");
>>> +			return -EINVAL;
>>> +		}
>>> +
>>> +		uapi->sa->sram_free(res->virt);
>>> +		kfree(res);
>>> +
>>> +		ret = 0;
>>> +		break;
>>
>>So you can just delete any arbitrary offset, even if you weren't the one that
>>allocated it?  Even if this isn't meant for unprivileged use it seems error-
>>prone.  
>>
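A stricter match, requiring the exact (offset, size) pair handed back by
SRAM_UAPI_IOC_ALLOC, would at least make accidental frees harder. A sketch of
what sram_uapi_res_delete could check (struct sram_res_info is assumed to be
the type of res->info):

static struct sram_resource *sram_uapi_res_delete(struct sram_uapi *uapi,
						  struct sram_res_info *info)
{
	struct sram_resource *res, *tmp;

	list_for_each_entry_safe(res, tmp, &uapi->res_list, list) {
		/* free only on an exact (offset, size) match */
		if (res->info.offset == info->offset &&
		    res->info.size == info->size) {
			list_del(&res->list);
			return res;
		}
	}

	return NULL;
}
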
>>> +
>>> +	default:
>>> +		pr_err("error no cmd not supported\n");
>>> +		break;
>>> +	}
>>> +
>>> +	return ret;
>>> +}
>>> +
>>> +static int sram_uapi_mmap(struct file *filp, struct vm_area_struct *vma)
>>> +{
>>> +	struct sram_uapi *uapi = filp->private_data;
>>> +	struct sram_resource *res;
>>> +
>>> +	res = sram_uapi_find_res(uapi, vma->vm_pgoff);
>>> +	if (!res)
>>> +		return -EINVAL;
>>> +
>>> +	if (vma->vm_end - vma->vm_start > res->info.size)
>>> +		return -EINVAL;
>>> +
>>> +	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>>> +
>>> +	return remap_pfn_range(vma, vma->vm_start,
>>> +			       res->phys >> PAGE_SHIFT,
>>> +			       vma->vm_end - vma->vm_start,
>>> +			       vma->vm_page_prot);
>>> +}
>>
>>Will noncached always be what's wanted here?
>>
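If not always, a per-backend flag could let each SRAM declare whether a
cacheable mapping is safe; the cacheable field here is an assumption:

	/* only force noncached when the backend asks for it */
	if (!uapi->sa->cacheable)
		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
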
>>-Scott
>>
>>
>
>

