Date:   Mon, 20 May 2019 17:41:56 +0200
From:   Fredrik Noring <noring@...rew.org>
To:     Laurentiu Tudor <laurentiu.tudor@....com>
Cc:     Robin Murphy <robin.murphy@....com>, "hch@....de" <hch@....de>,
        "stern@...land.harvard.edu" <stern@...land.harvard.edu>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        "linux-usb@...r.kernel.org" <linux-usb@...r.kernel.org>,
        "marex@...x.de" <marex@...x.de>,
        "JuergenUrban@....de" <JuergenUrban@....de>,
        Leo Li <leoyang.li@....com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v2 0/3] prerequisites for device reserved local mem
 rework

Hi Laurentiu,

> Wow, that's excellent news! Thanks a lot for looking into this.

You are welcome!

> Are you ok if I add your Signed-Off-by and maybe also Tested-by in the 
> first patch of the series?

Yes, but I have two comments:

1. ohci_mem_init() allocates two DMA pools that are no longer relevant, so
   it seems appropriate to assign NULL to ohci->td_cache and ohci->ed_cache,
   and to document this exception in struct ohci_hcd (see the sketch after
   this list). Unless something more elegant can be done, of course.

2. A device address is supplied as the phys_addr_t phys argument to
   gen_pool_add_virt(). This seems to work in this particular DMA
   application, but it will break if someone calls phys_to_virt() or
   similar on it. Can this be improved, or at least clearly documented?
   gen_pool_virt_to_phys(), for example, looks the address up in the
   registered chunk ranges rather than converting it arithmetically, so
   the implementation does appear to treat phys with some care.
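
For 1, the idea is roughly the following at the top of ohci_mem_init(),
assuming there is some way to tell that device-local memory is in use (the
localmem_pool field below is only an illustrative name, nothing that exists
today):

	if (ohci_to_hcd(ohci)->localmem_pool) {
		/*
		 * With device-local memory the DMA pools are never used,
		 * so leave them NULL and note the exception in struct
		 * ohci_hcd.
		 */
		ohci->td_cache = NULL;
		ohci->ed_cache = NULL;
		return 0;
	}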

> As a side note, I plan to add a new HCD function and name it something 
> like hcd_setup_local_mem() that would take care of setting up the 
> genalloc for drivers.

Good! Then I suppose the HCD_LOCAL_MEM assignment can be removed from the
drivers too? Like this one:

	ohci_ps2_hc_driver.flags |= HCD_LOCAL_MEM;
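
For what it's worth, here is one rough shape such a helper could take. This
is purely a sketch on my part; the localmem_pool field and the exact
parameters are guesses rather than anything settled:

	int hcd_setup_local_mem(struct usb_hcd *hcd, phys_addr_t phys_addr,
				dma_addr_t dma, size_t size)
	{
		void *local_mem;

		/* One pool per HCD, released together with the device. */
		hcd->localmem_pool = devm_gen_pool_create(hcd->self.sysdev, 4,
				dev_to_node(hcd->self.sysdev),
				dev_name(hcd->self.sysdev));
		if (IS_ERR(hcd->localmem_pool))
			return PTR_ERR(hcd->localmem_pool);

		local_mem = devm_memremap(hcd->self.sysdev, phys_addr,
				size, MEMREMAP_WC);
		if (IS_ERR(local_mem))
			return PTR_ERR(local_mem);

		/*
		 * Note that the device address ends up as the phys argument
		 * of gen_pool_add_virt() here, which is exactly point 2 above.
		 */
		return gen_pool_add_virt(hcd->localmem_pool,
				(unsigned long)local_mem, dma, size,
				dev_to_node(hcd->self.sysdev));
	}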

Regarding the previous HCD IRQ question, lib/genalloc.c says that

	It is safe to use the allocator in NMI handlers and other special
	unblockable contexts that could otherwise deadlock on locks.  This
	is implemented by using atomic operations and retries on any
	conflicts.  The disadvantage is that there may be livelocks in
	extreme cases.  For better scalability, one allocator can be used
	for each CPU.

so it appears to be good enough for USB purposes.
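
Concretely, an allocation like the one below should be fine even with
ohci->lock held or in the interrupt path, since gen_pool_alloc() neither
sleeps nor takes locks (again, localmem_pool is only an assumed field name):

	struct td *td;
	dma_addr_t td_dma;
	unsigned long flags;

	spin_lock_irqsave(&ohci->lock, flags);
	td = gen_pool_dma_alloc(ohci_to_hcd(ohci)->localmem_pool,
			sizeof(*td), &td_dma);
	spin_unlock_irqrestore(&ohci->lock, flags);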

> Yes, I think it would make sense to put the new API in a distinct patch. 
> I think we can either include it in the next version of the patch series 
> or you can submit separately and I'll mention it as dependency for this 
> patch series, however you prefer.

Please find the patch below and, if possible, include it in your patch series.

Fredrik

lib/genalloc.c: Add gen_pool_dma_zalloc() for zeroed DMA allocations

gen_pool_dma_zalloc() is a zeroed memory variant of gen_pool_dma_alloc().
Document return values of both, and indicate NULL as a "%NULL" constant.

Signed-off-by: Fredrik Noring <noring@...rew.org>

--- a/include/linux/genalloc.h
+++ b/include/linux/genalloc.h
@@ -121,6 +121,8 @@ extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t,
 		genpool_algo_t algo, void *data);
 extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size,
 		dma_addr_t *dma);
+extern void *gen_pool_dma_zalloc(struct gen_pool *pool, size_t size,
+		dma_addr_t *dma);
 extern void gen_pool_free(struct gen_pool *, unsigned long, size_t);
 extern void gen_pool_for_each_chunk(struct gen_pool *,
 	void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *);
--- a/lib/genalloc.c
+++ b/lib/genalloc.c
@@ -337,12 +337,14 @@ EXPORT_SYMBOL(gen_pool_alloc_algo);
  * gen_pool_dma_alloc - allocate special memory from the pool for DMA usage
  * @pool: pool to allocate from
  * @size: number of bytes to allocate from the pool
- * @dma: dma-view physical address return value.  Use NULL if unneeded.
+ * @dma: dma-view physical address return value.  Use %NULL if unneeded.
  *
  * Allocate the requested number of bytes from the specified pool.
  * Uses the pool allocation function (with first-fit algorithm by default).
  * Can not be used in NMI handler on architectures without
  * NMI-safe cmpxchg implementation.
+ *
+ * Return: virtual address of the allocated memory, or %NULL on failure
  */
 void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, dma_addr_t *dma)
 {
@@ -362,6 +364,30 @@ void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, dma_addr_t *dma)
 }
 EXPORT_SYMBOL(gen_pool_dma_alloc);
 
+/**
+ * gen_pool_dma_zalloc - allocate special zeroed memory from the pool for DMA usage
+ * @pool: pool to allocate from
+ * @size: number of bytes to allocate from the pool
+ * @dma: dma-view physical address return value.  Use %NULL if unneeded.
+ *
+ * Allocate the requested number of zeroed bytes from the specified pool.
+ * Uses the pool allocation function (with first-fit algorithm by default).
+ * Can not be used in NMI handler on architectures without
+ * NMI-safe cmpxchg implementation.
+ *
+ * Return: virtual address of the allocated zeroed memory, or %NULL on failure
+ */
+void *gen_pool_dma_zalloc(struct gen_pool *pool, size_t size, dma_addr_t *dma)
+{
+	void *vaddr = gen_pool_dma_alloc(pool, size, dma);
+
+	if (vaddr)
+		memset(vaddr, 0, size);
+
+	return vaddr;
+}
+EXPORT_SYMBOL(gen_pool_dma_zalloc);
+
 /**
  * gen_pool_free - free allocated special memory back to the pool
  * @pool: pool to free to
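
As for how the new helper would then be used, I imagine something along
these lines in td_alloc() and ed_alloc(), keeping the DMA pools as the
fallback for drivers without local memory (the localmem_pool field is once
more only an assumed name):

	/* Zeroed allocation from device-local memory, if available. */
	if (ohci_to_hcd(hc)->localmem_pool)
		td = gen_pool_dma_zalloc(ohci_to_hcd(hc)->localmem_pool,
				sizeof(*td), &dma);
	else
		td = dma_pool_zalloc(hc->td_cache, mem_flags, &dma);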
