Message-ID: <1354611218.3410.11.camel@pizza.hi.pengutronix.de>
Date:	Tue, 04 Dec 2012 09:53:38 +0100
From:	Philipp Zabel <p.zabel@...gutronix.de>
To:	linux-kernel@...r.kernel.org, Arnd Bergmann <arnd@...db.de>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc:	Grant Likely <grant.likely@...retlab.ca>,
	Rob Herring <rob.herring@...xeda.com>,
	Paul Gortmaker <paul.gortmaker@...driver.com>,
	Shawn Guo <shawn.guo@...aro.org>,
	Richard Zhao <richard.zhao@...escale.com>,
	Huang Shijie <shijie8@...il.com>,
	Dong Aisheng <dong.aisheng@...aro.org>,
	Matt Porter <mporter@...com>,
	Fabio Estevam <fabio.estevam@...escale.com>,
	Javier Martin <javier.martin@...ta-silicon.com>,
	kernel@...gutronix.de, devicetree-discuss@...ts.ozlabs.org
Subject: Re: [PATCH v7 0/4] Add generic driver for on-chip SRAM

Hi,

On Fri, 2012-11-23 at 15:24 +0100, Philipp Zabel wrote:
> These patches add support for configuring on-chip SRAM via a device-tree
> node or platform data and to obtain the resulting genalloc pool from
> the physical address or a phandle pointing at the device tree node.
> This allows drivers to allocate SRAM with the genalloc API without
> hard-coding the genalloc pool pointer.

Are there any further comments on this series?

> The on-chip SRAM on i.MX53 and i.MX6q can be registered via device tree
> and changed to use the simple generic SRAM driver:
> 
>                 ocram: ocram@00900000 {
>                         compatible = "fsl,imx-ocram", "sram";
>                         reg = <0x00900000 0x3f000>;
>                 };
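> 
> At probe time, the generic sram driver essentially ioremaps the "reg"
> window and publishes it as a genalloc pool. A simplified sketch of the
> idea (not the literal patch code; virt_base is the ioremapped base and
> res the struct resource):
> 
>                 struct gen_pool *pool;
> 
>                 /* 32-byte granularity: ilog2(32) == 5 */
>                 pool = gen_pool_create(ilog2(32), -1);
>                 gen_pool_add_virt(pool, (unsigned long)virt_base,
>                                   res->start, resource_size(res), -1);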
> 
> A driver that needs to allocate SRAM buffers, like the video processing
> unit on i.MX53, can retrieve the genalloc pool from a phandle in the
> device tree using of_get_named_gen_pool(node, "iram", 0) from patch 1:
> 
>                 vpu@63ff4000 {
>                         /* ... */
>                         iram = <&ocram>;
>                 };
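> 
> In the driver, this then boils down to something like the following
> (sketch only, error handling omitted; dev, size and the iram_* names
> are just placeholders):
> 
>                 struct gen_pool *iram_pool;
>                 unsigned long iram_vaddr;
>                 phys_addr_t iram_paddr;
> 
>                 /* look up the pool behind the "iram" phandle */
>                 iram_pool = of_get_named_gen_pool(dev->of_node, "iram", 0);
>                 /* carve a buffer out of the on-chip SRAM */
>                 iram_vaddr = gen_pool_alloc(iram_pool, size);
>                 iram_paddr = gen_pool_virt_to_phys(iram_pool, iram_vaddr);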
> 
> The allocation granularity is hard-coded to 32 bytes for now,
> until a way to configure it can be agreed upon. This causes overhead
> for bigger SRAMs that only need a much coarser allocation
> granularity: at a 32-byte minimum allocation size, a 256 KiB SRAM
> needs a 1 KiB bitmap to track allocations.
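> 
> (The arithmetic: 256 KiB / 32 bytes = 8192 allocation units, at one
> bitmap bit each, gives 8192 bits = 1 KiB. At a 4 KiB granularity, the
> same SRAM would need only 64 bits.)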
> 
> Once everybody is ok with it, could the first two patches be merged
> through the char-misc tree? I'll resend the i.MX and coda patches to
> the respective lists afterwards.

Arnd, Greg, would you take the first patch "genalloc: add a global pool
list, allow to find pools by phys address" into the char-misc tree if
there are no vetoes? Or should I try to get it merged separately first?

regards
Philipp

> Changes since v6:
>  - Reduced the hard coded allocation granularity to 32 bytes.
> 
> regards
> Philipp
> 
> ---
>  Documentation/devicetree/bindings/misc/sram.txt |   17 ++++
>  arch/arm/boot/dts/imx53.dtsi                    |    5 +
>  arch/arm/boot/dts/imx6q.dtsi                    |    6 ++
>  drivers/media/platform/Kconfig                  |    3 +-
>  drivers/media/platform/coda.c                   |   47 ++++++---
>  drivers/misc/Kconfig                            |    9 ++
>  drivers/misc/Makefile                           |    1 +
>  drivers/misc/sram.c                             |  121 +++++++++++++++++++++++
>  include/linux/genalloc.h                        |   14 +++
>  lib/genalloc.c                                  |   67 +++++++++++++
>  10 files changed, 274 insertions(+), 16 deletions(-)


