Date: Tue, 27 Feb 2024 09:24:50 +0100
From: Bartosz Golaszewski <brgl@...ev.pl>
To: Kuldeep Singh <quic_kuldsing@...cinc.com>
Cc: Bjorn Andersson <andersson@...nel.org>, Andy Gross <agross@...nel.org>, 
	Konrad Dybcio <konrad.dybcio@...aro.org>, Elliot Berman <quic_eberman@...cinc.com>, 
	Krzysztof Kozlowski <krzysztof.kozlowski@...aro.org>, 
	Guru Das Srinagesh <quic_gurus@...cinc.com>, Andrew Halaney <ahalaney@...hat.com>, 
	Maximilian Luz <luzmaximilian@...il.com>, Alex Elder <elder@...aro.org>, 
	Srini Kandagatla <srinivas.kandagatla@...aro.org>, Arnd Bergmann <arnd@...db.de>, 
	linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org, 
	linux-arm-kernel@...ts.infradead.org, kernel@...cinc.com, 
	Bartosz Golaszewski <bartosz.golaszewski@...aro.org>, Deepti Jaggi <quic_djaggi@...cinc.com>
Subject: Re: [PATCH v7 02/12] firmware: qcom: scm: enable the TZ mem allocator

On Mon, Feb 26, 2024 at 11:30 AM Kuldeep Singh
<quic_kuldsing@...cinc.com> wrote:
>
> > > > As we're now moving from the callers freely allocating what they need
> > > > to a fixed-size pool of 256K, please document why 256K was chosen,
> > > > so that we have something to fall back on when someone runs out of this
> > > > space, or wonders "why not 128K?".
> > > >
> > >
> > > If you worry about these pools being taken out of the total memory and
> > > prefer to have a way to avoid it, I was thinking about another
> > > build-time mode for the allocator - one where there's no pool, it
> > > just allocates chunks using dma_alloc_coherent() like before, and the
> > > pool size is ignored. Does that sound good?
> > >
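For the record, a very rough sketch of what that no-pool mode could look
like - all names here are made up for illustration, this is not code from
the series:

#include <linux/dma-mapping.h>

/* Hypothetical no-pool mode: every allocation is its own DMA buffer,
 * the configured pool size is simply never looked at. */
static void *qcom_tzmem_alloc_nopool(struct device *dev, size_t size,
				     dma_addr_t *paddr, gfp_t gfp)
{
	return dma_alloc_coherent(dev, size, paddr, gfp);
}

static void qcom_tzmem_free_nopool(struct device *dev, size_t size,
				   void *vaddr, dma_addr_t paddr)
{
	dma_free_coherent(dev, size, vaddr, paddr);
}
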
> >
> > Or we could even have an argument for the initial size of the pool and
> > then once it's exhausted, we'd add a new chunk (maybe twice the size?)
> > and so on.
>
> Hi Bartosz,
>
> Thanks for the shmbridge patch series. A few questions.
>
>         1. With the current design of every client maintaining its own
>         pool, on any target we might end up occupying a lot more space
>         across the different clients than we actually need.
>

Technically there are only up to two such clients, three in the future
with scminvoke.

>         2. Also, there's no option to configure the pool size for each
>         client at runtime, and a fixed 256K value is chosen for
>         qcom_scm/qseecom.

You mean via a module parameter?
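
If so, something like this would be trivial to add (the parameter name
below is invented for the example):

#include <linux/module.h>
#include <linux/sizes.h>

/* Hypothetical knob: allows overriding the pool size on the kernel
 * command line, e.g. qcom_scm.tzmem_pool_size=131072. */
static ulong tzmem_pool_size = SZ_256K;
module_param(tzmem_pool_size, ulong, 0444);
MODULE_PARM_DESC(tzmem_pool_size, "Size of the TZ memory pool in bytes");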

>         The pool size will be the same on every target, which makes it
>         less scalable when a target-specific adjustment is needed.
>         Ex: for a low-DDR-memory target the pool size should scale down
>         accordingly, as 256K becomes a big ask, but there's no way to
>         choose a specific pool size for just that one target.

Do we really have any low-DDR platforms that would be affected by this
change? Even for db410c the 256K is a tiny fraction of the total
memory.

>                 2.1 One way to configure a custom pool size is to add a
>                 new property to the qcom_scm/qseecom or client DT node
>                 and then create a pool of the provided size. Though
>                 there are ways to tackle this, clients specifying their
>                 own pool sizes will still always fetch more CMA region
>                 than is actually needed.
>
> Can you please share your ideas for the upcoming version as well?
>
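For 2.1 specifically, I imagine it would boil down to something like this
(the property name is hypothetical, not a binding proposal):

#include <linux/of.h>
#include <linux/sizes.h>

/* Hypothetical DT override: fall back to the built-in default when the
 * property is absent. */
static u32 qcom_tzmem_pool_size(struct device_node *np)
{
	u32 size = SZ_256K;

	of_property_read_u32(np, "qcom,tzmem-pool-size", &size);
	return size;
}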

I will propose a new solution with several configuration options for
the pools, including scaling the pool size as needed. I hope to send it
this week.
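
To give an idea of the direction, here's a very rough sketch of the
grow-on-demand path - names and the doubling policy are made up, just to
illustrate the "add a new chunk, maybe twice the size" idea from above:

#include <linux/dma-mapping.h>
#include <linux/genalloc.h>

/* Hypothetical expansion step: when the genpool is exhausted, back it
 * with a new DMA chunk twice the size of the previous one. */
static int qcom_tzmem_pool_grow(struct device *dev, struct gen_pool *pool,
				size_t *chunk_size)
{
	dma_addr_t paddr;
	void *vaddr;
	int ret;

	vaddr = dma_alloc_coherent(dev, *chunk_size, &paddr, GFP_KERNEL);
	if (!vaddr)
		return -ENOMEM;

	ret = gen_pool_add_virt(pool, (unsigned long)vaddr, paddr,
				*chunk_size, -1);
	if (ret) {
		dma_free_coherent(dev, *chunk_size, vaddr, paddr);
		return ret;
	}

	*chunk_size *= 2; /* the next expansion doubles again */
	return 0;
}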

Bart

> Regards
> Kuldeep
