Date: Fri, 29 Mar 2024 19:53:44 +0100
From: Maximilian Luz <luzmaximilian@...il.com>
To: Bartosz Golaszewski <brgl@...ev.pl>
Cc: Andy Gross <agross@...nel.org>, Bjorn Andersson <andersson@...nel.org>,
 Konrad Dybcio <konrad.dybcio@...aro.org>,
 Elliot Berman <quic_eberman@...cinc.com>,
 Krzysztof Kozlowski <krzysztof.kozlowski@...aro.org>,
 Guru Das Srinagesh <quic_gurus@...cinc.com>,
 Andrew Halaney <ahalaney@...hat.com>, Alex Elder <elder@...aro.org>,
 Srini Kandagatla <srinivas.kandagatla@...aro.org>,
 Arnd Bergmann <arnd@...db.de>, linux-arm-msm@...r.kernel.org,
 linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
 kernel@...cinc.com, Bartosz Golaszewski <bartosz.golaszewski@...aro.org>
Subject: Re: [PATCH v9 00/13] firmware: qcom: qseecom: convert to using the TZ
 allocator

On 3/29/24 11:22 AM, Bartosz Golaszewski wrote:
> On Thu, Mar 28, 2024 at 7:55 PM Bartosz Golaszewski <brgl@...ev.pl> wrote:
>>
>> On Thu, Mar 28, 2024 at 5:50 PM Maximilian Luz <luzmaximilian@...il.com> wrote:
>>>
>>> If I understand correctly, it enters an atomic section in
>>> qcom_tzmem_alloc() and then tries to schedule somewhere down the line.
>>> So this shouldn't be qseecom specific.
>>>
>>> I should probably also say that I'm currently testing this on a patched
>>> v6.8 kernel, so there's a chance that it's my fault. However, as far as
>>> I understand, it enters an atomic section in qcom_tzmem_alloc() and then
>>> later tries to expand the pool memory with dma_alloc_coherent(),
>>> which AFAIK is allowed to sleep with GFP_KERNEL (and I guess that's
>>> the issue here).
>>>
>>> I've also tried the shmem allocator option, but that seems to get stuck
>>> quite early at boot, before I even have usb-serial access to get any
>>> logs. If I can find some more time, I'll try to see if I can get some
>>> useful output for that.
>>>
>>
>> Ah, I think it happens here:
>>
>> +       guard(spinlock_irqsave)(&pool->lock);
>> +
>> +again:
>> +       vaddr = gen_pool_alloc(pool->genpool, size);
>> +       if (!vaddr) {
>> +               if (qcom_tzmem_try_grow_pool(pool, size, gfp))
>> +                       goto again;
>>
>> We were called with GFP_KERNEL, so that's what we pass on to
>> qcom_tzmem_try_grow_pool(), but we're now holding the spinlock. I need
>> to revisit it. Thanks for the catch!
>>
>> Bart
> 
> Can you try the following tree?
> 
>      https://git.codelinaro.org/bartosz_golaszewski/linux.git
> topic/shm-bridge-v10
> 
> gen_pool_alloc() and gen_pool_add_virt() can be used without external
> serialization. We only really need to protect the list of areas in the
> pool when adding a new element. We could possibly even use
> list_add_tail_rcu(), which updates the pointers atomically, and go
> lockless.

Thanks! That fixes the allocations for CONFIG_QCOM_TZMEM_MODE_GENERIC=y.
Unfortunately, with the shmbridge mode it still gets stuck at boot (and
I haven't had the time to look into it yet).
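
For anyone skimming the thread, my understanding of the reworked
generic-mode allocation path is roughly the following. This is a
simplified sketch, not the actual code from the tree above; the
function name and the struct layout are illustrative:

    static void *qcom_tzmem_alloc_sketch(struct qcom_tzmem_pool *pool,
                                         size_t size, gfp_t gfp)
    {
            unsigned long vaddr;

    again:
            /* gen_pool_alloc() needs no external serialization. */
            vaddr = gen_pool_alloc(pool->genpool, size);
            if (!vaddr) {
                    /*
                     * Growing the pool ends up in dma_alloc_coherent(),
                     * which may sleep with GFP_KERNEL, so no spinlock
                     * may be held here. Only publishing the new area on
                     * the pool's list needs the pool lock (or
                     * list_add_tail_rcu() to go lockless).
                     */
                    if (qcom_tzmem_try_grow_pool(pool, size, gfp))
                            goto again;
                    return NULL;
            }

            return (void *)vaddr;
    }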

And for more bad news: it looks like the new allocator now fully exposes
a bug that I've been tracking down for the last couple of days. In
short, uefisecapp doesn't seem to be happy when we split the allocations
for request and response into two separate buffers, causing commands to
fail. Instead, it wants a single buffer holding both. Before, the
failures seemed fairly sporadic (likely because consecutive kzalloc()
calls returned adjacent memory almost all of the time), but now
basically every call fails.
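
Roughly, the idea of the fix is to allocate one buffer and carve both
messages out of it, so they are always contiguous. An illustrative
sketch only -- the struct names are made up and this is not the actual
qseecom interface:

    struct req_sketch *req;
    struct resp_sketch *resp;
    size_t req_size = ALIGN(sizeof(*req), __alignof__(*resp));
    void *buf;

    buf = qcom_tzmem_alloc(pool, req_size + sizeof(*resp), GFP_KERNEL);
    if (!buf)
            return -ENOMEM;

    req = buf;              /* request at the start of the buffer */
    resp = buf + req_size;  /* response directly behind it */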

I have a fix for that almost ready and will likely post it within the
next hour. But that means you'll probably have to rebase this series
on top of it...

Best regards,
Max

