Date:   Fri, 23 Aug 2019 12:48:30 -0700
From:   Scott Branden <scott.branden@...adcom.com>
To:     Takashi Iwai <tiwai@...e.de>, Luis Chamberlain <mcgrof@...nel.org>
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Andy Gross <andy.gross@...aro.org>,
        David Brown <david.brown@...aro.org>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Shuah Khan <shuah@...nel.org>, bjorn.andersson@...aro.org,
        "Rafael J . Wysocki" <rafael@...nel.org>,
        linux-kernel@...r.kernel.org, linux-arm-msm@...r.kernel.org,
        linux-fsdevel@...r.kernel.org,
        BCM Kernel Feedback <bcm-kernel-feedback-list@...adcom.com>,
        Olof Johansson <olof@...om.net>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Dan Carpenter <dan.carpenter@...cle.com>,
        Colin Ian King <colin.king@...onical.com>,
        Kees Cook <keescook@...omium.org>,
        linux-kselftest@...r.kernel.org
Subject: Re: [PATCH 3/3] firmware: add mutex fw_lock_fallback for race
 condition


On 2019-08-23 3:31 a.m., Takashi Iwai wrote:
> On Tue, 20 Aug 2019 03:26:55 +0200,
> Luis Chamberlain wrote:
>> On Mon, Aug 19, 2019 at 09:19:51AM -0700, Scott Branden wrote:
>>> To be honest, I find the entire firmware code sloppy.
>> And that is after years of cleanup on my part. Try going back to v4.1,
>> for instance, and check the code out then for an incredibly horrific sight :)
>>
>>> I don't think the cache/no-cache feature, nor the fallback, is
>>> implemented or tested properly to begin with.
>> I'm in total agreement! I *know* there must be holes in that code, and I
>> acknowledge a few possible gotchas in the commit logs. For instance, in
>> commit e44565f62a720 ("firmware: fix batched requests - wake all
>> waiters") I acknowledged that the firmware cache had a secondary purpose
>> which was not well documented or understood: it allows batching requests,
>> with multiple consecutive requests which *race against each other* all
>> sharing the same original request. That's when I started having my
>> doubts about the architecture of the firmware cache mechanism; it seemed
>> too complex, perhaps overkill, and I considered killing it.
>>
>> As I noted in that commit, the firmware cache is used for:
>>      
>> 1) Addressing races with file lookups during the suspend/resume cycle by
>> keeping firmware in memory during the suspend/resume cycle
> Right, this one is the significant need.  And currently the fw loader
> core takes a complicated approach:
>
> - Store firmware name string in devres for each firmware
> - Upon suspend, loop over all devices and associated firmware names,
>    create a list, then loop over the list for loading the firmware
>    files before sleeping.
> - Upon resume, release the firmware files that have been loaded at
>    suspend in a delayed manner.
>
> So we have different levels of lists there, which makes the code quite
> hard to understand.
>
> The reason for the above approach is that we didn't know which
> device driver would need the firmware at resume, so basically we
> cache for all devices.  Maybe it'd be better to look for the exact
> drivers that require the firmware at resume, and handle only those
> instead of the catch-all approach.

Yes, that would be better.  Or remove this cache mechanism entirely
and provide some helper functions of some sort to the limited
drivers that actually require such a mechanism.
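
For my own understanding, the flow described above is roughly the
following (simplified pseudo-C, not the actual firmware_loader code;
the structure and all helper names here are illustrative only):

	struct fw_cache_entry {
		struct list_head list;
		const char *fw_name;	/* name recorded per device via devres */
	};

	static LIST_HEAD(fw_cache_list);

	/* Suspend: walk all devices, collect the recorded firmware names
	 * into one list, and load each file into memory before the
	 * filesystems go away. */
	static int fw_cache_suspend(void)
	{
		struct fw_cache_entry *e;

		collect_fw_names_from_all_devices(&fw_cache_list); /* hypothetical */
		list_for_each_entry(e, &fw_cache_list, list)
			cache_one_firmware(e->fw_name);	/* keeps the blob in RAM */
		return 0;
	}

	/* Resume: drop the cached blobs again; the real code defers this
	 * so drivers get a chance to re-request the files first. */
	static void fw_cache_resume(void)
	{
		struct fw_cache_entry *e;

		list_for_each_entry(e, &fw_cache_list, list)
			uncache_one_firmware(e->fw_name);	/* hypothetical */
	}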

>
> OTOH, I find it's not bad to keep the loaded firmware file names per
> device and expose them e.g. via sysfs.  Currently we have no way to
> look at which firmware files have been loaded afterwards; the only way
> to see it is to enable some debug option and read through kernel
> messages.
> (FWIW, I stumbled on this problem since I wanted to provide the split
>   kernel-firmware package on the SUSE distro, and let the installer
>   decide which package to pick up.)
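
A read-only device attribute would probably be enough there.  A sketch,
assuming a hypothetical get_loaded_fw_name() helper that stands in for
wherever the loader would record the name:

	static ssize_t loaded_firmware_show(struct device *dev,
					    struct device_attribute *attr,
					    char *buf)
	{
		/* get_loaded_fw_name() is a hypothetical lookup of the
		 * name the fw loader recorded for this device. */
		const char *name = get_loaded_fw_name(dev);

		return scnprintf(buf, PAGE_SIZE, "%s\n", name ? name : "");
	}
	static DEVICE_ATTR_RO(loaded_firmware);
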
>
>> 2) Batched requests for the same file rely only on work from the first
>> file lookup, which keeps the firmware in memory until the last
>> release_firmware() is called
> IMO, this feature can be omitted if it makes things too complicated.
> I guess it was added because we handle the fw caching anyway.
> There isn't a big performance need for it.  If performance matters,
> such a driver should re-use its own firmware by itself.

Any simplifications would be appreciated.  I sure don't understand
what the code is trying to do.
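
For what it's worth, the per-driver reuse Takashi suggests could be as
small as this sketch (hypothetical and untested; my_fw and my_fw_lock
are illustrative names):

	static const struct firmware *my_fw;	/* driver-held copy */
	static DEFINE_MUTEX(my_fw_lock);

	static int my_driver_get_fw(struct device *dev)
	{
		int ret = 0;

		mutex_lock(&my_fw_lock);
		if (!my_fw)
			ret = request_firmware(&my_fw, "my_fw.bin", dev);
		mutex_unlock(&my_fw_lock);
		return ret;
	}

	static void my_driver_put_fw(void)
	{
		mutex_lock(&my_fw_lock);
		release_firmware(my_fw);  /* release_firmware(NULL) is a no-op */
		my_fw = NULL;
		mutex_unlock(&my_fw_lock);
	}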

>
> (snip)
>>> 3) I have a driver that uses request_firmware_into_buf, with multiple
>>> instances of the driver
>> Cool, is the driver upstream?
>>
>>> loading the same firmware in parallel.  Some of the data is not read
>>> correctly in each instance.
>> Makes perfect sense considering the lack of testing I noted.
>>
>>> I have yet to reproduce this issue with the firmware test
>> That's because of the batched firmware request mechanism.
>>
>>> but currently have a mutex around the entire call to
>>> request_firmware_into_buf in our driver.
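
For reference, the workaround mentioned above amounts to this sketch
(simplified from our driver; the names are illustrative):

	static DEFINE_MUTEX(my_fw_mutex);	/* one driver-global lock */

	static int my_load_fw_into_buf(struct device *dev, void *buf,
				       size_t size)
	{
		const struct firmware *fw;
		int ret;

		mutex_lock(&my_fw_mutex);
		ret = request_firmware_into_buf(&fw, "my_fw.bin", dev,
						buf, size);
		mutex_unlock(&my_fw_mutex);
		if (ret == 0)
			release_firmware(fw);	/* the data stays in buf */
		return ret;
	}
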
>> I will take a look at this now.
>>
>>> Perhaps it is better at this point to add a mutex in
>>> request_firmware_into_buf to make it entirely safe?
>> No, that is not sufficient, although it would also solve the
>> issue.
> The mutex for request_firmware_into_buf() doesn't sound like a good
> approach.  Basically the direct fw loading should work in parallel
> for the same firmware file.  We might have some bugs wrt the cache
> stuff, but they can be fixed properly.
>
> However, the fw loading in fallback mode can't run in parallel for
> the same file, by design -- no matter whether cached or not.
> So, if anything, we'd need to put a mutex around the fallback loader
> code.  And the mutex should rather be per device, not a global one.

Sure, whatever solves the issue.  All I wish to do is read part
of a file into a specified buffer.
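
For illustration, the per-device fallback mutex Takashi describes would
look schematically like this (not the real fw loader internals; both
helpers below are hypothetical stand-ins):

	static int fw_fallback_load(struct device *dev, const char *name)
	{
		/* dev_fw_fallback_lock(): hypothetical per-device mutex,
		 * e.g. allocated via devres on first use.  Serializes the
		 * fallback per device while direct loads of the same file
		 * can still run in parallel. */
		struct mutex *lock = dev_fw_fallback_lock(dev);
		int ret;

		mutex_lock(lock);
		ret = do_sysfs_fallback(dev, name);	/* hypothetical helper */
		mutex_unlock(lock);
		return ret;
	}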

>
> Or we may trick it by appending the second parallel caller to the
> same wait queue, but the code will be more complex, so I don't think
> it's worth it.
>
>
> thanks,
>
> Takashi
