Date:   Sat, 29 Sep 2018 17:00:50 +0800
From:   Jia-Ju Bai <baijiaju1990@...il.com>
To:     Jiri Kosina <jikos@...nel.org>
Cc:     benjamin.tissoires@...hat.com, linux-input@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH V2] hid: hid-core: Fix a sleep-in-atomic-context bug in
 __hid_request()



On 2018/9/24 17:26, Jiri Kosina wrote:
> On Thu, 13 Sep 2018, Jia-Ju Bai wrote:
>
>> hid_alloc_report_buf() has to be called with GFP_ATOMIC in
>> __hid_request(), because the following call chains lead to
>> __hid_request() being called in atomic context:
>>
>> picolcd_send_and_wait (acquires a spinlock)
>>    hid_hw_request
>>      __hid_request
>>        hid_alloc_report_buf(GFP_KERNEL)
>>
>> picolcd_reset (acquires a spinlock)
>>    hid_hw_request
>>      __hid_request
>>        hid_alloc_report_buf(GFP_KERNEL)
>>
>> lg4ff_play (acquires a spinlock)
>>    hid_hw_request
>>      __hid_request
>>        hid_alloc_report_buf(GFP_KERNEL)
>>
>> lg4ff_set_autocenter_ffex (acquires a spinlock)
>>    hid_hw_request
>>      __hid_request
>>        hid_alloc_report_buf(GFP_KERNEL)
> Hm, so it's always drivers calling out into core in atomic context. So
> either we take this, and put our bets on being able to allocate the buffer
> without sleeping,

I prefer this approach.
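
For reference, a rough sketch of the pattern the call chains above
describe, and of what the patch changes. This is simplified,
illustrative code, not the literal hid-core.c or driver source;
driver_send_report(), request_buf_alloc() and the lock parameter are
hypothetical stand-ins for the picolcd/lg4ff callers.

#include <linux/hid.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/*
 * Driver-side pattern from the call chains above: hid_hw_request() ends
 * up in __hid_request() while a spinlock is held, so nothing on that
 * path may sleep.
 */
static void driver_send_report(struct hid_device *hdev,
                               struct hid_report *report, spinlock_t *lock)
{
        unsigned long flags;

        spin_lock_irqsave(lock, flags);   /* atomic context from here on */
        hid_hw_request(hdev, report, HID_REQ_SET_REPORT);
        spin_unlock_irqrestore(lock, flags);
}

/*
 * Core-side change the patch proposes: allocate the report buffer with
 * GFP_ATOMIC so the allocation cannot sleep under the caller's spinlock.
 * Simplified sketch of the relevant allocation in __hid_request(), not
 * the whole function.
 */
static u8 *request_buf_alloc(struct hid_report *report)
{
        return hid_alloc_report_buf(report, GFP_ATOMIC); /* was GFP_KERNEL */
}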


Best wishes,
Jia-Ju Bai

> or actually fix the few drivers (it's just lg4ff and
> picolcd at the end of the day) not to do that, and explicitly annotate
> __hid_request() with might_sleep().
>
> Hmm?
>
> Thanks,
>
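
For comparison, a rough sketch of the alternative described above: keep
the GFP_KERNEL allocation, make the sleeping requirement explicit with
might_sleep(), and restructure the lg4ff/picolcd callers so
hid_hw_request() runs outside the spinlock. Illustrative only;
driver_set_autocenter() and the lock parameter are hypothetical
stand-ins, and the __hid_request() body is elided.

#include <linux/hid.h>
#include <linux/kernel.h>
#include <linux/spinlock.h>

/*
 * Core side: annotate __hid_request() so that callers holding a spinlock
 * trip a warning when CONFIG_DEBUG_ATOMIC_SLEEP is enabled.
 */
void __hid_request(struct hid_device *hid, struct hid_report *report,
                   int reqtype)
{
        might_sleep();  /* callers must be in sleepable context */
        /* ... existing body, still using hid_alloc_report_buf(report, GFP_KERNEL) ... */
}

/*
 * Driver side: update the report fields under the lock, but issue the
 * transfer after dropping it, so the GFP_KERNEL allocation in
 * __hid_request() is legal.
 */
static void driver_set_autocenter(struct hid_device *hdev,
                                  struct hid_report *report, spinlock_t *lock)
{
        unsigned long flags;

        spin_lock_irqsave(lock, flags);
        /* ... fill in report->field[i]->value[j] under the lock ... */
        spin_unlock_irqrestore(lock, flags);

        hid_hw_request(hdev, report, HID_REQ_SET_REPORT); /* sleepable now */
}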
