Message-ID: <2add9ba7-7bc8-bd1d-1963-61e8154b0e3c@quicinc.com>
Date: Sat, 12 Feb 2022 00:35:08 -0800
From: Abhinav Kumar <quic_abhinavk@...cinc.com>
To: Johannes Berg <johannes@...solutions.net>,
Greg KH <gregkh@...uxfoundation.org>
CC: <linux-kernel@...r.kernel.org>, <rafael@...nel.org>,
<robdclark@...il.com>, <dri-devel@...ts.freedesktop.org>,
<linux-arm-msm@...r.kernel.org>, <freedreno@...ts.freedesktop.org>,
<seanpaul@...omium.org>, <swboyd@...omium.org>,
<nganji@...eaurora.org>, <aravindh@...eaurora.org>,
<khsieh@...eaurora.org>, <daniel@...ll.ch>,
<dmitry.baryshkov@...aro.org>
Subject: Re: [PATCH] devcoredump: increase the device delete timeout to 10
mins
Hi Johannes
On 2/12/2022 12:24 AM, Johannes Berg wrote:
> On Fri, 2022-02-11 at 23:52 -0800, Abhinav Kumar wrote:
>>
>> The thread is writing the data to a file in local storage. From our
>> profiling, the read is the one taking the time, not the write.
>>
>
> That seems kind of hard to believe. Let's say it's a 4/3 split (4
> minutes reading, 3 minutes writing, to make read > write as you say)
> and 3 MiB of data; that'd mean you get 12.8 KiB/sec? That seems
> implausibly low, unless you're reading with really tiny buffers?
>
> Can you strace this somehow? (with timestamp info)
>
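(Checking the arithmetic above: 3 MiB read over 4 minutes is
3 * 1024 KiB / 240 s = 12.8 KiB/s, so the figure is right; that is
indeed implausibly slow for a local read unless each read() call is
only pulling a few KiB at a time.)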
Yes, as I have already mentioned in earlier comments, we are continuing
to check what is taking that long.
Once we find something from our analysis and also have the trace, we
will update the thread.
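In the meantime, something along these lines should capture the trace
with timestamp info (a sketch; <reader-pid> and the output file name
are placeholders for our dump thread's actual pid and path):

  strace -f -tt -T -e trace=openat,read,write -p <reader-pid> -o devcd.strace

Here -tt prints wall-clock timestamps with microsecond resolution and
-T shows the time spent inside each syscall, which should make it
obvious whether the reads from the devcoredump sysfs node or the writes
to local storage dominate.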
>> We are just doubling what we have currently. I am not sure where the
>> current 5 min timeout came from.
>>
>
> To be honest it came out of thin air, and wasn't really meant as a
> limit on how fast you can read (it feels like even tens of MiB should
> be read into userspace in milliseconds), but more as a maximum time
> for which we're willing to waste kernel memory if nobody is around to
> read the data.
>
> I thought it'd be better if we could somehow pin it while userspace
> is reading it, but OTOH maybe that's actually bad, since it means
> userspace (though suitably privileged) could pin this kernel memory
> indefinitely.
>
> johannes
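For reference, the patch under discussion only touches the timeout
constant in drivers/base/devcoredump.c, roughly:

  -/* if data isn't read by userspace after 5 minutes then delete it */
  -#define DEVCD_TIMEOUT	(HZ * 60 * 5)
  +/* if data isn't read by userspace after 10 minutes then delete it */
  +#define DEVCD_TIMEOUT	(HZ * 60 * 10)

On pinning while userspace reads: one possible shape would be to kick
the delete timer along from the sysfs read handler, e.g. (an untested
sketch against the existing devcd_entry/del_wk code in devcoredump.c,
not part of this patch):

  static ssize_t devcd_data_read(struct file *filp, struct kobject *kobj,
                                 struct bin_attribute *bin_attr,
                                 char *buffer, loff_t offset, size_t count)
  {
          struct device *dev = kobj_to_dev(kobj);
          struct devcd_entry *devcd = dev_to_devcd(dev);

          /* push the deletion timeout out while a reader is active */
          mod_delayed_work(system_wq, &devcd->del_wk, DEVCD_TIMEOUT);

          return devcd->read(buffer, offset, count, devcd->data,
                             devcd->datalen);
  }

Though, as you say, that effectively lets a (suitably privileged)
reader keep the dump, and hence the kernel memory, alive for as long
as it keeps reading.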