Message-ID: <5656DE3B.9030602@st.com>
Date: Thu, 26 Nov 2015 11:26:03 +0100
From: Giuseppe CAVALLARO <peppe.cavallaro@...com>
To: David Miller <davem@...emloft.net>, <zhengsq@...k-chips.com>
CC: <linux-kernel@...r.kernel.org>,
<linux-rockchip@...ts.infradead.org>, <netdev@...r.kernel.org>,
<dianders@...gle.com>
Subject: Re: [PATCH v1] net: stmmac: Free rx_skbufs before realloc
On 11/25/2015 4:13 PM, Giuseppe CAVALLARO wrote:
> Hello
>
> On 11/24/2015 7:09 PM, David Miller wrote:
>> From: Shunqian Zheng <zhengsq@...k-chips.com>
>> Date: Sun, 22 Nov 2015 16:44:18 +0800
>>
>>> From: ZhengShunQian <zhengsq@...k-chips.com>
>>>
>>> init_dma_desc_rings() may reallocate the rx_skbuff[] entries on
>>> suspend and resume. This patch frees the rx_skbuff[] before
>>> reallocating the memory.
>>>
>>> Signed-off-by: ZhengShunQian <zhengsq@...k-chips.com>
>>
>> This isn't really the right way to fix this.
>>
>> I see two reasonable approaches:
>>
>> 1) suspend liberates the RX ring, although this approach is less
>> desirable
>>
>> 2) resume doesn't try to allocate already populated RX ring
>> entries
>>
>> Freeing the whole RX ring just to allocate it again immediately
>> makes no sense at all and is wasteful work.
>
> This is a bug in this driver version that, to be honest, we fixed with
> the first approach on the STi kernel: the patch simply called
> dma_free_rx_skbufs(priv) in the suspend path.
> I can give you that patch, which is tested on my side too.
> However, I do think we should move to the second approach.
> Indeed, on ST platforms too, when we play with suspend states,
> the DDR is in self-refresh and the data are not lost at all.
> There is no reason to free and reallocate everything across
> suspend/resume.
> I can test that and then provide another patch to this mailing list
> asap.
I have just sent the patch (directly for approach #2).
Peppe
>
> Let me know.
> peppe
>
>
>