Message-ID: <00f4b3f1-ada0-d07d-2640-d902a437b24e@huawei.com>
Date: Wed, 14 Jun 2017 14:08:16 +0100
From: John Garry <john.garry@...wei.com>
To: wangyijing <wangyijing@...wei.com>,
Johannes Thumshirn <jthumshirn@...e.de>,
<jejb@...ux.vnet.ibm.com>, <martin.petersen@...cle.com>
CC: <chenqilin2@...wei.com>, <hare@...e.com>,
<linux-scsi@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<chenxiang66@...ilicon.com>, <huangdaode@...ilicon.com>,
<wangkefeng.wang@...wei.com>, <zhaohongjiang@...wei.com>,
<dingtianhong@...wei.com>, <guohanjun@...wei.com>,
<yanaijie@...wei.com>, <hch@....de>, <dan.j.williams@...el.com>,
<emilne@...hat.com>, <thenzl@...hat.com>, <wefu@...hat.com>,
<charles.chenxin@...wei.com>, <chenweilong@...wei.com>,
Yousong He <heyousong@...wei.com>
Subject: Re: [PATCH v2 1/2] libsas: Don't process sas events in static works
On 14/06/2017 10:04, wangyijing wrote:
>>> >> static void notify_ha_event(struct sas_ha_struct *sas_ha, enum ha_event event)
>>> >> {
>>> >> + struct sas_ha_event *ev;
>>> >> +
>>> >> BUG_ON(event >= HA_NUM_EVENTS);
>>> >>
>>> >> - sas_queue_event(event, &sas_ha->pending,
>>> >> - &sas_ha->ha_events[event].work, sas_ha);
>>> >> + ev = kzalloc(sizeof(*ev), GFP_ATOMIC);
>>> >> + if (!ev)
>>> >> + return;
>> > GFP_ATOMIC allocations can fail and then no events will be queued *and* we
>> > don't report the error back to the caller.
>> >
> Yes, it's really a problem, but I can't find a better solution. Do you have any suggestions?
>
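
On the narrower point of not reporting the failure: notify_ha_event()
could at least return an error so the LLDD knows the event was dropped.
A minimal sketch (untested; every caller would then need to handle the
return value):

static int notify_ha_event(struct sas_ha_struct *sas_ha, enum ha_event event)
{
	struct sas_ha_event *ev;

	BUG_ON(event >= HA_NUM_EVENTS);

	ev = kzalloc(sizeof(*ev), GFP_ATOMIC);
	if (!ev)
		return -ENOMEM;	/* caller can now see the event was lost */

	/* ... queue ev exactly as in your patch ... */

	return 0;
}
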
That said, Dan raised a bigger issue with the dynamic allocation
approach: a malfunctioning PHY can spew out events, and I still don't
think we're handling that safely. Here's the suggestion (a rough sketch
follows the list):
- each asd_sas_phy owns a finite-sized pool of events
- when the event pool becomes exhausted, libsas stops queuing events
(obviously) and disables the PHY in the LLDD
- when the PHY is re-enabled from sysfs, libsas first checks that the
pool is no longer exhausted before bringing it back up
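
Roughly what I have in mind, completely untested and with made-up names
(struct sas_phy_event, the event_pool member and the pool size are all
just for illustration):

#define SAS_PHY_EVENT_POOL_SIZE	32	/* arbitrary, per phy */

struct sas_phy_event {
	struct sas_work		work;
	struct asd_sas_phy	*phy;
	enum phy_event		event;
};

struct sas_phy_event_pool {
	struct sas_phy_event	events[SAS_PHY_EVENT_POOL_SIZE];
	DECLARE_BITMAP(in_use, SAS_PHY_EVENT_POOL_SIZE);
	spinlock_t		lock;
};

/*
 * Take a preallocated event from the phy's pool; a NULL return means the
 * pool is exhausted, so the caller should stop queuing and disable the
 * phy in the LLDD instead of silently dropping events.
 */
static struct sas_phy_event *sas_alloc_phy_event(struct asd_sas_phy *phy)
{
	struct sas_phy_event_pool *pool = phy->event_pool;
	struct sas_phy_event *ev = NULL;
	unsigned long flags;
	int idx;

	spin_lock_irqsave(&pool->lock, flags);
	idx = find_first_zero_bit(pool->in_use, SAS_PHY_EVENT_POOL_SIZE);
	if (idx < SAS_PHY_EVENT_POOL_SIZE) {
		__set_bit(idx, pool->in_use);
		ev = &pool->events[idx];
	}
	spin_unlock_irqrestore(&pool->lock, flags);

	return ev;
}

/* The event work function hands its slot back once it has run. */
static void sas_free_phy_event(struct asd_sas_phy *phy,
			       struct sas_phy_event *ev)
{
	struct sas_phy_event_pool *pool = phy->event_pool;
	unsigned long flags;

	spin_lock_irqsave(&pool->lock, flags);
	__clear_bit(ev - pool->events, pool->in_use);
	spin_unlock_irqrestore(&pool->lock, flags);
}

The sysfs re-enable path would then just refuse to bring the phy back up
while bitmap_full(pool->in_use, SAS_PHY_EVENT_POOL_SIZE) is still true.
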
If you cannot find a good solution, then let us know and we can help.
John