Date: Tue, 16 Sep 2014 06:57:17 +0800
From: Chen Gang <gang.chen.5i5j@...il.com>
To: David Vrabel <david.vrabel@...rix.com>
CC: konrad.wilk@...cle.com, boris.ostrovsky@...cle.com,
    stefano.stabellini@...citrix.com, mukesh.rathor@...cle.com,
    xen-devel@...ts.xenproject.org,
    "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/4] drivers/xen/xenbus/xenbus_client.c: Improve the failure processing for __xenbus_switch_state()

On 09/15/2014 10:39 PM, David Vrabel wrote:
> On 14/09/14 11:52, Chen Gang wrote:
>> When a failure occurs, we need to return the failure code instead of
>> 0, or the upper caller will be misled.
>>
>> Also, when retrying in the EAGAIN case, it is better to schedule out
>> for a while, so that other tasks get a chance to make progress
>> (especially tasks whose own work is tied to that EAGAIN, under a UP
>> kernel).
>
> Is this fixing a real world problem you have seen?
>

Not a real-world problem; I found it by reading the source code. Some of
the upper-level callers do check the return value, indirectly, so they
may be misled.

> xenbus_scanf() and xenbus_printf() already sleep while waiting for the
> response and delaying isn't going to reduce the likelihood of the
> transaction being aborted on the retry.
>

OK, thanks; what you said sounds reasonable to me. I shall remove the
waiting code when I send patch v2.

I shall try to send patch v2 by this weekend (2014-09-21); if that is
too late, please let me know and I shall try to send it sooner.

Thanks.
--
Chen Gang

Open share and attitude like air water and life which God blessed
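For reference, here is a minimal sketch of the function under discussion
and the proposed change. The overall structure follows the 2014-era
drivers/xen/xenbus/xenbus_client.c (the transaction start/scanf/printf/end
loop with an -EAGAIN retry, and the xenbus_switch_fatal() helper in that
file), but the body below is a reconstruction, not a verbatim diff; the
exact error strings and layout are assumptions. The point at issue is
that the failure paths return 0 after reporting the error, and the patch
proposes propagating err instead:

/*
 * Sketch only -- not the actual patch. Structure follows the 2014-era
 * drivers/xen/xenbus/xenbus_client.c; details are assumptions.
 */
#include <xen/xenbus.h>

static int __xenbus_switch_state(struct xenbus_device *dev,
				 enum xenbus_state state, int depth)
{
	struct xenbus_transaction xbt;
	int current_state;
	int err, abort;

	if (state == dev->state)
		return 0;

again:
	abort = 1;

	err = xenbus_transaction_start(&xbt);
	if (err) {
		xenbus_switch_fatal(dev, depth, err, "starting transaction");
		return err;			/* proposed; was: return 0 */
	}

	err = xenbus_scanf(xbt, dev->nodename, "state", "%d", &current_state);
	if (err != 1)
		goto abort;

	err = xenbus_printf(xbt, dev->nodename, "state", "%d", state);
	if (err) {
		xenbus_switch_fatal(dev, depth, err, "writing new state");
		goto abort;
	}

	abort = 0;
abort:
	err = xenbus_transaction_end(xbt, abort);
	if (err) {
		/*
		 * On -EAGAIN, simply retry. No extra delay is needed here:
		 * as David notes above, xenbus_scanf()/xenbus_printf()
		 * already sleep while waiting for the response, which is
		 * why the proposed waiting code is dropped for v2.
		 */
		if (err == -EAGAIN && !abort)
			goto again;
		xenbus_switch_fatal(dev, depth, err, "ending transaction");
		return err;			/* proposed; was: return 0 */
	}

	dev->state = state;
	return 0;
}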