Message-ID: <FFF73D592F13FD46B8700F0A279B802F2E5818F1@ORSMSX114.amr.corp.intel.com>
Date: Thu, 8 Mar 2018 05:38:50 +0000
From: "Prakhya, Sai Praneeth" <sai.praneeth.prakhya@...el.com>
To: Borislav Petkov <bp@...en8.de>
CC: "linux-efi@...r.kernel.org" <linux-efi@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Chun-Yi Lee <jlee@...e.com>,
"Luck, Tony" <tony.luck@...el.com>,
Will Deacon <will.deacon@....com>,
"Hansen, Dave" <dave.hansen@...el.com>,
Mark Rutland <mark.rutland@....com>,
Bhupesh Sharma <bhsharma@...hat.com>,
"Neri, Ricardo" <ricardo.neri@...el.com>,
"Shankar, Ravi V" <ravi.v.shankar@...el.com>,
Matt Fleming <matt@...eblueprint.co.uk>,
"Zijlstra, Peter" <peter.zijlstra@...el.com>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
"Williams, Dan J" <dan.j.williams@...el.com>,
Miguel Ojeda <miguel.ojeda.sandonis@...il.com>
Subject: RE: [PATCH V2 2/3] efi: Introduce efi_rts_workqueue and some
infrastructure to invoke all efi_runtime_services()
+Cc Miguel Ojeda
> > > +({ \
> > > + struct efi_runtime_work efi_rts_work; \
> > > + \
> > > + INIT_WORK_ONSTACK(&efi_rts_work.work, efi_call_rts); \
> > > + efi_rts_work.func = _rts; \
> > > + efi_rts_work.arg1 = _arg1; \
> > > + efi_rts_work.arg2 = _arg2; \
> > > + efi_rts_work.arg3 = _arg3; \
> > > + efi_rts_work.arg4 = _arg4; \
> > > + efi_rts_work.arg5 = _arg5; \
> > > + /* \
> > > + * queue_work() returns 0 if work was already on queue, \
> > > + * _ideally_ this should never happen. \
> > > + */ \
> > > + if (queue_work(efi_rts_wq, &efi_rts_work.work)) \
> > > + flush_work(&efi_rts_work.work); \
> > > + else \
> > > + BUG(); \
> >
> > So failure to queue that work is such a critical problem that we need
> > to BUG(), can't possibly continue, and should not attempt recovery at all?
> >
>
> The failure itself isn't critical; we could just return an error status.
> When I first thought about why queue_work() could fail, though, it seemed
> like a sign of deeper trouble: ideally (if the system is running fine) we
> should always be able to queue the work. A failure here means the previous
> work is still on the queue, which should never happen, so I assumed
> something bad had already occurred (though that was only a suspicion).
>
> But, I see your point. BUG() sounds like overkill here. Instead of handling
> an existing problem gracefully, this patch could take the whole system down.
>
> > IOW, we should always strive to fail gracefully and not shit in pants
> > at the first sign of trouble.
> >
>
> Yes, that makes sense. I will remove BUG() in V3 (in the two places where I
> introduced it).
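>
> Roughly, this is what I have in mind for V3 (an untested sketch; it assumes
> efi_rts_work grows a status member that efi_call_rts() fills in with the
> service's return value, so the macro can hand an error back to the caller
> instead of crashing):
>
> ({ \
> 	struct efi_runtime_work efi_rts_work; \
> 	efi_rts_work.status = EFI_ABORTED; \
> 	\
> 	INIT_WORK_ONSTACK(&efi_rts_work.work, efi_call_rts); \
> 	efi_rts_work.func = _rts; \
> 	efi_rts_work.arg1 = _arg1; \
> 	efi_rts_work.arg2 = _arg2; \
> 	efi_rts_work.arg3 = _arg3; \
> 	efi_rts_work.arg4 = _arg4; \
> 	efi_rts_work.arg5 = _arg5; \
> 	/* \
> 	 * queue_work() returns 0 if the work was already on the queue, \
> 	 * which should never happen here; report it and return an error \
> 	 * instead of taking the whole system down. \
> 	 */ \
> 	if (queue_work(efi_rts_wq, &efi_rts_work.work)) \
> 		flush_work(&efi_rts_work.work); \
> 	else \
> 		pr_err("Failed to queue work to efi_rts_wq\n"); \
> 	\
> 	efi_rts_work.status; \
> })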
>
> > Even checkpatch warns here:
> >
> > WARNING: Avoid crashing the kernel - try using WARN_ON & recovery code
> > rather than BUG() or BUG_ON()
> > #184: FILE: drivers/firmware/efi/runtime-wrappers.c:92:
> > + BUG(); \
> >
>
> Sure! I will fix this.
>
> >
> > and, judging by the rest of the output, you should run your patches
> > through checkpatch. Some of its warnings make sense, like:
> >
> > WARNING: quoted string split across lines
> > #97: FILE: drivers/firmware/efi/efi.c:341:
> > + pr_err("Failed to create efi_rts_workqueue, EFI runtime services "
> > + "disabled.\n");
> >
> > for example.
> >
>
> I will fix this one too.
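>
> For that one, the fix is simply to keep the message in a single string
> literal so it stays greppable, e.g.:
>
> 	pr_err("Failed to create efi_rts_workqueue, EFI runtime services disabled.\n");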
>
> Another checkpatch warning is "use of in_atomic() in drivers code".
> Do you think it is acceptable to check in_atomic() in driver code here?
> I wasn't able to come up with an alternative; if you can think of one,
> could you please suggest it?
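>
> For context, the check I have in mind looks roughly like the below (a
> sketch only; efi_queue_work() is just a placeholder name for the
> queue-and-flush macro quoted above, and GetTime is only an example
> service):
>
> 	/*
> 	 * flush_work() can sleep, so when we are in atomic context we
> 	 * cannot go through the workqueue and have to call the runtime
> 	 * service directly.
> 	 */
> 	if (in_atomic())
> 		status = efi_call_virt(get_time, tm, tc);
> 	else
> 		status = efi_queue_work(get_time, tm, tc, NULL, NULL, NULL);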
>
> Regards,
> Sai