
[Linux-am33-list] Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/

Nicolas Pitre <nico cam org> wrote:

> Disabling preemption is unneeded.

I think you may be wrong on that.  MEI came up with the following point:

	I think either disabling preemption or disabling interrupts is really
	necessary for the cnt32_to_63 macro, because there seems to be an
	assumption that the series of code (1)-(4) must be executed within a
	half period of the 32-bit counter.

	#define cnt32_to_63(cnt_lo) \
	       static volatile u32 __m_cnt_hi = 0; \
	       cnt32_to_63_t __x; \
	(1)    __x.hi = __m_cnt_hi; \
	(2)    __x.lo = (cnt_lo); \
	(3)    if (unlikely((s32)(__x.hi ^ __x.lo) < 0)) \
	(4)            __m_cnt_hi = __x.hi = (__x.hi ^ 0x80000000) + (__x.hi >> 31);

	If a task is preempted while executing this series of code and is
	scheduled again after more than half a period of the 32-bit counter has
	elapsed, the task may corrupt __m_cnt_hi.
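
To make the hazard concrete, here is a rough user-space simulation of the interleaving MEI describes.  It's only a sketch: the names are mine, the constants are picked to force the race, and it is not kernel code.

	#include <stdint.h>
	#include <stdio.h>

	static uint32_t m_cnt_hi;	/* plays the role of __m_cnt_hi */

	int main(void)
	{
		/* Task A performs steps (1) and (2): it snapshots __m_cnt_hi and
		 * the counter just after the counter has crossed bit 31. */
		uint32_t a_hi = m_cnt_hi;	/* step (1): reads 0 */
		uint32_t a_lo = 0x80000001u;	/* step (2): just past the halfway point */

		/* Task A is preempted here for more than half a counter period.
		 * Other callers keep m_cnt_hi up to date in the meantime: */
		m_cnt_hi = 0x80000000u;	/* advanced when the crossing is noticed */
		m_cnt_hi = 1;		/* advanced again once the counter wraps past zero */

		/* Task A resumes and finishes steps (3) and (4) with its stale
		 * snapshot, overwriting the newer value: */
		if ((int32_t)(a_hi ^ a_lo) < 0)
			m_cnt_hi = (a_hi ^ 0x80000000u) + (a_hi >> 31);

		/* Prints 0x80000000 rather than 1: the extended counter will now
		 * jump backwards for every subsequent caller. */
		printf("m_cnt_hi = %#x (should be 1)\n", m_cnt_hi);
		return 0;
	}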

Their suggested remedy is:

	So I think it's better to disable interrupts around cnt32_to_63 and to
	ensure that the series of code is executed within a short period.
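
As far as I can tell, their remedy amounts to something like the fragment below at the call site (or the equivalent inside the macro itself); this is just my sketch of the idea, not their actual patch:

	unsigned long flags;

	local_irq_save(flags);
	tsc64.ll = cnt32_to_63(tsc) & 0x7fffffffffffffffULL;
	local_irq_restore(flags);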

I think this is excessive...  If we're sat there with interrupts disabled for
more than a half period (65s) then we've got other troubles.  I think
disabling preemption for the duration ought to be enough.  What do you think?
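
For what it's worth, the 65s figure is just 2^31 ticks at the rate the TSC counts.  The back-of-the-envelope sum below assumes a roughly 33MHz clock, which is what 65s implies, so treat the exact rate as an assumption on my part:

	#include <stdio.h>

	int main(void)
	{
		double hz = 33000000.0;			/* assumed TSC frequency */
		double half_period = 2147483648.0 / hz;	/* 2^31 ticks */

		printf("half period ~= %.0f seconds\n", half_period);	/* ~65 */
		return 0;
	}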

Now, I'm happy to put these calls in sched_clock() rather than in cnt32_to_63()
for my purposes (see attached patch).

MN10300: Prevent cnt32_to_63() from being preempted in sched_clock()

From: David Howells <dhowells redhat com>

Prevent cnt32_to_63() from being preempted in sched_clock() because it may
read its internal counter, get preempted, get delayed for more than half the
period of the 'TSC' and then write the internal counter, thus corrupting it.

Whilst some callers of sched_clock() have interrupts disabled or hold
spinlocks, not all do, and so preemption must be disabled here.

Note that sched_clock() is called from lockdep, but that shouldn't be a problem
because although preempt_disable() calls into lockdep, lockdep has a recursion
counter to deal with this.

Signed-off-by: David Howells <dhowells redhat com>

 arch/mn10300/kernel/time.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/mn10300/kernel/time.c b/arch/mn10300/kernel/time.c
index e460658..38f88bb 100644
--- a/arch/mn10300/kernel/time.c
+++ b/arch/mn10300/kernel/time.c
@@ -55,6 +55,9 @@ unsigned long long sched_clock(void)
 	unsigned long tsc, tmp;
 	unsigned product[3]; /* 96-bit intermediate value */
 
+	/* cnt32_to_63() is not safe with preemption */
+	preempt_disable();
+
 	/* read the TSC value
 	 */
 	tsc = 0 - get_cycles(); /* get_cycles() counts down */
@@ -65,6 +68,8 @@ unsigned long long sched_clock(void)
 	 */
 	tsc64.ll = cnt32_to_63(tsc) & 0x7fffffffffffffffULL;
 
+	preempt_enable();
+
 	/* scale the 64-bit TSC value to a nanosecond value via a 96-bit
 	 * intermediate
 	 */