An example of mutex and spinlock

typedef struct {
    pthread_mutex_t mutex_hw;
    intrspin_t spinlock;
    pthread_mutex_t mutex_reset;
    volatile unsigned ref_count;
    volatile unsigned reset;
    volatile unsigned enabled[2];
} share_context_t;

typedef struct {
    pthread_mutex_t mutex;
    share_context_t *share_ctx;
} private_context_t;

private_context_t *private_ctx;
  • mutex is used to protect access to the contents of private_ctx, possibly between different threads, as long as they all have access to private_ctx.
  • mutex_hw is used to protect access to critical sections (e.g. shared registers, shared variables) between threads, not necessarily in the same context.
  • spinlock is used to protect access to critical sections (e.g. shared registers, shared variables) between threads and interrupt handlers (a short sketch of all three roles follows this list).
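To make the roles concrete, here is a minimal sketch of how each lock might be taken. This is assumed usage for illustration only; the helper function names are made up, and only the struct members come from the definitions above.

#include <pthread.h>
#include <sys/neutrino.h>   /* intrspin_t, InterruptLock(), InterruptUnlock() (QNX) */

static void update_private_state(private_context_t *ctx)
{
    /* thread vs. thread, same private context */
    pthread_mutex_lock(&ctx->mutex);
    /* ... modify fields of *ctx ... */
    pthread_mutex_unlock(&ctx->mutex);
}

static void program_shared_hw(private_context_t *ctx)
{
    /* thread vs. thread, possibly from different contexts */
    pthread_mutex_lock(&ctx->share_ctx->mutex_hw);
    /* ... access registers/variables shared by both contexts ... */
    pthread_mutex_unlock(&ctx->share_ctx->mutex_hw);
}

static void touch_isr_shared_data(private_context_t *ctx)
{
    /* thread vs. interrupt handler */
    InterruptLock(&ctx->share_ctx->spinlock);
    /* ... access data that the ISR also touches ... */
    InterruptUnlock(&ctx->share_ctx->spinlock);
}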
New issue: the FIFO controller, which is shared by two contexts, needs to be reset on the fly to fix some problems. We want to reset the FIFO controller when a context is being started.
Solution: resetting the FIFO controller while the other context is using it is not safe, so I introduced extra members to share_context_t:
  • ref_count counts the number of clients using the specified hardware block (FIFO).
  • enabled indicates whether the other context is enabled/active.
  • reset: if we are the only user, we reset the hardware block immediately; if there is another user and it is active (enabled), we set “reset” to “1” and ask the other user to reset the FIFO when it is safe: in the ISR handler when EOF is received (1st implementation), or when the wait for EOF times out (added in the 2nd implementation).
1. 1st implementation
client_start() {
    …..
    atomic_add(&ctx->share_ctx->ref_count, 1);
    // what if someone else changes ref_count at this point? …. No problem.
    // what if someone else changes enable here? … No problem.

    pthread_mutex_lock(&ctx->share_ctx->mutex_reset);
    if (ctx->share_ctx->ref_count == 1 || ctx->share_ctx->enable == 0) {
        // what if the other user changes ref_count or enable here? The reset might
        // have an unpredictable effect on the other user. Therefore the "if" check
        // has to be protected by the mutex (mutex_reset).
        do reset;
        pthread_mutex_unlock(&ctx->share_ctx->mutex_reset);
    } else {
        atomic_set(&ctx->share_ctx->reset, 1);  // the other user will do the reset
                                                // when it's safe
        pthread_mutex_unlock(&ctx->share_ctx->mutex_reset);
        // we don't need the mutex to check whether share_ctx->reset has been cleared.
        while (ctx->share_ctx->reset) {
            delay(1);
        }
    }
    …..
}
 
In isr_handler:
    if (EOF IRQ is set) {
        if (ctx->share_ctx->reset) {
            do reset;
            atomic_clr(&ctx->share_ctx->reset, 1);
        }
    }
2. 2nd implementation
The EOF interrupt might not arrive under error conditions, so we also need to do the reset when the wait for EOF times out. This adds complexity: the “reset” bit might be cleared either by the ISR handler or by a thread when the timeout occurs.
check_n_reset()
{
    pthread_mutex_lock(&ctx->share_ctx->mutex_reset);
    InterruptLock(&ctx->share_ctx->spinlock);
    if (ctx->share_ctx->ref_count == 1 || ctx->share_ctx->enable == 0 || ctx->share_ctx->reset) {
        // what if the EOF interrupt is raised here? The reset would be done twice,
        // so the spinlock is required to serialize against the ISR.
        do reset;
        InterruptUnlock(&ctx->share_ctx->spinlock);
        pthread_mutex_unlock(&ctx->share_ctx->mutex_reset);
    } else {
        atomic_set(&ctx->share_ctx->reset, 1);  // the other user will do the reset
                                                // when it's safe
        InterruptUnlock(&ctx->share_ctx->spinlock);
        pthread_mutex_unlock(&ctx->share_ctx->mutex_reset);
        // we don't need the mutex to check whether share_ctx->reset has been cleared.
        while (ctx->share_ctx->reset) {
            delay(1);
        }
    }
    …..
}
client_start()
{
    atomic_add(&ctx->share_ctx->ref_count, 1);
    check_n_reset();
}
 isr_handler,….
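The ISR side of the 2nd implementation is not shown above; below is a hedged sketch of what it could look like, assuming a QNX-style handler attached with InterruptAttach(). eof_irq_pending() and do_fifo_reset() are hypothetical placeholders. The point is that the spinlock makes the check-and-clear of “reset” atomic with respect to check_n_reset(), so the FIFO cannot be reset twice.

/* sketch only; assumes <sys/neutrino.h> and <atomic.h> (QNX) */
const struct sigevent *isr_handler(void *area, int id)
{
    private_context_t *ctx = area;

    if (eof_irq_pending()) {                        /* hypothetical: read/ack the EOF IRQ status */
        InterruptLock(&ctx->share_ctx->spinlock);   /* serialize with check_n_reset() */
        if (ctx->share_ctx->reset) {
            do_fifo_reset();                        /* hypothetical: the actual FIFO reset */
            atomic_clr(&ctx->share_ctx->reset, 1);
        }
        InterruptUnlock(&ctx->share_ctx->spinlock);
    }
    return NULL;                                    /* no event to deliver in this sketch */
}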
 
 

DMA issue

The driver configures the DMA to write 2 frames/fields into one buffer before switching to the next buffer. Under some conditions (e.g. the fields being received are out of order: top1, bottom1, bottom2, top3, bottom3, …), we don't want the DMA to continue writing top3 into the same buffer (buf_1) that already contains bottom2. What I did: disable the corresponding CSI2 context after bottom2 has been saved in buf_1, provide a new physical address (buf_2) to the CSI2 context, and re-enable the CSI2 context. By doing this, I hoped the CSI2 DMA would store top3 into buf_2. However, in my tests the CSI2 DMA would continue writing the next field (top3) into buf_1 most of the time. What we expected:

 ---------    ---------    ---------
|   t1    |  |         |  |   t3    |
 ---------    ---------    ---------
|   b1    |  |   b2    |  |   b3    |
 ---------    ---------    ---------
   buf_0        buf_1        buf_2

What really happens:

 ---------    ---------    ---------
|   t1    |  |   t3    |  |   t4    |
 ---------    ---------    ---------
|   b1    |  |   b2    |  |   b3    |
 ---------    ---------    ---------
   buf_0        buf_1        buf_2

The solution is to disable the corresponding interface, which forces the DMA to write to the new buffer once the interface is re-enabled.
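A rough sketch of the working sequence is below. The three helpers are hypothetical placeholders (the real CSI2 register accessors are driver-specific); only the ordering is the point.

#include <stdint.h>

static void restart_on_new_buffer(private_context_t *ctx, uint64_t buf_2_paddr)
{
    /* Disabling only the CSI2 context was not enough: the DMA kept writing
     * into the old buffer. Disabling the whole interface forces it to latch
     * the new address when re-enabled. */
    csi2_interface_disable(ctx);                 /* hypothetical */
    csi2_context_set_buffer(ctx, buf_2_paddr);   /* hypothetical: program the new physical address */
    csi2_interface_enable(ctx);                  /* hypothetical: DMA now writes into buf_2 */
}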

Capture: frame drop

Scenario: capturing runs in the background and keeps making captured buffers available to applications. The application gets one frame, displays it, gets another frame, displays it, and so on.

In theory: 60 fps capture + 60 fps display.

In practice: the display rate is slightly slower than the capture rate.

If the application asks for a captured buffer only after the previous one has been displayed, frames are dropped on the capture side;

if the application doesn't wait for the previous frame to be displayed, frames are dropped on the display side.
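For illustration, the two application loops might look like the sketch below. capture_get_frame(), display_post() and wait_displayed() are hypothetical placeholders, not a real API.

typedef struct frame frame_t;
frame_t *capture_get_frame(void);      /* blocks for the next captured buffer */
void     display_post(frame_t *f);     /* queue the frame for display */
void     wait_displayed(frame_t *f);   /* block until the frame has been shown */

/* Variant 1: wait for the previous frame to be displayed before fetching the
 * next buffer. The application consumes slightly slower than 60 fps, so
 * frames are dropped inside the capture part. */
void loop_wait(void) {
    for (;;) {
        frame_t *f = capture_get_frame();
        display_post(f);
        wait_displayed(f);
    }
}

/* Variant 2: don't wait for display. Capture is never throttled, but the
 * display cannot show every buffer, so frames are dropped on the display side. */
void loop_nowait(void) {
    for (;;) {
        frame_t *f = capture_get_frame();
        display_post(f);
    }
}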

[Screenshot: 2015-06-02 10.14.57 AM]

VSYNC interval is 17.183ms

[Screenshot: 2015-06-02 10.28.11 AM]

Capture interval is 16.652ms

[Screenshot: 2015-06-02 10.29.56 AM]