RT-Thread Study Notes 2 - Mutex and Semaphore

Table of contents

1. Critical section protection

A critical section is a shared resource that only one thread may access at a time. It can be a specific hardware device, a variable, or a buffer. Multiple threads must access it in a mutually exclusive way.

1.1 Method 1: Protect the critical section by stopping scheduling

Lock the scheduler: after locking, the kernel will not switch to other threads; interrupts are still serviced.

/* Lock the scheduler; no thread switch happens until it is unlocked */
rt_enter_critical();
/* critical operation */
rt_exit_critical();

Disable interrupts: since all thread scheduling is driven by interrupts, the system can no longer schedule once interrupts are disabled, so the current thread cannot be preempted by any other thread.

rt_base_t level;
/* Disable interrupts */
level = rt_hw_interrupt_disable();
/* critical operation */
/* Restore interrupts */
rt_hw_interrupt_enable(level);

1.2 Method 2: Protect the critical section with a mutual-exclusion mechanism

Semaphore, Mutex

2. Semaphore

The code in an embedded system consists mainly of threads and ISRs. While running, they sometimes need to synchronize (run in a predetermined order), sometimes need mutually exclusive access to resources (only one thread may access a resource at a time), and sometimes need to exchange data. These mechanisms are collectively called IPC (inter-process communication). The IPC mechanisms in RT-Thread are the semaphore, mutex, event, mailbox, and message queue; through IPC, multiple threads (and ISRs) can work together in a coordinated way.

A semaphore is a lightweight kernel object used to solve synchronization problems between threads. A thread can take or release it to achieve synchronization or mutual exclusion. Each semaphore object has a semaphore value and a thread waiting queue. The value corresponds to the number of instances (resources) of the semaphore object: a value of N means N semaphore instances (resources) are available. When the value is 0, a thread that takes the semaphore is suspended on the semaphore's waiting queue.

2.1 Definition of semaphore

struct rt_semaphore
{
    struct rt_ipc_object parent; /* inherited IPC object */
    rt_uint16_t value;           /* the value of the semaphore */
};

Static semaphore:  struct rt_semaphore static_sem;
Dynamic semaphore: rt_sem_t dynamic_sem;

typedef struct rt_semaphore *rt_sem_t;

2.2 Operation of semaphore

  1. Init and detach
Static semaphore:
rt_err_t rt_sem_init(rt_sem_t sem, const char *name, rt_uint32_t value, rt_uint8_t flag)
rt_err_t rt_sem_detach(rt_sem_t sem) // Detach an unused static semaphore from the system
  2. Create and delete
Dynamic semaphore (check whether the return value is RT_NULL):
rt_sem_t rt_sem_create(const char *name, rt_uint32_t value, rt_uint8_t flag) // flag decides how waiting threads are queued: RT_IPC_FLAG_FIFO (first come, first served) or RT_IPC_FLAG_PRIO (ordered by thread priority)
rt_err_t rt_sem_delete(rt_sem_t sem) // Also frees the system resources
  3. Take a semaphore
rt_err_t rt_sem_take(rt_sem_t sem, rt_int32_t time) // time is in system ticks (at the default 100 Hz tick rate, one tick is 10 ms); RT_WAITING_FOREVER = -1 waits forever; on timeout it returns -RT_ETIMEOUT. Do not call this in an interrupt, because it may suspend the caller; it can only be called from a thread
rt_err_t rt_sem_trytake(rt_sem_t sem) // Equivalent to taking with time = 0: returns immediately without waiting
  4. Release a semaphore
rt_err_t rt_sem_release(rt_sem_t sem) // Can be called from either a thread or an interrupt, because it never suspends the caller

3. Producer and consumer issues

Two threads, a producer and a consumer, share an initially empty buffer of fixed size n. The producer produces one piece of data at a time; it may put data into the buffer only when the buffer is not full, otherwise it must wait. The consumer may take data out of the buffer, one piece at a time, only when the buffer is not empty, otherwise it must wait. The core of the problem is:

  1. Ensure the producer never writes data while the buffer is full
  2. Never let the consumer try to fetch data from an empty buffer

Solving the producer-consumer problem means solving both mutual exclusion and synchronization between threads. The buffer is a critical resource: at any moment only one producer may put a message in, or one consumer may take a message out, so there is a mutual-exclusion problem to solve. At the same time, producer and consumer are cooperating: the consumer can consume only after the producer has produced, so there is also a synchronization problem to solve.

/* Producer thread entry */
void producer_thread_entry(void* parameter)
{
    int cnt = 0;

    /* run 100 times */
    while (cnt < 100)
    {
        /* get a vacancy */
        rt_sem_take(&sem_empty, RT_WAITING_FOREVER);

        /* modify the contents of the array, so lock first */
        rt_sem_take(&sem_lock, RT_WAITING_FOREVER);
        array[set % MAXSEM] = cnt + 1;
        rt_kprintf("the producer generates a number: %d\n", array[set % MAXSEM]);
        set++;
        rt_sem_release(&sem_lock);

        /* post a full */
        rt_sem_release(&sem_full);
        cnt++;

        /* pause for a while */
        rt_thread_mdelay(20);
    }

    rt_kprintf("the producer exit!\n");
}

/* Consumer thread entry */
void consumer_thread_entry(void* parameter)
{
    rt_uint32_t no;
    rt_uint32_t sum;

    /* the nth thread, passed in via the entry parameter */
    no = (rt_uint32_t)parameter;

    sum = 0;
    while (1)
    {
        /* get a full */
        rt_sem_take(&sem_full, RT_WAITING_FOREVER);

        /* critical section, lock before operating */
        rt_sem_take(&sem_lock, RT_WAITING_FOREVER);
        sum += array[get % MAXSEM];
        rt_kprintf("the consumer[%d] get a number: %d\n", no, array[get % MAXSEM]);
        get++;
        rt_sem_release(&sem_lock);

        /* release a vacancy */
        rt_sem_release(&sem_empty);

        /* the producer produces 100 numbers and stops; the consumer stops accordingly */
        if (get == 100) break;

        /* pause for a while */
        rt_thread_mdelay(50);
    }

    rt_kprintf("the consumer[%d] sum is %d\n", no, sum);
    rt_kprintf("the consumer[%d] exit!\n", no);
}

int semaphore_producer_consumer_init(void)
{
    /* Initialize the 3 semaphores */
    rt_sem_init(&sem_lock , "lock",  1,      RT_IPC_FLAG_FIFO);
    rt_sem_init(&sem_empty, "empty", MAXSEM, RT_IPC_FLAG_FIFO);
    rt_sem_init(&sem_full , "full",  0,      RT_IPC_FLAG_FIFO);

    /* create thread 1 */
    producer_tid = rt_thread_create("producer",
                                    producer_thread_entry, RT_NULL, /* entry producer_thread_entry, entry parameter RT_NULL */
                                    THREAD_STACK_SIZE, THREAD_PRIORITY - 1, THREAD_TIMESLICE);
    if (producer_tid != RT_NULL)
        rt_thread_startup(producer_tid);
    else
        tc_stat(TC_STAT_END | TC_STAT_FAILED);

    /* create thread 2 */
    consumer_tid = rt_thread_create("consumer",
                                    consumer_thread_entry, RT_NULL, /* entry consumer_thread_entry, entry parameter RT_NULL */
                                    THREAD_STACK_SIZE, THREAD_PRIORITY + 1, THREAD_TIMESLICE);
    if (consumer_tid != RT_NULL)
        rt_thread_startup(consumer_tid);
    else
        tc_stat(TC_STAT_END | TC_STAT_FAILED);

    return 0;
}

4. Mutex

A mutex control block is a data structure used by the operating system to manage mutexes.

4.1 Mutex Control Block

struct rt_mutex
{
    struct rt_ipc_object parent;  /* inherited IPC object */
    rt_uint16_t value;            /* only two values: locked and unlocked */
    rt_uint8_t original_priority; /* original priority of the owner thread, saved for priority inheritance */
    rt_uint8_t hold;              /* how many times the owner thread has taken the mutex */
    struct rt_thread *owner;      /* handle of the thread that currently owns the lock */
};

Static mutex: struct rt_mutex static_mutex;
Dynamic Mutex: rt_mutex_t dynamic_mutex;

4.2 The operation of mutex

  1. Init and detach
Static mutex:
rt_err_t rt_mutex_init(rt_mutex_t mutex, const char *name, rt_uint8_t flag); // flag: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO
rt_err_t rt_mutex_detach(rt_mutex_t mutex);
  2. Create and delete
Dynamic mutex:
rt_mutex_t rt_mutex_create(const char *name, rt_uint8_t flag);
rt_err_t rt_mutex_delete(rt_mutex_t mutex);
  3. Take a mutex
Can only be called from a thread; the same thread may take the same mutex multiple times, incrementing its hold member each time
rt_err_t rt_mutex_take(rt_mutex_t mutex, rt_int32_t time) // RT_WAITING_FOREVER = -1
  4. Release a mutex
Can only be called from a thread, never from an interrupt; only the thread that took the mutex may release it
rt_err_t rt_mutex_release(rt_mutex_t mutex)

4.3 The difference between mutex and semaphore

  1. A semaphore can be released by any thread (or interrupt). Used for synchronization it works like a traffic light: a thread may run only when granted permission, and the emphasis is on the order of execution. A mutex can only be released by the thread holding it; only the thread that locked it has the key, and the emphasis is on ownership.
  2. Using a semaphore for mutual exclusion may lead to priority inversion; a mutex can solve the priority inversion problem through priority inheritance.

5. Thread priority inversion

When a high-priority thread tries to access a shared resource through a mutual-exclusion IPC object that is already held by a low-priority thread, and that low-priority thread is in turn preempted by other medium-priority threads, the high-priority thread ends up blocked behind many threads of lower priority. Its real-time behavior can then no longer be guaranteed.

5.1 Priority inheritance

In RT-Thread, the priority inversion problem is effectively addressed by the mutex's priority inheritance algorithm. Priority inheritance means raising the priority of the low-priority thread that occupies the shared resource to the highest priority among all threads waiting for that resource, so that it executes sooner and releases the shared resource sooner. When the low-priority thread releases the resource, its priority returns to its initial setting. While it holds the inherited priority, the thread cannot be preempted by any medium-priority thread, so the shared resource is not held hostage by them.

Priority inversion also reminds the programmer to keep the code section that accesses a shared resource under mutual exclusion as short as possible, so that low-priority threads finish their work quickly and free the shared resource.


References:

  1. RT-Thread Getting Started video tutorials (kernel)
  2. RT-Thread Documentation Center

Author: CrazyCatJack

Link to this article: https://www.cnblogs.com/CrazyCatJack/p/14408842.html

Copyright statement: Unless otherwise stated, all articles in this blog are licensed under CC BY-NC-SA. Please indicate the source when reposting!

