Linux Camera Driver 2 - UVC


The second part of the Linux camera driver study: analyze and write a driver for a USB camera, i.e. the USB Video Class (UVC), in detail.

This time we write a driver for a real camera, so there is a bit more content.
First, the USB interface is briefly introduced to understand what the USB device descriptors mean in Linux.
Then the USB camera driver that comes with the kernel is ported, which also verifies that the camera works.
Finally, for learning purposes, a camera driver is written sentence by sentence and then summarized.

1. UVC Basics

UVC is short for USB Video Class, i.e. video devices on the USB interface.
UVC is actually easy to understand: it is essentially V4L2 + USB.
In the previous virtual camera driver, the data source was virtual data constructed by ourselves; now the V4L2 data source is real camera video data transmitted over USB.
Besides video data, the camera also tells the driver its own characteristics (such as which resolutions it supports), and the driver configures the camera (specifying which resolution to use).

1.1 USB Basics

USB is divided into host and device sides. Generally speaking, the USB system in a PC is the host, while a USB mouse or flash drive is a typical USB device.
To ease development, USB defines a set of standards: any host that supports USB can support USB mice and flash drives from any manufacturer, and as long as a device complies with the corresponding standard, there is no need to redesign the driver; it can be used directly.
The following briefly lists the USB device classes. Ideally, a USB host should fully support these classes, and the devices must also meet the requirements of the USB specification.

| Base Class | Descriptor Usage | Description |
| --- | --- | --- |
| 00h | Device | Use class information in the Interface Descriptors |
| 01h | Interface | Audio |
| 02h | Both | Communications and CDC Control |
| 03h | Interface | HID (Human Interface Device) |
| 05h | Interface | Physical |
| 06h | Interface | Image |
| 07h | Interface | Printer |
| 08h | Interface | Mass Storage |
| 09h | Device | Hub |
| 0Ah | Interface | CDC-Data |
| 0Bh | Interface | Smart Card |
| 0Dh | Interface | Content Security |
| 0Eh | Interface | Video |
| 0Fh | Interface | Personal Healthcare |
| 10h | Interface | Audio/Video Devices |
| 11h | Device | Billboard Device Class |
| 12h | Interface | USB Type-C Bridge Class |
| DCh | Both | Diagnostic Device |
| E0h | Interface | Wireless Controller |
| EFh | Both | Miscellaneous |
| FEh | Interface | Application Specific |
| FFh | Both | Vendor Specific |

Among these, UVC belongs to the Video class (0Eh).

To better describe the characteristics of USB devices, USB introduces the concept of a device architecture.
From this point of view, a USB device can be considered to be composed of configurations, interfaces, and endpoints:
a USB device may contain one or more configurations, each configuration may contain one or more interfaces, and each interface may contain several endpoints.

Also, a driver is bound to a USB interface, not to the entire device.

Reflected in the driver, these are individual structures corresponding to devices, configurations, interfaces, and endpoints.
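
As a concrete illustration, here is a minimal sketch of how these levels appear in the Linux USB core (assuming only a valid struct usb_interface *intf, as passed to probe(); the function name is hypothetical):
{% codeblock lang:c %}
/* Hedged sketch: walking the device/configuration/interface/endpoint
 * hierarchy as the Linux USB core exposes it. */
#include <linux/usb.h>

static void sketch_dump_interface(struct usb_interface *intf)
{
    struct usb_device *udev = interface_to_usbdev(intf);   /* the device */
    struct usb_host_interface *alt = intf->cur_altsetting; /* current alternate setting */
    int i;

    printk("device %04x:%04x, interface %d has %d endpoint(s)\n",
           le16_to_cpu(udev->descriptor.idVendor),
           le16_to_cpu(udev->descriptor.idProduct),
           alt->desc.bInterfaceNumber, alt->desc.bNumEndpoints);

    for (i = 0; i < alt->desc.bNumEndpoints; i++) {
        struct usb_endpoint_descriptor *ep = &alt->endpoint[i].desc;

        printk("  endpoint 0x%02x, wMaxPacketSize %u\n",
               ep->bEndpointAddress, le16_to_cpu(ep->wMaxPacketSize));
    }
}
{% endcodeblock %}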

The USB Video Class extends the standard USB protocol; the extended parts are called Class Specific.

  • Standard device descriptor:
    {% codeblock lang:c %}
    typedef struct Device_Descriptor
    {
        uchar bLength;            //Size of the device descriptor in bytes
        uchar bDescriptorType;    //Device descriptor type number
        uint  bcdUSB;             //USB version number
        uchar bDeviceClass;       //Device class assigned by USB
        uchar bDeviceSubClass;    //Device subclass assigned by USB
        uchar bDeviceProtocol;    //Device protocol code assigned by USB
        uchar bMaxPacketSize0;    //Maximum packet size of endpoint 0
        uint  idVendor;           //Vendor ID
        uint  idProduct;          //Product ID
        uint  bcdDevice;          //Device release number
        uchar iManufacturer;      //Manufacturer string index
        uchar iProduct;           //Product string index
        uchar iSerialNumber;      //Device serial number string index
        uchar bNumConfigurations; //Number of possible configurations
    }Device_Descriptor,*pDevice_Descriptor;
    {% endcodeblock %}

  • Configuration descriptor:
    {% codeblock lang:c %}
    typedef struct Configuration_Descriptor
    {
        uchar bLength;             //Size of the configuration descriptor in bytes
        uchar bDescriptorType;     //Configuration descriptor type number
        uint  wTotalLength;        //Total size of all data returned for this configuration
        uchar bNumInterfaces;      //Number of interfaces supported by this configuration
        uchar bConfigurationValue; //Value used by the Set_Configuration request
        uchar iConfiguration;      //String index describing this configuration
        uchar bmAttributes;        //Power supply mode selection
        uchar bMaxPower;           //Maximum current the device draws from the bus
    }Configuration_Descriptor,*pConfiguration_Descriptor;
    {% endcodeblock %}

  • Interface descriptor:
    {% codeblock lang:c %}
    typedef struct Interface_Descriptor
    {
        uchar bLength;            //Size of the interface descriptor in bytes
        uchar bDescriptorType;    //Interface descriptor type number
        uchar bInterfaceNumber;   //Number of this interface
        uchar bAlternateSetting;  //Alternate setting number of this interface descriptor
        uchar bNumEndPoints;      //Number of endpoints used by this interface, excluding endpoint 0
        uchar bInterfaceClass;    //Interface class
        uchar bInterfaceSubClass; //Interface subclass
        uchar bInterfaceProtocol; //Interface class protocol
        uchar iInterface;         //String index describing this interface
    }Interface_Descriptor,*pInterface_Descriptor;
    {% endcodeblock %}

  • Endpoint descriptor:
    {% codeblock lang:c %}
    typedef struct EndPoint_Descriptor
    {
        uchar bLength;          //Size of the endpoint descriptor in bytes
        uchar bDescriptorType;  //Endpoint descriptor type number
        uchar bEndpointAddress; //Endpoint address and direction (in/out)
        uchar bmAttributes;     //Transfer type of the endpoint
        uint  wMaxPacketSize;   //Maximum packet size the endpoint can send/receive
        uchar bInterval;        //Interval at which the host polls the endpoint
    }EndPoint_Descriptor,*pEndPoint_Descriptor;
    {% endcodeblock %}

1.2 UVC hardware model

First, download the standard protocol materials from the USB official website: Video Class -> Video Class 1.5 document set (.zip format, 6.58 MB).
From USB_Video_Example 1.5.pdf, the hardware model is divided into two parts: the VC interface and the VS interface.

The VC interface is used for control; internally it is divided into multiple Units and Terminals. Units do internal processing, and Terminals link the inside to the outside;
the VS interface is used for transmission, and contains the endpoint for video data transfer as well as information such as the video formats supported by the camera;

Each video function has exactly one VideoControl interface and can have multiple VideoStreaming interfaces;

An interface is equivalent to a logical USB device.
Now imagine that when the USB camera is plugged into the host, it is equivalent to plugging in two devices at the same time; a function can select one of the two devices to operate on.
One device is used for control, such as setting brightness;
the other device is used to obtain data, select a supported format, and so on.
In this way, control and data are basically separated: to control, operate the control interface; to get data, go through the data interface.

  • The VideoControl Interface is used for control, such as setting brightness.
    Internally it has multiple Units/Terminals (a Unit/Terminal is called an entity in the code).
    It can be accessed through functions like uvc_query_ctrl():
ret = uvc_query_ctrl(dev /*which USB device*/, SET_CUR, ctrl->entity->id /*which unit/terminal*/, dev->intfnum /*which interface: the VC interface*/, ctrl->info->selector, uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT), ctrl->info->size);
  • The VideoStreaming Interface is used to obtain video data, and also to select a format/frame (a VS interface may support multiple formats, one format supports multiple frames, and a frame describes information such as resolution).
    It can be accessed through functions like __uvc_query_ctrl():
ret = __uvc_query_ctrl(video->dev /*which USB device*/, SET_CUR, 0, video->streaming->intfnum /*which interface: the VS interface*/, probe ? VS_PROBE_CONTROL : VS_COMMIT_CONTROL, data, size, uvc_timeout_param);

The parameter VS_PROBE_CONTROL here is just a negotiation attempt, not a real setting; the actual setting must use the parameter VS_COMMIT_CONTROL.
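
Putting the two together, the usual negotiation order looks roughly like this (a hedged sketch; dev, intfnum, data, size, and timeout stand for the same arguments as in the __uvc_query_ctrl call above, where data holds a struct uvc_streaming_control):
{% codeblock lang:c %}
/* Hedged sketch of the PROBE/COMMIT negotiation order. */
__uvc_query_ctrl(dev, SET_CUR, 0, intfnum, VS_PROBE_CONTROL,  data, size, timeout); /* propose parameters */
__uvc_query_ctrl(dev, GET_CUR, 0, intfnum, VS_PROBE_CONTROL,  data, size, timeout); /* read back what the device accepted */
__uvc_query_ctrl(dev, SET_CUR, 0, intfnum, VS_COMMIT_CONTROL, data, size, timeout); /* commit: only now does it take effect */
{% endcodeblock %}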

1.3 USB Descriptors

As mentioned earlier, the camera tells the driver its own characteristics (such as which resolutions it supports), and these characteristics are placed in the USB descriptors.
The previously downloaded USB_Video_Example 1.5.pdf document contains an example of the UVC descriptor hierarchy:

Plug the camera into a Ubuntu host and execute lsusb to list the current USB devices:

Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 012: ID 1b3b:2977 iPassion Technology Inc. 
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

From the manufacturer name iPassion Technology Inc., we know that the USB device with ID 1b3b:2977 is the camera.
Then use -v (show detailed USB device information) and -d (show only the device with the specified vendor and product ID) to obtain the details of that device:

lsusb -v -d 1b3b:2977

This prints a lot of information; after simplifying and removing the detailed data, the general framework that remains is as follows:

Device Descriptor:
  Configuration Descriptor:
    Interface Association:
    Interface Descriptor:
      VideoControl Interface Descriptor:
      VideoControl Interface Descriptor:
      Endpoint Descriptor:
    Interface Descriptor:
      VideoStreaming Interface Descriptor:
      VideoStreaming Interface Descriptor:
    Interface Descriptor:
      Endpoint Descriptor:
    Interface Descriptor:
      Endpoint Descriptor:
    
	Interface Association:
    Interface Descriptor:
      AudioControl Interface Descriptor:
      AudioControl Interface Descriptor:
    Interface Descriptor:
      AudioStreaming Interface Descriptor:
      AudioStreaming Interface Descriptor:
      Endpoint Descriptor:
        AudioControl Endpoint Descriptor:
    Interface Descriptor:
      AudioStreaming Interface Descriptor:
      AudioStreaming Interface Descriptor:
      Endpoint Descriptor:
        AudioControl Endpoint Descriptor:     

You can see that there is a configuration descriptor under the device descriptor, and under the configuration descriptor there are two Interface Associations (IAD), one for video and one for audio.
At the same level there are several interface descriptors, under which are the VC, VS, and endpoint descriptors, corresponding exactly to the framework above.

Take any one of these descriptors:

      VideoStreaming Interface Descriptor:
        bLength                            30
        bDescriptorType                    36
        bDescriptorSubtype                  7 (FRAME_MJPEG)
        bFrameIndex                         1
        bmCapabilities                   0x01
          Still image supported
        wWidth                            640
        wHeight                           480
        dwMinBitRate                  2304000
        dwMaxBitRate                  2304000
        dwMaxVideoFrameBufferSize       76800
        dwDefaultFrameInterval         333333
        bFrameIntervalType                  1
        dwFrameInterval( 0)            333333

From this we know that the camera supports a format called FRAME_MJPEG with a resolution of 640x480, among other information.
Therefore, the characteristics of the camera can be fully determined from the above series of descriptors; the specific characteristics will be explained later together with the driver.

2. Kernel camera driver

To learn UVC, the steps are roughly as follows:
First, analyze how the UVC driver that comes with the kernel is implemented;
then get the camera at hand working — maybe the in-kernel driver can be used directly, or it may need porting;
finally, try to write a simplified version of the UVC driver to understand it in depth.

2.1 Analysis of the kernel camera driver

In the 4.13.9 kernel, the UVC driver is in the drivers/media/usb/uvc/ directory; uvc_driver.c is analyzed below.
a. Construct usb_driver
{% codeblock lang:c %}
struct uvc_driver {
    struct usb_driver driver;
};

struct uvc_driver uvc_driver = {
    .driver = {
        .name         = "uvcvideo",
        .probe        = uvc_probe,
        .disconnect   = uvc_disconnect,
        .suspend      = uvc_suspend,
        .resume       = uvc_resume,
        .reset_resume = uvc_reset_resume,
        .id_table     = uvc_ids,
        .supports_autosuspend = 1,
    },
};
{% endcodeblock %}
The .id_table lists which USB devices the driver supports.

b. Set usb_driver

uvc_probe
    kzalloc //allocate the device structure
        uvc_register_chains  
            uvc_register_terms  
                uvc_register_video
                    vdev->v4l2_dev = &dev->vdev; //set video_device
                    vdev->fops = &uvc_fops; 
                    vdev->ioctl_ops = &uvc_ioctl_ops;
                    vdev->release = uvc_release;
                    video_register_device //register video_device

c. Register usb_driver

uvc_init
    usb_register

As you can see, the operations in the probe() function are the same as those in the previous vivid driver,
just with a USB "shell" added around the outside.

The core of the driver is still fops and ioctl_ops. The implementation of these two operation functions is analyzed below.
First is v4l2_file_operations:
{% codeblock lang:c %}
const struct v4l2_file_operations uvc_fops = {
    .owner          = THIS_MODULE,
    .open           = uvc_v4l2_open,
    .release        = uvc_v4l2_release,
    .unlocked_ioctl = video_ioctl2,
#ifdef CONFIG_COMPAT
    .compat_ioctl32 = uvc_v4l2_compat_ioctl32,
#endif
    .read           = uvc_v4l2_read,
    .mmap           = uvc_v4l2_mmap,
    .poll           = uvc_v4l2_poll,
#ifndef CONFIG_MMU
    .get_unmapped_area = uvc_v4l2_get_unmapped_area,
#endif
};
{% endcodeblock %}
There are open(), release(), ioctl2, read, mmap, and poll — the same as in the previous virtual driver.

The most important of these is video_ioctl2, which uses video_usercopy() to obtain the parameters passed in from user space and calls __video_do_ioctl() to look up the corresponding uvc_ioctl_ops entry in the v4l2_ioctls[] array.

The implementation of each function of uvc_ioctl_ops is written in the code later, and explained one by one.

The focus of UVC driver is:

  • Parsing of descriptors;
  • Attribute control: set through VideoControl Interface;
  • Format selection: set by VideoStreaming Interface;
  • Data acquisition: obtained through URB of VideoStreaming Interface;

2.2 Porting the kernel camera driver

I use the 2-in-1 camera provided by Baiwen.com, which has both a CMOS interface and a USB interface.
When using the USB interface, a DSP chip on the board converts the raw YUV data into MJPEG-compressed data.

It basically conforms to the UVC specification, with some minor differences. These are described in the documentation provided by the manufacturer, and the driver can be modified accordingly.
The changes mainly add a usb_device_id and modify the data processing. For details refer to the patch; the modified code is on GitHub.
After compiling, first load the in-kernel uvcvideo and its dependencies, then remove the in-kernel driver, install the modified driver, and run the xawtv application:

sudo modprobe uvcvideo
sudo rmmod uvcvideo
sudo insmod uvcvideo.ko
xawtv -noalsa
  • Effect:

3. Write UVC driver

The UVC driver is a bit long, so I try to decompose it by function into several parts and write them one by one.
When the USB camera is plugged into the host, two interfaces (VC and VS) are created; then the USB descriptors are fetched and parsed to set up the camera (such as resolution and format); then buffers are allocated, the camera is started, data is captured from the camera over USB and saved into a buffer for use by the application.
The whole process is roughly like this, so the writing is divided into the following parts:

  • 1. Registration (USB and Video)
  • 2. Data format settings related
  • 3. Buffer operation related
  • 4. Attribute correlation (take brightness control as an example)
  • 5.URB
  • 6. Start/Stop
  • 7. Other operation functions (mmap and poll)
  • 8. Test/Effect

3.1 Registration (USB and Video)

In the entry function, we first set up a USB driver framework by defining a usb_driver:
{% codeblock lang:c %}
static struct usb_driver my_uvc_driver = {
    .name       = "my_uvc",
    .probe      = my_uvc_probe,
    .disconnect = my_uvc_disconnect,
    .id_table   = my_uvc_ids,
};
{% endcodeblock %}
The id_table contains only the VC and VS interfaces we need, so the camera's Audio interfaces will not be matched:
{% codeblock lang:c %}
static struct usb_device_id my_uvc_ids[] =
{
    /* Generic USB Video Class */
    { USB_INTERFACE_INFO(USB_CLASS_VIDEO, 1, 0) }, /* VideoControl Interface */
    { USB_INTERFACE_INFO(USB_CLASS_VIDEO, 2, 0) }, /* VideoStreaming Interface */
    {}
};
{% endcodeblock %}
Here the USB_INTERFACE_INFO macro parameters are bInterfaceClass (interface class), bInterfaceSubClass (interface subclass), and bInterfaceProtocol (interface class protocol) in the previous interface descriptor.
{% codeblock lang:c %}
#define USB_INTERFACE_INFO(cl, sc, pr) \
    .match_flags = USB_DEVICE_ID_MATCH_INT_INFO, \
    .bInterfaceClass = (cl), \
    .bInterfaceSubClass = (sc), \
    .bInterfaceProtocol = (pr)
{% endcodeblock %}
The first parameter passed in here is the Video class, the second selects VC or VS, and the third is "no protocol". These settings come from the camera's USB descriptors:

    Interface Descriptor:
      ......
      bInterfaceClass        14 Video
      bInterfaceSubClass      1 Video Control
      bInterfaceProtocol      0 
      ......   
      
    Interface Descriptor:
      ......
      bInterfaceClass        14 Video
      bInterfaceSubClass      2 Video Streaming
      bInterfaceProtocol      0 
      ......   

Once a usb_device_id in the driver matches one reported by the camera, the probe() function is called; since there are two matching interfaces here, it is called twice.

In probe(), we first get the usb_device, which is used to operate on the USB device, and record the numbers of the two interfaces so that each interface can be addressed separately later.
Then, still in probe(), we do the usual allocation, setup, and registration of the video_device.
{% codeblock lang:c %}
static int my_uvc_probe(struct usb_interface *intf, const struct usb_device_id *id)
{
    static int cnt = 0;
    int ret;

    printk("enter %s\n", __func__);

    //The two usb_device_id matches cause probe() to be called twice, while the video_device only needs to be created once
    cnt++;

    my_uvc_udev = interface_to_usbdev(intf); //get the usb device
    if (cnt == 1) //record the interface numbers
        my_uvc_control_intf = intf->cur_altsetting->desc.bInterfaceNumber;
    else if (cnt == 2)
        my_uvc_streaming_intf = intf->cur_altsetting->desc.bInterfaceNumber;

    if (cnt == 2)
    {
        /* 1. Allocate a video_device structure */
        my_uvc_vdev = video_device_alloc();
        if (NULL == my_uvc_vdev)
        {
            printk("Failed to alloc video device.\n");
            return -ENOMEM;
        }

        /* 2. Set up */
        my_uvc_vdev->release   = my_uvc_release;
        my_uvc_vdev->fops      = &my_uvc_fops;
        my_uvc_vdev->ioctl_ops = &my_uvc_ioctl_ops;
        my_uvc_vdev->v4l2_dev  = &v4l2_dev;

        /* 3. Register */
        ret = video_register_device(my_uvc_vdev, VFL_TYPE_GRABBER, -1);
        if (ret < 0)
        {
            printk("Failed to video_register_device.\n");
            return ret;
        }
        else
            printk("video_register_device ok.\n");

        /* Decide which setting to use in order to determine the bandwidth */
        my_uvc_try_streaming_params(&my_uvc_params); //try parameters
        my_uvc_get_streaming_params(&my_uvc_params); //read back parameters
        my_uvc_set_streaming_params(&my_uvc_params); //set parameters
    }

    return 0;
}
{% endcodeblock %}

The corresponding disconnect is also called twice, but performs the release operations only once:
{% codeblock lang:c %}
static void my_uvc_disconnect(struct usb_interface *intf)
{
    static int cnt = 0;

    printk("enter %s\n", __func__);

    cnt++;
    if (cnt == 2)
    {
        video_unregister_device(my_uvc_vdev);
        video_device_release(my_uvc_vdev);
    }
}
{% endcodeblock %}

Now the registration of the USB device and the Video device is complete, and the operation functions are bound to the Video device; the remaining work is to flesh out those operation functions.

3.2 Data format settings related

The Video device above was bound to fops; there are five main operation functions:
{% codeblock lang:c %}
static const struct v4l2_file_operations my_uvc_fops =
{
    .owner          = THIS_MODULE,
    .open           = my_uvc_open,
    .release        = my_uvc_close,
    .mmap           = my_uvc_mmap,
    .unlocked_ioctl = video_ioctl2, /* V4L2 ioctl handler */
    .poll           = my_uvc_poll,
};
{% endcodeblock %}

There is nothing special about open() and close(); they are routine:
{% codeblock lang:c %}
static int my_uvc_open(struct file *file)
{
    printk("enter %s\n", __func__);

    return 0;
}

static int my_uvc_close(struct file *file)
{
    printk("enter %s\n", __func__);

    my_uvc_vidioc_streamoff(NULL, NULL, 0);

    return 0;
}
{% endcodeblock %}
When the device is closed, vidioc_streamoff is called to stop data collection.

mmap() and poll() involve buffer operations and will be discussed later.
Let's first cover a few of the simpler operation functions in ioctl_ops.

The first is vidioc_querycap(), which identifies the device as a camera.
It fills in the v4l2_capability structure: driver (driver name), card (card name), version (version number), capabilities (capabilities of the device as a whole), and device_caps (capabilities accessible through this node).
{% codeblock lang:c %}
static int my_uvc_vidioc_querycap(struct file *file, void *priv, struct v4l2_capability *cap)
{
    struct video_device *vdev = video_devdata(file);

    printk("enter %s\n", __func__);

    strlcpy(cap->driver, "my_uvc_video", sizeof(cap->driver));
    strlcpy(cap->card, vdev->name, sizeof(cap->card));

    cap->version = 4;

    cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING | V4L2_CAP_DEVICE_CAPS;
    cap->device_caps  = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING | V4L2_CAP_DEVICE_CAPS;

    return 0;
}
{% endcodeblock %}

Next is vidioc_enum_fmt_vid_cap(), which enumerates the formats supported by the camera.
From the USB camera's descriptors, the camera supports only one format, MJPEG, so only index 0 is accepted.
We need to set description (format name), pixelformat (the corresponding pixel format), and type (v4l2_buf_type) in the v4l2_fmtdesc structure.
{% codeblock lang:c %}
static int my_uvc_vidioc_enum_fmt_vid_cap(struct file *file, void *priv, struct v4l2_fmtdesc *f)
{
    printk("enter %s\n", __func__);

    /* According to the camera's descriptors, only one format is supported: VS_FORMAT_MJPEG */
    if (f->index >= 1)
        return -EINVAL;

    strcpy(f->description, MY_UVC_FMT); //supported format
    f->pixelformat = V4L2_PIX_FMT_MJPEG;
    f->type        = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    return 0;
}
{% endcodeblock %}

After that is vidioc_g_fmt_vid_cap(), which returns the current camera data format.
This one is simple: just return my_uvc_format directly.
{% codeblock lang:c %}
static int my_uvc_vidioc_g_fmt_vid_cap(struct file *file, void *priv, struct v4l2_format *f)
{
    printk("enter %s\n", __func__);

    memcpy(f, &my_uvc_format, sizeof(my_uvc_format));

    return 0;
}
{% endcodeblock %}

Then there is vidioc_try_fmt_vid_cap(), which tries a camera data format without actually setting it.
First it checks whether type and pixelformat in the incoming v4l2_format structure are correct.
Then it sets width, height, and field (data scanning mode: non-interlaced) in the v4l2_pix_format structure,
as well as sizeimage (size of one frame). That value is taken from the dwMaxVideoFrameSize printed in probe(): the theoretical size of one frame is width*height = 320*240 = 76800, which is less than dwMaxVideoFrameSize = 77312, so presumably a maximum-size frame also carries some extra data.
The colorspace of most webcams is V4L2_COLORSPACE_SRGB.
priv (private data) depends on the pixelformat.
All the values set here should in theory be derived by parsing the USB descriptors; to simplify, the code skips the parsing and assigns them directly. In real development, to support multiple cameras, they should be read and parsed.
{% codeblock lang:c %}
static int my_uvc_vidioc_try_fmt_vid_cap(struct file *file, void *priv, struct v4l2_format *f)
{
    printk("enter %s\n", __func__);

    if (f->type != V4L2_BUF_TYPE_VIDEO_CAPTURE || f->fmt.pix.pixelformat != V4L2_PIX_FMT_MJPEG)
        return -EINVAL;

    /* Adjust the width and height of the format */
    f->fmt.pix.width  = my_uvc_wWidth; //resolutions supported in the descriptors: 640x480, 320x240, 160x120
    f->fmt.pix.height = my_uvc_wHeight;

    f->fmt.pix.field = V4L2_FIELD_NONE;

    /* Calculate bytesperline, sizeimage */
    //bBitsPerPixel = my_uvc_bBitsPerPixel; //lsusb: bBitsPerPixel
    //f->fmt.pix.bytesperline = (f->fmt.pix.width * bBitsPerPixel) >> 3;
    f->fmt.pix.sizeimage = dwMaxVideoFrameSize; //f->fmt.pix.height * f->fmt.pix.bytesperline;

    f->fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
    f->fmt.pix.priv       = 0; /* private data, depends on pixelformat */

    return 0;
}
{% endcodeblock %}

The last is vidioc_s_fmt_vid_cap(), which sets the camera's data format.
It first tries the incoming v4l2_format; if it is not supported, an error is returned, otherwise it is assigned directly to my_uvc_format.
{% codeblock lang:c %}
static int my_uvc_vidioc_s_fmt_vid_cap(struct file *file, void *priv, struct v4l2_format *f)
{
    int ret;

    printk("enter %s\n", __func__);

    ret = my_uvc_vidioc_try_fmt_vid_cap(file, NULL, f);
    if (ret < 0)
        return ret;

    memcpy(&my_uvc_format, f, sizeof(my_uvc_format));

    return 0;
}
{% endcodeblock %}
At this point, the setting of the camera data format my_uvc_format is complete.
The application layer can now operate on the camera data format, e.g. choose a data format or resolution. Of course, this driver does not really offer a choice; everything is assigned directly.
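
For reference, here is a hedged user-space sketch of how an application would exercise these handlers (fd is assumed to be an open file descriptor on the /dev/videoX node):
{% codeblock lang:c %}
/* Hedged user-space sketch: selecting the only format this driver accepts. */
struct v4l2_format fmt;

memset(&fmt, 0, sizeof(fmt));
fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
fmt.fmt.pix.width       = 320;   /* one of the supported resolutions */
fmt.fmt.pix.height      = 240;

if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)  /* ends up in vidioc_s_fmt_vid_cap */
    perror("VIDIOC_S_FMT");
{% endcodeblock %}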

3.3 Buffer operation related

Buffer handling is the difficult, error-prone part.
The first step is requesting buffers, vidioc_reqbufs(). The application's ioctl calls this function to allocate several buffers, from which the application will later read video data.
The driver first obtains count (the number of buffers) from the incoming v4l2_requestbuffers structure. The size of each buffer is the sizeimage (size of one frame) of my_uvc_format above, with the length page-aligned.

  • PAGE_ALIGN
    PAGE_ALIGN in the kernel rounds a size up to the next 4K page boundary.
    For example:
    if the incoming size is 4000 bytes, the result is 4096 bytes;
    if the incoming size is 4096 bytes, the result is 4096 bytes;
    if the incoming size is 5000 bytes, the result is 8192 bytes.

Source code:
#define PAGE_SIZE 4096
#define PAGE_MASK (~(PAGE_SIZE-1))
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)

In essence: PAGE_ALIGN(x) = ((x + 4095) & (~4095))

Next, check whether mem (the memory pointer) in the my_uvc_queue structure is non-NULL. If it is, buffers were already allocated, and the memory must be freed and my_uvc_queue cleared first.
If the requested number of buffers is 0, nothing needs to be allocated and we simply return.
Then allocate the buffers: all of them are allocated as one block of size nbuffers * bufsize; if the allocation fails, reduce the number of buffers and try again.
Now we have one block of memory whose start address is mem; clear my_uvc_queue to initialize it.
Then initialize two queues (doubly linked lists): mainqueue is used by the application layer to read data, and irqqueue is used by the driver side that produces data.
Then, for each buffer, set the fields of its v4l2_buffer structure in turn: index, m.offset (offset), length (size), type, sequence (sequence count), field (scanning mode), memory (memory type), and flags; then set the state of the my_uvc_buffer and initialize its wait queue.
Finally, record the start address, count, and size of the buffers in my_uvc_q.
{% codeblock lang:c %}
/* The APP calls this ioctl to let the driver allocate several bufs; the APP will later read video data from these bufs */
static int my_uvc_vidioc_reqbufs(struct file *file, void *priv, struct v4l2_requestbuffers *p)
{
    unsigned int i;
    void *mem = NULL;
    int nbuffers = p->count; //number of bufs
    int bufsize = PAGE_ALIGN(my_uvc_format.fmt.pix.sizeimage); //buf size, page-aligned

    printk("enter %s\n", __func__);

    if (my_uvc_q.mem)    //if bufs were allocated before, release them first
    {
        vfree(my_uvc_q.mem);
        memset(&my_uvc_q, 0, sizeof(my_uvc_q));
        my_uvc_q.mem = NULL;
    }

    if (nbuffers == 0)   //nothing to allocate, just return
        return 0;

    for (; nbuffers > 0; --nbuffers)          //reduce the number of bufs until allocation succeeds
    {
        mem = vmalloc_32(nbuffers * bufsize); //all bufs are allocated as one block
        if (mem != NULL)
            break;
    }

    if (mem == NULL)
        return -ENOMEM;

    memset(&my_uvc_q, 0, sizeof(my_uvc_q)); //clear my_uvc_q, initialize

    INIT_LIST_HEAD(&my_uvc_q.mainqueue); //initialize the two queues, see my_uvc_vidioc_qbuf
    INIT_LIST_HEAD(&my_uvc_q.irqqueue);

    for (i = 0; i < nbuffers; ++i)
    {
        my_uvc_q.buffer[i].buf.index    = i;  //index
        my_uvc_q.buffer[i].buf.m.offset = i * bufsize; //offset
        my_uvc_q.buffer[i].buf.length   = my_uvc_format.fmt.pix.sizeimage; //original size; PAGE_ALIGN alignment tested, no problem
        my_uvc_q.buffer[i].buf.type     = V4L2_BUF_TYPE_VIDEO_CAPTURE; //video capture device
        my_uvc_q.buffer[i].buf.sequence = 0;
        my_uvc_q.buffer[i].buf.field    = V4L2_FIELD_NONE;
        my_uvc_q.buffer[i].buf.memory   = V4L2_MEMORY_MMAP;
        my_uvc_q.buffer[i].buf.flags    = 0;
        my_uvc_q.buffer[i].state        = VIDEOBUF_IDLE; //after allocation the state is idle
        init_waitqueue_head(&my_uvc_q.buffer[i].wait); //initialize a wait queue
    }

    my_uvc_q.mem = mem;
    my_uvc_q.count = nbuffers;
    my_uvc_q.buf_size = bufsize;

    return nbuffers;
}
{% endcodeblock %}
In this way we get a my_uvc_queue structure; its array of my_uvc_buffer structures stores the information of each buffer, as sketched below.
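
Since the my_uvc_queue and my_uvc_buffer definitions are not listed in this post, here is a plausible sketch inferred from the fields used above — an assumption for illustration, not necessarily the author's exact definition:
{% codeblock lang:c %}
/* Hedged sketch: definitions inferred from the fields this post uses;
 * the author's actual structs may differ in detail. */
#include <linux/list.h>
#include <linux/usb.h>
#include <linux/videodev2.h>
#include <linux/wait.h>

#define MY_UVC_URBS_NUM 1            /* "one is fine", see section 3.5 */

struct my_uvc_buffer {
    struct v4l2_buffer buf;          /* copied to/from the application */
    int state;                       /* VIDEOBUF_IDLE/QUEUED/ACTIVE/DONE/ERROR */
    struct list_head stream;         /* node in mainqueue (application side) */
    struct list_head irq;            /* node in irqqueue (URB completion side) */
    wait_queue_head_t wait;          /* poll() sleeps here */
    unsigned int vma_use_count;      /* nonzero once mmap()ed */
};

struct my_uvc_queue {
    void *mem;                       /* one vmalloc_32() block holding all buffers */
    unsigned int count;              /* number of buffers */
    unsigned int buf_size;           /* page-aligned size of one buffer */
    struct my_uvc_buffer buffer[32]; /* 32 = VIDEO_MAX_FRAME */
    struct list_head mainqueue;      /* buffers the application dequeues */
    struct list_head irqqueue;       /* buffers waiting to be filled */
    /* URB bookkeeping, used in section 3.5 */
    struct urb *urb[MY_UVC_URBS_NUM];
    char *urb_buffer[MY_UVC_URBS_NUM];
    dma_addr_t urb_dma[MY_UVC_URBS_NUM];
    unsigned int urb_size;
};
{% endcodeblock %}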

Next is vidioc_querybuf(), which queries a buffer and returns its address information, etc.
First check whether index in the incoming v4l2_buffer exceeds the number of buffers.
Then copy the corresponding v4l2_buffer in my_uvc_q to the incoming v4l2_buf.
Then check vma_use_count in my_uvc_buffer to see whether the buffer has been mmap()ed, and update the flag bit accordingly.
Finally, convert the uvc state flags to V4L2 state flags; their values are in fact the same.
{% codeblock lang:c %}
/* Query buffer status, e.g. address information (the APP can then use mmap to map it) */
static int my_uvc_vidioc_querybuf(struct file *file, void *priv, struct v4l2_buffer *v4l2_buf)
{
    int ret = 0;

    printk("enter %s\n", __func__);

    if (v4l2_buf->index >= my_uvc_q.count)
    {
        ret = -EINVAL;
        goto done;
    }

    memcpy(v4l2_buf, &my_uvc_q.buffer[v4l2_buf->index].buf, sizeof(*v4l2_buf));

    if (my_uvc_q.buffer[v4l2_buf->index].vma_use_count) //update flags
        v4l2_buf->flags |= V4L2_BUF_FLAG_MAPPED;

#if 0
    switch (my_uvc_q.buffer[v4l2_buf->index].state) //convert uvc flags to V4L2 flags
    {
        case VIDEOBUF_ERROR:
        case VIDEOBUF_DONE:
            v4l2_buf->flags |= V4L2_BUF_FLAG_DONE;
            break;
        case VIDEOBUF_QUEUED:
        case VIDEOBUF_ACTIVE:
            v4l2_buf->flags |= V4L2_BUF_FLAG_QUEUED;
            break;
        case VIDEOBUF_IDLE:
        default:
            break;
    }
#endif

done:
    return ret;
}
{% endcodeblock %}
In this way, the relevant v4l2_buffer information is passed to the application layer, which queries each buffer through this function.

vidioc_qbuf() puts a buffer into the queues.
First it validates the incoming v4l2_buffer: its type, memory type, whether the index exceeds the maximum, and whether the corresponding my_uvc_buffer in my_uvc_q is idle.
Then it changes the my_uvc_buffer state to VIDEOBUF_QUEUED and initializes bytesused (amount of data in the buffer) in the v4l2_buffer to 0.
Then it adds the buffer's stream and irq nodes to the tails of the mainqueue and irqqueue respectively.

  • Queue mainqueue: used by the application layer. When a buffer in this queue contains data, the application takes data from the mainqueue;

  • Queue irqqueue: used by the code that produces data. When data is collected, the first buffer is taken from the irqqueue and filled with data;
    {% codeblock lang:c %}
    /* Put the incoming buffer into the queues; the low-level hardware code will put data into the buffers of this queue */
    static int my_uvc_vidioc_qbuf(struct file *file, void *priv, struct v4l2_buffer *v4l2_buf)
    {
        printk("enter %s\n", __func__);

        /* 0. The v4l2_buf passed in by the APP may be problematic, so validate it */
        if (v4l2_buf->type != V4L2_BUF_TYPE_VIDEO_CAPTURE || v4l2_buf->memory != V4L2_MEMORY_MMAP)
            return -EINVAL;

        if (v4l2_buf->index >= my_uvc_q.count)
            return -EINVAL;

        if (my_uvc_q.buffer[v4l2_buf->index].state != VIDEOBUF_IDLE)
            return -EINVAL;

        /* 1. Modify the state */
        my_uvc_q.buffer[v4l2_buf->index].state = VIDEOBUF_QUEUED;
        my_uvc_q.buffer[v4l2_buf->index].buf.bytesused = 0;

        /* 2. Put it into the 2 queues */
        //Queue 1: for use by the application layer
        //When a buffer in this queue has data, the application takes the data from the mainqueue
        list_add_tail(&my_uvc_q.buffer[v4l2_buf->index].stream, &my_uvc_q.mainqueue);

        //Queue 2: used by the code that produces data
        //When data is collected, the first buffer is taken from the irqqueue and filled
        list_add_tail(&my_uvc_q.buffer[v4l2_buf->index].irq, &my_uvc_q.irqqueue);

        return 0;
    }
    {% endcodeblock %}
    Through this function, the incoming v4l2_buffer is placed on the two queues.

Finally, data is taken out of the queue via vidioc_dqbuf().
Here the application layer wants the data, so it is taken from the mainqueue.
First check whether the mainqueue is empty; then, starting from my_uvc_q.mainqueue as the head, look up the stream node to get the address of the first my_uvc_buffer in the queue.
Then change the state of that my_uvc_buffer to VIDEOBUF_IDLE (idle),
delete the node from the queue, and finally copy it back to v4l2_buf.
{% codeblock lang:c %}
/* After the APP determines via poll/select that there is data, it takes the buf out of the mainqueue */
static int my_uvc_vidioc_dqbuf(struct file *file, void *priv, struct v4l2_buffer *v4l2_buf)
{
    struct my_uvc_buffer *get_buf;

    printk("enter %s\n", __func__);

    if (list_empty(&my_uvc_q.mainqueue))
        return -EINVAL;

    get_buf = list_first_entry(&my_uvc_q.mainqueue, struct my_uvc_buffer, stream); //take out the buf

    switch (get_buf->state)   //modify the state
    {
        case VIDEOBUF_ERROR:
            return -EIO;
        case VIDEOBUF_DONE:
            get_buf->state = VIDEOBUF_IDLE;
            break;
        case VIDEOBUF_IDLE:
        case VIDEOBUF_QUEUED:
        case VIDEOBUF_ACTIVE:
        default:
            return -EINVAL;
    }

    list_del(&get_buf->stream); //remove from the queue
    memcpy(v4l2_buf, &get_buf->buf, sizeof *v4l2_buf); //copy the returned data

    return 0;
}
{% endcodeblock %}

At this point, the basic buffer operations are complete: requesting, querying, and putting buffers into / taking them out of the queues.
The queues change as follows:

In the initial state, the mainqueue and the irqqueue both link all the queued buffers.
When data is produced, buf[0] is filled and unlinked from the irqqueue; buf[1] then becomes the first node of the irqqueue.
When data is consumed, buf[0] is read and unlinked from the mainqueue; buf[1] then becomes the first node of the mainqueue.
After processing, buf[0] is queued again, this time at the tail.
This cycle of enqueueing and dequeueing repeats; a user-space view of the cycle is sketched below.
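
To make this producer/consumer cycle concrete, here is a hedged user-space sketch of the standard V4L2 mmap capture loop that drives it (device node name, buffer count, and frame count are examples; error handling is omitted):
{% codeblock lang:c %}
/* Hedged user-space sketch: the standard V4L2 mmap capture loop that
 * drives reqbufs/querybuf/qbuf/dqbuf. */
#include <fcntl.h>
#include <poll.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    struct v4l2_requestbuffers req = { .count = 4, .type = type,
                                       .memory = V4L2_MEMORY_MMAP };
    void *bufs[4];
    unsigned int i, n;

    ioctl(fd, VIDIOC_REQBUFS, &req);                /* -> vidioc_reqbufs */
    for (i = 0; i < req.count; i++) {
        struct v4l2_buffer b = { .index = i, .type = type,
                                 .memory = V4L2_MEMORY_MMAP };
        ioctl(fd, VIDIOC_QUERYBUF, &b);             /* -> vidioc_querybuf */
        bufs[i] = mmap(NULL, b.length, PROT_READ, MAP_SHARED, fd, b.m.offset);
        ioctl(fd, VIDIOC_QBUF, &b);                 /* -> vidioc_qbuf */
    }
    ioctl(fd, VIDIOC_STREAMON, &type);

    for (n = 0; n < 100; n++) {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        struct v4l2_buffer b = { .type = type, .memory = V4L2_MEMORY_MMAP };

        poll(&pfd, 1, -1);                          /* wait for a filled buffer */
        ioctl(fd, VIDIOC_DQBUF, &b);                /* -> vidioc_dqbuf */
        /* bufs[b.index] now holds b.bytesused bytes of one frame */
        ioctl(fd, VIDIOC_QBUF, &b);                 /* hand the buffer back */
    }

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    return 0;
}
{% endcodeblock %}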

3.4 Attribute correlation (take brightness control as an example)

The next step is operating the camera properties. We take brightness control as an example: query, get, and set the camera's brightness.
As seen in the UVC hardware model above, the VC interface controls the camera, and the PU (Processing Unit) handles attribute control.
In the UVC 1.5 Class specification.pdf document, find the Processing Unit Descriptor; its bmControls field indicates which properties the camera supports:

A bit set to 1 indicates that the mentioned Control is supported for the video stream.
D0: Brightness
D1: Contrast
D2: Hue
D3: Saturation
D4: Sharpness
D5: Gamma
D6: White Balance Temperature
D7: White Balance Component
D8: Backlight Compensation
D9: Gain
......

Then find bmControls in the PROCESSING_UNIT entry of the VideoControl interface descriptor in this camera's USB descriptors. Its value is 0x0000053f, which corresponds to the attributes below; in particular, Brightness control is supported.
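
As a quick check of the arithmetic: 0x0000053f = 0b101_0011_1111, i.e. bits D0–D5, D8, and D10 are set — Brightness, Contrast, Hue, Saturation, Sharpness, Gamma, Backlight Compensation, and Power Line Frequency — exactly the list lsusb decodes below: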

      VideoControl Interface Descriptor:
        bLength                11
        bDescriptorType        36
        bDescriptorSubtype      5 (PROCESSING_UNIT)
      Warning: Descriptor too short
        bUnitID                 3
        bSourceID               1
        wMaxMultiplier          0
        bControlSize            2
        bmControls     0x0000053f
          Brightness
          Contrast
          Hue
          Saturation
          Sharpness
          Gamma
          Backlight Compensation
          Power Line Frequency

In the code, the attributes defined by the UVC specification are in the uvc_ctrls array (of type uvc_control_info) in uvc_ctrl.c.
{% codeblock lang:c %}
{
    .entity   = UVC_GUID_UVC_PROCESSING,   //which entity (e.g. PU) it belongs to
    .selector = UVC_PU_BRIGHTNESS_CONTROL, //for brightness
    .index    = 0,                         //corresponds to bit 0 of the Processing Unit Descriptor's bmControls
    .size     = 2,                         //the data is 2 bytes long
    .flags    = UVC_CTRL_FLAG_SET_CUR      //supports SET_CUR, GET_RANGE (GET_CUR, GET_MIN, GET_MAX), etc.
              | UVC_CTRL_FLAG_GET_RANGE
              | UVC_CTRL_FLAG_RESTORE,
},
{
    .entity   = UVC_GUID_UVC_PROCESSING,
    .selector = UVC_PU_CONTRAST_CONTROL,
    .index    = 1,
    .size     = 2,
    .flags    = UVC_CTRL_FLAG_SET_CUR
              | UVC_CTRL_FLAG_GET_RANGE
              | UVC_CTRL_FLAG_RESTORE,
},
{% endcodeblock %}
Now the documentation, the hardware, and the code all correspond to each other.

In addition, the uvc_ctrl_mappings array (of type uvc_control_mapping) describes the properties in more detail.
{% codeblock lang:c %}
{
    .id        = V4L2_CID_BRIGHTNESS,       //the application finds the attribute by this ID
    .name      = "Brightness",              //name
    .entity    = UVC_GUID_UVC_PROCESSING,   //which entity (e.g. PU) it belongs to
    .selector  = UVC_PU_BRIGHTNESS_CONTROL, //for brightness control
    .size      = 16,                        //how many bits the data occupies
    .offset    = 0,                         //where the data starts
    .v4l2_type = V4L2_CTRL_TYPE_INTEGER,    //attribute type (integer)
    .data_type = UVC_CTRL_DATA_TYPE_SIGNED, //data type (signed integer)
},
{
    .id        = V4L2_CID_CONTRAST,
    .name      = "Contrast",
    .entity    = UVC_GUID_UVC_PROCESSING,
    .selector  = UVC_PU_CONTRAST_CONTROL,
    .size      = 16,
    .offset    = 0,
    .v4l2_type = V4L2_CTRL_TYPE_INTEGER,
    .data_type = UVC_CTRL_DATA_TYPE_UNSIGNED,
},
{% endcodeblock %}

Therefore, the preparation for attribute control is:

1. Obtain the camera's descriptors, and learn which attributes it supports from the PU descriptor's bmControls;
2. Find the corresponding attribute in the uvc_ctrls array by entity and index, and learn which operations are supported (SET_CUR, GET_CUR, etc.);
3. Find the corresponding attribute in the uvc_ctrl_mappings array by ID, and learn its more detailed information (integer, etc.).

The first handler is the attribute query, vidioc_queryctrl(): the application passes in a v4l2_queryctrl structure, and the driver fills in its fields and returns.
The fields to set are id, type, name, flags, minimum, maximum, step, and default_value. The first few come from the preparations above and are assigned directly; the remaining ones require a USB transfer to the camera, using usb_control_msg(), to fetch the corresponding values.

  • usb_control_msg()
    Purpose: send a simple control message to the specified endpoint and wait for it to complete or time out.
    Parameters:
    dev: pointer to the target USB device (usb_device); <here, the my_uvc_udev obtained in probe()>
    pipe: the endpoint of the target USB device, created with usb_sndctrlpipe() (control OUT endpoint of the given device) or usb_rcvctrlpipe() (control IN endpoint); <here, the receive control pipe of my_uvc_udev>
    request: the USB request value of the control message; <here, GET_MIN, GET_MAX, GET_RES, or GET_DEF as needed>
    requesttype: the USB request type value of the control message; <here, USB_TYPE_CLASS (1<<5) | USB_RECIP_INTERFACE (1<<0) | USB_DIR_IN (1<<7)>
        D7: data direction: 0 means host to device; 1 means device to host;
        D6~5: command type: 0 standard; 1 class; 2 vendor; 3 reserved;
        D4~0: recipient: 0 device; 1 interface; 2 endpoint; 3 other;
    value: the USB message value of the control message; <here, the PU brightness control selector>
    index: the USB message index value of the control message; <here, the PU unit ID and the control interface number>
    data: pointer to the data to send/receive; <here, the received data>
    size: size of the buffer pointed to by data; <here, two bytes, bControlSize = 2>
    timeout: time to wait before timing out, in msecs; if 0, the function waits until the message completes; <here, 5 s>
    Return value:
    the number of bytes sent/received on success, otherwise a negative error value.

The data obtained through usb_control_msg() also needs to be converted into a value by calling my_uvc_get_le_value().
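These helpers are not shown in this post; a minimal sketch, assuming the fixed 2-byte little-endian signed layout used for brightness here (the kernel's uvc_ctrl.c equivalents handle arbitrary bit size and offset):
{% codeblock lang:c %}
/* Hedged sketch of the little-endian conversion helpers. */
static inline int my_uvc_get_le_value(const unsigned char *data)
{
    return (short)(data[0] | (data[1] << 8)); /* LE bytes -> signed 16-bit */
}

static inline void my_uvc_set_le_value(int value, unsigned char *data)
{
    data[0] = value & 0xff;        /* low byte first (little-endian) */
    data[1] = (value >> 8) & 0xff;
}
{% endcodeblock %}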

{% codeblock lang:c %}
int my_uvc_vidioc_queryctrl(struct file *file, void *fh, struct v4l2_queryctrl *ctrl)
{
    unsigned char data[2];

    if (ctrl->id != V4L2_CID_BRIGHTNESS)     //only the v4l2_queryctrl for brightness is handled here
        return -EINVAL;

    memset(ctrl, 0, sizeof *ctrl);           //initialize, clear
    ctrl->id   = V4L2_CID_BRIGHTNESS;        //set ID
    ctrl->type = V4L2_CTRL_TYPE_INTEGER;     //set attribute type (integer)
    strcpy(ctrl->name, "MY_UVC_BRIGHTNESS"); //set name
    ctrl->flags = 0;                         //default: settable, etc.

    /* Initiate USB transfers and get these values from the camera */
    //minimum value
    if (2 != usb_control_msg(my_uvc_udev, usb_rcvctrlpipe(my_uvc_udev, 0),
                             GET_MIN, USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_IN,
                             PU_BRIGHTNESS_CONTROL << 8, my_uvc_bUnitID << 8 | my_uvc_control_intf, data, 2, 5000))
        return -EIO;
    ctrl->minimum = my_uvc_get_le_value(data);

    //maximum value
    if (2 != usb_control_msg(my_uvc_udev, usb_rcvctrlpipe(my_uvc_udev, 0),
                             GET_MAX, USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_IN,
                             PU_BRIGHTNESS_CONTROL << 8, my_uvc_bUnitID << 8 | my_uvc_control_intf, data, 2, 5000))
        return -EIO;
    ctrl->maximum = my_uvc_get_le_value(data);

    //step size
    if (2 != usb_control_msg(my_uvc_udev, usb_rcvctrlpipe(my_uvc_udev, 0),
                             GET_RES, USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_IN,
                             PU_BRIGHTNESS_CONTROL << 8, my_uvc_bUnitID << 8 | my_uvc_control_intf, data, 2, 5000))
        return -EIO;
    ctrl->step = my_uvc_get_le_value(data);

    //default value
    if (2 != usb_control_msg(my_uvc_udev, usb_rcvctrlpipe(my_uvc_udev, 0),
                             GET_DEF, USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_IN,
                             PU_BRIGHTNESS_CONTROL << 8, my_uvc_bUnitID << 8 | my_uvc_control_intf, data, 2, 5000))
        return -EIO;
    ctrl->default_value = my_uvc_get_le_value(data);

    printk("Brightness: min = %d, max = %d, step = %d, default = %d\n", ctrl->minimum, ctrl->maximum, ctrl->step, ctrl->default_value);

    return 0;
}
{% endcodeblock %}

After that come vidioc_g_ctrl() (get the attribute) and vidioc_s_ctrl() (set the attribute). The operation is similar to the above: both use usb_control_msg() to issue control messages that receive/send the brightness data.
{% codeblock lang:c %}
int my_uvc_vidioc_g_ctrl(struct file *file, void *fh, struct v4l2_control *ctrl)
{
    unsigned char data[2];

    if (ctrl->id != V4L2_CID_BRIGHTNESS)
        return -EINVAL;

    if (2 != usb_control_msg(my_uvc_udev, usb_rcvctrlpipe(my_uvc_udev, 0),
                             GET_CUR, USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_IN,
                             PU_BRIGHTNESS_CONTROL << 8, my_uvc_bUnitID << 8 | my_uvc_control_intf, data, 2, 5000))
        return -EIO;

    ctrl->value = my_uvc_get_le_value(data);

    return 0;
}

int my_uvc_vidioc_s_ctrl(struct file *file, void *fh, struct v4l2_control *ctrl)
{
    unsigned char data[2];

    if (ctrl->id != V4L2_CID_BRIGHTNESS)
        return -EINVAL;

    my_uvc_set_le_value(ctrl->value, data);

    if (2 != usb_control_msg(my_uvc_udev, usb_sndctrlpipe(my_uvc_udev, 0),
                             SET_CUR, USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_OUT,
                             PU_BRIGHTNESS_CONTROL << 8, my_uvc_bUnitID << 8 | my_uvc_control_intf, data, 2, 5000))
        return -EIO;

    return 0;
}
{% endcodeblock %}
At this point, attribute operations such as camera brightness control are complete; other attribute controls are similar.
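
A hedged user-space sketch of exercising the three handlers above (fd is an open descriptor on the video node):
{% codeblock lang:c %}
/* Hedged user-space sketch: query the brightness range, then set and
 * read back the current value. */
struct v4l2_queryctrl qc;
struct v4l2_control c;

memset(&qc, 0, sizeof(qc));
qc.id = V4L2_CID_BRIGHTNESS;
ioctl(fd, VIDIOC_QUERYCTRL, &qc);   /* -> my_uvc_vidioc_queryctrl */

c.id    = V4L2_CID_BRIGHTNESS;
c.value = qc.default_value;
ioctl(fd, VIDIOC_S_CTRL, &c);       /* -> my_uvc_vidioc_s_ctrl */
ioctl(fd, VIDIOC_G_CTRL, &c);       /* -> my_uvc_vidioc_g_ctrl */
{% endcodeblock %}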

3.5 URB

The USB Request Block (URB) is the data structure the Linux USB stack uses to organize each data transfer request from a USB device driver.
That is, the USB transfer information is put into a URB and handed to the USB core, which parses the structure and performs the required data/control operations.

There are roughly three steps to the required operation:
1. Allocate usb_buffers as data buffers;
2. Allocate URB;
3. Set URB;

  • Why a usb_buffer?
    Viewed this way: the earlier my_uvc_buffer is the buffer the kernel uses to exchange data with user space, while urb_buffer is the buffer the kernel uses to exchange data with the USB device. In the end, roughly urb_buffer -> my_uvc_buffer, the USB device's data is passed up to the user layer.

First, the data size of each USB transfer is variable and determined by the device's capability; for example, the device may support transferring 100, 200, or 800 bytes at a time, and each transfer is called a packet.
Second, the data USB needs to transfer each time is likely larger than the largest packet (800 bytes here), so each transfer is split into N packets.
Therefore, the URB records the information of one complete transfer: how much is transferred each time, how many times, and the destination of the transfer.

{% codeblock lang:c %}
psize = my_uvc_wMaxPacketSize; //maximum bytes the isochronous endpoint can transfer at once; lsusb: wMaxPacketSize 0x0320 1x 800 bytes
size = my_uvc_params.dwMaxVideoFrameSize; //maximum length of one frame of data
npackets = DIV_ROUND_UP(size, psize); //how many packets (rounded up)
{% endcodeblock %}

  • psize is the size of each transfer, known from the camera's endpoint descriptor wMaxPacketSize (maximum packet size).
  • size is the size of one frame, already set in my_uvc_params; it matches the dwMaxVideoFrameSize printed in probe().
  • npackets is size/psize rounded up, i.e. how many packets are needed.
    Finally, size = psize * npackets updates size after rounding up, as worked through below.
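
For example, with the values from this camera's descriptors: psize = 800 bytes (wMaxPacketSize 0x0320) and size = dwMaxVideoFrameSize = 77312 bytes, so npackets = DIV_ROUND_UP(77312, 800) = 97, and the adjusted size becomes 97 * 800 = 77600 bytes.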

Then allocate MY_UVC_URBS_NUM (one is fine) urb_buffers and URBs.
The urb_buffer is allocated with usb_alloc_coherent(); its size is the adjusted size above, and we get both the buffer pointer and the DMA address.
The URB is allocated with usb_alloc_urb() for npackets packets, giving a pointer to the urb.

Correspondingly, if allocation fails, my_uvc_uninit_urbs() calls usb_free_coherent() and usb_free_urb() to release the memory, clears the pointers, and resets my_uvc_q.urb_size.

Then the URB is set up:

urb->dev: pointer to the target device; <here, the USB camera my_uvc_udev>
urb->pipe: the pipe to the target; <created with usb_rcvisocpipe() as an isochronous (ISO) pipe, with the endpoint address of the VS interface>
urb->transfer_flags: transfer flags; <URB_ISO_ASAP (schedule as soon as possible) and URB_NO_TRANSFER_DMA_MAP (use the DMA address of the buffer)>
urb->interval: transfer interval; <bInterval = 1 from the USB descriptor>
urb->transfer_buffer: the buffer to transfer into; <the my_uvc_q.urb_buffer[i] pointer obtained above>
urb->transfer_dma: the DMA physical address of the buffer; <the my_uvc_q.urb_dma[i] address obtained above>
urb->complete: the completion handler called after data is received; <written later>
urb->number_of_packets: how many packets this URB transfers; <the npackets calculated above>
urb->transfer_buffer_length: total data length; <the size calculated above>
urb->iso_frame_desc[j].offset: offset of each packet; <j * psize, the offset of packet j>
urb->iso_frame_desc[j].length: size of each packet; <the psize obtained above>
For the URB data structure, see the reference blog.

{% codeblock lang:c %}
static void my_uvc_uninit_urbs(void)
{
    unsigned int i;

    for (i = 0; i < MY_UVC_URBS_NUM; ++i)
    {
        //free the usb_buffers
        //urb_size is also checked: this function sets it to 0 at the end, so when streamoff calls in again, nothing is freed twice
        if (my_uvc_q.urb_buffer[i] && my_uvc_q.urb_size)
        {
            usb_free_coherent(my_uvc_udev, my_uvc_q.urb_size, my_uvc_q.urb_buffer[i], my_uvc_q.urb_dma[i]);
            my_uvc_q.urb_buffer[i] = NULL;
        }

        //free the urb
        if (my_uvc_q.urb[i])
        {
            usb_free_urb(my_uvc_q.urb[i]);
            my_uvc_q.urb[i] = NULL;
        }
    }
    my_uvc_q.urb_size = 0;
}

static int my_uvc_alloc_init_urbs(void)
{
    int i, j;
    int npackets;
    unsigned int size;
    unsigned short psize;

    struct urb *urb;

    psize = my_uvc_wMaxPacketSize; //maximum bytes the isochronous endpoint can transfer at once; lsusb: wMaxPacketSize
    size  = my_uvc_params.dwMaxVideoFrameSize; //maximum size of one frame of data
    npackets = DIV_ROUND_UP(size, psize); //how many packets (rounded up)
    if (npackets == 0)
        return -ENOMEM;

    size = my_uvc_q.urb_size = psize * npackets; //new size after rounding up

    for (i = 0; i < MY_UVC_URBS_NUM; ++i)
    {
        /* 1. Allocate usb_buffers */
        my_uvc_q.urb_buffer[i] = usb_alloc_coherent(my_uvc_udev, size,
                                                    GFP_KERNEL | __GFP_NOWARN, &my_uvc_q.urb_dma[i]);
        /* 2. Allocate the urb */
        my_uvc_q.urb[i] = usb_alloc_urb(npackets, GFP_KERNEL);

        if (!my_uvc_q.urb_buffer[i] || !my_uvc_q.urb[i]) //if allocation fails
        {
            my_uvc_uninit_urbs();

            return -ENOMEM;
        }
    }

    /* 3. Set up the urb */
    for (i = 0; i < MY_UVC_URBS_NUM; ++i)
    {
        urb = my_uvc_q.urb[i];

        urb->dev = my_uvc_udev;
        urb->pipe = usb_rcvisocpipe(my_uvc_udev, my_uvc_bEndpointAddress); //lsusb: bEndpointAddress 0x82
        urb->transfer_flags = URB_ISO_ASAP | URB_NO_TRANSFER_DMA_MAP;
        urb->interval = 1; //lsusb: bInterval 1
        urb->transfer_buffer = my_uvc_q.urb_buffer[i];
        urb->transfer_dma = my_uvc_q.urb_dma[i];
        urb->complete = my_uvc_video_complete; //completion (interrupt) handler
        urb->number_of_packets = npackets;
        urb->transfer_buffer_length = size;

        for (j = 0; j < npackets; ++j)
        {
            urb->iso_frame_desc[j].offset = j * psize;
            urb->iso_frame_desc[j].length = psize;
        }
    }

    return 0;
}
{% endcodeblock %}

Now the URBs are set up, including the target USB camera, the urb_buffer, and the other information. Once a URB is handed to the USB core, the core parses it and transfers data with the specified USB device; the data lands in urb_buffer, and when a packet arrives from the device an interrupt fires and the completion handler my_uvc_video_complete is executed.
The completion handler processes each packet in turn and puts the packet data into the buffer pointed to by the first node of the my_uvc_q.irqqueue. When enough packets have accumulated for one frame, it wakes up the sleeping application layer, which then fetches the data; finally, the handler resubmits the URB, the interrupt fires again, and the cycle repeats.

The following implements my_uvc_video_complete. First, the result of the previous URB transfer is checked:
{% codeblock lang:c %}
switch (urb->status) {
    case 0:            //Success
        break;
    case -ETIMEDOUT:   //Nak
    case -ECONNRESET:  //Kill
    case -ENOENT:
    case -ESHUTDOWN:
    default:           //Error
        return;
}
{% endcodeblock %}
Only urb->status == 0 indicates a successful transfer; otherwise the handler returns immediately.

Then, when it is judged that the my_uvc_q.irqqueue queue is not empty, the first buf is taken out, and the data obtained from the URB is placed in this buf:
{% codeblock lang:c %}
if (!list_empty(&my_uvc_q.irqqueue)) //Determine whether it is an empty queue
buf = list_first_entry(&my_uvc_q.irqqueue, struct my_uvc_buffer, irq);//Remove the first buf for subsequent data storage
else
buf = NULL;
{% endcodeblock %}

After that, each packet of the URB is processed:

1. If the packet status urb->iso_frame_desc[i].status is less than 0, skip this packet;
2. Compute the data source (in the URB), the length, and the destination address (the buf taken from the queue);
3. Check whether the packet data is valid: data[0] holds the header length and data[1] the error status;
4. Use the vendor-specific handling provided by the camera manufacturer to derive the fid; <see the fid introduction below>
5. If buf == NULL, the irqqueue was empty, so no further work is needed;
6. Check whether buf->state is already VIDEOBUF_ACTIVE (receiving data); if this is the first packet of the frame, change the state to VIDEOBUF_ACTIVE;
7. Set last_fid = fid, indicating that we have started receiving this frame;
8. The length to copy is the minimum of the packet's payload length (header removed) and the space remaining in buf;
9. Copy the URB packet payload into buf;
10. Apply the manufacturer's code to post-process the buf data;
11. When the packet payload exceeds the remaining space of buf, or the UVC_STREAM_EOF flag is seen and some data has been received, one frame is complete: change the buf state to VIDEOBUF_DONE;
12. Delete the node from the irqqueue; wake up the application layer to read this frame from the mainqueue; reset the mem offset and data_len, and take out the next buf.

That is the per-packet processing: check the packet status, detect frame boundaries, copy the payload into buf, apply the vendor-specific fix-ups, and, when a frame completes, fetch the next buf from the queue.

A word about fid (frame ID).
A continuous video is just a sequence of still pictures, or frames; at 30 fps, each second of video consists of 30 frames.
Over USB, one frame is split across several packets in a URB transfer, and the packets of successive frames arrive back to back. How does the driver know which packets belong to which frame?
The solution is to tag every packet with a one-bit frame ID: all consecutive packets of the same frame carry the same fid, and the bit toggles between 0 and 1 at each frame boundary. When the driver sees the fid flip, the current frame has ended and transmission of the next frame has begun.
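
For reference, these are the relevant bits of the UVC payload header. Each packet starts with a src[0]-byte header whose byte 1 (src[1]) is the bmHeaderInfo bitmap; the macro values below come from the UVC specification (the kernel's uvcvideo driver defines them the same way):
{% codeblock lang:c %}
/* bmHeaderInfo bit flags in byte 1 of the UVC payload header */
#define UVC_STREAM_FID (1 << 0) /* frame ID: toggles at every frame boundary */
#define UVC_STREAM_EOF (1 << 1) /* set on the last packet of a frame */
#define UVC_STREAM_ERR (1 << 6) /* payload error */

/* Standard frame-boundary test: a new frame begins when the FID bit flips */
fid = src[1] & UVC_STREAM_FID;
new_frame = (fid != last_fid);
last_fid = fid;
{% endcodeblock %}
The vendor-specific branch in the handler below exists presumably because this particular camera (ip2970/ip2977) does not toggle the FID bit reliably, so the driver looks for the JPEG start-of-image bytes in the payload instead and toggles fid by hand.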

At the end of the completion handler, the URB must be resubmitted so that the next interrupt can fire, more data can be copied, and the cycle continues.

{% codeblock lang:c %}
static void my_uvc_video_complete(struct urb *urb)
{
int ret, i;
unsigned char *mem;
unsigned char *src, *dest;
struct my_uvc_buffer *buf;
int len, maxlen, nbytes, data_len;

static int fid, last_fid = -1;

//Pointers used by the vendor-specific fix-up below to patch the image data in place
unsigned char *point_mem;
static unsigned char *mem_temp = NULL;

//Initial size of the scratch buffer used by the vendor fix-up
static unsigned int nArrayTemp_Size = 1000;

printk("enter %s\n", __func__);
printk("=======urb->status: %d ======\n", urb->status);

switch (urb->status) {
    case 0:             //Success 
        break;
    case -ETIMEDOUT:    //Nak
    case -ECONNRESET:   //Kill
    case -ENOENT:
    case -ESHUTDOWN:
    default:            //Error
        return;
}

/* Take the first buffer from the irqqueue queue */
if (!list_empty(&my_uvc_q.irqqueue)) //Determine if the queue is empty
    buf = list_first_entry(&my_uvc_q.irqqueue, struct my_uvc_buffer, irq);//Take out the first buf for subsequent storage of data
else
    buf = NULL;

for (i = 0; i < urb->number_of_packets; ++i) //A urb transfer contains number_of_packets sub-packets
{
    if (urb->iso_frame_desc[i].status < 0)
        continue;
    
    src = urb->transfer_buffer + urb->iso_frame_desc[i].offset; //data source
    len = urb->iso_frame_desc[i].actual_length; //Data length
    if(buf)
        dest = my_uvc_q.mem + buf->buf.m.offset + buf->buf.bytesused; //Destination address

    //Determine whether the data is valid; URB data meaning: data[0]->header length; data[1]->error status
    if ((len < 2) || (src[0] < 2) || (src[0] > len) || (src[1] & UVC_STREAM_ERR))
        continue;
    
    if (my_uvc_udev->descriptor.idVendor == 0x1B3B) /* ip2970/ip2977 */
    {
        if ( len >= 16 ) // have data in buffer
        {
            // Check from data[12] onward: the preceding bytes are the packet's own header
            if ( (src[12] == 0xFF && src[13] == 0xD8 && src[14] == 0xFF) ||
                    (src[12] == 0xD8 && src[13] == 0xFF && src[14] == 0xC4))
            {
                if(last_fid) //JPEG SOI found: toggle the FID by hand relative to the previous frame
                    fid &= ~UVC_STREAM_FID; 
                else
                    fid |= UVC_STREAM_FID;
            }
        }
    }
    else
    {
        fid = src[1] & UVC_STREAM_FID;
    }

    /* No free buffer: just record the payload FID bit and skip this packet */
    if (buf == NULL)
    {
        last_fid = fid; //keep tracking the FID so frame boundaries stay in sync
        continue;
    }

    if (buf->state != VIDEOBUF_ACTIVE)  //!= VIDEOBUF_ACTIVE, means "no data has been received before" 
    {
        if (fid == last_fid)
            continue; //same FID as the previous frame: leftover packets, wait for the FID to flip
        buf->state = VIDEOBUF_ACTIVE; //Indicates that it starts to receive the first data
    }

    last_fid = fid; //Start transmitting this frame of data


    len -= src[0]; //The length of the data after removing the header
    maxlen = buf->buf.length - buf->buf.bytesused; //How much data can be stored in the buffer
    nbytes = min(len, maxlen);

    //dest = my_uvc_q.mem + buf->buf.m.offset + buf->buf.bytesused; //destination address

    memcpy(dest, src + src[0], nbytes); //copy data
    
    buf->buf.bytesused += nbytes; //Update buf used space

    /* ip2970/ip2977 */
    if (my_uvc_udev->descriptor.idVendor == 0x1B3B)
    {
        if(mem_temp == NULL)
        {
            //Completion handlers run in atomic (interrupt) context: use GFP_ATOMIC, never GFP_KERNEL
            mem_temp = kmalloc(nArrayTemp_Size, GFP_ATOMIC);
        }
        else if(nArrayTemp_Size <= nbytes)  //If the payload outgrows the scratch buffer, reallocate it
        {
            kfree(mem_temp);
            nArrayTemp_Size = nbytes + 500; //grow enough to cover the current payload plus headroom
            mem_temp = kmalloc(nArrayTemp_Size, GFP_ATOMIC);
        }
        memset(mem_temp, 0x00, nArrayTemp_Size);

        // Points to the memory location where the data is stored
        point_mem = (unsigned char *)dest;
        if( *(point_mem) == 0xD8 && *(point_mem + 1) == 0xFF && *(point_mem + 2) == 0xC4)
        {
            memcpy( mem_temp + 1, point_mem, nbytes);
            mem_temp[0] = 0xFF;
            memcpy(point_mem, mem_temp, nbytes + 1);
        }
    }

    /* Payload larger than the remaining buffer space: treat the frame as complete */
    if (len > maxlen)
        buf->state = VIDEOBUF_DONE;

    /* Mark the buffer as done if the EOF marker is set. */
    if ((src[1] & UVC_STREAM_EOF) && (buf->buf.bytesused != 0))
        buf->state = VIDEOBUF_DONE;

    /* When a frame of data is received, delete the buffer from the irqqueue and wake up the process waiting for data */
    if ((buf->state == VIDEOBUF_DONE) || (buf->state == VIDEOBUF_ERROR))
    {
        list_del(&buf->irq);
        wake_up(&buf->wait);

        mem = my_uvc_q.mem + buf->buf.m.offset;
        data_len = buf->buf.bytesused;

        /* remove the next buf */
        if (!list_empty(&my_uvc_q.irqqueue))
            buf = list_first_entry(&my_uvc_q.irqqueue, struct my_uvc_buffer, irq);
        else
            buf = NULL;
    }

}

/* Submit URB again */
if ((ret = usb_submit_urb(urb, GFP_ATOMIC)) < 0)
{
    printk("Failed to resubmit video URB (%d).\n", ret);
}

}
{% endcodeblock %}

3.6 Start/Stop

When the application layer calls ioctl() with VIDIOC_STREAMON, vidioc_streamon() is invoked to start the camera collecting data.
This driver function has three main jobs:

  1. Set the USB camera parameters; (such as which video data format and resolution to use)
  2. Allocate and initialize the URBs; (by calling the my_uvc_alloc_init_urbs() function from earlier)
  3. Submit the URBs and wait for the completion interrupts;

A typical camera supports multiple formats, such as MJPEG and H264, and multiple resolutions for each.
Therefore, the camera must be configured over USB before the transfer starts, so that it returns the correct data later.

If we simply wrote our settings directly, the camera might not support them, and the data parsed later could be wrong. So we first send the desired parameters as a trial; the camera saves them and corrects them according to its own capabilities; we then read the corrected settings back, and only after that perform the real setting.
A my_uvc_streaming_control structure is defined here to hold the parameters throughout this negotiation.
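
As a reference, here is a sketch of this structure. The field layout mirrors the UVC probe/commit control block (26 bytes for UVC 1.0, 34 bytes for UVC 1.1), with the same names the in-kernel uvc_streaming_control uses; the definition is not shown in the original listing, so treat it as an assumption:
{% codeblock lang:c %}
struct my_uvc_streaming_control {
    __u16 bmHint;                   /* which fields to keep fixed (bit 0: dwFrameInterval) */
    __u8  bFormatIndex;             /* selected format (e.g. MJPEG) */
    __u8  bFrameIndex;              /* selected frame (resolution) */
    __u32 dwFrameInterval;          /* frame interval, in 100 ns units */
    __u16 wKeyFrameRate;
    __u16 wPFrameRate;
    __u16 wCompQuality;
    __u16 wCompWindowSize;
    __u16 wDelay;
    __u32 dwMaxVideoFrameSize;      /* largest possible frame, in bytes */
    __u32 dwMaxPayloadTransferSize; /* largest payload per transfer: determines the bandwidth */
    /* UVC 1.1 additions (present when bcdUVC >= 0x0110, hence size 34) */
    __u32 dwClockFrequency;
    __u8  bmFramingInfo;
    __u8  bPreferedVersion;         /* spelling as in the UVC spec */
    __u8  bMinVersion;
    __u8  bMaxVersion;
};
{% endcodeblock %}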

The first step is the trial setting. Based on the camera's UVC version, a data buffer of the matching size is allocated to carry the parameters over USB.
{% codeblock lang:c %}
//lsusb get: bcdUVC = 1.00; then BCD conversion, eg: 2.10 -> 210H, 1.50 -> 150H
size = my_uvc_bcdUVC >= 0x0110 ? 34 : 26; //Allocate buf size according to version
data = kmalloc(size, GFP_KERNEL);
if (data == NULL)
return -ENOMEM;
{% endcodeblock %}

Then the incoming my_uvc_streaming_control structure is cleared and the desired parameters are filled in; following the kernel UVC driver, cpu_to_le16()/cpu_to_le32() are then used to serialize my_uvc_streaming_control into data.
{% codeblock lang:c %}
memset(ctrl, 0, sizeof * ctrl);

ctrl->bmHint = 1;       //Keep dwFrameInterval unchanged
ctrl->bFormatIndex = 1; //Index of the format to use (here the first one)
ctrl->bFrameIndex  = bFrameIndex; //Resolution index: 640x480(1), 320x240(2), 160x120(3)
ctrl->dwFrameInterval = 333333;   //lsusb: dwFrameInterval(0) 333333, i.e. 30 frames per second

ctrl_to_data(ctrl, data, size);

{% endcodeblock %}
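
ctrl_to_data() itself is not shown in the original listing. Here is a minimal sketch of what it has to do, serializing the structure into the little-endian wire layout (offsets per the UVC spec; put_unaligned_le16()/le32() come from <asm/unaligned.h>, and data_to_ctrl() is the mirror image using get_unaligned_le16()/le32()):
{% codeblock lang:c %}
static void ctrl_to_data(struct my_uvc_streaming_control *ctrl,
                         unsigned char *data, unsigned short size)
{
    memset(data, 0, size);
    put_unaligned_le16(ctrl->bmHint, &data[0]);
    data[2] = ctrl->bFormatIndex;
    data[3] = ctrl->bFrameIndex;
    put_unaligned_le32(ctrl->dwFrameInterval, &data[4]);
    put_unaligned_le16(ctrl->wKeyFrameRate, &data[8]);
    put_unaligned_le16(ctrl->wPFrameRate, &data[10]);
    put_unaligned_le16(ctrl->wCompQuality, &data[12]);
    put_unaligned_le16(ctrl->wCompWindowSize, &data[14]);
    put_unaligned_le16(ctrl->wDelay, &data[16]);
    put_unaligned_le32(ctrl->dwMaxVideoFrameSize, &data[18]);
    put_unaligned_le32(ctrl->dwMaxPayloadTransferSize, &data[22]);
    /* when size == 34 (UVC 1.1), bytes 26..33 carry dwClockFrequency,
     * bmFramingInfo, bPreferedVersion, bMinVersion and bMaxVersion */
}
{% endcodeblock %}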

Finally, usb_control_msg() is called to send the data to the camera. Its parameters were explained in detail in the earlier brightness-control section; there the VC interface was used, whereas here it is the VS interface.
This is not the real setting yet, so the request passed in is VS_PROBE_CONTROL:
{% codeblock lang:c %}
ret = usb_control_msg(my_uvc_udev, usb_sndctrlpipe(my_uvc_udev, 0),
SET_CUR, USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_OUT,
VS_PROBE_CONTROL << 8, 0 << 8 | my_uvc_streaming_intf, data, size, 5000);
kfree(data);
{% endcodeblock %}

After the trial setting over USB, read back the camera's corrected parameters and save them into the my_uvc_streaming_control structure.
{% codeblock lang:c %}
static int my_uvc_get_streaming_params(struct my_uvc_streaming_control *ctrl)
{
int ret = 0;
unsigned char *data;
unsigned short size;

//lsusb get: bcdUVC=1.00; then BCD conversion, eg: 2.10 -> 210H, 1.50 -> 150H
size = my_uvc_bcdUVC >= 0x0110 ? 34 : 26; //Allocate buf size based on version
data = kmalloc(size, GFP_KERNEL);
if (data == NULL)
    return -ENOMEM;

//Get camera parameters through usb
ret = usb_control_msg(my_uvc_udev, usb_rcvctrlpipe(my_uvc_udev, 0),
                      GET_CUR, USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_IN,
                      VS_PROBE_CONTROL << 8, 0 << 8 | my_uvc_streaming_intf, data, size, 5000); 
if (ret < 0)
    goto done;

//Return camera parameters
data_to_ctrl(data, ctrl, size);

done:
kfree(data);

return ret;

}
{% endcodeblock %}

Finally, write the corrected parameters back to the camera; this guarantees that the parameters now set are ones the camera actually accepts.
This is the real setting, so the request passed in is VS_COMMIT_CONTROL:
{% codeblock lang:c %}
static int my_uvc_set_streaming_params(struct my_uvc_streaming_control *ctrl)
{
int ret = 0;
unsigned char *data;
unsigned short size;

//lsusb get: bcdUVC=1.00; then BCD conversion, eg: 2.10 -> 210H, 1.50 -> 150H
size = my_uvc_bcdUVC >= 0x0110 ? 34 : 26; //Allocate buf size based on version
data = kmalloc(size, GFP_KERNEL);
if (data == NULL)
    return -ENOMEM;

ctrl_to_data(ctrl, data, size);

//Commit the camera parameters over USB
ret = usb_control_msg(my_uvc_udev,  usb_sndctrlpipe(my_uvc_udev, 0),
                      SET_CUR, USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_OUT,
                      VS_COMMIT_CONTROL << 8, 0 << 8 | my_uvc_streaming_intf, data, size, 5000); 
kfree(data);

return ret;

}
{% endcodeblock %}

Next, bAlternateSetting must also be specified; it switches among the alternate settings of one and the same interface.
That is, the USB camera's VideoStreaming interface comes with several alternate settings (Interface Descriptors), each with its own wMaxPacketSize (the bandwidth: the amount of data one transaction can carry); the negotiation above yields dwMaxPayloadTransferSize (the maximum payload per transfer; measured here it equals the resolution plus 512).
When the camera's resolution changes, the required setting changes with it: a larger resolution needs an alternate setting with a larger bandwidth.
bAlternateSetting is effectively the index of the alternate setting, so a suitable one must be chosen per resolution. For example, with the 640x480 resolution used here, the setting recommended from my_uvc_params corresponds to bAlternateSetting=6:

    Interface Descriptor:
      bLength                 9
      bDescriptorType         4
      bInterfaceNumber        1
      bAlternateSetting       6
      bNumEndpoints           1
      bInterfaceClass        14 Video
      bInterfaceSubClass      2 Video Streaming
      bInterfaceProtocol      0 
      iInterface              0 
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x82  EP 2 IN
        bmAttributes            5
          Transfer Type            Isochronous
          Synch Type               Asynchronous
          Usage Type               Data
        wMaxPacketSize     0x03bc  1x 956 bytes
        bInterval               1

{% codeblock lang:c %}
/* 1. Set parameters to the USB camera: such as which format to use, which frame (resolution, etc.) under this format to use */
// Set the data packet according to the structure my_uvc_streaming_control; then call usb_control_msg to send the data packet;

//a. Test parameters
my_uvc_try_streaming_params(&my_uvc_params);
//b. Take out parameters
my_uvc_get_streaming_params(&my_uvc_params);
//c. Setting parameters
my_uvc_set_streaming_params(&my_uvc_params);

//d. Select the alternate setting used by the VideoStreaming interface
//The required bandwidth comes from my_uvc_params.dwMaxPayloadTransferSize; measurements show different resolutions need different bandwidth
//Pick the bAlternateSetting whose wMaxPacketSize is large enough
usb_set_interface(my_uvc_udev, my_uvc_streaming_intf, my_uvc_bAlternateSetting); 

{% endcodeblock %}
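
Step d glosses over how bAlternateSetting is actually found. Below is a sketch of the selection loop, modeled on the in-kernel uvcvideo logic (my_uvc_intf, a struct usb_interface * for the VS interface, is an assumed global not shown in the original code):
{% codeblock lang:c %}
/* Walk the VS interface's alternate settings and return the first one
 * whose isochronous endpoint is big enough for the negotiated payload
 * size (my_uvc_params.dwMaxPayloadTransferSize). */
static int my_uvc_select_altsetting(unsigned int bandwidth)
{
    struct usb_host_interface *alts;
    struct usb_endpoint_descriptor *ep;
    unsigned int psize;
    int i;

    for (i = 0; i < my_uvc_intf->num_altsetting; ++i)
    {
        alts = &my_uvc_intf->altsetting[i];
        if (alts->desc.bNumEndpoints == 0)
            continue; //alternate setting 0 has no endpoint and carries no data

        ep = &alts->endpoint[0].desc;
        //bits 10..0: packet size; bits 12..11: extra transactions per microframe
        psize = le16_to_cpu(ep->wMaxPacketSize);
        psize = (psize & 0x07ff) * (1 + ((psize >> 11) & 3));

        if (psize >= bandwidth)
            return alts->desc.bAlternateSetting;
    }

    return -EIO; //no alternate setting offers enough bandwidth
}
{% endcodeblock %}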

After the camera's format and frame (resolution) have been set, the URBs can be allocated and initialized, ready to exchange data with the USB camera.
{% codeblock lang:c %}
/* 2. Allocate and initialize the URBs */
ret = my_uvc_alloc_init_urbs();
if (0 != ret)
{
    printk("my_uvc_alloc_init_urbs err : ret = %d\n", ret);
    return ret;
}
{% endcodeblock %}

After the allocation is completed, submit it to the USB core, wait for the interrupt to come, and read the data from the camera.
{% codeblock lang:c %}
/* 3. Submit the URBs to receive data */
for (i = 0; i < MY_UVC_URBS_NUM; ++i)
{
    if ((ret = usb_submit_urb(my_uvc_q.urb[i], GFP_KERNEL)) < 0)
    {
        printk("Failed to submit URB %u (%d).\n", i, ret);
        my_uvc_uninit_urbs();

        return ret;
    }
}

{% endcodeblock %}
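
Putting the three steps together, the whole handler looks roughly like this (a sketch assembled from the fragments above, with the error checking of the negotiation calls elided):
{% codeblock lang:c %}
static int my_uvc_vidioc_streamon(struct file *file, void *priv, enum v4l2_buf_type t)
{
    unsigned int i;
    int ret;

    /* 1. Negotiate format/frame/interval and select the bandwidth */
    my_uvc_try_streaming_params(&my_uvc_params);
    my_uvc_get_streaming_params(&my_uvc_params);
    my_uvc_set_streaming_params(&my_uvc_params);
    usb_set_interface(my_uvc_udev, my_uvc_streaming_intf, my_uvc_bAlternateSetting);

    /* 2. Allocate and initialize the URBs */
    ret = my_uvc_alloc_init_urbs();
    if (ret != 0)
        return ret;

    /* 3. Submit them; data now arrives via my_uvc_video_complete() */
    for (i = 0; i < MY_UVC_URBS_NUM; ++i)
    {
        if ((ret = usb_submit_urb(my_uvc_q.urb[i], GFP_KERNEL)) < 0)
        {
            my_uvc_uninit_urbs();
            return ret;
        }
    }

    return 0;
}
{% endcodeblock %}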

Stopping data collection, vidioc_streamoff(), also does three things:

  1. Cancel the in-flight URB transfers;
  2. Free the urb_buffer and the URBs;
  3. Set the interface back to alternate setting 0, putting the camera to sleep;

{% codeblock lang:c %}
static int my_uvc_vidioc_streamoff(struct file *file, void *priv, enum v4l2_buf_type t)
{
struct urb *urb;
unsigned int i;

printk("enter %s\n", __func__);

/* 1. kill all URB */
for (i = 0; i < MY_UVC_URBS_NUM; ++i)
{
    if ((urb = my_uvc_q.urb[i]) == NULL)
        continue;
    usb_kill_urb(urb);
}

/* 2. free all URB */
my_uvc_uninit_urbs();

/* 3. Set VideoStreaming Interface to setting 0 */
usb_set_interface(my_uvc_udev, my_uvc_streaming_intf, 0);

return 0;

}
{% endcodeblock %}

3.7 Other operation functions (mmap and poll)

Two operation functions remain, mmap() and poll(). Because they involve the bufs and queues they were hard to follow earlier, but they should be easy to understand now.
First mmap(). As mentioned above, when the application layer calls vidioc_reqbufs(), the driver allocates several bufs, i.e. my_uvc_q.buffer[N];
all that remains is to map these bufs into user space, so that by operating on the mapped region the application indirectly operates on the kernel's my_uvc_q.buffer[N].

Using the vma->vm_pgoff offset passed in, the matching my_uvc_q.buffer is looked up; if none is found, or the requested size is wrong, exit.
If a buffer matching the offset is found, its kernel virtual address addr is computed from the start of the queue memory plus the buffer's offset;
addr is then fed, page by page, to vmalloc_to_page() to get the page structure, and vm_insert_page() binds each page into the vma at the user's virtual address, stepping through the total size in PAGE_SIZE increments.
Finally, the usage count is incremented by 1; it is used later to update the flags when vidioc_querybuf() queries the buffer status.
{% codeblock lang:c %}
//Map the cache to the APP space, and then the APP can directly operate this cache
static int my_uvc_mmap(struct file *file, struct vm_area_struct *vma)
{
int i, ret = 0;
struct page *page;
struct my_uvc_buffer *buffer;
unsigned long addr, start, size;

printk("enter %s\n", __func__);

start = vma->vm_start;
size = vma->vm_end - vma->vm_start;

//When the application calls the mmap function, it will pass in the offset parameter, and then find the specified buffer according to the offset
for (i = 0; i < my_uvc_q.count; ++i)
{
    buffer = &my_uvc_q.buffer[i];
    if ((buffer->buf.m.offset >> PAGE_SHIFT) == vma->vm_pgoff)
        break;
}

//The corresponding my_uvc_q.buffer was not found or the size is wrong
if ((i == my_uvc_q.count) || (size != my_uvc_q.buf_size))
    return -EINVAL;

/* VM_IO marks the area as being an mmaped region for I/O to a
 * device. It also prevents the region from being core dumped. */
vma->vm_flags |= VM_IO;

//Get the page structure corresponding to the buffer according to the virtual address
addr = (unsigned long)my_uvc_q.mem + buffer->buf.m.offset;
while (size > 0) //Loop to turn the size of the space into a page
{
    page = vmalloc_to_page((void *)addr);

    //Construct the virtual address passed in by page and APP
    if ((ret = vm_insert_page(vma, start, page)) < 0)
        return ret ;

    start += PAGE_SIZE;
    addr  += PAGE_SIZE;
    size  -= PAGE_SIZE;
}

buffer->vma_use_count++; //reference count +1

return ret;

}
{% endcodeblock %}

Finally there is the poll() function, used to determine whether a buf is ready, that is, contains data.
When the application layer calls poll(), the driver takes the first buf from the my_uvc_q.mainqueue queue and registers its buf->wait with poll_wait(); the process then sleeps until wake_up() in the completion handler wakes it. A mask matching buf->state is returned, and the application reads the data accordingly.
{% codeblock lang:c %}
//APP calls POLL/select to determine whether the cache is ready (with data)
static unsigned int my_uvc_poll(struct file *file, struct poll_table_struct *wait)
{
struct my_uvc_buffer *buf;
unsigned int mask = 0;

printk("enter %s\n", __func__);

//Take the first buffer from mainqueue; check its status, and sleep if it is not ready
if (list_empty(&my_uvc_q.mainqueue))
{
    mask |= POLLERR;
    goto done;
}

buf = list_first_entry(&my_uvc_q.mainqueue, struct my_uvc_buffer, stream);

poll_wait(file, &buf->wait, wait);
if (buf->state == VIDEOBUF_DONE || buf->state == VIDEOBUF_ERROR)
    mask |= POLLIN | POLLRDNORM; //Normal or priority with data readable | Normal data readable

done:
return mask;
}
{% endcodeblock %}

3.8 Test/Effect

As in the earlier test of the kernel's own driver: first compile your own driver, then load the in-kernel uvcvideo (which pulls in its dependencies), remove uvcvideo again so only the dependencies stay loaded, install the newly written driver, and run the xawtv application:

make

sudo modprobe uvcvideo
sudo rmmod uvcvideo
sudo insmod my_uvc.ko
xawtv -noalsa
  • Effect: (screenshot of xawtv showing the camera's live image)

See the complete code on GitHub.

4. Overall analysis

The overall block diagram is as follows:

A few basic concepts:
1. The application layer uses five operation functions, and under ioctl there are at least 11 basic operations;
2. A USB camera has exactly one VC interface for control, and may have several VS interfaces for data transfer;
3. The 11 ioctl operations fall into four categories: data buf operations, camera format operations, camera attribute operations, and camera start/stop;
4. Data buf operations:
a. A specified number of v4l2_buffer structures are created according to the application-layer parameters; these bufs sit on two queues at once: mainqueue and irqqueue;
b. The data produced by the camera arrives through the VS interface and the USB core's URBs, is placed into the first buf of the irqqueue, and that buf is then removed from the queue;
c. The application layer takes the first buf from the mainqueue, reads its data, and removes the buf from that queue; at this point the buf is on neither queue, and it will later be requeued at the tail;
5. Camera format operations: interface_to_usbdev() retrieves the USB device for the corresponding interface; its descriptors carry the camera's various capabilities, which are stored into the v4l2_format structure;
6. Camera attribute operations: usb_control_msg() sets the relevant properties through the VC interface;

With these basic concepts in place, the transfer is started by calling vidioc_streamon():
1. Configure the USB camera and select the interface with the matching bandwidth;
2. Allocate usb_buffers and URBs, and set up the URBs;
3. Submit the URBs; the USB core parses them and receives data from the specified interface (the camera's VS interface) into usb_buffers;
4. When a URB transfer completes, an interrupt fires; the handler takes the first buf from the irqqueue, fills it with the usb_buffers data, and wakes the sleeping poll;
5. poll wakes up, vidioc_dqbuf() takes the first buf from the mainqueue and returns it to the application layer, completing the delivery of camera data to the application.
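
For completeness, this is what the same flow looks like from the application side: a minimal user-space capture loop against the standard V4L2 API (a sketch with error checking omitted; /dev/video0 and four buffers are assumptions):
{% codeblock lang:c %}
#include <fcntl.h>
#include <poll.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);

    /* Pick the format and resolution (drives the probe/commit negotiation) */
    struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };
    fmt.fmt.pix.width = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
    ioctl(fd, VIDIOC_S_FMT, &fmt);

    /* Ask the driver to allocate bufs, then mmap and queue each one */
    struct v4l2_requestbuffers req = { .count = 4,
        .type = V4L2_BUF_TYPE_VIDEO_CAPTURE, .memory = V4L2_MEMORY_MMAP };
    ioctl(fd, VIDIOC_REQBUFS, &req);

    void *mem[4];
    for (int i = 0; i < 4; ++i) {
        struct v4l2_buffer b = { .index = i,
            .type = V4L2_BUF_TYPE_VIDEO_CAPTURE, .memory = V4L2_MEMORY_MMAP };
        ioctl(fd, VIDIOC_QUERYBUF, &b);
        mem[i] = mmap(NULL, b.length, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, b.m.offset);  /* -> my_uvc_mmap() */
        ioctl(fd, VIDIOC_QBUF, &b);                 /* onto mainqueue and irqqueue */
    }

    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);   /* -> vidioc_streamon(): URBs submitted */

    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    poll(&pfd, 1, -1);                   /* sleeps until the completion handler wakes us */

    struct v4l2_buffer b = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
                             .memory = V4L2_MEMORY_MMAP };
    ioctl(fd, VIDIOC_DQBUF, &b);         /* frame data: mem[b.index], b.bytesused bytes */
    /* ...consume the frame, VIDIOC_QBUF it back, and loop... */

    ioctl(fd, VIDIOC_STREAMOFF, &type);  /* -> vidioc_streamoff() */
    return 0;
}
{% endcodeblock %}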

Reference article:
Wei Dongshan Phase III Project Video_Camera

Tags: Linux Driver v4l2
