Have you ever wondered how screen recording software gets the screen content?

Author: simowce


Some time ago, the Android R beta was released, bringing the screen recording feature that stock-Android users had long wanted, even though custom ROMs such as MIUI had offered it for years. Is screen recording hard to implement? Why was Google so reluctant to bring this feature to Android?

Meanwhile, today's popular mobile live streaming comes in two forms: one lets the audience see what the phone's camera captures; the other lets the audience see the phone's screen. The latter can be understood as another form of "screen recording".

So screen recording is common in our daily lives. But have you ever thought about the principle behind it? How does the recording software get the screen content?

After reading this article, you will learn:

1. State and Transaction in app rendering and composition

2. The hero behind screen recording - VirtualDisplay: its core interfaces, and how SurfaceFlinger creates and handles a VirtualDisplay

3. The principle of screen recording and the complete data flow

If you are interested in these topics, read on. If these lengthy analyses give you a headache and you just want the conclusion, jump straight to the summary at the end. Let's start.

Introduction to VirtualDisplay

Current Android supports multiple screen types (Display; the Displays mentioned below refer to these screens):

  • Built-in primary display
  • External display connected via HDMI
  • Virtual display (VirtualDisplay)

The first two are backed by concrete physical screen devices; a VirtualDisplay, by contrast, is not: it is simulated by SurfaceFlinger. One of its major uses is to provide the infrastructure for the "screen recording" discussed above.

Core interface

As mentioned earlier, VirtualDisplay is what sits behind screen recording. Let's look at the core interfaces related to VirtualDisplay in C++ and Java respectively:


Android has a screenrecord command, written in pure C++, with source at frameworks/av/cmds/screenrecord/. Through this official Google code we can see the principle of screen recording at the native layer (in fact, Android has supported screen recording for a long time). Core code:

static status_t prepareVirtualDisplay(const DisplayInfo& mainDpyInfo,
        const sp<IGraphicBufferProducer>& bufferProducer,
        sp<IBinder>* pDisplayHandle) {
    sp<IBinder> dpy = SurfaceComposerClient::createDisplay(
            String8("ScreenRecorder"), false /*secure*/);

    SurfaceComposerClient::Transaction t;
    t.setDisplaySurface(dpy, bufferProducer);
    // ...
}

Three core interfaces are involved:

  1. SurfaceComposerClient::createDisplay()

The implementation is very simple: a Binder call to createDisplay() on the SurfaceFlinger side creates a VirtualDisplay. How SurfaceFlinger creates the VirtualDisplay will be analyzed in detail later.

  2. SurfaceComposerClient::Transaction::setDisplaySurface()

status_t SurfaceComposerClient::Transaction::setDisplaySurface(const sp<IBinder>& token,
      const sp<IGraphicBufferProducer>& bufferProducer) {

    DisplayState& s(getDisplayState(token));
    s.surface = bufferProducer;
    s.what |= DisplayState::eSurfaceChanged;
    return NO_ERROR;
}

This associates the VirtualDisplay created above with the local IGraphicBufferProducer (the client can obtain the BufferQueue's IGraphicBufferProducer and IGraphicBufferConsumer through createBufferQueue()). Note that DisplayState::eSurfaceChanged will be an important flag bit in the series of processes that follow.

  3. SurfaceComposerClient::Transaction::apply()

This function is also very important: changes on the app side need to be propagated to the SurfaceFlinger side.

All three interfaces will be analyzed in depth later.


The Android framework has a class, OverlayDisplayAdapter, that makes it convenient for framework developers to create simulated auxiliary display devices. It relies on the same three core interfaces seen in C++ above: the Java-side interfaces are wrappers that eventually call into the native layer through JNI, with the final implementation in SurfaceFlinger.

State and Transaction



DisplayState

DisplayState is defined in frameworks/native/libs/gui/include/gui/LayerState.h:

struct DisplayState {
    enum {
        eOrientationDefault = 0,
        eOrientation90 = 1,
        eOrientation180 = 2,
        eOrientation270 = 3,
        eOrientationUnchanged = 4,
        eOrientationSwapMask = 0x01
    };

    enum {
        eSurfaceChanged = 0x01,
        eLayerStackChanged = 0x02,
        eDisplayProjectionChanged = 0x04,
        eDisplaySizeChanged = 0x08
    };

    void merge(const DisplayState& other);

    uint32_t what;
    sp<IBinder> token;
    sp<IGraphicBufferProducer> surface;
    uint32_t layerStack;

    uint32_t orientation;
    Rect viewport;
    Rect frame;

    uint32_t width, height;

    status_t write(Parcel& output) const;
    status_t read(const Parcel& input);
};

This structure is defined on the client side (i.e. the app side). It describes the full set of the client's states concerning the Display: the Display's orientation, Surface changes on the Display, LayerStack changes, and so on (each corresponding to an enum value above). what is the set of active states; states are combined with bitwise OR (look carefully at the enum values above: each state occupies its own bit).
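To make the bit-flag scheme concrete, here is a minimal, self-contained C++ sketch. MiniDisplayState, recordChange() and hasChange() are invented names for illustration; only the enum values mirror DisplayState:

```cpp
#include <cassert>
#include <cstdint>

// Minimal stand-in for DisplayState's flag scheme (illustrative only;
// the enum values mirror DisplayState, the rest is invented).
struct MiniDisplayState {
    enum {
        eSurfaceChanged           = 0x01,
        eLayerStackChanged        = 0x02,
        eDisplayProjectionChanged = 0x04,
        eDisplaySizeChanged       = 0x08,
    };
    uint32_t what = 0; // the set of pending changes, one bit per change
};

// Record a change: OR its bit into `what`; distinct bits never collide.
inline void recordChange(MiniDisplayState& s, uint32_t flag) { s.what |= flag; }

// Query a change: AND extracts a single bit without disturbing the rest.
inline bool hasChange(const MiniDisplayState& s, uint32_t flag) {
    return (s.what & flag) != 0;
}
```

Recording eSurfaceChanged and eDisplaySizeChanged leaves what == 0x09 (binary 1001), and each flag can still be tested individually with hasChange().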



DisplayDeviceState

struct DisplayDeviceState {
    bool isVirtual() const { return !displayId.has_value(); }

    int32_t sequenceId = sNextSequenceId++;
    std::optional<DisplayId> displayId;
    sp<IGraphicBufferProducer> surface;
    uint32_t layerStack = DisplayDevice::NO_LAYER_STACK;
    Rect viewport;
    Rect frame;
    uint8_t orientation = 0;
    uint32_t width = 0;
    uint32_t height = 0;
    std::string displayName;
    bool isSecure = false;

    static std::atomic<int32_t> sNextSequenceId;
};

DisplayDeviceState is defined on the server side (i.e. the SurfaceFlinger side). Not only is its name very similar to the DisplayState above, its internal members are very similar too. So what is the relationship between these two classes?

In my view, the two classes are simply different representations of the Display state, one on the app side and one on the SurfaceFlinger side. One of the jobs of the SurfaceComposerClient::Transaction::apply() mentioned above is to carry the DisplayState over into the DisplayDeviceState, which will be described in detail in the principle analysis later.

Another important point: how does a DisplayDeviceState tell whether its Display is a VirtualDisplay? The answer lies in the type of displayId: std::optional<DisplayId>. std::optional is a feature introduced in C++17 for conveniently representing a value that may be absent. In the past we would pick special values such as nullptr or -1 as markers; std::optional offers a cleaner scheme. We won't go into its syntax here.

isVirtual() judges whether the Display corresponding to this DisplayDeviceState is a VirtualDisplay, based on displayId.has_value(). A VirtualDisplay's displayId is never assigned, while the main Display and external Displays do get one, so !displayId.has_value() being true identifies a VirtualDisplay.
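The idiom can be shown in a few self-contained lines; MiniDeviceState and MiniDisplayId are invented stand-ins, and only the has_value() logic mirrors DisplayDeviceState::isVirtual():

```cpp
#include <cassert>
#include <optional>

// Invented stand-ins for illustration; the real DisplayId is a richer type.
using MiniDisplayId = int;

struct MiniDeviceState {
    std::optional<MiniDisplayId> displayId; // stays empty until HWC assigns an id
    bool isVirtual() const { return !displayId.has_value(); }
};
```

A MiniDeviceState whose displayId was never assigned reports isVirtual() == true; assigning any id (as happens for physical displays) flips it to false.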


DisplayToken

The DisplayState and DisplayDeviceState above must be bound to a specific Display device (virtual or not). DisplayToken is the bridge between these state classes and a concrete Display. A DisplayToken is actually just a variable of type IBinder; its value is meaningless and is used only as an index key.


Transaction

Between VSYNCs, many changes may occur on a Display or on individual Layers. SurfaceFlinger packages these changes together for unified processing; the package is called a Transaction. In Android Q, the various states involved are packaged on the SurfaceFlinger side into the following transaction flags, described by an enum:

enum {
    eTransactionNeeded = 0x01,
    eTraversalNeeded = 0x02,
    eDisplayTransactionNeeded = 0x04,
    eDisplayLayerStackChanged = 0x08,
    eTransactionFlushNeeded = 0x10,
    eTransactionMask = 0x1f,
};

These transactions are processed in SurfaceFlinger::handleTransaction(), which is called every time VSYNC-sf triggers a SurfaceFlinger composition. It is like an ancient emperor holding morning court, with handleTransaction() as the eunuch at his side shouting:

"Speak if you have business; withdraw if you have none."

If there was a state change on the client side during the last VSYNC, SurfaceFlinger learns of it and handles it through handleTransaction(), like a minister stepping forward:

"I have business to report."

And so the emperor's busy day begins.

These transactions are uniformly recorded in the mTransactionFlags variable, whose value is updated / read through setTransactionFlags(), peekTransactionFlags() and getTransactionFlags():

uint32_t SurfaceFlinger::peekTransactionFlags() {
    return mTransactionFlags;
}

// Note:
// fetch_and() here and fetch_or() below both return the value of
// mTransactionFlags from BEFORE the modification, which is very important
uint32_t SurfaceFlinger::getTransactionFlags(uint32_t flags) {
    return mTransactionFlags.fetch_and(~flags) & flags;
}

uint32_t SurfaceFlinger::setTransactionFlags(uint32_t flags) {
    return setTransactionFlags(flags, Scheduler::TransactionStart::NORMAL);
}

uint32_t SurfaceFlinger::setTransactionFlags(uint32_t flags,
                                             Scheduler::TransactionStart transactionStart) {
    uint32_t old = mTransactionFlags.fetch_or(flags);
    if ((old & flags) == 0) { // wake the server up
        // ...
    }
    return old;
}

Judging by their names, peekTransactionFlags() and getTransactionFlags() both read mTransactionFlags, but they behave quite differently.

peekTransactionFlags() simply returns the current mTransactionFlags.

getTransactionFlags() does not. On the surface, it checks and returns whether the current mTransactionFlags contain the specified flags (the old mTransactionFlags value is ANDed with the flags passed in).

But getTransactionFlags() also modifies mTransactionFlags: the bits that were passed in are cleared to 0 (consumed), while the remaining bits are left untouched.

As an aside, the names of peekTransactionFlags() and getTransactionFlags() are quite confusing and easily lead to misunderstanding. If it were up to me, peekTransactionFlags() would be named getTransactionFlags(), and getTransactionFlags() would be named checkTransactionFlags().
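The clear-on-read behavior is easy to demonstrate with a small stand-alone model. The function names echo SurfaceFlinger's, but the owning class, locking and wake-up logic are all omitted:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Stand-alone model of SurfaceFlinger's three flag helpers (invented
// free-function versions for illustration).
inline std::atomic<uint32_t> gTransactionFlags{0};

inline uint32_t setFlags(uint32_t flags) {
    return gTransactionFlags.fetch_or(flags);   // returns the OLD value
}
inline uint32_t peekFlags() {
    return gTransactionFlags;                   // read-only
}
inline uint32_t getFlags(uint32_t flags) {
    // Atomically clear the requested bits, then report which of them were set.
    return gTransactionFlags.fetch_and(~flags) & flags;
}
```

After setFlags(0x04 | 0x10), the first getFlags(0x04) returns 0x04 and clears that bit; a second call returns 0, while the 0x10 bit stays untouched; exactly the consuming behavior described above.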


State

Seeing this heading, you may be surprised: haven't we already talked about state? Why does another State appear? In fact, the State here is a distinct class: it is the type of the mCurrentState and mDrawingState members mentioned earlier when explaining the fps calculation principle.

State is an internal class of SurfaceFlinger:

class State {
    explicit State(LayerVector::StateSet set) : stateSet(set), layersSortedByZ(set) {}
    const LayerVector::StateSet stateSet = LayerVector::StateSet::Invalid;
    LayerVector layersSortedByZ;
    DefaultKeyedVector< wp<IBinder>, DisplayDeviceState> displays;

    bool colorMatrixChanged = true;
    mat4 colorMatrix;

    void traverseInZOrder(const LayerVector::Visitor& visitor) const;
    void traverseInReverseZOrder(const LayerVector::Visitor& visitor) const;
};

This State class actually carries a lot of information, but the core for this article is its displays member. It is a DefaultKeyedVector (a container customized by Android, similar to std::map) whose keys and values are the DisplayToken and DisplayDeviceState we met earlier.

mCurrentState and mDrawingState have different emphases:

  • mCurrentState is about "change": it represents the latest state of the system, and every change, whenever it happens, is recorded into mCurrentState
  • mDrawingState is about "certainty": it represents the state used for this composition. SurfaceFlinger must pin down the state before it starts composing, so before each composition it syncs the latest mCurrentState into mDrawingState through SurfaceFlinger::commitTransaction()
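The relationship between the two members can be sketched as a tiny double-buffered-state model. MiniFlinger and MiniState are invented for illustration; only the copy-on-commit idea mirrors SurfaceFlinger::commitTransaction():

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy model of the mCurrentState / mDrawingState pairing. Changes land in
// `current` at any time; `drawing` is only refreshed at commit time, so each
// composition works from a frozen snapshot. All names are invented.
struct MiniState {
    std::map<std::string, int> displays; // token -> state stand-ins
};

struct MiniFlinger {
    MiniState current; // latest state, updated on every client change
    MiniState drawing; // state pinned for the composition in progress

    // What commitTransaction() boils down to: current -> drawing.
    void commitTransaction() { drawing = current; }
};
```

A change recorded into current is invisible to drawing until commitTransaction() runs, which is exactly why SurfaceFlinger can compose from a stable snapshot while new changes keep arriving.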

Principle analysis

After all this lengthy preparation, we finally arrive at the heart of this article:

Create VirtualDisplay

Whether via createVirtualDisplay() on the Java side or SurfaceComposerClient::createDisplay() in C++, creating a VirtualDisplay eventually reaches SurfaceFlinger::createDisplay():

sp<IBinder> SurfaceFlinger::createDisplay(const String8& displayName,
        bool secure) {
    sp<BBinder> token = new DisplayToken(this);

    Mutex::Autolock _l(mStateLock);
    // Display ID is assigned when virtual display is allocated by HWC.
    DisplayDeviceState state;
    state.isSecure = secure;
    state.displayName = displayName;
    mCurrentState.displays.add(token, state);
    return token;
}

The essence of this function is to generate a DisplayDeviceState and a DisplayToken for the VirtualDisplay, and to add the DisplayDeviceState into mCurrentState.

Note that at this point the VirtualDisplay has not actually been created; only the state change has been recorded by modifying mCurrentState. The real creation comes later.

State to Transaction

Look back at the SurfaceComposerClient::Transaction::apply() mentioned in the previous core interface section:

status_t SurfaceComposerClient::Transaction::apply(bool synchronous) {
    // ...
    sf->setTransactionState(composerStates, displayStates, flags, applyToken, mInputWindowCommands,
                            {} /*uncacheBuffer - only set in doUncacheBufferTransaction*/,
                            /* ... */);
    // ...
}

This function eventually sends the DisplayToken, the DisplayState and other contents to the SurfaceFlinger side through SurfaceFlinger::setTransactionState(), which then makes the following calls:

  \_ SurfaceFlinger::applyTransactionState()
       \_  SurfaceFlinger::setDisplayStateLocked()

In SurfaceFlinger::setDisplayStateLocked():

uint32_t SurfaceFlinger::setDisplayStateLocked(const DisplayState& s) {
    const ssize_t index = mCurrentState.displays.indexOfKey(s.token);
    if (index < 0) return 0;

    uint32_t flags = 0;
    DisplayDeviceState& state = mCurrentState.displays.editValueAt(index);

    const uint32_t what = s.what;
    if (what & DisplayState::eSurfaceChanged) {
        if (IInterface::asBinder(state.surface) != IInterface::asBinder(s.surface)) {
            state.surface = s.surface;
            flags |= eDisplayTransactionNeeded;
        }
    }
    // ...
    return flags;
}

This passes the surface in the DisplayState (i.e. the BufferProducer created on the app side) into the DisplayDeviceState, and converts eSurfaceChanged (recall that both surface and what are set in SurfaceComposerClient::Transaction::setDisplaySurface()) into eDisplayTransactionNeeded. Not only has the content of the DisplayState been transferred into the DisplayDeviceState, the feat of converting state into Transaction is also complete: SurfaceFlinger has finally learned of the app-side state change.

Control then returns to SurfaceFlinger::applyTransactionState(), which records the eDisplayTransactionNeeded transaction through SurfaceFlinger::setTransactionFlags() for later processing.

SurfaceFlinger processing transactions

During SurfaceFlinger's next composition, the eDisplayTransactionNeeded transaction recorded above goes through the following calls:

 \_ SurfaceFlinger::handleTransaction()
     \_ SurfaceFlinger::handleTransactionLocked()

It is finally processed in processDisplayChangesLocked().

First of all, let's think about a question:

❔ How does SurfaceFlinger know that a Display was added or removed during the last VSYNC?

The answer lies in the mDrawingState and mCurrentState mentioned earlier. mCurrentState holds the latest state; mDrawingState holds the state of the previous composition (that is, before this composition's commitTransaction()). Therefore:

  1. If a DisplayDeviceState exists in mCurrentState but not in mDrawingState, a new Display was added during the last VSYNC
  2. If a DisplayDeviceState exists in mDrawingState but not in mCurrentState, that Display was removed during the last VSYNC

Knowing this, Display changes are easy to detect. This article focuses on the newly added Display:
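The two rules above boil down to a set difference between the two snapshots. A minimal sketch, with std::map standing in for Android's DefaultKeyedVector and all names invented:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Sketch of the added-display detection in processDisplayChangesLocked().
using MiniStateMap = std::map<std::string, int>; // token -> state stand-in

// A token present in `curr` (mCurrentState) but absent from `draw`
// (mDrawingState) means a Display was added since the last composition.
inline std::vector<std::string> addedDisplays(const MiniStateMap& curr,
                                              const MiniStateMap& draw) {
    std::vector<std::string> added;
    for (const auto& entry : curr)
        if (draw.find(entry.first) == draw.end())
            added.push_back(entry.first);
    return added;
}
```

Swapping the arguments gives the removed-display check for free, since rule 2 is the same difference taken in the other direction.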

void SurfaceFlinger::processDisplayChangesLocked() {
    // find displays that were added
    // (ie: in current state but not in drawing state)
    for (size_t i = 0; i < cc; i++) {
        if (draw.indexOfKey(curr.keyAt(i)) < 0) {
            const DisplayDeviceState& state(curr[i]);

            sp<compositionengine::DisplaySurface> dispSurface;
            sp<IGraphicBufferProducer> producer;
            sp<IGraphicBufferProducer> bqProducer;
            sp<IGraphicBufferConsumer> bqConsumer;
            getFactory().createBufferQueue(&bqProducer, &bqConsumer, false);

            std::optional<DisplayId> displayId;
            if (state.isVirtual()) {
                if (state.surface != nullptr) {
                    // Create the DisplaySurface for the VirtualDisplay -- a VirtualDisplaySurface
                    sp<VirtualDisplaySurface> vds =
                            new VirtualDisplaySurface(getHwComposer(),
                                                      displayId, state.surface,
                                                      bqProducer, bqConsumer,
                                                      state.displayName);
                    dispSurface = vds;
                    producer = vds;
                }
            } else {
                displayId = state.displayId;
                // Create the DisplaySurface for the main / external display -- a FramebufferSurface
                dispSurface =
                        new FramebufferSurface(getHwComposer(), *displayId, bqConsumer);
                producer = bqProducer;
            }

            const wp<IBinder>& displayToken = curr.keyAt(i);
            if (dispSurface != nullptr) {
                // This is where the DisplayDevice is actually created and added to mDisplays
                mDisplays.emplace(displayToken,
                                  setupNewDisplayDeviceInternal(displayToken,
                                                                displayId, state,
                                                                dispSurface, producer));
                if (!state.isVirtual()) {
                    dispatchDisplayHotplugEvent(displayId->value, true);
                }
            }
        }
    }
    // ...
}

There is a lot to the newly added Display path, and it splits into two parts. (Note: the rest of this article focuses on code flow and data flow; the many classes and subclasses involved, and the CompositionEngine touched on below, deserve separate articles and are only briefly introduced here.)

Create DisplaySurface

As mentioned earlier, Android supports multiple Display types, and each Display has an associated buffer, described by the DisplaySurface class. Different Display types use different DisplaySurfaces: the main and external Displays use FramebufferSurface, while a VirtualDisplay uses VirtualDisplaySurface:

VirtualDisplaySurface::VirtualDisplaySurface(HWComposer& hwc,
                                             const std::optional<DisplayId>& displayId,
                                             const sp<IGraphicBufferProducer>& sink,
                                             const sp<IGraphicBufferProducer>& bqProducer,
                                             const sp<IGraphicBufferConsumer>& bqConsumer,
                                             const std::string& name)
      /* : member initializer list elided */ {
    mSource[SOURCE_SINK] = sink;
    mSource[SOURCE_SCRATCH] = bqProducer;
    // ...
}

The BufferProducer passed in from the app side is saved in the VirtualDisplaySurface as mSource[SOURCE_SINK]. This is very important and will be used later.

Create DisplayDevice

Then, with the VirtualDisplaySurface just created, setupNewDisplayDeviceInternal() is called:

sp<DisplayDevice> SurfaceFlinger::setupNewDisplayDeviceInternal(
        const wp<IBinder>& displayToken, const std::optional<DisplayId>& displayId,
        const DisplayDeviceState& state, const sp<compositionengine::DisplaySurface>& dispSurface,
        const sp<IGraphicBufferProducer>& producer) {
    // ...
    auto nativeWindowSurface = getFactory().createNativeWindowSurface(producer);
    auto nativeWindow = nativeWindowSurface->getNativeWindow();
    creationArgs.nativeWindow = nativeWindow;
    // ...
    sp<DisplayDevice> display = getFactory().createDisplayDevice(std::move(creationArgs));
    // ...
    return display;
}

First, note that the dispSurface and producer parameters of setupNewDisplayDeviceInternal() are both the VirtualDisplaySurface created earlier.

Then the VirtualDisplaySurface is used to create a native window via createNativeWindowSurface(). A brief aside on the concept of a native window:

As we know, OpenGL ES is a cross-platform graphics API. Even so, it must eventually land on a concrete platform, and landing requires "localization": associating the cross-platform OpenGL ES with the platform's window system so that it can run properly. EGL is what provides the local window (i.e. the native window) to OpenGL ES. In Android, the native window is the class Surface, defined in frameworks/native/libs/gui/Surface.cpp.

Then take a look at how the native window is created:

std::unique_ptr<surfaceflinger::NativeWindowSurface> createNativeWindowSurface(
        const sp<IGraphicBufferProducer>& producer) {
    class NativeWindowSurface final : public surfaceflinger::NativeWindowSurface {
    public:
        explicit NativeWindowSurface(const sp<IGraphicBufferProducer>& producer)
              : mSurface(new Surface(producer, /* controlledByApp */ false)) {}

        ~NativeWindowSurface() override = default;

        sp<ANativeWindow> getNativeWindow() const override { return mSurface; }

        void preallocateBuffers() override { mSurface->allocateBuffers(); }

    private:
        sp<Surface> mSurface;
    };

    return std::make_unique<NativeWindowSurface>(producer);
}

Take another look at the Surface constructor:

Surface::Surface(const sp<IGraphicBufferProducer>& bufferProducer, bool controlledByApp)
      : mGraphicBufferProducer(bufferProducer),
        // ... (remaining member initializers)

From this constructor you can clearly see that the newly created native window, i.e. the Surface, stores the VirtualDisplaySurface created earlier in mGraphicBufferProducer. Please keep this in mind; it will be used later in the data flow.

Then createDisplayDevice() creates a DisplayDevice and adds it to mDisplays. Only at this point has the VirtualDisplay truly been created.

Data stream transmission

Now that everything is ready, we finally come to the data flow itself.

During each composition, SurfaceFlinger calls doDisplayComposition() for each DisplayDevice in turn. In the VirtualDisplay's doDisplayComposition(), dequeueBuffer() is called to request a buffer for the composition (currently, a VirtualDisplay is always composed by the GPU). The dequeueBuffer() call path is worth tracing:

Recall that createNativeWindowSurface() in setupNewDisplayDeviceInternal() assigned the VirtualDisplaySurface to the Surface's member mGraphicBufferProducer. So in Surface::dequeueBuffer():

    status_t result = mGraphicBufferProducer->dequeueBuffer(&buf, &fence, reqWidth, reqHeight,
                                                            reqFormat, reqUsage, &mBufferAge,
                                                            enableFrameTimestamps ? &frameTimestamps
                                                                                  : nullptr);

the call to mGraphicBufferProducer->dequeueBuffer() lands in VirtualDisplaySurface::dequeueBuffer():

status_t VirtualDisplaySurface::dequeueBuffer(int* pslot, sp<Fence>* fence, uint32_t w, uint32_t h,
                                              PixelFormat format, uint64_t usage,
                                              uint64_t* outBufferAge,
                                              FrameEventHistoryDelta* outTimestamps) {
    if (!mDisplayId) {
        return mSource[SOURCE_SINK]->dequeueBuffer(pslot, fence, w, h, format, usage,
                                                   outBufferAge, outTimestamps);
    }
    // ...
}

Recall from earlier: for a VirtualDisplay, mDisplayId is empty, so the call goes straight to dequeueBuffer() on mSource[SOURCE_SINK], which, as we saw, is the BufferProducer from the app.

Therefore, the calling process of the entire dequeueBuffer() is as follows:

 \_ Surface::hook_dequeueBuffer()
     \_ Surface::dequeueBuffer()
         \_ VirtualDisplaySurface::dequeueBuffer()
             \_ dequeueBuffer() of the client-side BufferProducer is called here

After this chain of dequeueBuffer() calls, SurfaceFlinger finally obtains a buffer allocated by the BufferQueue on the app side, performs an independent composition for the recording app, and renders the composed content into that app-side buffer. Yes, you read that right: in this scenario, SurfaceFlinger is the content producer and the recording app is the content consumer. Finally, SurfaceFlinger returns the rendered buffer to the recording app through queueBuffer():

void SurfaceFlinger::doDisplayComposition(const sp<DisplayDevice>& displayDevice,
                                          const Region& inDirtyRegion) {
    // ...
    if (!doComposeSurfaces(displayDevice, Region::INVALID_REGION, &readyFence)) return;

    // swap buffers (presentation)
    // ... (queueBuffer() is invoked from here)
}

The complete queueBuffer() call path mirrors dequeueBuffer() exactly, so I won't repeat it.

Finally, the app is notified of the new buffer through onFrameAvailable(), obtains the composed buffer (i.e. the current screen content) through acquireBuffer(), and can then start processing it, e.g. encoding it. With that, the entire data flow has been explained.
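The whole round trip can be modeled with a toy queue. Nothing here is the real BufferQueue API (which manages slots, fences and Binder transport); it is a minimal sketch of the slot cycle described above, with every name invented:

```cpp
#include <cassert>
#include <deque>

// Toy model of the dequeue/queue/acquire/release cycle between SurfaceFlinger
// (the producer in this scenario) and the recording app (the consumer).
struct ToyBufferQueue {
    std::deque<int> free{0, 1, 2}; // free buffer slots
    std::deque<int> queued;        // filled buffers awaiting the consumer

    int  dequeueBuffer()      { int b = free.front(); free.pop_front(); return b; }
    void queueBuffer(int b)   { queued.push_back(b); }  // producer finished rendering
    int  acquireBuffer()      { int b = queued.front(); queued.pop_front(); return b; }
    void releaseBuffer(int b) { free.push_back(b); }    // consumer finished encoding
};
```

One frame's journey: the producer dequeues a free slot, renders into it and queues it; the consumer acquires that same slot, encodes it, and releases it back to the free list for the next frame.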


Summary

In a word, the principle of screen recording is:

The recording software creates a VirtualDisplay. Then, every time SurfaceFlinger composes, it performs an independent composition for the VirtualDisplay and renders the result into the buffer passed in by the recording software. On receiving a buffer carrying the current frame, the recording software can process it further, e.g. encode it, thereby achieving screen recording.


The key points of the whole process are:

  1. How app-side changes, such as a newly created VirtualDisplay, become known to SurfaceFlinger
  2. How the screen content is transferred from SurfaceFlinger to the recording app

These two points can be summarized in the following figure:


It took me a long time to analyze how the contents of DisplayState are transferred into DisplayDeviceState. The reason: I stubbornly believed SurfaceFlinger::setTransactionState() could only be called when a Display is initialized, and I confidently added the following debug log:

The result slapped me in the face. My log line, "simowce: I don't believe this'll print two or more", printed far more than twice:


Some readers have asked me why I write so slowly. The answer is simple: I write while learning. I also have a streak of stubbornness: if I don't understand something, I must get to the bottom of it, so I write slowly. But rest assured, what you read is published only after I have repeatedly confirmed it is correct; the quality is guaranteed. I hope one day, in some field, I can proudly say these words:

Subject to me.


Finally, if you think this article is well written or helpful, please give it a like and share it with your friends or relevant tech groups. Thank you.

Tags: Android Design Pattern

Posted by vic vance on Tue, 31 May 2022 13:17:12 +0530