Coordination and scheduling in React source code (2023-02-21)

requestEventTime

During the execution of React there are countless tasks waiting to run, and each of them carries a priority. If two events have the same priority, how does React decide which one runs first?

// packages/react-reconciler/src/ReactFiberWorkLoop.old.js
export function requestEventTime() {
  if ((executionContext & (RenderContext | CommitContext)) !== NoContext) {
    // We're inside React, so it's fine to read the actual time.
    // executionContext tells us what React is currently doing:
    // RenderContext = computing the new tree, CommitContext = committing it.
    // export const NoContext = /*             */ 0b0000000;
    // const BatchedContext = /*               */ 0b0000001;
    // const EventContext = /*                 */ 0b0000010;
    // const DiscreteEventContext = /*         */ 0b0000100;
    // const LegacyUnbatchedContext = /*       */ 0b0001000;
    // const RenderContext = /*                */ 0b0010000;
    // const CommitContext = /*                */ 0b0100000;
    // export const RetryAfterError = /*       */ 0b1000000;
    return now();
  }
  // We're not inside React work (NoTimestamp === -1)
  if (currentEventTime !== NoTimestamp) {
    // We're in the middle of a browser event; reuse the same time for every
    // update triggered within it
    return currentEventTime;
  }
  // This is the first update since React yielded; compute a new start time
  currentEventTime = now();
  return currentEventTime;
}
  • If executionContext contains RenderContext or CommitContext, React is in the middle of computing or committing an update, so the real time now() is returned.
  • If a browser event is currently being handled, the cached currentEventTime is returned, so every update triggered inside that event shares the same time.
  • If React work has finished or been interrupted, a fresh now() is taken and cached as the new currentEventTime.
  • The smaller the returned time, the earlier the task entered the queue and the sooner it expires, so it is handled with higher urgency.

now() is not simply Date.now(): it checks whether initialTimeMs (the Scheduler's clock reading when the module was loaded) is below 10 seconds. If it is, the Scheduler's clock is already a small, relative timestamp (performance.now()), so Scheduler_now is used directly; otherwise (for example when the Scheduler fell back to Date.now()), every reading is offset by initialTimeMs so that the times React works with stay small.

export const now = initialTimeMs < 10000 ? Scheduler_now : () => Scheduler_now() - initialTimeMs;

In practice the gap between updates triggered inside the same browser event is negligible, so requestEventTime gives them all the same eventTime. They can then be treated as part of the same batch, which avoids extra overhead and plays well with batched updates.

requestUpdateLane

requestEventTime stamps each task with the time at which its update was triggered; the task's priority still needs to be established, and requestUpdateLane is what determines the lane (priority) each update runs in.

// packages/react-reconciler/src/ReactFiberWorkLoop.old.js
export function requestUpdateLane(fiber: Fiber): Lane {
  // Special cases
  const mode = fiber.mode;
  if ((mode & BlockingMode) === NoMode) {
    return (SyncLane: Lane);
  } else if ((mode & ConcurrentMode) === NoMode) {
    return getCurrentPriorityLevel() === ImmediateSchedulerPriority
      ? (SyncLane: Lane)
      : (SyncBatchedLane: Lane);
  } else if (
    !deferRenderPhaseUpdateToNextBatch &&
    (executionContext & RenderContext) !== NoContext &&
    workInProgressRootRenderLanes !== NoLanes
  ) {
    // This is a render phase update. These are not officially supported. The
    // old behavior is to give this the same "thread" (expiration time) as
    // whatever is currently rendering. So if you call `setState` on a component
    // that happens later in the same render, it will flush. Ideally, we want to
    // remove the special case and treat them as if they came from an
    // interleaved event. Regardless, this pattern is not officially supported.
    // This behavior is only a fallback. The flag only exists until we can roll
    // out the setState warning, since existing code might accidentally rely on
    // the current behavior.
    return pickArbitraryLane(workInProgressRootRenderLanes);
  }

  // The algorithm for assigning an update to a lane should be stable for all
  // updates at the same priority within the same event. To do this, the inputs
  // to the algorithm must be the same. For example, we use the `renderLanes`
  // to avoid choosing a lane that is already in the middle of rendering.
  //
  // However, the "included" lanes could be mutated in between updates in the
  // same event, like if you perform an update inside `flushSync`. Or any other
  // code path that might call `prepareFreshStack`.
  //
  // The trick we use is to cache the first of each of these inputs within an
  // event. Then reset the cached values once we can be sure the event is over.
  // Our heuristic for that is whenever we enter a concurrent work loop.
  //
  // We'll do the same for `currentEventPendingLanes` below.
  if (currentEventWipLanes === NoLanes) {
    currentEventWipLanes = workInProgressRootIncludedLanes;
  }

  const isTransition = requestCurrentTransition() !== NoTransition;
  if (isTransition) {
    if (currentEventPendingLanes !== NoLanes) {
      currentEventPendingLanes =
        mostRecentlyUpdatedRoot !== null
          ? mostRecentlyUpdatedRoot.pendingLanes
          : NoLanes;
    }
    return findTransitionLane(currentEventWipLanes, currentEventPendingLanes);
  }

  // TODO: Remove this dependency on the Scheduler priority.
  // To do that, we're replacing it with an update lane priority.

  // Obtain the priority of execution tasks for easy scheduling
  const schedulerPriority = getCurrentPriorityLevel();

  // The old behavior was using the priority level of the Scheduler.
  // This couples React to the Scheduler internals, so we're replacing it
  // with the currentUpdateLanePriority above. As an example of how this
  // could be problematic, if we're not inside `Scheduler.runWithPriority`,
  // then we'll get the priority of the current running Scheduler task,
  // which is probably not what we want.
  let lane;
  if (
    // TODO: Temporary. We're removing the concept of discrete updates.
    (executionContext & DiscreteEventContext) !== NoContext &&

    // the event is a user-blocking one (e.g. click or input)
    schedulerPriority === UserBlockingSchedulerPriority
  ) {
    // Recalculate lane by findUpdateLane function
    lane = findUpdateLane(InputDiscreteLanePriority, currentEventWipLanes);
  } else {
    // Calculate the lane according to the priority calculation rule
    const schedulerLanePriority = schedulerPriorityToLanePriority(
      schedulerPriority,
    );

    if (decoupleUpdatePriorityFromScheduler) {
      // In the new strategy, we will track the current update lane priority
      // inside React and use that priority to select a lane for this update.
      // For now, we're just logging when they're different so we can assess.
      const currentUpdateLanePriority = getCurrentUpdateLanePriority();

      if (
        schedulerLanePriority !== currentUpdateLanePriority &&
        currentUpdateLanePriority !== NoLanePriority
      ) {
        if (__DEV__) {
          console.error(
            'Expected current scheduler lane priority %s to match current update lane priority %s',
            schedulerLanePriority,
            currentUpdateLanePriority,
          );
        }
      }
    }
    // Pick an update lane based on the computed schedulerLanePriority
    lane = findUpdateLane(schedulerLanePriority, currentEventWipLanes);
  }

  return lane;
}
  • getCurrentPriorityLevel reads the Scheduler priority (schedulerPriority) of the task currently being executed.
  • findUpdateLane then converts that priority into a lane, which becomes the priority of this update.

findUpdateLane

export function findUpdateLane(
  lanePriority: LanePriority,
  wipLanes: Lanes,
): Lane {
  switch (lanePriority) {
    case NoLanePriority:
      break;
    case SyncLanePriority:
      return SyncLane;
    case SyncBatchedLanePriority:
      return SyncBatchedLane;
    case InputDiscreteLanePriority: {
      const lane = pickArbitraryLane(InputDiscreteLanes & ~wipLanes);
      if (lane === NoLane) {
        // Shift to the next priority level
        return findUpdateLane(InputContinuousLanePriority, wipLanes);
      }
      return lane;
    }
    case InputContinuousLanePriority: {
      const lane = pickArbitraryLane(InputContinuousLanes & ~wipLanes);
      if (lane === NoLane) {
        // Shift to the next priority level
        return findUpdateLane(DefaultLanePriority, wipLanes);
      }
      return lane;
    }
    case DefaultLanePriority: {
      let lane = pickArbitraryLane(DefaultLanes & ~wipLanes);
      if (lane === NoLane) {
        // If all the default lanes are already being worked on, look for a
        // lane in the transition range.
        lane = pickArbitraryLane(TransitionLanes & ~wipLanes);
        if (lane === NoLane) {
          // All the transition lanes are taken, too. This should be very
          // rare, but as a last resort, pick a default lane. This will have
          // the effect of interrupting the current work-in-progress render.
          lane = pickArbitraryLane(DefaultLanes);
        }
      }
      return lane;
    }
    case TransitionPriority: // Should be handled by findTransitionLane instead
    case RetryLanePriority: // Should be handled by findRetryLane instead
      break;
    case IdleLanePriority: {
      let lane = pickArbitraryLane(IdleLanes & ~wipLanes);
      if (lane === NoLane) {
        lane = pickArbitraryLane(IdleLanes);
      }
      return lane;
    }
    default:
      // The remaining priorities are not valid for updates
      break;
  }
  invariant(
    false,
    'Invalid update priority: %s. This is a bug in React.',
    lanePriority,
  );
}
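
All of these branches operate on lane bitmasks. The following is only a small illustration of that idea, using made-up bit patterns rather than React's real lane constants: masking with ~wipLanes avoids lanes that are already being rendered, and pickArbitraryLane grabs one free bit out of what remains.

// Illustration only: the bit patterns below are invented, not the real
// constants from packages/react-reconciler/src/ReactFiberLane.js.
const InputDiscreteLanes = 0b0011000; // pretend two lanes are reserved for discrete input
const wipLanes           = 0b0001000; // lanes the in-progress render already occupies

// pickArbitraryLane boils down to grabbing one set bit out of the mask
// (the lowest one, via lanes & -lanes in the real source).
function pickArbitraryLane(lanes) {
  return lanes & -lanes;
}

// Mask out the busy lanes, then pick one of whatever is left.
const lane = pickArbitraryLane(InputDiscreteLanes & ~wipLanes);
console.log(lane.toString(2)); // "10000" – the free discrete-input lane

// Had the result been NoLane (0), findUpdateLane would fall through to the
// next priority level: InputContinuousLanePriority, then DefaultLanePriority...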

lanePriority: LanePriority

export opaque type LanePriority =
  | 0
  | 1
  | 2
  | 3
  | 4
  | 5
  | 6
  | 7
  | 8
  | 9
  | 10
  | 11
  | 12
  | 13
  | 14
  | 15
  | 16
  | 17;
export opaque type Lanes = number;
export opaque type Lane = number;
export opaque type LaneMap<T> = Array<T>;

import {
  ImmediatePriority as ImmediateSchedulerPriority,
  UserBlockingPriority as UserBlockingSchedulerPriority,
  NormalPriority as NormalSchedulerPriority,
  LowPriority as LowSchedulerPriority,
  IdlePriority as IdleSchedulerPriority,
  NoPriority as NoSchedulerPriority,
} from './SchedulerWithReactIntegration.new';

// sync task
export const SyncLanePriority: LanePriority = 15;
export const SyncBatchedLanePriority: LanePriority = 14;

// user event
const InputDiscreteHydrationLanePriority: LanePriority = 13;
export const InputDiscreteLanePriority: LanePriority = 12;

const InputContinuousHydrationLanePriority: LanePriority = 11;
export const InputContinuousLanePriority: LanePriority = 10;

const DefaultHydrationLanePriority: LanePriority = 9;
export const DefaultLanePriority: LanePriority = 8;

const TransitionHydrationPriority: LanePriority = 7;
export const TransitionPriority: LanePriority = 6;

const RetryLanePriority: LanePriority = 5;

const SelectiveHydrationLanePriority: LanePriority = 4;

const IdleHydrationLanePriority: LanePriority = 3;
const IdleLanePriority: LanePriority = 2;

const OffscreenLanePriority: LanePriority = 1;

export const NoLanePriority: LanePriority = 0;


createUpdate

export function createUpdate(eventTime: number, lane: Lane): Update<*> {
  const update: Update<*> = {
    eventTime, // update time
    lane, // priority

    tag: UpdateState, // what kind of update this is (UpdateState by default)
    payload: null, // the state / element to apply
    callback: null, // callback after update

    next: null, // point to the next update
  };
  return update;
}

createUpdate takes eventTime and lane as inputs and returns an update object; the tag field of that object indicates what kind of operation the update performs.

export const UpdateState = 0; // merge new state into the existing state (the default)
export const ReplaceState = 1; // replace the state entirely
export const ForceUpdate = 2; // force an update
export const CaptureUpdate = 3; // update produced while capturing an error (error boundaries)
  • createUpdate simply wraps each task as an individual update object, ready to be pushed into the update queue.

enqueueUpdate

export function enqueueUpdate<State>(fiber: Fiber, update: Update<State>) {
  // Read the fiber's update queue; it may already hold updates that have not
  // been processed yet
  const updateQueue = fiber.updateQueue;
  // updateQueue is null only if the fiber has already been unmounted, so there
  // is nothing to do
  if (updateQueue === null) {
    // Only occurs if the fiber has been unmounted.
    return;
  }

  const sharedQueue: SharedQueue<State> = (updateQueue: any).shared;
  const pending = sharedQueue.pending;
  if (pending === null) {
    // This is the first update. Create a circular list.
    // pending is null, so this is the first update in the queue: the list is
    // circular, so the update points at itself
    update.next = update;
  } else {
    // Insert update into the update queue loop
    update.next = pending.next;
    pending.next = update;
  }
  sharedQueue.pending = update;

  if (__DEV__) {
    if (
      currentlyProcessingQueue === sharedQueue &&
      !didWarnUpdateInsideUpdate
    ) {
      console.error(
        'An update (setState, replaceState, or forceUpdate) was scheduled ' +
          'from inside an update function. Update functions should be pure, ' +
          'with zero side-effects. Consider using componentDidUpdate or a ' +
          'callback.',
      );
      didWarnUpdateInsideUpdate = true;
    }
  }
}
  • This step links the update object into the fiber's circular update queue; sharedQueue.pending always points at the most recently added update. A small sketch of the resulting list follows.
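
To make the circular list concrete, here is a small standalone sketch that mimics the pending-list handling above with plain mock objects (nothing React-specific about it):

// Standalone mimic of the circular pending list built by enqueueUpdate.
// sharedQueue.pending always points at the LAST update; pending.next is the first.
function enqueue(sharedQueue, update) {
  const pending = sharedQueue.pending;
  if (pending === null) {
    update.next = update;        // first update: the ring points at itself
  } else {
    update.next = pending.next;  // new update points at the first update
    pending.next = update;       // the old last update now points at the new one
  }
  sharedQueue.pending = update;  // pending always tracks the newest update
}

const shared = { pending: null };
const a = { name: 'A', next: null };
const b = { name: 'B', next: null };
const c = { name: 'C', next: null };
[a, b, c].forEach(u => enqueue(shared, u));

// Walk the ring starting from the first update (pending.next):
let node = shared.pending.next;
const order = [];
do {
  order.push(node.name);
  node = node.next;
} while (node !== shared.pending.next);
console.log(order); // [ 'A', 'B', 'C' ] – processed in the order they were enqueued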

Summarize

React obtains the event time and priority (lane) of the update, creates an update object, and links it onto the fiber's update queue. At this point the work of updateContainer is complete, and the next step is the coordination phase; the whole path is sketched below.
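
For reference, this is roughly how those pieces fit together inside updateContainer in the React 17 reconciler; context handling, error branches and DEV-only code are trimmed, so read it as a condensed sketch rather than the full source:

// packages/react-reconciler/src/ReactFiberReconciler.old.js (condensed sketch)
export function updateContainer(element, container, parentComponent, callback) {
  const current = container.current;               // the root fiber
  const eventTime = requestEventTime();            // 1. time tag for this update
  const lane = requestUpdateLane(current);         // 2. priority lane

  const update = createUpdate(eventTime, lane);    // 3. wrap the work in an update object
  update.payload = {element};                      //    what should be rendered
  if (callback !== undefined && callback !== null) {
    update.callback = callback;
  }

  enqueueUpdate(current, update);                  // 4. link it into the fiber's update queue
  scheduleUpdateOnFiber(current, lane, eventTime); // 5. hand off to coordination / scheduling
  return lane;
}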

Coordination and Scheduling

The process of coordination and scheduling roughly follows the flow described below.

reconciler process

React's reconciler process uses scheduleUpdateOnFiber as its entry point. checkForNestedUpdates guards against overly nested updates: if the nesting depth exceeds 50, the update is treated as an infinite loop and an error is thrown. Then markUpdateLaneFromFiberToRoot walks from the updated fiber up to the root, merging the update's lane into each ancestor's childLanes using bitwise operations. After that:

  • If the lane is synchronous: when React is in the legacy unbatched context and no render or commit is in progress, performSyncWorkOnRoot is called directly; otherwise ensureRootIsScheduled handles the scheduling.
  • If the lane is asynchronous (concurrent), ensureRootIsScheduled is always used to schedule the work.
export function scheduleUpdateOnFiber(
  fiber: Fiber,  lane: Lane,  eventTime: number,
) {
  // Check the number of nesting layers to avoid loops doing invalid operations
  checkForNestedUpdates();
  warnAboutRenderPhaseUpdatesInDEV(fiber);

  // Walk from this fiber up to the root, merging the lane into each ancestor's childLanes
  const root = markUpdateLaneFromFiberToRoot(fiber, lane);
  if (root === null) {
    warnAboutUpdateOnUnmountedFiberInDEV(fiber);
    return null;
  }

  // Mark that the root has a pending update.
  // Mark the root as having pending work in this lane
  markRootUpdated(root, lane, eventTime);

  if (root === workInProgressRoot) {
    // Received an update to a tree that's in the middle of rendering. Mark
    // that there was an interleaved update work on this root. Unless the
    // `deferRenderPhaseUpdateToNextBatch` flag is off and this is a render
    // phase update. In that case, we don't treat render phase updates as if
    // they were interleaved, for backwards compat reasons.
    if (
      deferRenderPhaseUpdateToNextBatch ||
      (executionContext & RenderContext) === NoContext
    ) {
      workInProgressRootUpdatedLanes = mergeLanes(
        workInProgressRootUpdatedLanes,
        lane,
      );
    }
    if (workInProgressRootExitStatus === RootSuspendedWithDelay) {
      // The root already suspended with a delay, which means this render
      // definitely won't finish. Since we have a new update, let's mark it as
      // suspended now, right before marking the incoming update. This has the
      // effect of interrupting the current render and switching to the update.
      // TODO: Make sure this doesn't override pings that happen while we've
      // already started rendering.
      markRootSuspended(root, workInProgressRootRenderLanes);
    }
  }

  // TODO: requestUpdateLanePriority also reads the priority. Pass the
  // priority as an argument to that function and this one.
  // Get the current priority level
  const priorityLevel = getCurrentPriorityLevel();

  // Synchronous lane: handle the update with the synchronous path
  if (lane === SyncLane) {
    if (
      // Check if we're inside unbatchedUpdates
      (executionContext & LegacyUnbatchedContext) !== NoContext &&
      // Check if we're not already rendering
      (executionContext & (RenderContext | CommitContext)) === NoContext
    ) {
      // Register pending interactions on the root to avoid losing traced interaction data.
      // Synchronous and no react tasks are executing, call performSyncWorkOnRoot
      schedulePendingInteractions(root, lane);

      // This is a legacy edge case. The initial mount of a ReactDOM.render-ed
      // root inside of batchedUpdates should be synchronous, but layout updates
      // should be deferred until the end of the batch.
      performSyncWorkOnRoot(root);
    } else {
      // There is already React work in progress (or we're inside a batch);
      // ensureRootIsScheduled will reuse the existing scheduled task for this
      // update where possible
      ensureRootIsScheduled(root, eventTime);
      schedulePendingInteractions(root, lane);
      if (executionContext === NoContext) {
        // Flush the synchronous work now, unless we're already working or inside
        // a batch. This is intentionally inside scheduleUpdateOnFiber instead of
        // scheduleCallbackForFiber to preserve the ability to schedule a callback
        // without immediately flushing it. We only do this for user-initiated
        // updates, to preserve historical behavior of legacy mode.
        resetRenderTimer();
        flushSyncCallbackQueue();
      }
    }

  } else {
    // Schedule a discrete update but only if it's not Sync.
    // If this is an asynchronous task
    if (
      (executionContext & DiscreteEventContext) !== NoContext &&
      // Only updates at user-blocking priority or greater are considered
      // discrete, even inside a discrete event.
      (priorityLevel === UserBlockingSchedulerPriority ||
        priorityLevel === ImmediateSchedulerPriority)
    ) {
      // This is the result of a discrete event. Track the lowest priority
      // discrete update per root so we can flush them early, if needed.
      if (rootsWithPendingDiscreteUpdates === null) {
        rootsWithPendingDiscreteUpdates = new Set([root]);
      } else {
        rootsWithPendingDiscreteUpdates.add(root);
      }
    }

    // Schedule other updates after in case the callback is sync.
    // The update can be interrupted, just call ensureRootIsScheduled => performConcurrentWorkOnRoot
    ensureRootIsScheduled(root, eventTime);
    schedulePendingInteractions(root, lane);
  }

  // We use this when assigning a lane for a transition inside
  // `requestUpdateLane`. We assume it's the same as the root being updated,
  // since in the common case of a single root app it probably is. If it's not
  // the same root, then it's not a huge deal, we just might batch more stuff
  // together more than necessary.
  mostRecentlyUpdatedRoot = root;
}

Synchronous task type execution mechanism

When the task is synchronous and no other React work is currently running on the JS main thread, it is executed directly through performSyncWorkOnRoot(root).

performSyncWorkOnRoot mainly does two things:

  • renderRootSync starts synchronous rendering tasks from the root node
  • commitRoot executes the commit process

When there is already work scheduled on the JS thread, ensureRootIsScheduled is used instead. Its main job is to check whether the lane priority of the newly added update differs from what is already scheduled:

  • If the priority has not changed, the callback that is already scheduled is reused and nothing more happens.
  • If it has changed, the existing callback is cancelled and a new one is scheduled.
  • For a synchronous priority the new callback is performSyncWorkOnRoot; otherwise performConcurrentWorkOnRoot is scheduled through the Scheduler.
function ensureRootIsScheduled(root: FiberRoot, currentTime: number) {
  const existingCallbackNode = root.callbackNode;

  // Check if any lanes are being starved by other work. If so, mark them as
  // expired so we know to work on those next.
  markStarvedLanesAsExpired(root, currentTime);

  // Determine the next lanes to work on, and their priority.
  const nextLanes = getNextLanes(
    root,
    root === workInProgressRoot ? workInProgressRootRenderLanes : NoLanes,
  );
  // This returns the priority level computed during the `getNextLanes` call.
  const newCallbackPriority = returnNextLanesPriority();

  if (nextLanes === NoLanes) {
    // Special case: There's nothing to work on.
    if (existingCallbackNode !== null) {
      cancelCallback(existingCallbackNode);
      root.callbackNode = null;
      root.callbackPriority = NoLanePriority;
    }
    return;
  }

  // Check if there's an existing task. We may be able to reuse it.
  if (existingCallbackNode !== null) {
    const existingCallbackPriority = root.callbackPriority;
    if (existingCallbackPriority === newCallbackPriority) {
      // The priority hasn't changed. We can reuse the existing task. Exit.
      return;
    }
    // The priority changed. Cancel the existing callback. We'll schedule a new
    // one below.
    cancelCallback(existingCallbackNode);
  }

  // Schedule a new callback.
  let newCallbackNode;
  if (newCallbackPriority === SyncLanePriority) {
    // Special case: Sync React callbacks are scheduled on a special
    // internal queue
    // The synchronization task calls performSyncWorkOnRoot
    newCallbackNode = scheduleSyncCallback(
      performSyncWorkOnRoot.bind(null, root),
    );
  } else if (newCallbackPriority === SyncBatchedLanePriority) {
    newCallbackNode = scheduleCallback(
      ImmediateSchedulerPriority,
      performSyncWorkOnRoot.bind(null, root),
    );
  } else {
    // The asynchronous task calls performConcurrentWorkOnRoot
    const schedulerPriorityLevel = lanePriorityToSchedulerPriority(
      newCallbackPriority,
    );
    newCallbackNode = scheduleCallback(
      schedulerPriorityLevel,
      performConcurrentWorkOnRoot.bind(null, root),
    );
  }

  root.callbackPriority = newCallbackPriority;
  root.callbackNode = newCallbackNode;
}

Therefore, when the task is synchronous, regardless of whether the JS thread is idle, execution goes through performSyncWorkOnRoot and from there into renderRootSync and workLoopSync. In workLoopSync, as long as the workInProgress fiber is not null, performUnitOfWork is called in a loop; performUnitOfWork runs beginWork and completeWork, which is the beginWork process from the previous chapter that creates each fiber node.

// packages/react-reconciler/src/ReactFiberWorkLoop.old.js

function workLoopSync() {
  while (workInProgress !== null) {
    performUnitOfWork(workInProgress);
  }
}

Asynchronous task type execution mechanism

Asynchronous tasks execute performConcurrentWorkOnRoot and then renderRootConcurrent and workLoopConcurrent. Unlike synchronous tasks, asynchronous work can be interrupted, and the key to that is shouldYield: while it returns false the loop keeps going, and once it returns true the loop yields.

// packages/react-reconciler/src/ReactFiberWorkLoop.old.js

function workLoopConcurrent() {
  while (workInProgress !== null && !shouldYield()) {
    performUnitOfWork(workInProgress);
  }
}

Before each call to performUnitOfWork, the loop checks the return value of shouldYield(), which is what makes the reconciler process interruptible.

shouldYield

// packages\scheduler\src\SchedulerPostTask.js
export function unstable_shouldYield() {
  return getCurrentTime() >= deadline;
}

getCurrentTime is based on performance.now() (falling back to Date.now()), and deadline marks the end of the current time slice: each time the scheduler starts working it sets deadline = currentTime + yieldInterval (5ms by default, adjustable via forceFrameRate below). Once the slice is used up, shouldYield returns true, the current task is interrupted, the browser gets a chance to render and handle input, and the remaining work resumes in the next slice.

So in both workLoopConcurrent and workLoopSync, performUnitOfWork is called in a loop as long as the current workInProgress fiber is not null. With the flow above in mind, we can now look at what actually happens between beginWork and completeUnitOfWork.

The next chapters will explain the reconcileChildren process, the completeWork process, and the commitMutationEffects → insertOrAppendPlacementNodeIntoContainer (DOM) process of the fiber tree, covering in detail React v17's diff algorithm, how the virtual DOM becomes real DOM, how function component lifecycle hooks execute, and so on.

performUnitOfWork

function performUnitOfWork(unitOfWork: Fiber): void {
  // The current, flushed, state of this fiber is the alternate. Ideally
  // nothing should rely on this, but relying on it here means that we don't
  // need an additional field on the work in progress.
  const current = unitOfWork.alternate;
  setCurrentDebugFiberInDEV(unitOfWork);

  let next;
  if (enableProfilerTimer && (unitOfWork.mode & ProfileMode) !== NoMode) {
    startProfilerTimer(unitOfWork);
    next = beginWork(current, unitOfWork, subtreeRenderLanes);
    stopProfilerTimerIfRunningAndRecordDelta(unitOfWork, true);
  } else {
    // beginWork
    next = beginWork(current, unitOfWork, subtreeRenderLanes);
  }

  resetCurrentDebugFiberInDEV();
  unitOfWork.memoizedProps = unitOfWork.pendingProps;
  if (next === null) {
    // If this doesn't spawn new work, complete the current work.
    // completeUnitOfWork
    completeUnitOfWork(unitOfWork);
  } else {
    workInProgress = next;
  }

  ReactCurrentOwner.current = null;
}

Therefore, in performUnitOfWork, each call to beginWork advances workInProgress to the next child; once beginWork returns null (there are no children left to process on this branch), completeUnitOfWork is called.

beginWork

beginWork is actually invoked through originBeginWork. Opening its source shows a large switch on workInProgress.tag that dispatches to a handler for each component type. We won't break every case down here (that deserves an article of its own), but all of them eventually call reconcileChildren.
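
A heavily condensed sketch of that switch (based on ReactFiberBeginWork; bailout logic, defaultProps resolution and most cases are omitted):

// Condensed sketch of beginWork's dispatch on workInProgress.tag.
function beginWork(current, workInProgress, renderLanes) {
  switch (workInProgress.tag) {
    case FunctionComponent:
      // run the function component (hooks), then reconcile its children
      return updateFunctionComponent(
        current, workInProgress,
        workInProgress.type, workInProgress.pendingProps, renderLanes,
      );
    case ClassComponent:
      // create/update the class instance, then reconcile the render() output
      return updateClassComponent(
        current, workInProgress,
        workInProgress.type, workInProgress.pendingProps, renderLanes,
      );
    case HostComponent:
      // plain host elements such as <div>
      return updateHostComponent(current, workInProgress, renderLanes);
    // ...HostRoot, HostText, Fragment, SuspenseComponent, and many more
  }
  // Each update* handler eventually calls
  // reconcileChildren(current, workInProgress, nextChildren, renderLanes).
}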

completeUnitOfWork

Once beginWork has no more children to descend into on a branch, completeUnitOfWork runs for the fibers on the way back up.

function completeUnitOfWork(unitOfWork: Fiber): void {
  // Attempt to complete the current unit of work, then move to the next
  // sibling. If there are no more siblings, return to the parent fiber.
  let completedWork = unitOfWork;
  do {
    // The current, flushed, state of this fiber is the alternate. Ideally
    // nothing should rely on this, but relying on it here means that we don't
    // need an additional field on the work in progress.
    const current = completedWork.alternate;
    const returnFiber = completedWork.return;

    // Check if the work completed or if something threw.
    if ((completedWork.flags & Incomplete) === NoFlags) {
      setCurrentDebugFiberInDEV(completedWork);
      let next;
      if (
        !enableProfilerTimer ||
        (completedWork.mode & ProfileMode) === NoMode
      ) {
        // Bind events, update props, update dom
        next = completeWork(current, completedWork, subtreeRenderLanes);
      } else {
        startProfilerTimer(completedWork);
        next = completeWork(current, completedWork, subtreeRenderLanes);
        // Update render duration assuming we didn't error.
        stopProfilerTimerIfRunningAndRecordDelta(completedWork, false);
      }
      resetCurrentDebugFiberInDEV();

      if (next !== null) {
        // Completing this fiber spawned new work. Work on that next.
        workInProgress = next;
        return;
      }

      resetChildLanes(completedWork);

      if (
        returnFiber !== null &&
        // Do not append effects to parents if a sibling failed to complete
        (returnFiber.flags & Incomplete) === NoFlags
      ) {
        // Append all the effects of the subtree and this fiber onto the effect
        // list of the parent. The completion order of the children affects the
        // side-effect order.

        // Merge the collected side effects into the parent effect lists
        if (returnFiber.firstEffect === null) {
          returnFiber.firstEffect = completedWork.firstEffect;
        }
        if (completedWork.lastEffect !== null) {
          if (returnFiber.lastEffect !== null) {
            returnFiber.lastEffect.nextEffect = completedWork.firstEffect;
          }
          returnFiber.lastEffect = completedWork.lastEffect;
        }

        // If this fiber had side-effects, we append it AFTER the children's
        // side-effects. We can perform certain side-effects earlier if needed,
        // by doing multiple passes over the effect list. We don't want to
        // schedule our own side-effect on our own list because if end up
        // reusing children we'll schedule this effect onto itself since we're
        // at the end.
        const flags = completedWork.flags;

        // Skip both NoWork and PerformedWork tags when creating the effect
        // list. PerformedWork effect is read by React DevTools but shouldn't be
        // committed.
        // Skip NoWork, PerformedWork is not used in the commit phase

        if (flags > PerformedWork) {
          if (returnFiber.lastEffect !== null) {
            returnFiber.lastEffect.nextEffect = completedWork;
          } else {
            returnFiber.firstEffect = completedWork;
          }
          returnFiber.lastEffect = completedWork;
        }
      }
    } else {
      // This fiber did not complete because something threw. Pop values off
      // the stack without entering the complete phase. If this is a boundary,
      // capture values if possible.
      const next = unwindWork(completedWork, subtreeRenderLanes);

      // Because this fiber did not complete, don't reset its expiration time.

      if (next !== null) {
        // If completing this work spawned new work, do that next. We'll come
        // back here again.
        // Since we're restarting, remove anything that is not a host effect
        // from the effect tag.
        next.flags &= HostEffectMask;
        workInProgress = next;
        return;
      }

      if (
        enableProfilerTimer &&
        (completedWork.mode & ProfileMode) !== NoMode
      ) {
        // Record the render duration for the fiber that errored.
        stopProfilerTimerIfRunningAndRecordDelta(completedWork, false);

        // Include the time spent working on failed children before continuing.
        let actualDuration = completedWork.actualDuration;
        let child = completedWork.child;
        while (child !== null) {
          actualDuration += child.actualDuration;
          child = child.sibling;
        }
        completedWork.actualDuration = actualDuration;
      }

      if (returnFiber !== null) {
        // Mark the parent fiber as incomplete and clear its effect list.
        returnFiber.firstEffect = returnFiber.lastEffect = null;
        returnFiber.flags |= Incomplete;
      }
    }

    // Sibling Pointer
    const siblingFiber = completedWork.sibling;
    if (siblingFiber !== null) {
      // If there is more work to do in this returnFiber, do that next.
      workInProgress = siblingFiber;
      return;
    }
    // Otherwise, return to the parent
    completedWork = returnFiber;
    // Update the next thing we're working on in case something throws.
    workInProgress = completedWork;
  } while (completedWork !== null);

  // We've reached the root.
  if (workInProgressRootExitStatus === RootIncomplete) {
    workInProgressRootExitStatus = RootCompleted;
  }
}

Its role is to collect, layer by layer, the fibers that have been flagged with side effects, all the way up to the root, so that the commit phase can add, delete and update DOM nodes by walking this effect list.
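
To see why this list is worth building, here is a simplified sketch of how the commit phase consumes it: instead of re-traversing the whole fiber tree, it just walks the nextEffect chain that completeUnitOfWork collected (the real logic lives in commitRootImpl and the commit*Effects passes):

// Simplified sketch: the commit phase walks the effect list linearly.
// finishedWork is the completed workInProgress root fiber.
let nextEffect = finishedWork.firstEffect;
while (nextEffect !== null) {
  // commitMutationEffects inspects nextEffect.flags (Placement, Update,
  // Deletion, ...) and performs the matching DOM insertions, updates or removals.
  nextEffect = nextEffect.nextEffect;
}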

scheduler process

Many readers may still be unsure what coordination and scheduling actually mean here. In plain terms:

  • Coordination (reconciliation) is about organizing the work itself: building and diffing the fiber tree in an orderly way.
  • Scheduling is about deciding when that work gets to run.

Coordination in React happens on the JS thread and involves orchestrating many modules, such as handling synchronous and asynchronous lanes and processing fiber nodes in reconcileChildren, so that the whole process runs in order. Scheduling is about letting that work run when the JS thread is idle, at the granularity of a frame (time slice). So how is that done?

When we go back to processing asynchronous tasks, we will find that the function performConcurrentWorkOnRoot is wrapped with a layer of scheduleCallback:

newCallbackNode = scheduleCallback(
   schedulerPriorityLevel,
   performConcurrentWorkOnRoot.bind(null, root),
)
export function scheduleCallback(
  reactPriorityLevel: ReactPriorityLevel,
  callback: SchedulerCallback,
  options: SchedulerCallbackOptions | void | null,
) {
  const priorityLevel = reactPriorityToSchedulerPriority(reactPriorityLevel);
  return Scheduler_scheduleCallback(priorityLevel, callback, options);
}

Tracing Scheduler_scheduleCallback leads to where the function is actually declared:

// packages/scheduler/src/Scheduler.js
function unstable_scheduleCallback(priorityLevel, callback, options) {
  var currentTime = getCurrentTime();

  var startTime;
  if (typeof options === 'object' && options !== null) {
    var delay = options.delay;
    if (typeof delay === 'number' && delay > 0) {
      startTime = currentTime + delay;
    } else {
      startTime = currentTime;
    }
  } else {
    startTime = currentTime;
  }

  var timeout;
  switch (priorityLevel) {
    case ImmediatePriority:
      timeout = IMMEDIATE_PRIORITY_TIMEOUT;
      break;
    case UserBlockingPriority:
      timeout = USER_BLOCKING_PRIORITY_TIMEOUT;
      break;
    case IdlePriority:
      timeout = IDLE_PRIORITY_TIMEOUT;
      break;
    case LowPriority:
      timeout = LOW_PRIORITY_TIMEOUT;
      break;
    case NormalPriority:
    default:
      timeout = NORMAL_PRIORITY_TIMEOUT;
      break;
  }

  var expirationTime = startTime + timeout;

  var newTask = {
    id: taskIdCounter++,
    callback,
    priorityLevel,
    startTime,
    expirationTime,
    sortIndex: -1,
  };
  if (enableProfiling) {
    newTask.isQueued = false;
  }

  if (startTime > currentTime) {
    // This is a delayed task.
    newTask.sortIndex = startTime;
    push(timerQueue, newTask);
    if (peek(taskQueue) === null && newTask === peek(timerQueue)) {
      // All tasks are delayed, and this is the task with the earliest delay.
      if (isHostTimeoutScheduled) {
        // Cancel an existing timeout.
        cancelHostTimeout();
      } else {
        isHostTimeoutScheduled = true;
      }
      // Schedule a timeout.
      requestHostTimeout(handleTimeout, startTime - currentTime);
    }
  } else {
    newTask.sortIndex = expirationTime;
    push(taskQueue, newTask);
    if (enableProfiling) {
      markTaskStart(newTask, currentTime);
      newTask.isQueued = true;
    }
    // Schedule a host callback, if needed. If we're already performing work,
    // wait until the next time we yield.
    if (!isHostCallbackScheduled && !isPerformingWork) {
      isHostCallbackScheduled = true;
      requestHostCallback(flushWork);
    }
  }

  return newTask;
}
  • When startTime > currentTime, the task has a delay and is not ready yet, so it is pushed into timerQueue (the delayed-task queue), sorted by startTime.
  • Otherwise the task is ready to run and is pushed into taskQueue (the scheduling queue), sorted by expirationTime.
  • Finally requestHostCallback(flushWork) is called to start (or continue) processing the task queue.
  // Create a message channel
  const channel = new MessageChannel();
  const port = channel.port2;
  channel.port1.onmessage = performWorkUntilDeadline;

  // Tell the scheduler to start scheduling
  requestHostCallback = function(callback) {
    scheduledHostCallback = callback;
    if (!isMessageLoopRunning) {
      isMessageLoopRunning = true;
      port.postMessage(null);
    }
  };

React creates a message channel with new MessageChannel(). requestHostCallback posts a message on it, and since postMessage schedules a macrotask, the onmessage handler runs only after the current task has finished and the browser has had a chance to do its own work; that handler, performWorkUntilDeadline, then processes scheduled work from the start of the slice until its deadline.
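
The macrotask is the whole point: a message posted with postMessage is handled only after the current task completes and the browser has had the opportunity to paint or handle input, and unlike setTimeout(fn, 0) it is not clamped to roughly 4ms. A standalone sketch of the pattern, outside of React:

// Standalone sketch of the MessageChannel yielding trick (not React code).
const channel = new MessageChannel();
const tasks = ['a', 'b', 'c'];

channel.port1.onmessage = () => {
  const task = tasks.shift();
  if (task !== undefined) {
    console.log('processing', task);
    // Post again instead of looping: each slice is a separate macrotask,
    // so the browser can paint / handle input between slices.
    channel.port2.postMessage(null);
  }
};

channel.port2.postMessage(null); // kick off the first slice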

One thing worth noting here is how the device frame rate relates to the slice length:

  forceFrameRate = function(fps) {
    if (fps < 0 || fps > 125) {
      // Using console['error'] to evade Babel and ESLint
      console['error'](
        'forceFrameRate takes a positive int between 0 and 125, ' +
          'forcing frame rates higher than 125 fps is not supported',
      );
      return;
    }
    if (fps > 0) {
      yieldInterval = Math.floor(1000 / fps);
    } else {
      // reset the framerate
      yieldInterval = 5;
    }
  };
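
By default yieldInterval is 5ms, so each work slice lasts at most about 5ms before shouldYield starts returning true. Calling forceFrameRate(60), for example, sets yieldInterval to Math.floor(1000 / 60) = 16ms, stretching each slice to roughly one 60fps frame.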

performWorkUntilDeadline

  const performWorkUntilDeadline = () => {
    if (scheduledHostCallback !== null) {
      const currentTime = getCurrentTime();
      // Yield after `yieldInterval` ms, regardless of where we are in the vsync
      // cycle. This means there's always time remaining at the beginning of
      // the message event.
      // Update current frame end time
      deadline = currentTime + yieldInterval;
      const hasTimeRemaining = true;
      try {
        const hasMoreWork = scheduledHostCallback(
          hasTimeRemaining,
          currentTime,
        );
        // hasMoreWork tells us whether the scheduler still has tasks left
        if (!hasMoreWork) {
          isMessageLoopRunning = false;
          scheduledHostCallback = null;
        } else {
          // If there's more work, schedule the next message event at the end
          // of the preceding one.
          // post another message so the next slice runs as a new macrotask
          port.postMessage(null);
        }
      } catch (error) {
        // If a scheduler task throws, exit the current browser task so the
        // error can be observed.
        port.postMessage(null);
        throw error;
      }
    } else {
      isMessageLoopRunning = false;
    }
    // Yielding to the browser will give it a chance to paint, so we can
    // reset this.
    needsPaint = false;
  };

Summarize

This article covered how React, when state changes, builds the workInProgress fiber tree according to the priority of the current task. In the coordination phase, each time slice is compared against the current time: once the current time passes the slice's deadline, there is no idle time left, the work is interrupted, and it resumes in the next slice when the browser is free again, which is React's interruptible strategy. We also walked through how beginWork traverses and updates the fiber nodes. That is the end of this chapter; the next one covers React's diff algorithm.


Posted by SemiApocalyptic on Tue, 21 Feb 2023 09:51:18 +0530