[SDU Meeting] Application Settings Module

Preface

In this article, I will introduce my design for the settings page of the SDU Meeting client.

Overall structure

The entire settings feature is encapsulated in a single Setting module, which is displayed to the user as a modal in the client. It is divided into four parts:

  • General settings
  • Audio and video devices
  • Attendance status
  • About

Each part is subdivided into independent modules for easy maintenance.

General settings

Let's start with the general settings module, which manages application-wide options: whether the application logs in automatically on launch, whether it starts automatically with the system, and whether encryption is enabled for private video calls.
The code of the entire general settings module is as follows:

import { AlertOutlined, LogoutOutlined, QuestionCircleFilled } from '@ant-design/icons';
import { Button, Checkbox, Modal, Tooltip } from 'antd';
import React, { useEffect, useState } from 'react';
import { getMainContent } from 'Utils/Global';
import { eWindow } from 'Utils/Types';

export default function General() {
	const [autoLogin, setAutoLogin] = useState(localStorage.getItem('autoLogin') === 'true');
	const [autoOpen, setAutoOpen] = useState(false);
	const [securityPrivateWebrtc, setSecurityPrivateWebrtc] = useState(
		localStorage.getItem('securityPrivateWebrtc') === 'true'
	);
	useEffect(() => {
		eWindow.ipc.invoke('GET_OPEN_AFTER_START_STATUS').then((status: boolean) => {
			setAutoOpen(status);
		});
	}, []);

	return (
		<>
			<div>
				<Checkbox
					checked={autoLogin}
					onChange={(e) => {
						setAutoLogin(e.target.checked);
						localStorage.setItem('autoLogin', `${e.target.checked}`);
					}}>
					Automatic login
				</Checkbox>
			</div>
			<div>
				<Checkbox
					checked={autoOpen}
					onChange={(e) => {
						setAutoOpen(e.target.checked);
						eWindow.ipc.send('EXCHANGE_OPEN_AFTER_START_STATUS', e.target.checked);
					}}>
					Start on startup
				</Checkbox>
			</div>
			<div style={{ display: 'flex' }}>
				<Checkbox
					checked={securityPrivateWebrtc}
					onChange={(e) => {
						if (e.target.checked) {
							Modal.confirm({
								icon: <AlertOutlined />,
								content:
									'Enabling encryption will significantly increase CPU usage. Are you sure you want to enable it?',
								cancelText: 'Not now',
								okText: 'Enable',
								onCancel: () => {},
								onOk: () => {
									setSecurityPrivateWebrtc(true);
									localStorage.setItem('securityPrivateWebrtc', `${true}`);
								},
							});
						} else {
							setSecurityPrivateWebrtc(false);
							localStorage.setItem('securityPrivateWebrtc', `${false}`);
						}
					}}>
					Private encrypted call
				</Checkbox>
				<Tooltip placement='right' overlay={'Enabling encryption will significantly increase CPU usage and disable GPU acceleration'}>
					<QuestionCircleFilled style={{ color: 'gray', transform: 'translateY(25%)' }} />
				</Tooltip>
			</div>
			<div style={{ marginTop: '5px' }}>
				<Button
					icon={<LogoutOutlined />}
					danger
					type='primary'
					onClick={() => {
						Modal.confirm({
							title: 'Log out',
							content: 'Are you sure you want to log out of the current account?',
							icon: <LogoutOutlined />,
							cancelText: 'Cancel',
							okText: 'Confirm',
							okButtonProps: {
								danger: true,
							},
							onOk: () => {
								eWindow.ipc.send('LOG_OUT');
							},
							getContainer: getMainContent,
						});
					}}>
					Log out
				</Button>
			</div>
		</>
	);
}

The automatic login function is straightforward, so I will focus on the implementation of the start-on-startup function.

Start on startup

Implementing this function requires modifying the user's registry. The front end has no ability to do that, so we call Node.js modules through Electron's main process to operate on the registry.
In the main process of Electron, we add the following event handlers to ipcMain:

const { app } = require('electron');
const ipc = require('electron').ipcMain;
const cp = require('child_process');

ipc.on('EXCHANGE_OPEN_AFTER_START_STATUS', (evt, openAtLogin) => {
	if (app.isPackaged) {
		if (openAtLogin) {
			cp.exec(
				`REG ADD HKLM\\SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Run /v SduMeeting /t REG_SZ /d "${process.execPath}" /f`,
				(err) => {
					console.log(err);
				}
			);
		} else {
			cp.exec(
				`REG DELETE HKLM\\SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Run /v SduMeeting /f`,
				(err) => {
					console.log(err);
				}
			);
		}
	}
});

ipc.handle('GET_OPEN_AFTER_START_STATUS', () => {
	return new Promise((resolve) => {
		cp.exec(
			`REG QUERY HKLM\\SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Run /v SduMeeting`,
			(err, stdout) => {
				if (err) {
					// The query fails when the value does not exist, so report "not enabled"
					resolve(false);
					return;
				}
				resolve(stdout.indexOf('SduMeeting') >= 0);
			}
		);
	});
});

These two handlers modify and query the startup status respectively. We call Node.js's child_process module to run REG commands, which lets us add, delete, and query registry entries on Windows, and thus toggle the application's start-on-startup behavior.
Note that in production, modifying the registry under HKLM requires administrator privileges, so the application must request them at packaging time. Since I package with electron-packager, I add the extra parameter --win32metadata.requested-execution-level=requireAdministrator .
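For reference, a full electron-packager invocation with that flag might look like the following sketch; the app name, platform, and architecture here are placeholders to adapt to your own project.

```shell
# Hypothetical packaging command; adjust the name and flags to your project.
npx electron-packager . SduMeeting \
  --platform=win32 \
  --arch=x64 \
  --win32metadata.requested-execution-level=requireAdministrator
```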

Audio and video equipment

Since the purpose of this project is to let multiple users hold online video conferences, we must handle users' audio and video devices. To ease maintenance, I split audio devices and video devices into two modules, with a parent multimedia device module above them that manages shared data (such as the current device list and the device ID currently in use).

Multimedia devices (MediaDevices.tsx)

In this module, we first need to enumerate all the multimedia devices connected to the user's machine. To do this, you can reuse the approach from the previous article, [SDU Meeting] Acquiring User Media Based on WebRTC.
Let's first implement a function that obtains the user's multimedia devices:

/**
 * Get user multimedia devices
 */
function getUserMediaDevices() {
	return new Promise((resolve, reject) => {
		try {
			navigator.mediaDevices.enumerateDevices().then((devices) => {
				const generateDeviceJson = (device: MediaDeviceInfo) => {
					const formerIndex = device.label.indexOf(' (');
					const latterIndex = device.label.lastIndexOf(' (');
					const { label, webLabel } = ((label, deviceId) => {
						switch (deviceId) {
							case 'default':
								return {
									label: label.replace('Default - ', ''),
									webLabel: label.replace('Default - ', 'default - '),
								};
							case 'communications':
								return {
									label: label.replace('Communications - ', ''),
									webLabel: label.replace('Communications - ', 'Communication equipment - '),
								};
							default:
								return { label, webLabel: label };
						}
					})(
						formerIndex === latterIndex
							? device.label
							: device.label.substring(0, latterIndex),
						device.deviceId
					);
					return { label, webLabel, deviceId: device.deviceId };
				};
				const videoDevices = [],
					audioDevices = [];
				for (const device of devices) {
					if (device.kind === 'videoinput') {
						videoDevices.push(generateDeviceJson(device));
					} else if (device.kind === 'audioinput') {
						audioDevices.push(generateDeviceJson(device));
					}
				}
				store.dispatch(updateAvailableDevices(DEVICE_TYPE.VIDEO_DEVICE, videoDevices));
				store.dispatch(updateAvailableDevices(DEVICE_TYPE.AUDIO_DEVICE, audioDevices));
				resolve({ video: videoDevices, audio: audioDevices });
			});
		} catch (error) {
			console.warn('Error getting device');
			reject(error);
		}
	});
}

Calling this function retrieves the current multimedia device information and dispatches it to Redux for a state update.
The code of the entire multimedia device module is as follows:

import { CustomerServiceOutlined } from '@ant-design/icons';
import { Button } from 'antd';
import { globalMessage } from 'Components/GlobalMessage/GlobalMessage';
import React, { useEffect, useState } from 'react';
import { DEVICE_TYPE } from 'Utils/Constraints';
import { updateAvailableDevices } from 'Utils/Store/actions';
import store from 'Utils/Store/store';
import { DeviceInfo } from 'Utils/Types';
import AudioDevices from './AudioDevices';
import VideoDevices from './VideoDevices';

export default function MediaDevices() {
	const [videoDevices, setVideoDevices] = useState(store.getState().availableVideoDevices);
	const [audioDevices, setAudioDevices] = useState(store.getState().availableAudioDevices);
	const [usingVideoDevice, setUsingVideoDevice] = useState('');
	const [usingAudioDevice, setUsingAudioDevice] = useState('');
	useEffect(
		() =>
			store.subscribe(() => {
				const storeState = store.getState();
				setVideoDevices(storeState.availableVideoDevices);
				setAudioDevices(storeState.availableAudioDevices);
				setUsingVideoDevice(`${(storeState.usingVideoDevice as DeviceInfo).webLabel}`);
				setUsingAudioDevice(`${(storeState.usingAudioDevice as DeviceInfo).webLabel}`);
			}),
		[]
	);

	useEffect(() => {
		getUserMediaDevices();
	}, []);

	return (
		<>
			<AudioDevices
				audioDevices={audioDevices}
				usingAudioDevice={usingAudioDevice}
				setUsingAudioDevice={setUsingAudioDevice}
			/>
			<VideoDevices
				videoDevices={videoDevices}
				usingVideoDevice={usingVideoDevice}
				setUsingVideoDevice={setUsingVideoDevice}
			/>
			<Button
				type='link'
				style={{ fontSize: '0.9em' }}
				icon={<CustomerServiceOutlined />}
				onClick={() => {
					getUserMediaDevices().then(() => {
						globalMessage.success('Device information updated', 0.5);
					});
				}}>
				Can't find the right device? Click here to refresh the device list
			</Button>
		</>
	);
}

/**
 * Get user multimedia devices
 */
function getUserMediaDevices() {
	return new Promise((resolve, reject) => {
		try {
			navigator.mediaDevices.enumerateDevices().then((devices) => {
				const generateDeviceJson = (device: MediaDeviceInfo) => {
					const formerIndex = device.label.indexOf(' (');
					const latterIndex = device.label.lastIndexOf(' (');
					const { label, webLabel } = ((label, deviceId) => {
						switch (deviceId) {
							case 'default':
								return {
									label: label.replace('Default - ', ''),
									webLabel: label.replace('Default - ', 'default - '),
								};
							case 'communications':
								return {
									label: label.replace('Communications - ', ''),
									webLabel: label.replace('Communications - ', 'Communication equipment - '),
								};
							default:
								return { label, webLabel: label };
						}
					})(
						formerIndex === latterIndex
							? device.label
							: device.label.substring(0, latterIndex),
						device.deviceId
					);
					return { label, webLabel, deviceId: device.deviceId };
				};
				const videoDevices = [],
					audioDevices = [];
				for (const device of devices) {
					if (device.kind === 'videoinput') {
						videoDevices.push(generateDeviceJson(device));
					} else if (device.kind === 'audioinput') {
						audioDevices.push(generateDeviceJson(device));
					}
				}
				store.dispatch(updateAvailableDevices(DEVICE_TYPE.VIDEO_DEVICE, videoDevices));
				store.dispatch(updateAvailableDevices(DEVICE_TYPE.AUDIO_DEVICE, audioDevices));
				resolve({ video: videoDevices, audio: audioDevices });
			});
		} catch (error) {
			console.warn('Error getting device');
			reject(error);
		}
	});
}
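The label-normalization closure inside generateDeviceJson can be exercised in isolation. Here is a distilled version as a sketch (the standalone function name is mine, not part of the app's code):

```javascript
// Normalize a device label the way generateDeviceJson does: strip Chrome's
// "Default - " / "Communications - " prefixes for the stored label, and keep
// a friendlier display label ("webLabel") for the UI.
function normalizeLabel(label, deviceId) {
	switch (deviceId) {
		case 'default':
			return {
				label: label.replace('Default - ', ''),
				webLabel: label.replace('Default - ', 'default - '),
			};
		case 'communications':
			return {
				label: label.replace('Communications - ', ''),
				webLabel: label.replace('Communications - ', 'Communication equipment - '),
			};
		default:
			return { label, webLabel: label };
	}
}

console.log(normalizeLabel('Default - Microphone (USB Audio)', 'default'));
// { label: 'Microphone (USB Audio)', webLabel: 'default - Microphone (USB Audio)' }
```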

Video devices (VideoDevices.tsx)

Following the principle of "easy things first", let's skip past the audio device module for a moment and look at the video device module. The code of the whole module is as follows:

import { Button, Select } from 'antd';
import React, { useEffect, useRef, useState } from 'react';
import { DEVICE_TYPE } from 'Utils/Constraints';
import eventBus from 'Utils/EventBus/EventBus';
import { getDeviceStream } from 'Utils/Global';
import { exchangeMediaDevice } from 'Utils/Store/actions';
import store from 'Utils/Store/store';
import { DeviceInfo } from 'Utils/Types';

interface VideoDevicesProps {
	videoDevices: Array<DeviceInfo>;
	usingVideoDevice: string;
	setUsingVideoDevice: React.Dispatch<React.SetStateAction<string>>;
}

export default function VideoDevices(props: VideoDevicesProps) {
	const [isExamingCamera, setIsExamingCamera] = useState(false);
	const examCameraRef = useRef<HTMLVideoElement>(null);
	useEffect(() => {
		if (isExamingCamera) {
			videoConnect(examCameraRef);
		} else {
			const examCameraDOM = examCameraRef.current as HTMLVideoElement;
			examCameraDOM.pause();
			examCameraDOM.srcObject = null;
		}
	}, [isExamingCamera]);

	useEffect(() => {
		const onCloseSettingModal = function () {
			setIsExamingCamera(false);
		};
		eventBus.on('CLOSE_SETTING_MODAL', onCloseSettingModal);
		return () => {
			eventBus.off('CLOSE_SETTING_MODAL', onCloseSettingModal);
		};
	}, []);

	return (
		<div>
			Please select a video device:
			<Select
				placeholder='Please select a video device'
				style={{ width: '100%' }}
				onSelect={(
					label: string,
					option: { key: string; value: string; children: string }
				) => {
					props.setUsingVideoDevice(label);
					store.dispatch(
						exchangeMediaDevice(DEVICE_TYPE.VIDEO_DEVICE, {
							deviceId: option.key,
							label: option.value,
							webLabel: option.children,
						})
					);
					if (isExamingCamera) {
						videoConnect(examCameraRef);
					}
				}}
				value={props.usingVideoDevice}>
				{props.videoDevices.map((device) => (
					<Select.Option value={device.label} key={device.deviceId}>
						{device.webLabel}
					</Select.Option>
				))}
			</Select>
			<div style={{ margin: '0.25rem' }}>
				<Button
					style={{ width: '7em' }}
					onClick={() => {
						setIsExamingCamera(!isExamingCamera);
					}}>
					{isExamingCamera ? 'Stop checking' : 'Check the camera'}
				</Button>
			</div>
			<div
				style={{
					width: '100%',
					display: 'flex',
					justifyContent: 'center',
				}}>
				<video
					ref={examCameraRef}
					style={{
						background: 'black',
						width: '40vw',
						height: 'calc(40vw / 1920 * 1080)',
					}}
				/>
			</div>
		</div>
	);
}

async function videoConnect(examCameraRef: React.RefObject<HTMLVideoElement>) {
	const videoStream = await getDeviceStream(DEVICE_TYPE.VIDEO_DEVICE);
	const examCameraDOM = examCameraRef.current as HTMLVideoElement;
	examCameraDOM.srcObject = videoStream;
	examCameraDOM.play();
}

With this module, the user can switch to the desired camera and test it.

Audio devices (AudioDevices.tsx)

The audio device module provides roughly the same functions as the video device module, but it additionally includes a microphone volume test. In this application, I implemented the volume test with an AudioWorkletNode. First, define a worklet script under public that registers the processor:

// \public\electronAssets\worklet\volumeMeter.js
/* eslint-disable no-underscore-dangle */
const SMOOTHING_FACTOR = 0.8;
// eslint-disable-next-line no-unused-vars
const MINIMUM_VALUE = 0.00001;
registerProcessor(
	'vumeter',
	class extends AudioWorkletProcessor {
		_volume;
		_updateIntervalInMS;
		_nextUpdateFrame;
		_currentTime;

		constructor() {
			super();
			this._volume = 0;
			this._updateIntervalInMS = 50;
			this._nextUpdateFrame = this._updateIntervalInMS;
			this._currentTime = 0;
			this.port.onmessage = (event) => {
				if (event.data.updateIntervalInMS) {
					this._updateIntervalInMS = event.data.updateIntervalInMS;
					// console.log(event.data.updateIntervalInMS);
				}
			};
		}

		get intervalInFrames() {
			// eslint-disable-next-line no-undef
			return (this._updateIntervalInMS / 1000) * sampleRate;
		}

		process(inputs, outputs, parameters) {
			const input = inputs[0];

			// Note that the input will be down-mixed to mono; however, if no inputs are
			// connected then zero channels will be passed in.
			if (0 < input.length) {
				const samples = input[0];
				let sum = 0;

				// Calculated the squared-sum.
				for (const sample of samples) {
					sum += sample ** 2;
				}

				// Calculate the RMS level and update the volume.
				const rms = Math.sqrt(sum / samples.length);
				this._volume = Math.max(rms, this._volume * SMOOTHING_FACTOR);

				// Update and sync the volume property with the main thread.
				this._nextUpdateFrame -= samples.length;
				if (this._nextUpdateFrame < 0) {
					this._nextUpdateFrame += this.intervalInFrames;
					// const currentTime = currentTime ;
					// eslint-disable-next-line no-undef
					if (!this._currentTime || 0.125 < currentTime - this._currentTime) {
						// eslint-disable-next-line no-undef
						this._currentTime = currentTime;
						// console.log(`currentTime: ${currentTime}`);
						this.port.postMessage({ volume: this._volume });
					}
				}
			}

			return true;
		}
	}
);
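The core of the processor is the RMS calculation with exponential smoothing. Distilled to a plain function outside the worklet (the function name is mine), it looks like this:

```javascript
const SMOOTHING_FACTOR = 0.8;

// Compute the new smoothed volume from one block of samples,
// exactly as the 'vumeter' processor above does.
function updateVolume(samples, previousVolume) {
	let sum = 0;
	for (const sample of samples) {
		sum += sample ** 2;
	}
	const rms = Math.sqrt(sum / samples.length);
	// Decay the previous value so the meter falls off smoothly.
	return Math.max(rms, previousVolume * SMOOTHING_FACTOR);
}

console.log(updateVolume([0.5, -0.5, 0.5, -0.5], 0)); // 0.5
console.log(updateVolume([0, 0, 0, 0], 0.5)); // 0.4 (decayed, not an instant drop)
```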

In the React project, I use a custom Hook that loads the worklet script to measure the volume:

/**
 * [Customize Hooks] monitor the media stream volume
 * @returns Volume, connection stream function, disconnection function
 */
import { useCallback, useRef, useState } from 'react';

const useVolume = () => {
	const [volume, setVolume] = useState(0);
	const ref = useRef<{
		audioContext?: AudioContext | null;
		source?: MediaStreamAudioSourceNode | null;
		node?: AudioWorkletNode | null;
	}>({});

	const onmessage = useCallback((evt: MessageEvent) => {
		if (!ref.current.audioContext) {
			return;
		}
		if (evt.data.volume) {
			setVolume(Math.round(evt.data.volume * 200));
		}
	}, []);

	const disconnectAudioContext = useCallback(() => {
		if (ref.current.node) {
			try {
				ref.current.node.disconnect();
			} catch (err) {}
		}
		if (ref.current.source) {
			try {
				ref.current.source.disconnect();
			} catch (err) {}
		}
		ref.current.node = null;
		ref.current.source = null;
		ref.current.audioContext = null;
		setVolume(0);
	}, []);

	const connectAudioContext = useCallback(
		async (mediaStream: MediaStream) => {
			if (ref.current.audioContext) {
				disconnectAudioContext();
			}
			try {
				ref.current.audioContext = new AudioContext();
				await ref.current.audioContext.audioWorklet.addModule(
					'../electronAssets/worklet/volumeMeter.js'
				);
				if (!ref.current.audioContext) {
					return;
				}
				ref.current.source = ref.current.audioContext.createMediaStreamSource(mediaStream);
				ref.current.node = new AudioWorkletNode(ref.current.audioContext, 'vumeter');
				ref.current.node.port.onmessage = onmessage;
				ref.current.source
					.connect(ref.current.node)
					.connect(ref.current.audioContext.destination);
			} catch (errMsg) {
				disconnectAudioContext();
			}
		},
		[disconnectAudioContext, onmessage]
	);

	return [volume, connectAudioContext, disconnectAudioContext];
};

The source code of the entire audio device module is as follows:

import { Button, Checkbox, Progress, Select } from 'antd';
import { globalMessage } from 'Components/GlobalMessage/GlobalMessage';
import React, { useEffect, useRef, useState } from 'react';
import { DEVICE_TYPE } from 'Utils/Constraints';
import eventBus from 'Utils/EventBus/EventBus';
import { getDeviceStream } from 'Utils/Global';
import { useVolume } from 'Utils/MyHooks/MyHooks';
import { exchangeMediaDevice } from 'Utils/Store/actions';
import store from 'Utils/Store/store';
import { DeviceInfo } from 'Utils/Types';

interface AudioDevicesProps {
	audioDevices: Array<DeviceInfo>;
	usingAudioDevice: string;
	setUsingAudioDevice: React.Dispatch<React.SetStateAction<string>>;
}

export default function AudioDevices(props: AudioDevicesProps) {
	const [isExamingMicroPhone, setIsExamingMicroPhone] = useState(false);
	const [isSoundMeterConnecting, setIsSoundMeterConnecting] = useState(false);
	const examMicroPhoneRef = useRef<HTMLAudioElement>(null);

	const [volume, connectStream, disconnectStream] = useVolume();

	useEffect(() => {
		const examMicroPhoneDOM = examMicroPhoneRef.current as HTMLAudioElement;
		if (isExamingMicroPhone) {
			getDeviceStream(DEVICE_TYPE.AUDIO_DEVICE).then((stream) => {
				connectStream(stream).then(() => {
					globalMessage.success('Audio device connected');
					setIsSoundMeterConnecting(false);
				});
				examMicroPhoneDOM.srcObject = stream;
				examMicroPhoneDOM.play();
			});
		} else {
			disconnectStream();
			examMicroPhoneDOM.pause();
		}
	}, [isExamingMicroPhone]);

	useEffect(() => {
		const onCloseSettingModal = function () {
			setIsExamingMicroPhone(false);
			setIsSoundMeterConnecting(false);
		};
		eventBus.on('CLOSE_SETTING_MODAL', onCloseSettingModal);
		return () => {
			eventBus.off('CLOSE_SETTING_MODAL', onCloseSettingModal);
		};
	}, []);

	const [noiseSuppression, setNoiseSuppression] = useState(
		localStorage.getItem('noiseSuppression') !== 'false'
	);
	const [echoCancellation, setEchoCancellation] = useState(
		localStorage.getItem('echoCancellation') !== 'false'
	);

	return (
		<div>
			Please select a recording device:
			<Select
				placeholder='Please select a recording device'
				style={{ width: '100%' }}
				onSelect={(
					label: string,
					option: { key: string; value: string; children: string }
				) => {
					props.setUsingAudioDevice(label);
					store.dispatch(
						exchangeMediaDevice(DEVICE_TYPE.AUDIO_DEVICE, {
							deviceId: option.key,
							label: option.value,
							webLabel: option.children,
						})
					);
					if (isExamingMicroPhone) {
						getDeviceStream(DEVICE_TYPE.AUDIO_DEVICE).then((stream) => {
							connectStream(stream).then(() => {
								globalMessage.success('Audio device connected');
								setIsSoundMeterConnecting(false);
							});
							const examMicroPhoneDOM = examMicroPhoneRef.current as HTMLAudioElement;
							examMicroPhoneDOM.pause();
							examMicroPhoneDOM.srcObject = stream;
							examMicroPhoneDOM.play();
						});
					}
				}}
				value={props.usingAudioDevice}>
				{props.audioDevices.map((device) => (
					<Select.Option value={device.label} key={device.deviceId}>
						{device.webLabel}
					</Select.Option>
				))}
			</Select>
			<div style={{ marginTop: '0.25rem', display: 'flex' }}>
				<div style={{ height: '1.2rem' }}>
					<Button
						style={{ width: '7em' }}
						onClick={() => {
							if (!isExamingMicroPhone) setIsSoundMeterConnecting(true);
							setIsExamingMicroPhone(!isExamingMicroPhone);
						}}
						loading={isSoundMeterConnecting}>
						{isExamingMicroPhone ? 'Stop checking' : 'Check microphone'}
					</Button>
				</div>
				<div style={{ width: '50%', margin: '0.25rem' }}>
					<Progress
						percent={volume}
						showInfo={false}
						strokeColor={
							isExamingMicroPhone ? (volume > 70 ? '#e91013' : '#108ee9') : 'gray'
						}
						size='small'
					/>
				</div>
				<audio ref={examMicroPhoneRef} />
			</div>
			<div style={{ display: 'flex', marginTop: '0.5em' }}>
				<div style={{ fontWeight: 'bold' }}>Audio options:</div>
				<div
					style={{
						display: 'flex',
						justifyContent: 'center',
					}}>
					<Checkbox
						checked={noiseSuppression}
						onChange={(evt) => {
							setNoiseSuppression(evt.target.checked);
							localStorage.setItem('noiseSuppression', `${evt.target.checked}`);
						}}>
						noise suppression
					</Checkbox>
					<Checkbox
						checked={echoCancellation}
						onChange={(evt) => {
							setEchoCancellation(evt.target.checked);
							localStorage.setItem('echoCancellation', `${evt.target.checked}`);
						}}>
						Echo cancellation
					</Checkbox>
				</div>
			</div>
		</div>
	);
}

In addition to switching the microphone under test and monitoring its volume, the module lets users choose whether to apply noise suppression and echo cancellation when connecting.
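These two flags presumably feed into the audio constraints passed to getUserMedia elsewhere in the app. A minimal sketch of how such constraints could be assembled from the stored flags (the helper name is hypothetical, not part of the app's code):

```javascript
// Hypothetical helper: build MediaTrackConstraints for getUserMedia
// from the flags persisted in localStorage by the settings module.
function buildAudioConstraints(storage) {
	return {
		// The settings module stores 'false' only when the user opts out,
		// so anything other than the string 'false' means enabled.
		noiseSuppression: storage.getItem('noiseSuppression') !== 'false',
		echoCancellation: storage.getItem('echoCancellation') !== 'false',
	};
}

// Example with a stand-in for localStorage:
const fakeStorage = {
	data: { noiseSuppression: 'false' },
	getItem(key) {
		return key in this.data ? this.data[key] : null;
	},
};
console.log(buildAudioConstraints(fakeStorage));
// { noiseSuppression: false, echoCancellation: true }
```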

Attendance status

The attendance status module is relatively simple: it only tracks whether the microphone and camera should be on by default when the user joins a conference. The code is as follows:

import { Checkbox } from 'antd';
import React, { useState } from 'react';

export default function MeetingStatus() {
    const [autoOpenMicroPhone, setAutoOpenMicroPhone] = useState(
        localStorage.getItem('autoOpenMicroPhone') === 'true'
    );
    const [autoOpenCamera, setAutoOpenCamera] = useState(
        localStorage.getItem('autoOpenCamera') === 'true'
    );

    return (
        <>
            <Checkbox
                checked={autoOpenMicroPhone}
                onChange={(e) => {
                    setAutoOpenMicroPhone(e.target.checked);
                    localStorage.setItem('autoOpenMicroPhone', `${e.target.checked}`);
                }}>
                Turn on the microphone during the meeting
            </Checkbox>
            <Checkbox
                checked={autoOpenCamera}
                onChange={(e) => {
                    setAutoOpenCamera(e.target.checked);
                    localStorage.setItem('autoOpenCamera', `${e.target.checked}`);
                }}>
                Turn on the camera during the meeting
            </Checkbox>
        </>
    );
}

About

The last module shows application information. Its core is detecting whether the application needs an update. To do this, I first wrote a simple function to compare version numbers.

function needUpdate(nowVersion: string, targetVersion: string) {
	const nowArr = nowVersion.split('.').map((i) => Number(i));
	const newArr = targetVersion.split('.').map((i) => Number(i));
	const lessLength = Math.min(nowArr.length, newArr.length);
	for (let i = 0; i < lessLength; i++) {
		if (nowArr[i] < newArr[i]) {
			return true;
		} else if (nowArr[i] > newArr[i]) {
			return false;
		}
	}
	if (nowArr.length < newArr.length) return true;
	return false;
}
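A few sample comparisons illustrate the behavior; note that when all shared segments are equal, a longer target version (e.g. 1.0.0.1 vs 1.0.0) counts as newer:

```javascript
// Same comparison logic as needUpdate above, reproduced for a quick check.
function needUpdate(nowVersion, targetVersion) {
	const nowArr = nowVersion.split('.').map(Number);
	const newArr = targetVersion.split('.').map(Number);
	const lessLength = Math.min(nowArr.length, newArr.length);
	for (let i = 0; i < lessLength; i++) {
		if (nowArr[i] < newArr[i]) return true;
		if (nowArr[i] > newArr[i]) return false;
	}
	// All shared segments equal: the longer version string wins.
	return nowArr.length < newArr.length;
}

console.log(needUpdate('1.2.3', '1.3.0')); // true
console.log(needUpdate('1.3.0', '1.2.9')); // false
console.log(needUpdate('1.0.0', '1.0.0.1')); // true
console.log(needUpdate('1.0.0', '1.0.0')); // false
```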

The code of the whole module is as follows:

import { Button, Image, Progress } from 'antd';
import axios from 'axios';
import { globalMessage } from 'Components/GlobalMessage/GlobalMessage';
import React, { useEffect, useMemo, useState } from 'react';
import { eWindow } from 'Utils/Types';
import './style.scss';

function needUpdate(nowVersion: string, targetVersion: string) {
	const nowArr = nowVersion.split('.').map((i) => Number(i));
	const newArr = targetVersion.split('.').map((i) => Number(i));
	const lessLength = Math.min(nowArr.length, newArr.length);
	for (let i = 0; i < lessLength; i++) {
		if (nowArr[i] < newArr[i]) {
			return true;
		} else if (nowArr[i] > newArr[i]) {
			return false;
		}
	}
	if (nowArr.length < newArr.length) return true;
	return false;
}

export default function About() {
	const [appVersion, setAppVersion] = useState<string | undefined>(undefined);
	useEffect(() => {
		eWindow.ipc.invoke('APP_VERSION').then((version: string) => {
			setAppVersion(version);
		});
	}, []);

	const thisYear = useMemo(() => new Date().getFullYear(), []);

	const [latestVersion, setLatestVersion] = useState<string | false>(false);
	const [checking, setChecking] = useState(false);
	const checkForUpdate = () => {
		setChecking(true);
		axios
			.get('https://assets.aiolia.top/ElectronApps/SduMeeting/manifest.json', {
				headers: {
					'Cache-Control': 'no-cache',
				},
			})
			.then((res) => {
				const { latest } = res.data;
				if (needUpdate(appVersion as string, latest)) setLatestVersion(latest);
				else globalMessage.success({ content: 'Already the latest version; no update is needed' });
			})
			.catch(() => {
				globalMessage.error({
					content: 'Check for updates failed',
				});
			})
			.finally(() => {
				setChecking(false);
			});
	};

	const [total, setTotal] = useState(Infinity);
	const [loaded, setLoaded] = useState(0);
	const [updating, setUpdating] = useState(false);
	const update = () => {
		setUpdating(true);
		axios
			.get(`https://assets.aiolia.top/ElectronApps/SduMeeting/${latestVersion}/update.zip`, {
				responseType: 'blob',
				onDownloadProgress: (evt) => {
					const { loaded, total } = evt;
					setTotal(total);
					setLoaded(loaded);
				},
				headers: {
					'Cache-Control': 'no-cache',
				},
			})
			.then((res) => {
				const fr = new FileReader();
				fr.onload = () => {
					eWindow.ipc.invoke('DOWNLOADED_UPDATE_ZIP', fr.result).then(() => {
						setTimeout(() => {
							eWindow.ipc.send('READY_TO_UPDATE');
						}, 500);
					});
				};
				fr.readAsBinaryString(res.data);
				globalMessage.success({ content: 'The update package has been downloaded and the application will be restarted...' });
			});
	};

	return (
		<div id='settingAboutContainer'>
			<div>
				<Image
					src={'../electronAssets/favicon177x128.ico'}
					preview={false}
					width={'25%'}
					height={'25%'}
				/>
			</div>
			<div className='settingAboutFaviconText'>Yamada Conference</div>
			<div className='settingAboutFaviconText'>SDU Meeting</div>
			<div id='settingVersionText'>V {appVersion}</div>
			{latestVersion ? (
				<>
					<div>A new version is available: V {latestVersion}. Update?</div>
					{updating ? (
						<>
							<Progress
								percent={Number(((loaded / total) * 100).toFixed(0))}
								status={loaded === total ? 'success' : 'active'}
							/>
						</>
					) : (
						<Button onClick={update}>Start download</Button>
					)}
				</>
			) : (
				<Button type='primary' onClick={checkForUpdate} loading={checking}>
					Check for updates
				</Button>
			)}
			<div id='copyright'>Copyright (c) 2021{thisYear ? ` - ${thisYear}` : ''} De broyu</div>
		</div>
	);
}


When the application detects a new version, it downloads the update package as a Blob. Once the download finishes, it saves the package to a specific location through a function I wrote in Electron:

const ipc = require('electron').ipcMain;
const fs = require('fs-extra');
const path = require('path');
// EXEPATH points at the directory of the packaged executable.

ipc.handle('DOWNLOADED_UPDATE_ZIP', (evt, data) => {
	fs.writeFileSync(path.join(EXEPATH, 'resources', 'update.zip'), data, 'binary');
	return true;
});

Since some files to be replaced by the update package are locked while the application is running, I wrote another function in Electron that spawns a child process detached from the SDU Meeting application itself. After SDU Meeting closes itself, an updater (decompression) program I wrote in C++ extracts the update package over the old files, completing the update.

// Update process in electron
const { app } = require('electron');
const cp = require('child_process');
const path = require('path');
// EXEPATH and mainWindow are defined elsewhere in the main process.

function readyToUpdate() {
	const { spawn } = cp;
	const child = spawn(
		path.join(EXEPATH, 'resources/ReadyUpdater.exe'),
		['YES_I_WANNA_UPDATE_ASAR'],
		{
			detached: true,
			shell: true,
		}
	);
	if (mainWindow) mainWindow.close();
	child.unref();
	app.quit();
}
// ReadyUpdater.cpp

#include <iostream>
#include <stdlib.h>
#include <tchar.h>
#include <Windows.h>
#include "unzip.h"
using namespace std;

int main(int argc, char* argv[])
{
	Sleep(300);
	if (argc < 2) {
		cout << "You are running the program in an improper manner" << endl;
	}
	else {
		char* safetyKey = argv[1];
		if (strcmp("YES_I_WANNA_UPDATE_ASAR", safetyKey) != 0) {
			cout << "You should not perform this procedure" << endl;
		}
		else {
			HZIP hz = OpenZip(_T(".\\resources\\update.zip"), 0);
			SetUnzipBaseDir(hz, _T(".\\resources"));
			ZIPENTRY ze;
			GetZipItem(hz, -1, &ze);
			int numitems = ze.index;
			// -1 gives overall information about the zipfile
			for (int zi = 0; zi < numitems; zi++)
			{
				ZIPENTRY ze;
				GetZipItem(hz, zi, &ze); // fetch individual details
				UnzipItem(hz, zi, ze.name);         // e.g. the item's name.
			}
			CloseZip(hz);
			system("del .\\resources\\update.zip");
			cout << "Update complete" << endl;
			cout << "Please restart the app" << endl;
		}
	}
	system("pause");
	return 0;
}


Posted by xeross on Sat, 04 Jun 2022 03:56:17 +0530