Speech recognition in a WeChat Mini Program with a plug-in: WeChat Simultaneous Interpretation (WechatSI)

Note: some of the content and images in this article come from the Internet. If there is any infringement, please contact me (there is an official WeChat account on my home page: Little Siege Lion's Front-end Science).

Author: Little Front-end Siege Lion
Home page: Little Front-end Siege Lion's home page
First published on: Nuggets (Juejin)

GitHub: P-J27
CSDN: PJ wants to be a front-end siege lion

The copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please credit the source.

Introduction

This is a feature I built for my graduation project: voice input and recognition in a WeChat mini program. Here is how to implement it. Anyone who has worked with WeChat mini programs knows how little the official documentation actually says. If you ever need to build a similar feature, I hope this article saves you from wasting hours on the official docs.

The goal of this feature is to capture the user's voice, convert it to text, and then push the text in real time to a web page on a PC (an on-screen "wall"). This article covers only the first two parts, voice capture and speech-to-text; the real-time push to the PC (implemented with WebSocket) is not covered here, and I will write a dedicated article on it when I have time.

Technical solution

The mini program plug-in WeChat Simultaneous Interpretation (WechatSI) does most of the work here. See the plug-in documentation for details.
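At its core, the plug-in exposes a single recording-plus-recognition manager. These two calls, lifted from the full example later in this article, are all the setup required:

const plugin = requirePlugin('WechatSI'); // load the plug-in declared in app.json
const manager = plugin.getRecordRecognitionManager(); // globally unique recording/recognition manager

Everything else is wiring the manager's callbacks (onStart, onRecognize, onStop, onError) to the page.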

Specific steps

Add the plug-in in the Mini Program admin console
  1. First, log in to the Mini Program admin console (make sure you are in the right project): official site link

  2. Then go to Settings => Third-party settings => Add plug-in

  3. Search for WeChat Simultaneous Interpretation and click Add.

  4. If you can't find it, don't panic; there are two workarounds:

    1. Open the plug-in documentation page and click Add there. If that fails too, don't panic; there is one more option.
    2. If none of the above works, ignore it for now, write the code anyway, wait for the error in the developer tool, and add the plug-in from the link in that error message.
  5. Get the AppID and version number from the plug-in documentation (pick the latest version).

Global configuration: app.json

Configure the plug-in's AppID and version number (obtained above) in app.json. Use the latest version, otherwise the console will keep printing warnings.

  "plugins":{
    "WechatSI": {
      "version":"0.3.5",
      "provider":"wx069ba97219f66d99"
    }
  },
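For context, here is where that fragment sits in a complete app.json. The pages and window values are placeholders for illustration; only the plugins block is required for this article:

{
  "pages": [
    "pages/index/index"
  ],
  "window": {
    "navigationBarTitleText": "Speech demo"
  },
  "plugins": {
    "WechatSI": {
      "version": "0.3.5",
      "provider": "wx069ba97219f66d99"
    }
  }
}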
Example implementation
index.wxml
<view class="videoWrap">
  <textarea class='videoCon' bindinput="conInput" placeholder='Waiting to speak...' value='{{content}}'></textarea>
</view>
<view class='video-konw'>
  <button class="videoBtn {{recordState == true ? 'videoBtnBg':''}}" bindtouchstart="touchStart" bindtouchend="touchEnd">
    <text wx:if="{{recordState == false}}">Hold to talk</text>
    <text wx:else>Release to end</text>
  </button>
</view>
<view class='send-btn'>
  <button class="videoBtn sendBtn" bindtap="sendBarrage" >
  Send to wall
  </button>
</view>
<!-- While recording, show a pop-up with a microphone icon -->
<cover-view class="startYuyinImage" wx:if="{{recordState == true}}">
  <cover-icon></cover-icon>
  <cover-image src="../../images/video.png"></cover-image>
  <cover-view>Recording...</cover-view>
</cover-view>
index.js
const app = getApp();
//Plug-in: WeChat Simultaneous Interpretation
const plugin = requirePlugin('WechatSI');
//Get the globally unique speech recognition manager (recordRecoManager)
const manager = plugin.getRecordRecognitionManager();
 
Page({
 
  /**
   * Initial data of the page
   */
  data: {
    //voice
    recordState: false, //Recording status
    content:'',//content
  },
  /**
   * Life cycle function -- listening for page loading
   */
  onLoad: function (options) {
      /* Used by the WebSocket (on-wall) feature; safe to delete here
    this.setData({
      dataPacker: JSON.parse(options.param)
    })
    */
    //Initialize speech recognition
    this.initRecord();
  },
  // Manually enter content
  conInput: function (e) {
    this.setData({
      content:e.detail.value,
    })
  },
  //Speech recognition - initialization
  initRecord: function () {
    const that = this;
    // Called whenever new interim recognition results come back; res.result holds the text recognized so far
    manager.onRecognize = function (res) {
      console.log(res)
    }
    // Called when recording and recognition start successfully
    manager.onStart = function (res) {
      console.log("Recording recognition started successfully", res)
    }
    // Recognition error event
    manager.onError = function (res) {
      console.error("error msg", res)
    }
    // Recognition end event
    manager.onStop = function (res) {
      console.log('..............End recording')
      console.log('Recording temporary file address -->' + res.tempFilePath); 
      console.log('Total recording time -->' + res.duration + 'ms'); 
      console.log('file size --> ' + res.fileSize + 'B');
      console.log('Voice content --> ' + res.result);
      if (res.result == '') {
        wx.showModal({
          title: 'Tips',
          content: "I can't hear you clearly. Please say it again!",
          showCancel: false,
          success: function (res) {}
        })
        return;
      }
      // var text = that.data.content + res.result;
      that.setData({
        content: res.result
      })
    }
  },
  //Voice - press and hold to speak
  touchStart: function (e) {
    this.setData({
      recordState: true  //Recording status
    })
    // Start speech recognition
    manager.start({
      lang: 'zh_CN', // recognition language; currently supports zh_CN, en_US, zh_HK, sichuanhua
    })
  },
  //Voice - release to end
  touchEnd: function (e) {
    this.setData({
      recordState: false
    })
    // Stop speech recognition
    manager.stop();
  },
    
    /* Used by the WebSocket (on-wall) feature; safe to delete here
  sendBarrage(){
    if(this.data.content?.length>0){
      let data = this.data.dataPacker
      data.content = this.data.content
     this.sendDataToWS(data)
    }else{
      wx.showToast({
        title: 'Nothing to send to the wall',
        icon:'error'
      })
    }

  },
  sendDataToWS(data){
    wx.sendSocketMessage({
      data: JSON.stringify(data),
    })
  },*/
})
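One thing the example glosses over: recording requires the user's microphone permission (scope.record). manager.start() triggers the authorization prompt the first time, but if the user declined earlier it will simply fail. Below is a minimal sketch of a pre-check you could call from onLoad; the ensureRecordAuth helper is my addition for illustration, built on the standard wx.getSetting / wx.authorize / wx.openSetting APIs, and is not part of the original code:

// Check, and if possible request, microphone permission before recording.
function ensureRecordAuth() {
  wx.getSetting({
    success(res) {
      if (res.authSetting['scope.record']) return; // already authorized
      wx.authorize({
        scope: 'scope.record',
        fail() {
          // The user declined before, so the prompt will not show again;
          // guide them to the settings page instead.
          wx.showModal({
            title: 'Tips',
            content: 'Microphone access is needed for voice input.',
            success(r) {
              if (r.confirm) wx.openSetting()
            }
          })
        }
      })
    }
  })
}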
index.wxss

The styles below use ColorUI; feel free to rework them yourself.

page {
  height: 100%;
  width: 100%;
  background: #FAFAFA;
  background-image: url(https://uploads-ssl.webflow.com/60e3d74.../60e3d74..._60e3a35..._Mesh%252087-p-500.jpeg);
  background-size: cover;
}

.videoWrap {
  margin-top: 150rpx;
  word-wrap: break-word;
  position: relative;
  width: 80%;
  margin-left: 10%;
  background: #FFFFFF;
  box-shadow: 0 2px 16px 2px rgba(0, 0, 0, 0.1);
  padding: 50rpx 60rpx;
  box-sizing: border-box;
  min-height: 260rpx;
}

.videoCon {
  width: 100%;
  margin: 0 auto;
  padding: 10rpx;
  background: #fff;
}

.videos {
  position: absolute;
  bottom: 0;
  left: 48rpx;
  font-size: 36rpx;
  color: #999;
  padding-bottom: 10rpx;
}

.videos icon.iconfont {
  font-size: 34rpx;
  padding: 0 17rpx 15rpx;
  border-radius: 50%;
  background: #73dbef;
  margin-right: 14rpx;
  color: #fff;
}

.consultYuyin {
  height: 100%;
  width: 90%;
}

.icon-jianpan1 {
  position: absolute;
  left: 10rpx;
  bottom: 6px;
  color: #606267;
  font-size: 60rpx;
}

.videoBtn {
  width: 50%;
  height: 80rpx;
  line-height: 80rpx;
  background-image: var(--gradualBlue);
  color: var(--white);
  border-radius: 8px;
  margin-top: 150rpx;
}
.sendBtn{
  margin-top: 80rpx;
  background-image: var(--gradualPink);
  color: var(--white);

}
.videoBtnBg {
  background: #bdb4b4;
}

.videoBtn::after {
  /* background: #fff; */
  /* color: #000; */
  border-radius: 0;
  border: none;
}

.startYuyinImage {
  position: fixed;
  top: 216rpx;
  left: 50%;
  width: 240rpx;
  height: 300rpx;
  background: rgba(0, 0, 0, 0.6);
  border-radius: 20rpx;
  color: #fff;
  text-align: center;
  margin-left: -120rpx;
}

.startYuyinImage cover-image {
  margin: 60rpx auto;
  width: 100rpx;
  height: 100rpx;
}

.startYuyinImage cover-view {
  margin-top: 25rpx;
}
Notes

The code above can be copied and used as-is; when it runs normally it behaves as described. Two notes:

  1. If you never managed to add the plug-in earlier, just copy the code and compile it; the developer tool console will report an error saying the Simultaneous Interpretation plug-in has not been added, with a link you can click to add it on the spot. (I can't reproduce the error here because I have already added the plug-in.)

  2. The developer tool records audio in a different format from the mobile client, so recordings made in the developer tool cannot be recognized. To verify that the feature works, use preview on a phone or real-device debugging.

Thank you for reading; I hope this helps. If you find an error or an infringement in this article, leave a comment or contact me via the official account on my home page.

Writing is not easy. If you found this useful, a like and a comment would be much appreciated. Thank you for your support ❤

Tags: Mini Program, Front-end, WeChat, Speech recognition
