Last September, Slack released Clips, allowing users to capture video, audio, and screen recordings in messages to help distributed teams connect and share their work. We've continued iterating on Clips since its release, adding thumbnail selection, background blur, and most recently, background image replacement.

This blog post provides a deep dive into our implementation of background effects (background blur and background image replacement) for browsers and the desktop client. We've used a variety of web technologies, including WebGL and WebAssembly, to make background effects as performant as possible on our desktop platforms.

## Pipeline introduction

Because of the nature of real-time video processing, we need to ensure that video frames are rendered with minimal latency and without interruption. To achieve this, we leverage experimental technologies to offload the processing workload to a worker thread. This makes the rendering pipeline less prone to frame drops caused by background activity, such as message processing, on the main thread.

We read from webcam and screen-capture video feeds using the Media Streams API. Individual video frame data can be read directly from the stream using the experimental Insertable Streams API, which exposes the media stream as a readable stream of video frames. After reading data from the input streams, a render loop copies the video into a manipulable source such as a canvas or ImageBitmap.

We resize the image down to 256x144px to make it suitable for the ML model we use for segmentation. Inference is performed by a WebAssembly module to produce a segmentation alpha mask.
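The worker offload and frame reading described above can be sketched roughly as follows. This is a minimal illustration, not Slack's actual code: `effects-worker.js`, `applyBackgroundEffect`, and the message shape are assumed names, and `MediaStreamTrackProcessor`/`MediaStreamTrackGenerator` are experimental, Chromium-only APIs. Because the streams they expose are transferable, frame processing can happen entirely off the main thread:

```typescript
// Main thread: expose the webcam track as a stream of VideoFrames and
// hand both ends to a worker (sketch; experimental Chromium-only APIs).
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const [track] = stream.getVideoTracks();

const processor = new MediaStreamTrackProcessor({ track });       // readable side
const generator = new MediaStreamTrackGenerator({ kind: "video" }); // writable side

const worker = new Worker("effects-worker.js"); // hypothetical file name
worker.postMessage(
  { readable: processor.readable, writable: generator.writable },
  [processor.readable, generator.writable] // transfer, don't copy
);

// `generator` now behaves like a normal video track with effects applied.
const processedStream = new MediaStream([generator]);

// effects-worker.js (sketch): read each frame, process it, write it back.
self.onmessage = ({ data: { readable, writable } }) => {
  const transformer = new TransformStream({
    async transform(frame, controller) {
      const processed = await applyBackgroundEffect(frame); // hypothetical helper
      frame.close(); // VideoFrames must be closed to release memory
      controller.enqueue(processed);
    },
  });
  readable.pipeThrough(transformer).pipeTo(writable);
};
```

Transferring the streams (rather than posting individual frames) keeps per-frame copies off the main thread, which is the point of the offload described above.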
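The final inference step yields a per-pixel segmentation alpha mask. As a minimal, pure sketch of turning model output into an 8-bit mask: the 256x144 dimensions come from the post, but the assumption that the model emits per-pixel foreground probabilities in [0, 1] is ours.

```typescript
// Convert per-pixel foreground probabilities (assumed model output format)
// into an 8-bit alpha mask suitable for compositing.
const MODEL_WIDTH = 256;
const MODEL_HEIGHT = 144; // post states frames are resized to 256x144

function toAlphaMask(probabilities: Float32Array): Uint8ClampedArray {
  const alpha = new Uint8ClampedArray(probabilities.length);
  for (let i = 0; i < probabilities.length; i++) {
    // Uint8ClampedArray clamps out-of-range values to [0, 255] for us.
    alpha[i] = Math.round(probabilities[i] * 255);
  }
  return alpha;
}

// For a full frame the input would have MODEL_WIDTH * MODEL_HEIGHT entries:
const mask = toAlphaMask(new Float32Array(MODEL_WIDTH * MODEL_HEIGHT));
```

The resulting mask can then be uploaded as a texture and used to blend the foreground against a blurred or replaced background in WebGL.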