client-sdk-react-native

Add video processor support for React Native apps (both Android and iOS), just like the web SDK

Open Genening opened this issue 10 months ago • 2 comments

Is your feature request related to a problem? Please describe.
I am building a React Native app and want to add my own custom video processor, but I cannot get access to the video frames for processing. After searching the web, it seems no one has shared a solution for building a React Native app with a custom processor. I really need this support!

Describe the solution you'd like
I want to use @livekit/react-native to connect to the LiveKit room. I can get the track ID, and perhaps use that track ID to access the video frames in the native app code (both iOS and Android), apply my custom processor to each frame, and then publish the processed frames back to the LiveKit room. I found that the web SDK supports this because the web component provides the tag, but Android (and iOS) do not have this kind of tag. I would still like to build the app with React Native and add the custom processor in the Android (or iOS) native code. Please let me know whether this pipeline is reasonable.

Additional context:

  • Android code
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
import com.facebook.react.bridge.ReactMethod;
import com.mibaiapp.videoprocessing.GestureAndPoseProcessor;
import com.mibaiapp.videoprocessing.RecognitionResultListener;
import com.oney.WebRTCModule.GetUserMediaImpl;
import com.oney.WebRTCModule.WebRTCModule;
import com.oney.WebRTCModule.videoEffects.ProcessorProvider;

// Renamed from "WebRTCModule": the original name clashed with the imported
// com.oney.WebRTCModule.WebRTCModule, and `new WebRTCModule(context)` inside
// this class's own constructor would have recursed into itself.
public class VideoProcessorModule extends ReactContextBaseJavaModule {
    private final ReactApplicationContext reactContext;
    private final GetUserMediaImpl getUserMediaImpl;

    public VideoProcessorModule(ReactApplicationContext context) {
        super(context);
        this.reactContext = context;
        // Reuse the WebRTCModule instance React Native has already registered
        // rather than constructing a new one.
        WebRTCModule webRTCModule = context.getNativeModule(WebRTCModule.class);
        this.getUserMediaImpl = new GetUserMediaImpl(webRTCModule, context);
    }

    @Override
    public String getName() {
        return "VideoProcessorModule";
    }

    @ReactMethod
    public void setupVideoProcessor(String trackId) {
        RecognitionResultListener listener = new RecognitionResultListener(reactContext);
        GestureAndPoseProcessor processor = new GestureAndPoseProcessor(
                reactContext,
                listener,
                listener
        );
        // Register the processor under a name react-native-webrtc can look up.
        ProcessorProvider.addProcessor("gestureAndPose", () -> processor);
        // Caveat: setVideoEffect must run on the GetUserMediaImpl that owns the
        // track; if this instance does not know trackId, react-native-webrtc
        // also lets JS enable a registered effect via track._setVideoEffect().
        getUserMediaImpl.setVideoEffect(trackId, "gestureAndPose");
    }

    @ReactMethod
    public void logRemoteData(String data) {
        android.util.Log.d("WebRTCModule", "Remote data received: " + data);
    }
}
  • React Native code
import { useEffect } from 'react';
import { DeviceEventEmitter } from 'react-native';
import { Room } from '@livekit/react-native';
import { DataPacket_Kind } from 'livekit-client';
import { VideoProcessorService } from '@/implements/services/livekit/VideoProcessorService';

interface UseVideoProcessorProps {
  room: Room | null;
  trackId: string | undefined;
}

export const useVideoProcessor = ({ room, trackId }: UseVideoProcessorProps): void => {
  useEffect(() => {
    if (!room || !trackId) return;

    VideoProcessorService.setupVideoProcessor(trackId);

    const handleGestureDetected = (gesture: string) => {
      console.log('Gesture detected:', gesture);
      const data = new TextEncoder().encode(JSON.stringify({ type: 'gesture', value: gesture }));
      // Note: newer livekit-client versions take an options object instead,
      // e.g. room.localParticipant.publishData(data, { reliable: true }).
      room.localParticipant.publishData(data, DataPacket_Kind.RELIABLE);
    };

    const handlePoseDetected = (poseData: number[]) => {
      console.log('Pose data:', poseData);
      const data = new TextEncoder().encode(JSON.stringify({ type: 'pose', value: poseData }));
      room.localParticipant.publishData(data, DataPacket_Kind.RELIABLE);
    };

    const gestureListener = DeviceEventEmitter.addListener('onGestureDetected', handleGestureDetected);
    const poseListener = DeviceEventEmitter.addListener('onPoseDetected', handlePoseDetected);

    return () => {
      gestureListener.remove();
      poseListener.remove();
    };
  }, [room, trackId]);
};
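
The `VideoProcessorService` imported by the hook above is not shown in the issue. A minimal sketch of what it might look like, with the native side injected so the forwarding logic is testable outside a React Native runtime; the module and method names here follow the Android snippet above, but are otherwise assumptions:

```typescript
// Hypothetical shape of VideoProcessorService (the real file is not shown in
// the issue). In the app you would pass the real native module, e.g.
// createVideoProcessorService(NativeModules.WebRTCModule) -- use whatever
// name your module's getName() returns.
export interface NativeVideoProcessor {
  setupVideoProcessor(trackId: string): void;
}

export function createVideoProcessorService(native: NativeVideoProcessor) {
  return {
    setupVideoProcessor(trackId: string): void {
      if (!trackId) {
        throw new Error('trackId is required to attach a video processor');
      }
      // Forward the track id to the native side, which registers the
      // processor and applies it to the matching track.
      native.setupVideoProcessor(trackId);
    },
  };
}
```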

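On the receiving side, other participants get these packets through the room's data events. A small decoder for the `{ type, value }` JSON payloads the hook publishes (the message shape is inferred from the encoding code above; the event wiring is only sketched in a comment):

```typescript
// Decode the data packets published by useVideoProcessor. The payload shape
// ({ type, value }) mirrors the TextEncoder/JSON.stringify encoding in
// handleGestureDetected / handlePoseDetected.
export type ProcessorMessage =
  | { type: 'gesture'; value: string }
  | { type: 'pose'; value: number[] };

export function decodeProcessorMessage(payload: Uint8Array): ProcessorMessage {
  const parsed = JSON.parse(new TextDecoder().decode(payload));
  if (parsed.type !== 'gesture' && parsed.type !== 'pose') {
    throw new Error(`Unknown processor message type: ${parsed.type}`);
  }
  return parsed as ProcessorMessage;
}

// Wiring it to the room (event name from livekit-client):
// room.on(RoomEvent.DataReceived, (payload: Uint8Array) => {
//   const msg = decodeProcessorMessage(payload);
//   if (msg.type === 'gesture') { /* react to the gesture */ }
// });
```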
Genening avatar Apr 08 '25 10:04 Genening

This is related to adding custom filters to video frames, like DeepAR, right?

KalanaPerera avatar Apr 17 '25 09:04 KalanaPerera

@davidliu Is this being taken?

Arjit0762 avatar Aug 11 '25 14:08 Arjit0762