Add video processor support for React Native apps (Android and iOS), like the Web SDK
Is your feature request related to a problem? Please describe.
I am building a React Native app and want to add my own custom video processor, but I can't get access to the video frames for processing. After searching the web, it seems no one has shared a solution for building a React Native app with a custom processor. I really need this support!
Describe the solution you'd like
I want to use @livekit/react-native to connect to the LiveKit room. I can get the track ID, and maybe I can use this track ID to get the video frames in the native app code (both iOS and Android), run my custom processor on each frame, and then publish the processed frames back to the LiveKit room. Actually, I found that the Web SDK supports this feature because the web component provides the
Additional context
- android code

```java
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
import com.facebook.react.bridge.ReactMethod;
import com.mibaiapp.videoprocessing.GestureAndPoseProcessor;
import com.mibaiapp.videoprocessing.RecognitionResultListener;
import com.oney.WebRTCModule.GetUserMediaImpl;
import com.oney.WebRTCModule.WebRTCModule;
import com.oney.WebRTCModule.videoEffects.ProcessorProvider;

// Renamed from WebRTCModule: the original name collides with the imported
// com.oney.WebRTCModule.WebRTCModule class and with the module name that
// react-native-webrtc already registers.
public class VideoProcessorModule extends ReactContextBaseJavaModule {
    private final ReactApplicationContext reactContext;
    private final GetUserMediaImpl getUserMediaImpl;

    public VideoProcessorModule(ReactApplicationContext context) {
        super(context);
        this.reactContext = context;
        // Reuse the WebRTCModule instance React Native already registered;
        // constructing a fresh WebRTCModule here would not see the tracks
        // the app has actually created.
        this.getUserMediaImpl = new GetUserMediaImpl(context.getNativeModule(WebRTCModule.class), context);
    }

    @Override
    public String getName() {
        return "VideoProcessorModule";
    }

    @ReactMethod
    public void setupVideoProcessor(String trackId) {
        // The listener forwards recognition results back to JS.
        RecognitionResultListener listener = new RecognitionResultListener(reactContext);
        GestureAndPoseProcessor processor = new GestureAndPoseProcessor(
                reactContext,
                listener,
                listener
        );
        // Register the processor under a name, then apply it to the track.
        ProcessorProvider.addProcessor("gestureAndPose", () -> processor);
        getUserMediaImpl.setVideoEffect(trackId, "gestureAndPose");
    }

    @ReactMethod
    public void logRemoteData(String data) {
        android.util.Log.d("VideoProcessorModule", "Remote data received: " + data);
    }
}
```
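The hook in the react native snippet below imports a `VideoProcessorService` that is not shown in the issue. A minimal sketch of what it might look like, under two assumptions: the native module is looked up under the name returned by `getName()` via `NativeModules` from 'react-native', and the bridge is injected here (rather than imported) purely so the sketch stays self-contained — `ProcessorBridge` and `createVideoProcessorService` are hypothetical names, not LiveKit APIs:

```typescript
// Hypothetical shape of the native module exposed by the Java code above.
// In the app this object would come from NativeModules.<module name>.
export interface ProcessorBridge {
  setupVideoProcessor(trackId: string): void;
}

// Thin JS-side wrapper: validates the track id, then hands it to native code,
// which registers and applies the frame processor.
export function createVideoProcessorService(bridge: ProcessorBridge) {
  return {
    setupVideoProcessor(trackId: string): void {
      if (!trackId) {
        throw new Error('trackId is required before enabling the processor');
      }
      bridge.setupVideoProcessor(trackId);
    },
  };
}
```

Injecting the bridge also makes the service trivial to unit-test with a fake native module.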
- react native code

```typescript
import { useEffect } from 'react';
import { DeviceEventEmitter } from 'react-native';
import { Room } from '@livekit/react-native';
import { DataPacket_Kind } from 'livekit-client';
import { VideoProcessorService } from '@/implements/services/livekit/VideoProcessorService';

interface UseVideoProcessorProps {
  room: Room | null;
  trackId: string | undefined;
}

export const useVideoProcessor = ({ room, trackId }: UseVideoProcessorProps): void => {
  useEffect(() => {
    if (!room || !trackId) return;
    VideoProcessorService.setupVideoProcessor(trackId);

    const handleGestureDetected = (gesture: string) => {
      console.log('Gesture detected:', gesture);
      const data = new TextEncoder().encode(JSON.stringify({ type: 'gesture', value: gesture }));
      room.localParticipant.publishData(data, DataPacket_Kind.RELIABLE);
    };
    const handlePoseDetected = (poseData: number[]) => {
      console.log('Pose data:', poseData);
      const data = new TextEncoder().encode(JSON.stringify({ type: 'pose', value: poseData }));
      room.localParticipant.publishData(data, DataPacket_Kind.RELIABLE);
    };

    const gestureListener = DeviceEventEmitter.addListener('onGestureDetected', handleGestureDetected);
    const poseListener = DeviceEventEmitter.addListener('onPoseDetected', handlePoseDetected);

    return () => {
      gestureListener.remove();
      poseListener.remove();
    };
  }, [room, trackId]);
};
```
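On the receiving side, a remote participant would need to decode these data packets. A minimal sketch — the `{ type, value }` payload shape matches the JSON the hook above encodes, but `ProcessorEvent` and `decodeProcessorEvent` are hypothetical names introduced here, not part of any LiveKit SDK:

```typescript
// Mirrors the payloads produced by handleGestureDetected / handlePoseDetected.
type ProcessorEvent =
  | { type: 'gesture'; value: string }
  | { type: 'pose'; value: number[] };

const decoder = new TextDecoder();

// Parse a received data packet back into a typed event, rejecting
// anything that doesn't match the two known event types.
export function decodeProcessorEvent(payload: Uint8Array): ProcessorEvent {
  const parsed = JSON.parse(decoder.decode(payload)) as ProcessorEvent;
  if (parsed.type !== 'gesture' && parsed.type !== 'pose') {
    throw new Error(`Unknown processor event type: ${(parsed as { type: string }).type}`);
  }
  return parsed;
}
```

A remote client could call this from its data-received handler and switch on `event.type` to drive the UI.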
This is related to adding custom filters to video frames, like DeepAR, right?
@davidliu Is anyone working on this?