
Add official Android/iOS support for .NET MAUI and Xamarin

ertan2002 opened this issue 5 months ago • 0 comments

Feature request type

Enhancement

Is your feature request related to a problem? Please describe

I’m building mobile apps with .NET MAUI/Xamarin that require on-device OCR (PaddleOCR). PaddleSharp works well on desktop (Windows/macOS/Linux) thanks to the provided native inference runtimes, but there’s no turnkey path for Android/iOS. This creates several issues:

  • No prebuilt native inference binaries for Android/iOS that PaddleSharp can consume.
  • OpenCV/OpenCvSharp image pipeline on mobile is non-trivial to package and maintain.
  • Developers must custom-build Paddle inference for mobile (often via Paddle Lite), create native bindings for both Android and iOS, and maintain the interop layer themselves (a rough sketch of that hand-rolled layer is included below).
  • App size, performance, and power constraints require mobile-optimized builds and accelerators (NNAPI/Metal), which aren’t available out of the box.

As a result, MAUI/Xamarin developers can’t easily use PaddleOCR on mobile, and must either abandon Paddle on mobile or invest heavily in native builds and bindings.
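
For context, the interop layer mentioned above ends up looking roughly like the sketch below in every project that attempts this today. The library name and entry points are placeholders for whatever a custom mobile Paddle build happens to export; they are not an existing PaddleSharp or Paddle Lite API.

```csharp
using System;
using System.Runtime.InteropServices;

// Rough shape of the hand-rolled P/Invoke layer each app currently has to maintain.
// "paddle_lite_custom" and the entry-point names are hypothetical placeholders for
// whatever a custom-built mobile Paddle runtime exports.
internal static class NativePaddleLite
{
#if ANDROID
    private const string Lib = "paddle_lite_custom";   // custom .so bundled in the APK
#elif IOS
    private const string Lib = "__Internal";           // statically linked into the iOS app
#else
    private const string Lib = "paddle_lite_custom";
#endif

    [DllImport(Lib, EntryPoint = "hypothetical_predictor_create", CharSet = CharSet.Ansi)]
    internal static extern IntPtr PredictorCreate(string modelPath);

    [DllImport(Lib, EntryPoint = "hypothetical_predictor_run")]
    internal static extern int PredictorRun(IntPtr predictor, float[] input, int inputLength);

    [DllImport(Lib, EntryPoint = "hypothetical_predictor_destroy")]
    internal static extern void PredictorDestroy(IntPtr predictor);
}
```

Keeping two copies of this (plus the matching native builds) in sync with each Paddle release is exactly the maintenance burden official packages would remove.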

Describe the solution you'd like

Official, first-class mobile support for Android and iOS in PaddleSharp, ideally with:

  • Prebuilt native mobile runtimes
    • Android: AAR or .so packages (arm64-v8a; optionally armeabi-v7a), optimized for on-device inference (e.g., via Paddle Lite) with NNAPI options.
    • iOS: XCFramework or .framework (arm64), leveraging Metal where possible.
  • .NET bindings and NuGet packaging
    • Sdcb.PaddleInference.runtime.android-arm64 and Sdcb.PaddleInference.runtime.ios-arm64 (or equivalent naming), versioned consistently with the desktop packages.
    • Clear target frameworks (net8.0-android, net8.0-ios) and runtime identifiers so MAUI/Xamarin projects restore the correct native assets automatically.
  • Model compatibility and samples
    • Guidance and scripts for preparing PaddleOCR models for mobile (quantization/optimization).
    • Minimal MAUI samples for Android and iOS:
      • Image acquisition (camera/gallery) -> preprocessing -> detection -> recognition -> rendering results (a minimal flow is sketched after this list).
      • Benchmarks and memory guidance (cold/warm start, batch size).
  • Image pipeline support
    • Either guidance for mobile-ready OpenCV bindings, or a lightweight image-conversion utility that avoids heavy dependencies on mobile (e.g., converting pixel buffers to tensors with native intrinsics; a converter sketch follows this list).
  • Hardware acceleration toggles
    • Android NNAPI and iOS Metal delegates where applicable, exposed through straightforward C# options (e.g., a DeviceOptions type with Cpu/Gpu/Nnapi/Metal; a possible shape is sketched after this list).
  • App size and deployment guidance
    • Trimmed/stripped native libraries, ProGuard/Linker settings, and known-good configurations to minimize APK/IPA size and avoid runtime linking errors.
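
To make the sample item above concrete, a minimal MAUI flow could look like the sketch below. MediaPicker is the existing .NET MAUI API; MobileOcrEngine and OcrLine are hypothetical placeholders for whatever mobile API PaddleSharp would expose.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Maui.Media;
using Microsoft.Maui.Storage;

// Placeholder shapes for the requested mobile API surface (not existing PaddleSharp types).
public sealed record OcrLine(string Text, float Score);

public sealed class MobileOcrEngine
{
    public MobileOcrEngine(string modelPath) { /* load a mobile-optimized OCR model */ }
    public IReadOnlyList<OcrLine> Recognize(byte[] imageBytes) => throw new NotImplementedException();
}

// Camera -> bytes -> detection + recognition -> rendered text.
public sealed class OcrPage
{
    private readonly MobileOcrEngine _engine = new("ocr-models/ppocr-mobile");

    public async Task<string> CaptureAndRecognizeAsync()
    {
        // 1. Image acquisition via the built-in MAUI MediaPicker.
        FileResult? photo = await MediaPicker.Default.CapturePhotoAsync();
        if (photo is null)
            return string.Empty;

        // 2. Read the raw image bytes (resizing/normalization would live inside the engine).
        using Stream stream = await photo.OpenReadAsync();
        using var buffer = new MemoryStream();
        await stream.CopyToAsync(buffer);

        // 3. Detection + recognition off the UI thread.
        IReadOnlyList<OcrLine> lines = await Task.Run(() => _engine.Recognize(buffer.ToArray()));

        // 4. Render: join recognized lines for display in a Label or CollectionView.
        return string.Join(Environment.NewLine, lines.Select(l => l.Text));
    }
}
```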
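For the image-pipeline item, the "lightweight conversion utility" could be little more than a dependency-free RGBA-to-NCHW converter like the one sketched below. The mean/std constants are the ImageNet-style values commonly used in PaddleOCR detection preprocessing, but the helper itself is only an illustration, not existing PaddleSharp code.

```csharp
using System;

// Sketch of a dependency-free RGBA -> NCHW float tensor converter for mobile,
// avoiding a full OpenCV build. Mean/std are the ImageNet-style constants commonly
// used by PaddleOCR detection preprocessing; adjust per model.
public static class TensorConverter
{
    private static readonly float[] Mean = { 0.485f, 0.456f, 0.406f };
    private static readonly float[] Std  = { 0.229f, 0.224f, 0.225f };

    /// <summary>Converts tightly-packed RGBA8888 pixels into a 1x3xHxW float tensor.</summary>
    public static float[] RgbaToChw(ReadOnlySpan<byte> rgba, int width, int height)
    {
        if (rgba.Length < width * height * 4)
            throw new ArgumentException("Pixel buffer too small for the given dimensions.");

        int plane = width * height;
        var tensor = new float[3 * plane];

        for (int i = 0; i < plane; i++)
        {
            int p = i * 4; // RGBA stride
            tensor[i]             = (rgba[p]     / 255f - Mean[0]) / Std[0]; // R plane
            tensor[plane + i]     = (rgba[p + 1] / 255f - Mean[1]) / Std[1]; // G plane
            tensor[2 * plane + i] = (rgba[p + 2] / 255f - Mean[2]) / Std[2]; // B plane
        }

        return tensor;
    }
}
```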
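And for the acceleration toggle, a C# surface along these lines would keep backend selection explicit while hiding the delegate plumbing. DeviceKind, DeviceOptions, and OcrEngineFactory are hypothetical names (the factory reuses the placeholder MobileOcrEngine from the sample above), not an existing PaddleSharp API.

```csharp
// Hypothetical configuration surface for selecting an execution provider on mobile.
// None of these types exist in PaddleSharp today; they only illustrate the requested shape.
public enum DeviceKind
{
    Cpu,    // portable default on both platforms
    Gpu,    // desktop-style GPU, where available
    Nnapi,  // Android Neural Networks API delegate
    Metal,  // iOS Metal delegate
}

public sealed record DeviceOptions(DeviceKind Kind, int CpuThreads = 2, bool AllowFp16 = true);

public static class OcrEngineFactory
{
    public static MobileOcrEngine Create(string modelPath, DeviceOptions options)
    {
        // A real implementation would map DeviceKind to the matching mobile backend
        // here and fall back to CPU when the requested delegate is unavailable.
        return new MobileOcrEngine(modelPath);
    }
}

// Usage:
// var engine = OcrEngineFactory.Create(
//     "ocr-models/ppocr-mobile",
//     new DeviceOptions(DeviceKind.Nnapi, CpuThreads: 4));
```

Falling back to CPU when NNAPI or Metal is unavailable (older devices, emulators/simulators) would keep the same application code path working everywhere.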

Impact

This would unlock PaddleOCR for a large segment of .NET mobile developers who use MAUI/Xamarin and need reliable, on-device OCR with modern accuracy. It lowers integration cost, improves app performance and size, and aligns PaddleSharp with real-world cross-platform scenarios that include mobile devices.

ertan2002 · Aug 20 '25 11:08