[Tutorial] Using C++ OpenCV within Unity

Discussion in 'Scripting' started by Thomas-Mountainborn, Mar 5, 2017.

  1. Thomas-Mountainborn

    Thomas-Mountainborn

    Joined:
    Jun 11, 2015
    Posts:
    501
    Hi there,

    I required some OpenCV image processing for a project a couple of months ago, and didn't really find a lot of information on that subject. I also did not want to buy an asset for a free library. In an effort to help other developers in the same situation, I decided to write a couple of tutorials to kick-start their implementation of OpenCV within Unity.
    Note that these are not OpenCV tutorials, but focus only on getting OpenCV to communicate with Unity.

    Hopefully this'll be of use to some of you! Let me know if you have any questions on the matter at hand.
     
    Last edited: Mar 15, 2017
  2. pirou

    pirou

    Joined:
    Jan 29, 2015
    Posts:
    6
    Hi Thomas, I was asked to investigate existing solutions for using OpenCV with Unity3D.
    I found your tutorials and followed them; I ran into some trouble at a few points, but those were my own mistakes and your instructions were clear.
    I finally got to the end, created the two scripts and put them on a game object, but I can't see anything happening on my screen. Maybe you can tell me: what should I do once the two scripts are created? I plugged in a webcam and unplugged it, and I can't see any camera feed. I guess I am not understanding something.
    To be frank, I followed everything quite blindly, as I don't know anything about OpenCV itself. I am a Unity3D developer.
    I had an error on the dll import because my dll was not named "UnityOpenCVSample", so I replaced it with my own dll's name and it worked.

    I only have this log now when I play the scene. I took "lbpcascade_frontalface.xml" from the source files and copied it to the root; I also tried putting it in the StreamingAssets folder, but that doesn't change anything:
    "[OpenCVFaceDetection] Failed to find cascades definition.
    UnityEngine.Debug:LogWarningFormat(String, Object[])
    OpenCVFaceDetection:Start() (at Assets/OpenCVFaceDetection.cs:28)"

    Hope you can answer me!
    Great work!
     
  3. Thomas-Mountainborn

    Thomas-Mountainborn

    Joined:
    Jun 11, 2015
    Posts:
    501
    Hey Pirou,

    In Part 3, the face tracking is only used as an input to drive a normalized screen position, which can then be used to move objects around. There is only a debug camera output displayed in an OpenCV imshow() window, the camera output is not displayed within Unity. The OpenCVFaceDetection script goes onto an empty game object, and the PositionAtFaceScreenSpace sample script goes onto a mesh in front of the camera (down the z axis).

    However, in your case, nothing is happening because OpenCV did not find the cascade definition. Did you place it in the root project folder (NOT the root Assets folder)?
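
    If you want to double check where relative paths resolve to while running in the editor, a quick sketch like this can help (purely illustrative; it assumes the native code loads the cascade through a relative path such as "lbpcascade_frontalface.xml"):

    Code (CSharp):
    using System.IO;
    using UnityEngine;

    public class CascadePathCheck : MonoBehaviour
    {
        // Adjust this to whatever relative path the native code actually uses.
        private const string CascadeFile = "lbpcascade_frontalface.xml";

        void Start()
        {
            // In the editor the working directory is the project root (the folder
            // containing Assets/), so relative paths in native code resolve there.
            string workingDir = Directory.GetCurrentDirectory();
            Debug.Log("Working directory: " + workingDir);
            Debug.Log("Cascade found: " + File.Exists(Path.Combine(workingDir, CascadeFile)));
        }
    }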

    In the upcoming part 4, I will show how you can use the same fixed pointer technique to pass an array of pixels to Unity so you can have a texture with the OpenCV camera output.
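
    In the meantime, here is a rough sketch of what the Unity side of that technique can look like. The function name and resolution are made up for illustration, and it assumes the native side fills an RGBA32 buffer of width * height * 4 bytes:

    Code (CSharp):
    using System.Runtime.InteropServices;
    using UnityEngine;

    public class OpenCVTextureReceiver : MonoBehaviour
    {
        // Hypothetical native export - the real name depends on your .dll.
        [DllImport("UnityOpenCVSample")]
        private static extern void GetRawImageBytes(System.IntPtr pixels, int width, int height);

        private Texture2D _texture;
        private Color32[] _pixels;
        private GCHandle _pixelsHandle;

        void Start()
        {
            // Put this script on a mesh with a Renderer so the texture is visible.
            _texture = new Texture2D(640, 480, TextureFormat.RGBA32, false);
            _pixels = _texture.GetPixels32();
            // Pin the managed array so the native code can write straight into it.
            _pixelsHandle = GCHandle.Alloc(_pixels, GCHandleType.Pinned);
            GetComponent<Renderer>().material.mainTexture = _texture;
        }

        void Update()
        {
            GetRawImageBytes(_pixelsHandle.AddrOfPinnedObject(), _texture.width, _texture.height);
            _texture.SetPixels32(_pixels);
            _texture.Apply();
        }

        void OnDestroy()
        {
            _pixelsHandle.Free();
        }
    }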
     
  4. pirou

    pirou

    Joined:
    Jan 29, 2015
    Posts:
    6
    Hi Thomas, thanks for the quick reply.
    On my own I solved the file-not-found issue; it's true that I had put it in the Assets folder and not the project root (my bad, I thought the root folder meant the Assets folder).
    I searched online and, from what I found, guessed the path was wrong, so I replaced the file name with my own full path in the code in Visual Studio, compiled it again and replaced the dll. The xml was found, but when I launch the Unity scene a small window named "Unity OpenCv Interop Sample" opens and I see a black screen. I guess I didn't get everything, or I am not supposed to see anything!
     

    Attached Files:

  5. Thomas-Mountainborn

    Thomas-Mountainborn

    Joined:
    Jun 11, 2015
    Posts:
    501
    That window is the OpenCV imshow() window (it's the last line of the OpenCV code). It should contain your camera output and the face detection. It seems however that the camera stream is not working on your end - can you try just having a bare bones OpenCV camera application to see if it is able to read your camera? You can use this sample code in a new C++ project, set up as shown in part 2.
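
    If you want to first rule out Unity or OS level issues, you can also do a quick check from the C# side - note that this only verifies that the OS exposes a camera to Unity at all, it does not go through OpenCV:

    Code (CSharp):
    using UnityEngine;

    public class CameraSanityCheck : MonoBehaviour
    {
        void Start()
        {
            // Lists every capture device the OS reports to Unity.
            foreach (WebCamDevice device in WebCamTexture.devices)
            {
                Debug.Log("Found camera: " + device.name);
            }

            if (WebCamTexture.devices.Length == 0)
            {
                Debug.LogWarning("No camera detected - OpenCV won't find one either.");
            }
        }
    }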
     
  6. pirou

    pirou

    Joined:
    Jan 29, 2015
    Posts:
    6
    I am gonna try. I'll keep you posted.

    Edit: I can't seem to make it work. I created a new project, set up exactly like the other one following part 2, but when I copy the sample code I get some errors with the library, so I modified it and now it doesn't recognize the VideoCapture type.

    Edit 2: I don't know why, but I had huge trouble getting the code you told me to test running in Visual Studio; I finally made it work. I guess I corrupted something during my tests in VS. I generated everything again with CMake and it works in VS! I am gonna try to call it from Unity3D.

    Edit 3: I can't make the call from Unity3D; I am told the dll is not found, but it's in the Plugins folder.
    Maybe I need to change the code and not try to call main()?
     
    Last edited: Mar 16, 2017
  7. pirou

    pirou

    Joined:
    Jan 29, 2015
    Posts:
    6
    I did it! I was able to call functions from the dll inside Unity and run the simple example displaying the camera.
    I made some tests and I can only call functions returning void or int; when I tried with a string, Unity crashed!
    My friends have worked with OpenCV before, so I am gonna try to run more complex examples!
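
    Edit: from what I've read, the trick for strings is to return a plain C string from the native side and convert it manually in C#. Something like this (the function name is just an example, and it assumes the native side returns a pointer to memory that stays valid, e.g. a static buffer):

    Code (CSharp):
    using System;
    using System.Runtime.InteropServices;

    public static class NativeStrings
    {
        // Hypothetical native export, e.g. extern "C" const char* GetVersionString();
        // Declared as IntPtr so the runtime does not try to free the native memory.
        [DllImport("UnityOpenCVSample", EntryPoint = "GetVersionString")]
        private static extern IntPtr GetVersionStringNative();

        public static string GetVersionString()
        {
            // Copies the characters into a managed string; the native buffer is left alone.
            return Marshal.PtrToStringAnsi(GetVersionStringNative());
        }
    }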
     
  8. DanielBernier

    DanielBernier

    Joined:
    Mar 23, 2017
    Posts:
    1
    Great tutorial. Thank you Thomas. I wasn't getting frames displayed either.
    To fix that, I changed _capture.open(-1) to _capture.open(0) in the Init native function. That did the trick. Works like a charm.
     
    pirou and Thomas-Mountainborn like this.
  9. jsfledd

    jsfledd

    Joined:
    Mar 14, 2013
    Posts:
    3
    Great tutorial @Thomas-Mountainborn and thanks for the tip @DanielBernier !

    I am close to getting this working correctly. I can get the capture feed to display perfectly without doing any facial detection. When I start attempting to do the detection, however, Unity crashes. I've narrowed it down to this line:

    _faceCascade.detectMultiScale(resizedGray, faces);

    If I comment this out, everything works fine ... not sure what's going on exactly. The error log says:

    opencv_objdetect320d.dll caused an Access Violation (0xc0000005)
    in module opencv_objdetect320d.dll at 0033:5d494ce7.

    I'm going to keep looking into it but if you have any suggestions, please advise. Thanks again for taking the time to make the tutorial(s)!
     
  10. Thomas-Mountainborn

    Thomas-Mountainborn

    Joined:
    Jun 11, 2015
    Posts:
    501
    Hey jsfledd, I don't know why you're getting that error there. I do notice that you're using the debug .dll, whereas the tutorial is using the release versions - make sure that you're either using all debug .dll's and build your package as Debug, or all release .dll's (no "d" suffix in the OpenCV filenames) and build your project as Release.

    Also Daniel, I updated the tutorial based on your feedback.
     
    jsfledd likes this.
  11. jsfledd

    jsfledd

    Joined:
    Mar 14, 2013
    Posts:
    3
    That was it! Thanks Thomas. I had both the debug and release dlls in there at the same time.
     
    Thomas-Mountainborn likes this.
  12. martejpad

    martejpad

    Joined:
    May 24, 2015
    Posts:
    23
    (this is my first post on the forum, please bear with me :) )

    Amazing tutorial @Thomas-Mountainborn, thanks so much for taking the time to put it together!

    I wanted to ask you whether it's possible to send an image (or texture) to OpenCV, and receive the image back in Unity after doing some processing. I'm using Vuforia within Unity so I need to make sure the image processed by OpenCV is exactly what's captured and displayed by Vuforia's AR camera, that's why I don't want to do the image capture inside OpenCV.

    You mentioned in a previous answer that you will be showing in part 4 how to send a pixel array from OpenCV to Unity, so perhaps that'll answer my question :)

    Thanks so much for your help again, it's super appreciated!!
     
    Thomas-Mountainborn likes this.
  13. Thomas-Mountainborn

    Thomas-Mountainborn

    Joined:
    Jun 11, 2015
    Posts:
    501
    Hey martejpad, that's indeed the thing that will be discussed in the next part. The exact same technique will be used: a pointer to an array of pixels is passed in between Unity and OpenCV.
     
    martejpad likes this.
  14. martejpad

    martejpad

    Joined:
    May 24, 2015
    Posts:
    23
    Many thanks for the super quick response! Really looking forward to part 4 and learning how to do it - when can we expect it? (no pressure :rolleyes:)
     
  15. Thomas-Mountainborn

    Thomas-Mountainborn

    Joined:
    Jun 11, 2015
    Posts:
    501
    Well, I start every weekend with the intention of writing it, but you know how it goes. I'll see if I can dig up some snippets to set you on your way sooner.
     
    martejpad, chelnok and jsfledd like this.
  16. martejpad

    martejpad

    Joined:
    May 24, 2015
    Posts:
    23
    Thanks a million Thomas, that would be super helpful!!

    After a lot of sweating I've finally managed to get your sample to work. I've followed your instructions but adapted to MacOS, Monodevelop and Eclipse (I'm brave I know...!). Everything works fine although every time I stop playing the scene Unity crashes, but for now I'm going to ignore that...

    The next natural step would be to make this work on Android or iOS. Could you possibly shed some light on how to do this? I've tried Building and Running this example for Android but I get a NullReferenceException error (I imagine VideoCapture just doesn't work).

    Apologies if this is super trivial or I'm talking nonsense, I'm really new to this!

    Thank you so much again for willing to help all of us and being so generous with your time!!
     
  17. martejpad

    martejpad

    Joined:
    May 24, 2015
    Posts:
    23
    Hi! Just wanted to share that I've managed to use OpenCV (natively in C++, not Java) in Unity from an Android app :D I used Android Studio to compile an .so library, if anyone is interested in the details of the process let me know and I'm happy to share everything!

    Thanks!!
     
  18. inder2

    inder2

    Joined:
    Apr 13, 2017
    Posts:
    9
    Please share a tutorial for Unity + Android + OpenCV.
     
  19. TheFlaxias

    TheFlaxias

    Joined:
    Sep 14, 2014
    Posts:
    5
    How do you use OpenCV in Unity to create an Android app?
     
  20. martejpad

    martejpad

    Joined:
    May 24, 2015
    Posts:
    23
    Hey guys,

    I'm going to share here my personal write-up of the process I followed to develop natively (C++) using OpenCV, then use this code to build an Android app from Unity. I use MacOS. Big disclaimer: I don't have a background in software development, so I can't guarantee the correctness of any of this. All I know is that it's working for me right now :) I also recommend you read all the resources I link; they helped me a lot.

    The process Thomas shared compiles native (C++) OpenCV code into a shared library that can be loaded from the Unity Editor on a Windows machine (it can be easily adapted for MacOS, but that's a different story). If you want something that works on Android, you instead need to cross-compile your code into an .so library.

    Note: In this SO post (Answer by Bilelmnasser) there’s a brief explanation of how the libraries need to be loaded from Unity according to the target platform. And here’s the information Unity provides regarding developing native plugins for Android.

    The main thing to understand (sorry if this is too basic for you) is that when one downloads the regular OpenCV version, this is meant to be compiled for your machine's OS (Windows, Linux, MacOS). For example in my case (MacOS) this means that the compilation results in a set of .dylib libraries that are installed in /usr/local/include and /usr/local/lib, and can be used as explained by Thomas in his tutorials. If instead we want to be able to run OpenCV from an Android platform, we need to use OpenCV4Android, which allows development of OpenCV code for Android, both natively in C++ and in Java (by using the provided wrapper). OpenCV4Android can be downloaded from their website (I specifically downloaded this package - 3.2.0 version).

    If like me you're interested in developing C++ OpenCV code, you can follow the process I'm about to share. If instead you want to use Java OpenCV, this SO post (don’t read the verified answer, but instead the one that follows) contains a brilliant step by step for setting up Android Studio for developing Java OpenCV. I tried it and it works.

    Android allows the use of native code in its apps by means of their NDK toolset. This toolset is available to download on its own, here (and then it can be used in any IDE or through command line), or it can be downloaded through the SDK manager inside Android Studio, as explained here. I chose to use the NDK within Android Studio as it seems to be the recommended way. This NDK essentially allows you to cross compile native code that can be run on an Android platform, which is exactly what we need.

    OpenCV's tutorials on how to use OpenCV4Android are very messy. Available resources are gathered on this page. The main problem is that the tutorials are outdated. In the case of developing in Java, it's all based on the deprecated ADT plugin for Eclipse, which gave GUI-based access to many of the command-line Android SDK tools. On the other hand, in the case of native OpenCV development, these tutorials offer instructions on how to use the NDK within Eclipse and use ndk-build, which is slowly being replaced by CMake (I think NDK support is now built directly into CMake, see here). I think these instructions are still valid, but I preferred to opt for the "recommended" way of using the NDK directly from Android Studio and using CMake.

    The key things inside the OpenCV4Android folder (info from this tutorial from OpenCV's website):

    sdk/java folder is key if you're going to use Java OpenCV code, since it contains the Java wrapper. It can be found in sdk/java/src/org/opencv. We're not going to use the contents of this folder since we're going to use C++ instead.

    sdk/native folder contains OpenCV C++ headers (for JNI code) and native Android libraries (*.so and *.a) for ARM-v5, ARM-v7a and x86 architectures. Header files are in sdk/native/jni/include and libraries in sdk/native/libs.

    Assuming you now have OpenCV4Android and Android Studio + the NDK, we can move on to how I've done the whole process in Android Studio. Reading these instructions really helped me figure this out.

    ⇒ Create project with C/C++ support

    ⇒ Check “Phone and tablet”, leave the recommended minimum SDK version.

    ⇒ Include an Empty Activity (this will create files necessary for the C++ functionality)

    ⇒ Keep the rest of the options as they come by default.

    ⇒ Android Studio has now created a project folder. Inside it, in app/src/main/cpp/, Android Studio will have created a file called native-lib.cpp, which contains an example function called stringFromJNI() that we can ignore. You can write your C++ OpenCV code in this file.

    ⇒ In the app folder inside your project folder, a file called CMakeLists.txt will be created. This is the file with all the instructions on how to compile your native C++ code. Leave it as it is right now, we'll be modifying it in a bit.

    ⇒ Copy the libraries from the OpenCV4Android folder, contained in sdk/native/libs (all the folders for the different architectures), into app/src/main inside your Android Studio project. Rename the parent folder from libs to jniLibs. Delete all the static libraries (.a) from all the folders, leaving only libopencv_java3.so in each architecture's folder; the others are not needed.

    ⇒ Copy the header files in OpenCV4Android (sdk/native/jni/include) to a folder in app/src/main/cpp. I opted to name the parent folder include as well. I think this folder could be anywhere, the only important thing is that it needs to be in a location accessible by Android Studio.

    ⇒ Edit the CMakeLists.txt file so that 1) it imports the OpenCV library (libopencv_java3.so) as a shared library; 2) it links it to the native-lib target; 3) it includes the path to OpenCV's header files. This is a copy of the file I'm using, which does all of this.


    # For more information about using CMake with Android Studio, read the
    # documentation: https://d.android.com/studio/projects/add-native-code.html

    # Sets the minimum version of CMake required to build the native library.

    cmake_minimum_required(VERSION 3.4.1)

    # Creates and names a library, sets it as either STATIC
    # or SHARED, and provides the relative paths to its source code.
    # You can define multiple libraries, and CMake builds them for you.
    # Gradle automatically packages shared libraries with your APK.

    add_library( # Sets the name of the library.
    native-lib

    # Sets the library as a shared library.
    SHARED

    # Provides a relative path to your source file(s).
    src/main/cpp/native-lib.cpp )


    add_library( test-lib SHARED IMPORTED )

    set_target_properties(test-lib PROPERTIES IMPORTED_LOCATION

    ${CMAKE_SOURCE_DIR}/src/main/jniLibs/${ANDROID_ABI}/libopencv_java3.so)

    include_directories(src/main/cpp/include)


    # Searches for a specified prebuilt library and stores the path as a
    # variable. Because CMake includes system libraries in the search path by
    # default, you only need to specify the name of the public NDK library
    # you want to add. CMake verifies that the library exists before
    # completing its build.

    find_library( # Sets the name of the path variable.
    log-lib

    # Specifies the name of the NDK library that
    # you want CMake to locate.
    log )


    # Specifies libraries CMake should link to your target library. You
    # can link multiple libraries, such as libraries you define in this
    # build script, prebuilt third-party libraries, or system libraries.


    target_link_libraries( # Specifies the target library.

    native-lib test-lib

    # Links the target library to the log library
    # included in the NDK.

    ${log-lib} )


    Note: You can ignore the log-lib stuff; it's just there because it was in the Android Studio sample.

    ⇒ Now the project is ready to be built in Android Studio: Build->Make project. This will generate a shared library (.so file, in my case called libnative-lib.so) for our native code for each architecture in jniLibs. The generated libraries can be found in the folder app/build/intermediates/cmake/debug/obj.

    ⇒ Now in Unity! Create a folder called Plugins inside the Assets folder. Then another called Android inside Plugins, and another called libs inside Android. Copy into it the folders “x86” and “armeabi-v7a” from app/build/intermediates/cmake/debug/obj. These are the processor architectures supported by both Android and Unity (ARMv7 and x86), see here for more info. Android also supports MIPS, but it's the least popular and is not supported by Unity. The 64-bit counterparts of ARM and x86 are not supported by Unity either. When we later build the app in Unity, it generates a FAT APK by default, which works on both architectures. This setting can be changed in Build Settings->Player Settings->Android->Other settings->Device filter. IMPORTANT EDIT: Also copy into the corresponding architecture folder in Plugins the file libopencv_java3.so, which can be found in OpenCV4Android/OpenCV-android-sdk/sdk/native/libs.

    Once you've copied your .so files into this folder, Unity will treat them as plugins → see here for details on the Plugin Inspector and the different file extensions that Unity treats as plugins. The folder structure is very important, as detailed in the Deployment section of this article, so that Unity picks up the right library for the target architecture.

    Important resources provided by Unity:
    - Native plugins
    - Building plugins for Android
    - Mono Interop with native libraries


    => Finally, to use the C++ functionality from Unity, you can follow Thomas' tutorials, in particular Part 3.
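
    For reference, the Unity-side call ends up looking roughly like the snippet below. The function name is just an example and has to match whatever you exported with extern "C" in native-lib.cpp; also note that the "lib" prefix of libnative-lib.so is dropped in the DllImport attribute.

    Code (CSharp):
    using System.Runtime.InteropServices;
    using UnityEngine;

    public class NativeLibTest : MonoBehaviour
    {
        // "native-lib" is resolved to libnative-lib.so on Android.
        // GetNumber is a hypothetical export: extern "C" int GetNumber() { return 42; }
        [DllImport("native-lib")]
        private static extern int GetNumber();

        void Start()
        {
            Debug.Log("Value from native plugin: " + GetNumber());
        }
    }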


    Note: If you don't need to develop natively, I believe there's also the possibility to use OpenCV directly from Unity with Emgu CV, which is a cross-platform .NET wrapper for OpenCV, allowing OpenCV functions to be called from .NET-compatible languages such as C#, VB, VC++, IronPython, etc. To make this work, follow this SO post (I haven't tried it).

    Let me know if this works for you!
     
    Last edited: Jun 14, 2017
  21. inder2

    inder2

    Joined:
    Apr 13, 2017
    Posts:
    9
    I tried it and I get this error

    Error:error: '../../../../src/main/jniLibs/mips64/libopencv_java3.so', needed by '../../../../build/intermediates/cmake/debug/obj/mips64/libnative-lib.so', missing and no known rule to make it

    after Make Project

    Would it be possible to attach the complete project as a zip?
     
    Last edited: May 9, 2017
  22. martejpad

    martejpad

    Joined:
    May 24, 2015
    Posts:
    23
    Hey @inder2, have you followed these instructions?

    It's very important that you copy the .so files to exactly that location, otherwise Android Studio won't find them.
     
  23. inder2

    inder2

    Joined:
    Apr 13, 2017
    Posts:
    9
    Thanks, it's working now. I had the lib in the root, not in src.
     
    martejpad likes this.
  24. dani-dev

    dani-dev

    Joined:
    May 10, 2017
    Posts:
    2
    Thank you for your tutorial @martejpad!

    I'm trying yours, and I get an "undefined reference to VideoCapture" error. It was detected by the IDE, but it seems it couldn't be linked.
    Do you need to include Android.mk & Application.mk as mentioned in the OpenCV tutorials? If yes, how do you combine them with build.gradle?




    any help appreciated! :)
     
  25. Desoxi

    Desoxi

    Joined:
    Apr 12, 2015
    Posts:
    195
    @dani-dev Can you post us the content of your CMakeLists.txt?
    I had a similar issue just a few days back and managed to solve it with marte :D

    Here is my slightly modified file:
    Code (CSharp):
    1. # For more information about using CMake with Android Studio, read the
    2. # documentation: https://d.android.com/studio/projects/add-native-code.html
    3.  
    4. # Sets the minimum version of CMake required to build the native library.
    5.  
    6. cmake_minimum_required(VERSION 3.4.1)
    7.  
    8. # Creates and names a library, sets it as either STATIC
    9. # or SHARED, and provides the relative paths to its source code.
    10. # You can define multiple libraries, and CMake builds them for you.
    11. # Gradle automatically packages shared libraries with your APK.
    12.  
    13. add_library( # Sets the name of the library.
    14. native-lib
    15.  
    16. # Sets the library as a shared library.
    17. SHARED
    18.  
    19. # Provides a relative path to your source file(s).
    20. src/main/cpp/native-lib.cpp )
    21.  
    22. find_library( # Defines the name of the path variable that stores the
    23.               # location of the NDK library.
    24.               log-lib
    25.  
    26.               # Specifies the name of the NDK library that
    27.               # CMake needs to locate.
    28.               log )
    29.  
    30. # Links your native library against one or more other native libraries.
    31. target_link_libraries( # Specifies the target library.
    32.                        native-lib test-lib
    33. #lib_opencv
    34.                        # Links the log library to the target library.
    35.                        ${log-lib} )
    36.  
    37. add_library( test-lib SHARED IMPORTED )
    38.  
    39. set_target_properties(test-lib PROPERTIES IMPORTED_LOCATION
    40.  
    41. ${CMAKE_SOURCE_DIR}/src/main/jniLibs/${ANDROID_ABI}/libopencv_java3.so)
    42.  
    43. include_directories(src/main/cpp/include/)
    44.  
    45.  
    46.  
    47.  
    48.  
    If you copy this and want to use it, make sure to rename your jnilibs folder to jniLibs with a capital "L".
     
    Last edited: May 10, 2017
    dani-dev likes this.
  26. martejpad

    martejpad

    Joined:
    May 24, 2015
    Posts:
    23
    Hey @dani-dev, my guess is that Android Studio is not finding your libopencv_java3.so files, and it's probably a problem in your CMakeLists.txt file, as @Desoxi mentioned. The other option is that you haven't copied the files to the right location.

    Let us know :)
     
    dani-dev likes this.
  27. dani-dev

    dani-dev

    Joined:
    May 10, 2017
    Posts:
    2
    @martejpad @Desoxi

    I'm trying to declare another library (objdet-lib), but it seems it didn't work...


    add_library( objdet-lib SHARED src/main/cpp/objdet-lib.cpp )
    target_link_libraries( native-lib test-lib
    objdet-lib
    ${log-lib} )


    Okay, I'm using native-lib instead, and it is running.
    Now the problem is that my VideoCapture doesn't work:

    std::string hello = "Hello from C++";

    cv::VideoCapture captest = cv::VideoCapture();
    captest.open(0);
    captest.open(CV_CAP_ANDROID);
    if(captest.isOpened()){
    hello += "; video loaded."; //trying to debug it with example
    }

    Do I need to declare OPENCV_CAMERA_MODULES:=ON somewhere? Or, since I'm using OpenCV 2.4.11, is declaring the camera different from OpenCV 3?

    thanks!




     
    Last edited: May 11, 2017
  28. martejpad

    martejpad

    Joined:
    May 24, 2015
    Posts:
    23
    Hey @dani-dev, I'm really sorry but I haven't tried running Thomas' example on Android, only on the Editor, so unfortunately I don't know how to help. I think @Desoxi was also fighting with the VideoCapture part, hopefully he can give you some guidance!
     
  29. inder2

    inder2

    Joined:
    Apr 13, 2017
    Posts:
    9
    I have native C OpenCV code working with Unity; an example is published on the Android Play Store:
    The main problem is releasing the memory behind the pointer, because memory leaks from the C plugin are not detected by the profiler (currently solved, I hope)...

    https://play.google.com/store/apps/details?id=com.isenstec.vrbox3

    I need some testing on other devices to see if it is stable. The image is grabbed in Unity and sent as a char array to C OpenCV -> OpenCV does a simple threshold and sends it back to Unity as a returned char array pointer.
    After some cleanup I will publish the whole plugin, and later the code...

    I tested on a Samsung S3, working fast and stable... please test and give feedback on compatible devices :)




    According to the OpenCV documentation, native image grabbing by OpenCV in C code is supported on Android 4.x versions. Has anyone tested this way of getting the image from the camera?

    If someone needs the latest libopencv_java3.so library for x86 and ARM, these are the ones I use:
    libopencv_java3 compiled 30.5.2017 from git; the OpenCV download section uses an old one :-(

    The code will be published there later, after some cleanup:
    http://193.87.95.129/openvision2/index.php/en/

    BRG
    Kamil
     
    Last edited: Jun 2, 2017
  30. martejpad

    martejpad

    Joined:
    May 24, 2015
    Posts:
    23
    Hi @inder2, congrats on the progress! I'll test your app later today and let you know how it goes :)

    Regarding the transfer of images between Unity and C OpenCV, what I do to manage the memory is use system pointers like this:

    Code (CSharp):
    1. // Allocate memory. ImagePixels is a byte array that contains the pixel values of your image
    2. System.IntPtr pointerIn = Marshal.AllocHGlobal(imagePixels.Length);
    3.  
    4. // Copy the contents of your byte array to the memory allocated
    5. Marshal.Copy (imagePixels, 0, pointerIn , imagePixels.Length);
    6.  
    7. // Allocate memory for the image from OpenCV's processing
    8. System.IntPtr pointerOut = Marshal.AllocHGlobal(outImagePixels.Length);
    9.  
    10. // Call your OpenCV function which looks something like: YourOpenCVFunction (unsigned char * inImage, unsigned char * outImage) on the C side.
    11. YourOpenCVFunction (pointerIn, pointerOut);
    12.  
    13. // Copy the result of the processing in a byte array
    14. Marshal.Copy (pointerOut, outImagePixels, 0, outImagePixels.Length);
    15.  
    16. // Free system pointers
    17. Marshal.FreeHGlobal(pointerIn);
    18. Marshal.FreeHGlobal (pointerOut);
    This is the way I've found the memory handling works for me, but I'm sure there must be more efficient ways to do it. How do you currently do it?
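
    For completeness, the snippet above assumes a declaration along these lines for the (hypothetical) YourOpenCVFunction; replace "native-lib" with whatever your library is actually called:

    Code (CSharp):
    using System;
    using System.Runtime.InteropServices;

    internal static class NativeBindings
    {
        // Matches a native export like:
        //   extern "C" void YourOpenCVFunction(unsigned char* inImage, unsigned char* outImage);
        [DllImport("native-lib")]
        internal static extern void YourOpenCVFunction(IntPtr inImage, IntPtr outImage);
    }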
     
  31. inder2

    inder2

    Joined:
    Apr 13, 2017
    Posts:
    9
    Almost the same, but I get the image back without an out parameter, via the return value :)
    ptr2 = ocv_get_image(resx, resy, pixelsHandle.AddrOfPinnedObject());

    I had memory leaks; I used a separate release_memory function on the C side.
    I will try your way with Marshal.FreeHGlobal(pointerIn); I think it is better :p
     
    Last edited: Jun 2, 2017
    martejpad likes this.
  32. alemfi538

    alemfi538

    Joined:
    Apr 8, 2014
    Posts:
    4
    Stumbled across you guys' work just now, you are amazing! Tested it just now on my Android Nexus 6P and it's functional. Since you were asking about feedback: I couldn't get the "overlay", "change", or "test vr2" buttons to do anything. Also, I assume this is just not implemented yet, but the feed does not (counter-)rotate to accommodate landscape or portrait rotations.

    @martejpad : what are these two lines in the cmake file doing?
    add_library( test-lib SHARED IMPORTED )
    set_target_properties(test-lib PROPERTIES IMPORTED_LOCATION

    and since you named your so file as:
    "libnative-lib.so"

    are the methods accessed using:
    [DllImport("libnative-lib")]
    internal static extern <RETURNTYPE> FUNCTION_NAME();

    Thanks
     
    Last edited: Jun 5, 2017
  33. inder2

    inder2

    Joined:
    Apr 13, 2017
    Posts:
    9
    "test vr2" buttons to do anything. Also, I assume this is just not implemented yet, but the feed does not (counter) rotate to accommodate landscape or profile rotations.

    test vr2 is currently not working; I want to implement the camera video in VR for Google Cardboard, but there is some problem (I'm solving it now)...
    The application crashes if I enable VR :(
     
  34. martejpad

    martejpad

    Joined:
    May 24, 2015
    Posts:
    23
    Hi @alemfi538! Those two lines are basically telling CMake to import OpenCV's .so library. "test-lib" was a really poor choice for the name of the lib, it should be something like:

    Code (CSharp):
    1. add_library( opencv-lib SHARED IMPORTED )
    2.  
    3. set_target_properties(opencv-lib PROPERTIES IMPORTED_LOCATION
    4.      ${CMAKE_SOURCE_DIR}/src/main/jniLibs/${ANDROID_ABI}/libopencv_java3.so)
    With regards to the DllImport, you can use [DllImport("native-lib")], without the "lib" prefix. Have a look at Thomas' tutorials for a few good examples of usage.

    Hope this helps!
     
  35. alemfi538

    alemfi538

    Joined:
    Apr 8, 2014
    Posts:
    4
    Thanks for the response @martejpad, I did read up that I need to remove the lib prefix, however I'm still encountering some difficulties accessing the library (or perhaps the functions?)

    Uploading an image with the hierarchy and Inspector of the .so files on the left; in the middle are some attempts to declare exports in the C++ code, and on the right-hand side are my attempts to access those functions from the C# code. Not sure if I'm overlooking something, as it is currently 3am my time and I should really be getting some shut-eye.
     

    Attached Files:

  36. martejpad

    martejpad

    Joined:
    May 24, 2015
    Posts:
    23
    Hey @alemfi538! Could you please elaborate a bit more on the difficulties you're encountering? Do you get any error messages? Are you getting any image back from OpenCV?

    Looking at your code I'm not sure if the way you handle the memory (returning a char * from OpenCV) is the safest, @inder2 was having some memory leaks doing it like this. Maybe it's worth trying pre-allocating the memory in C# beforehand and see if that's the problem? Can you show us how you're calling the function?

    If you give me more details regarding your issues I'll try to think of possible solutions. But go get some sleep! I'm sure things will be clearer later :)

    Regards
     
  37. alemfi538

    alemfi538

    Joined:
    Apr 8, 2014
    Posts:
    4
    Oh yes, no problem. In Android Studio, after attempting to call the Hello World function, I am getting an error/exception:

    "06-05 02:57:04.149: E/Unity(10932): Unable to find native-lib"

    Running it outside of the try allows me to get a more detailed stacktrace:
    06-05 13:24:30.214: I/Unity(20290): DllNotFoundException: native-lib
    06-05 13:24:30.214: I/Unity(20290): at (wrapper managed-to-native) DisplayFeed:HelloWorld ()
    06-05 13:24:30.214: I/Unity(20290): at DisplayFeed.Start () [0x0006f] in D:\Documents\Interviews\SnapChatTest\SnapChatInteractivityTest\Assets\Scripts\DisplayFeed.cs:47


    I am wondering, since the ".so" files were built as "Debug" and not "Release", did you do a development build in Unity? or should that not matter?

    Oh no. I think I just realized... did you need to include the previously built "libopencv_java3.so" in the Unity project as well? I believe I only added the newly built libnative-lib.so libraries. I think I might have fixed the issue. Possibly.

    Edit: Got it to work! It was indeed because I did not include the previously built OpenCV libraries. One thing to note is that the camera appears to be rotated 90 degrees (for me). Thanks a bunch!

    Has anyone figured out where to store the cascade files when building for Android? I just hit a small snag where it can't locate the files.
     
    Last edited: Jun 6, 2017
  38. martejpad

    martejpad

    Joined:
    May 24, 2015
    Posts:
    23
    So glad it's working @alemfi538!
    With regards to the cascade file, in theory if you put it under Unity's Streaming Assets it should get copied to the target device, have you tried that? I know that @Desoxi had a bit of an issue getting it to work, I think he ended up just copying it manually to the device and hard coding the path, but I can imagine that's not an ideal solution :) I haven't tried myself, sorry for not being more helpful!
     
  39. alemfi538

    alemfi538

    Joined:
    Apr 8, 2014
    Posts:
    4
    @martejpad
    So it turns out that on Android the streaming assets get stored in a compressed format (https://docs.unity3d.com/Manual/StreamingAssets.html), so I had to read them from Unity and write them into a new file at Application.persistentDataPath so that the new file's path can be accessed by OpenCV (OpenCV cannot read from the compressed file).
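
    In case it helps anyone, a rough sketch of that extraction step (treat it as a starting point; on Android, Application.streamingAssetsPath points inside the compressed APK, so it has to be read through WWW rather than with System.IO):

    Code (CSharp):
    using System.Collections;
    using System.IO;
    using UnityEngine;

    public class CascadeExtractor : MonoBehaviour
    {
        private const string CascadeName = "lbpcascade_frontalface.xml";

        IEnumerator Start()
        {
            string sourcePath = Path.Combine(Application.streamingAssetsPath, CascadeName);
            string targetPath = Path.Combine(Application.persistentDataPath, CascadeName);

            if (!File.Exists(targetPath))
            {
                // On Android, streamingAssetsPath is a jar: URL inside the APK,
                // so the file has to be read through WWW instead of File IO.
                WWW reader = new WWW(sourcePath);
                yield return reader;
                File.WriteAllBytes(targetPath, reader.bytes);
            }

            // targetPath is now a real file on disk, so its path can be handed to the native plugin.
            Debug.Log("Cascade available at: " + targetPath);
        }
    }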

    Still need to figure out the 90 degree issue, but I think that's most of the technical leg work done. For now.
     
  40. martejpad

    martejpad

    Joined:
    May 24, 2015
    Posts:
    23
    @alemfi538 Good to know that detail about the Streaming Assets folder, thanks a lot for the info. I'm glad you found a workaround! Please keep letting us know how everything goes, we're all learning here :)
     
  41. inder2

    inder2

    Joined:
    Apr 13, 2017
    Posts:
    9
    New demo version for tests: Threshold, Canny Edge, Grayscale
    https://play.google.com/store/apps/details?id=com.isenstec.vrbox3

    SO compiled file of plugin for android:
    http://193.87.95.148/opencv2unity.zip

    C# code showing how to use the plugin:
    Code (CSharp):
    1.  
    2.         pixels = webcamscreen.GetPixels32();
    3.         pixelsHandle = GCHandle.Alloc(pixels, GCHandleType.Pinned);
    4.         IntPtr delay = IntPtr.Zero;
    5.         IntPtr result2 = IntPtr.Zero;
    6.         double delay1 =0;
    7.         // img result                  resolution x, y,  pointer to input image,   delay of filters, selected filter,  second output
    8.         ptr2 = ocv_get_image(resx, resy, pixelsHandle.AddrOfPinnedObject(), out delay, filt, out result2);
    9.             delay1 = delay.ToInt32();
    10.        
    11. // First result
    12.         imgdata = new byte[resx * resy * 4];
    13.         Marshal.Copy(ptr2, imgdata, 0, resx * resy * 4);
    14.         tex.LoadRawTextureData(imgdata);
    15.  
    16. // Second result
    17.         byte[] imgdata2 = new byte[resx*resy*4];
    18.         Marshal.Copy(result2, imgdata2, 0, resx * resy * 4);
    19.         tex2.LoadRawTextureData(imgdata2);
    20.  
    21. tex.Apply();
    22.         rend.material.mainTexture = tex;
    23.    
    24.  tex2.Apply();
    25.         rend.material.mainTexture = tex2;
    26.  
    27.         //Release memory
    28.         imgdata = null;
    29.         pixelsHandle.Free();
    30.         ReleaseMemory(ptr2);
    31.         ptr2 = IntPtr.Zero;
     
    Last edited: Jun 9, 2017
    martejpad and wahyuway like this.
  42. Bibzball

    Bibzball

    Joined:
    Sep 26, 2015
    Posts:
    21
    Thanks everyone for everything said in this thread. I've been struggling to find such a tutorial to get OpenCV to natively work in Unity with an Android app, and this thread saved me from my misery.

    Edit: Removed dumb question.

    Thanks again, great work everyone!
     
    Last edited: Jun 19, 2017
  43. inder2

    inder2

    Joined:
    Apr 13, 2017
    Posts:
    9
    Code cleanup and fixes :) Now in the emulator, the image grab plus the C++ processing takes about 6 ms :) Very nice in comparison to the earlier test with Java plus OpenCV.
    https://play.google.com/store/apps/details?id=com.isenstec.vrbox3
    VR fix: ask for camera permission.

    Next time I will publish the code on GitLab.

    C++ code example
    Code (CSharp):
    1. //include
    2. #include "opencv2unity.h"
    3.  
    4. //NATIVE
    5. #include "opencv2/opencv.hpp"
    6.  
    7. using namespace std;
    8. using namespace cv;
    9.  
    10.                 //resolution cols,     rows     input image   delay   filter select  output image
    11. extern "C" int ocv_get_image(int xres, int yres, uint8_t* z,int *delay, int filt, void **result2)
    12. {
    13.     double tg1 = (double)getTickCount();
    14.     //input frame     rows  cols
    15.     Mat framein = Mat(yres, xres, CV_8UC4, z);
    16.     //output frame
    17.     Mat frameout(framein.rows, framein.cols, CV_8UC4, Scalar(238,244,66));
    18.     //processing frame
    19.     Mat frameproc;
    20.     // Filter select
    21.     cvtColor(framein,frameproc,CV_RGBA2BGR);
    22.     switch ( filt ) {
    23.         case 1: {
    24.             // Simple threshold
    25.             threshold(frameproc,frameproc,128,128,THRESH_BINARY);
    26.             Point textplace(0, 50);
    27.             stringstream ss;
    28.             ss << "C" << frameproc.cols << "R" << frameproc.rows;
    29.                                                                 // scale  color     thickness  ciara
    30.             putText(frameproc, ss.str() , textplace ,CV_FONT_NORMAL,1,Scalar(255,255,0), 1, LINE_AA);
    31.             rectangle(frameproc, Point( 0,60 ), Point( 100, 100), Scalar( 255,255,0 ), -1, 8 );
    32.             flip(frameproc,frameproc,1);
    33.             blur(frameproc,frameproc,Size(3,3));
    34.             cvtColor(frameproc,frameout, CV_BGR2RGBA,4);
    35.             break;}
    36.         case 2: {
    37.             // Canny Edge
    38.             GaussianBlur(frameproc, frameproc, Size(7,7), 1.5, 1.5);
    39.             cvtColor(frameproc, frameproc, COLOR_BGR2GRAY);
    40.             Canny(frameproc, frameproc, 0, 30, 3);
    41.             cvtColor(frameproc,frameout, CV_GRAY2RGBA,4);
    42.             break;}
    43.         case 3: {
    44.             // Gausian + Gray
    45.             cvtColor(frameproc, frameproc, COLOR_BGR2GRAY);
    46.             cvtColor(frameproc, frameout,CV_GRAY2RGBA,4);
    47.             break;}
    48.         case 4: {
    49.             // Face detection WIP
    50.             cvtColor(frameproc, frameout,CV_BGR2RGBA,4);
    51.             break;}
    52.         default:
    53.         {}
    54.     }
    55.     // Image result
    56.     uint8_t result2data[framein.cols*framein.rows*4];
    57.     memcpy(result2data, frameout.data, framein.cols*framein.rows*4);
    58.     *result2 = result2data;
    59.  
    60.     //Memory release
    61.     framein.release();
    62.     frameout.release();
    63.     frameproc.release();
    64.  
    65.     double fpsg = 1/(((double)getTickCount()-tg1)/(getTickFrequency()));
    66.     int delay1 = 0;
    67.     *delay = fpsg;
    68.     return 1;
    69. }
     
    Last edited: Jun 29, 2017
    Gruffy and martejpad like this.
  44. Bibzball

    Bibzball

    Joined:
    Sep 26, 2015
    Posts:
    21
    The result is great :)
    I'm indeed experiencing the same performance increase since switching to native instead of JavaCV!

    I am having trouble finding forums where people can help me, and since people in here seem interested in OpenCV I'm going to go out on a limb and try my luck: has anyone here ever tried to stitch a full 360x180 spherical panorama from a lot of (~50) images using OpenCV? Please PM me if so :)
     
  45. mcelroy-jon

    mcelroy-jon

    Joined:
    Nov 26, 2012
    Posts:
    3
    Thanks everyone for the really amazing work. This thread really helped me get things running. I was hoping someone would set up a git repo for this to help others, and since I wrapped up the other day, I started a git repository that provides an example Android Studio project and a matching Unity project. If there is a better place to store these, or there are things to be added, let me know.

    I've got an Xcode project for building for the editor on MacOS that is working well, which I'll be adding soon, since it seems like it'd be nice to have examples of building for all platforms in one place. Either way, I hope this helps someone out as much as this whole thread and Thomas' original tutorial have helped me.
     
  46. mcelroy-jon

    mcelroy-jon

    Joined:
    Nov 26, 2012
    Posts:
    3
  47. inder2

    inder2

    Joined:
    Apr 13, 2017
    Posts:
    9
    It is not necessary to restart Unity; just copy the .so files to Plugins... it works without a restart.
    Tested for Android.
     
  48. VRao

    VRao

    Joined:
    Jul 1, 2017
    Posts:
    1
    @Thomas-Mountainborn Thanks for the great tutorial, I hadn't found such a well-written and detailed tutorial before. I am new to Unity and C++ and I followed all the steps you mentioned, however when I click Run in Unity, I get an error such as this:
    "NullReferenceException: Object reference not set to an instance of an object
    PositionAtFaceScreenSpace.Update () (at Assets/PositionAtFaceScreenSpace.cs:16)"


    I could not find the <AllowUnsafeBlocks>false</AllowUnsafeBlocks> lines anywhere, so I could not set it to true. Could this be causing the problem? If so, where exactly can I change it?
    My game object in Unity is a cube named 'cube'. I am using Unity 5 and Visual Studio 2017. Everything compiled properly; I just get that error when I run it.
    Please suggest any possible solutions to this.

    Also, I am actually trying to implement a simpler program where I just need the coordinates of the detected object from OpenCV. I have written the code, but I am not sure where/how exactly I should include it in place of the face detection source code in C++. Any help will be appreciated, as I have just started out with Unity and C++.

    Thanks again.
     
  49. mcelroy-jon

    mcelroy-jon

    Joined:
    Nov 26, 2012
    Posts:
    3
    I meant specifically for testing in the editor. It works for me on Android this way too, but if you're iterating on a CV function in the editor, you'll need to force the library to unload before seeing your native changes.
     
  50. Oiram

    Oiram

    Joined:
    Jun 20, 2017
    Posts:
    9
    Hello @Thomas-Mountainborn ,

    I followed your tutorial about using OpenCV within Unity3D, it's really great!! I was very interested in it.
    Now I'm working on a project with the Microsoft HoloLens. What I have to do is face detection with the HoloLens. Thanks to your tutorial, I got OpenCV working within Unity and studied your scripts closely so that I could edit them for my project, but my real project is a bit different. My question to you is: is it possible to use OpenCV within Unity to do face detection on the HoloLens? I tried to do that, but when I build the app and start it, it gives me some errors. Maybe there's a problem with the dlls (the HoloLens platform architecture is x86, but the dlls are x64). I hope you will answer me, sorry for my bad English.
    Thanks anyway.