How can I use the ARCore camera image with OpenCV in a Unity Android app?

I am trying to use OpenCV for hand gesture recognition in my Unity ARCore game. However, with the deprecation of the TextureReaderAPI, the only way to capture an image from the camera is to use Frame.CameraImage.AcquireCameraImageBytes(). The problem is not only that the image resolution is 640x480 (this cannot be changed, AFAIK), but also that it is in YUV_420_888 format. As if that were not enough, OpenCV has no free C#/Unity package, so unless I want to pay $20 for a paid one, I need to use the available C++ or Python versions. How do I move the YUV image into OpenCV, convert it to the RGB (or HSV) color space, and then either do some processing on it or return it to Unity?



喵喔喔
150 views · 5 answers

绝地无双

In this example, I will use the C++ OpenCV library and Visual Studio 2017, and I will try to capture the ARCore camera image, move it to OpenCV (as efficiently as possible), convert it to the RGB color space, then move it back to Unity C# code and save it in the phone's memory.

First, we have to create a C++ dynamic library project to use with OpenCV. For this, I strongly recommend following Pierre Baret's and Ninjaman494's answers to this question: OpenCV + Android + Unity. The process is fairly straightforward, and as long as you do not deviate too much from their answers (i.e. you can safely download a newer OpenCV than version 3.3.1, but be careful when compiling for ARM64 instead of ARM, etc.), you should be able to call a C++ function from C#.

In my experience, I had to solve two problems. First, if you make the project part of your C# solution instead of creating a new solution, Visual Studio will keep messing with your configuration, e.g. trying to compile an x86 version instead of an ARM version. To save yourself the hassle, create a completely separate solution. The other problem was that some functions failed to link for me, throwing an undefined reference linker error (`undefined reference to 'cv::error(int, std::string const&, char const*, char const*, int)'`, to be precise). If this happens and the problem is with a function you do not really need, just recreate the function in your own code. For example, if you have problems with cv::error, add this at the end of your .cpp file:

```cpp
namespace cv {
    __noreturn void error(int a, const String & b, const char * c, const char * d, int e) {
        throw std::string(b);
    }
}
```

Of course, this is an ugly and dirty way to do things, so if you know how to fix the linker error, please do so and let me know.

Now, you should have working C++ code that compiles and can be run from a Unity Android application. However, we want OpenCV to convert an image rather than return a number. So change your code to this:

.h file

```cpp
extern "C" {
    namespace YOUR_OWN_NAMESPACE
    {
        int ConvertYUV2RGBA(unsigned char *, unsigned char *, int, int);
    }
}
```

.cpp file

```cpp
extern "C" {
    int YOUR_OWN_NAMESPACE::ConvertYUV2RGBA(unsigned char * inputPtr, unsigned char * outputPtr, int width, int height) {
        // Create Mat objects for the YUV and RGB images. For YUV, we need a
        // height*1.5 x width image that has one 8-bit channel. We can also tell
        // OpenCV to have this Mat object "encapsulate" an existing array,
        // which is inputPtr.
        // For the RGB image, we need a height x width image that has three 8-bit
        // channels. Again, we tell OpenCV to encapsulate the outputPtr array.
        // Thanks to specifying existing arrays as data sources, no copying
        // or memory allocation has to be done, and the process is highly
        // efficient.
        cv::Mat input_image(height + height / 2, width, CV_8UC1, inputPtr);
        cv::Mat output_image(height, width, CV_8UC3, outputPtr);
        // If either of the images failed to load, return 1 to signal an error.
        if (input_image.empty() || output_image.empty()) {
            return 1;
        }
        // Convert the image. You might have seen people telling you to use
        // NV21 or 420sp instead of NV12, and BGR instead of RGB. I do not
        // understand why, but this was the correct conversion for me.
        // If you have any problems with the colors in the output image,
        // they are probably caused by an incorrect conversion. In that case,
        // I can only recommend trial and error.
        cv::cvtColor(input_image, output_image, cv::COLOR_YUV2RGB_NV12);
        // Now that the result is safely stored in outputPtr, we can return 0.
        return 0;
    }
}
```

Now, rebuild the solution (Ctrl + Shift + B) and copy the libProjectName.so file to Unity's Plugins/Android folder, as shown in the linked answer.

The next step is to grab the image from ARCore, move it to the C++ code, and get it back. Let us add this inside the class of our C# script:

```csharp
[DllImport("YOUR_OWN_NAMESPACE")]
public static extern int ConvertYUV2RGBA(IntPtr input, IntPtr output, int width, int height);
```

Visual Studio will prompt you to add a `using System.Runtime.InteropServices;` clause - do so. This lets us use the C++ function in our C# code. Now, let's add this function to our C# component:

```csharp
public Texture2D CameraToTexture()
{
    // Create the object for the result - this has to be done before the
    // using {} clause.
    Texture2D result;
    // Use using to make sure that C# disposes of the CameraImageBytes afterwards
    using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
    {
        // If acquiring failed, return null
        if (!camBytes.IsAvailable)
        {
            Debug.LogWarning("camBytes not available");
            return null;
        }
        // To store a YUV_420_888 image, you need 1.5*pixelCount bytes.
        // I will explain why later.
        byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];
        // As CameraImageBytes keeps the Y, U and V data in three separate
        // arrays, we need to put them into a single array. This is done using
        // native pointers, which are considered unsafe in C#.
        unsafe
        {
            for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
            {
                YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
            }
            for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
            {
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
            }
        }
        // Create the output byte array. RGB has three channels, therefore
        // we need 3 times the pixel count.
        byte[] RGBimage = new byte[camBytes.Width * camBytes.Height * 3];
        // GCHandles help us "pin" the arrays in memory, so that we can
        // pass them to the C++ code.
        GCHandle YUVhandle = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
        GCHandle RGBhandle = GCHandle.Alloc(RGBimage, GCHandleType.Pinned);
        // Call the C++ function that we created.
        int k = ConvertYUV2RGBA(YUVhandle.AddrOfPinnedObject(), RGBhandle.AddrOfPinnedObject(), camBytes.Width, camBytes.Height);
        // If the OpenCV conversion failed, return null
        if (k != 0)
        {
            Debug.LogWarning("Color conversion - k != 0");
            return null;
        }
        // Create a new texture object
        result = new Texture2D(camBytes.Width, camBytes.Height, TextureFormat.RGB24, false);
        // Load the RGB array into the texture and send it to the GPU
        result.LoadRawTextureData(RGBimage);
        result.Apply();
        // Save the texture as a PNG file. End the using {} clause to
        // dispose of the CameraImageBytes.
        File.WriteAllBytes(Application.persistentDataPath + "/tex.png", result.EncodeToPNG());
    }
    // Return the texture.
    return result;
}
```

To be able to run unsafe code, you also need to allow it in Unity. Go to the Player Settings (Edit > Project Settings > Player Settings) and check the Allow unsafe code checkbox.

Now you can call the CameraToTexture() function, say, every 5 seconds from Update(), and the camera image should be saved as /Android/data/YOUR_APPLICATION_PACKAGE/files/tex.png. The image will probably be in landscape orientation even if you hold the phone in portrait mode, but that is not hard to fix anymore. You may also notice a freeze every time the image is saved - for that reason, I recommend calling this function on a separate thread. The most demanding operation here is saving the image as a PNG file, so if you need the image for any other purpose you should be fine (still use a separate thread, though).

If you want to understand the YUV_420_888 format, why you need a 1.5*pixelCount array, and why we packed the arrays the way we did, read https://wiki.videolan.org/YUV/#NV12. Other websites seem to have incorrect information about how this format works.

Also, if you have any issues, feel free to comment; I will try to help with them, and I welcome any feedback on the code and the answer.

Appendix 1: According to https://docs.unity3d.com/ScriptReference/Texture2D.LoadRawTextureData.html, you should use GetRawTextureData instead of LoadRawTextureData to prevent a copy. To do that, just pin the array returned by GetRawTextureData instead of the RGBimage array (which you can remove). Do not forget to call result.Apply(); afterwards.

Appendix 2: Do not forget to call Free() on both GCHandles when you are done using them.

慕斯王

Here is an implementation that uses only the free plugin OpenCV Plus Unity. The setup is very simple if you are familiar with OpenCV, and the documentation is great. This implementation rotates the images correctly using OpenCV, stores them in memory, and saves them to files when the application exits. I tried to strip all the Unity aspects from the code so that the function GetCameraImage() can run on a separate thread. I can confirm it works on Android (a GS7), and I presume it will work fairly universally.

```csharp
using System;
using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;
using OpenCvSharp;
using System.Runtime.InteropServices;

public class CamImage : MonoBehaviour
{
    public static List<Mat> AllData = new List<Mat>();

    public static void GetCameraImage()
    {
        // Use using to make sure that C# disposes of the CameraImageBytes afterwards
        using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
        {
            // If acquiring failed, return
            if (!camBytes.IsAvailable)
            {
                return;
            }
            // To store a YUV_420_888 image, you need 1.5*pixelCount bytes.
            byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];
            // As CameraImageBytes keeps the Y, U and V data in three separate
            // arrays, we need to put them into a single array. This is done using
            // native pointers, which are considered unsafe in C#.
            unsafe
            {
                for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
                {
                    YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
                }
                for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
                {
                    YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                    YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                }
            }
            // GCHandles help us "pin" the array in memory, so that we can
            // pass it to the OpenCV code.
            GCHandle pinnedArray = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
            IntPtr pointerYUV = pinnedArray.AddrOfPinnedObject();
            Mat input = new Mat(camBytes.Height + camBytes.Height / 2, camBytes.Width, MatType.CV_8UC1, pointerYUV);
            Mat output = new Mat(camBytes.Height, camBytes.Width, MatType.CV_8UC3);
            Cv2.CvtColor(input, output, ColorConversionCodes.YUV2BGR_NV12); // or YUV2RGB_NV12
            // Flip and transpose to portrait orientation
            Cv2.Transpose(output, output);
            Cv2.Flip(output, output, FlipMode.Y);
            AllData.Add(output);
            pinnedArray.Free();
        }
    }
}
```

Then I call ExportImages() when exiting the program, to save the images to files.

```csharp
private void ExportImages()
{
    // Write the camera intrinsics to a text file
    // (CameraIntrinsicsOutput is a UI Text populated elsewhere).
    var path = Application.persistentDataPath;
    StreamWriter sr = new StreamWriter(path + @"/intrinsics.txt");
    sr.WriteLine(CameraIntrinsicsOutput.text);
    Debug.Log(CameraIntrinsicsOutput.text);
    sr.Close();
    // Loop through the Mat list, load each into a texture and save it.
    for (var i = 0; i < CamImage.AllData.Count; i++)
    {
        Mat imOut = CamImage.AllData[i];
        Texture2D result = Unity.MatToTexture(imOut);
        result.Apply();
        byte[] im = result.EncodeToJPG(100);
        string fileName = "/IMG" + i + ".jpg";
        File.WriteAllBytes(path + fileName, im);
        string messge = "Successfully saved image to " + path + "\n";
        Debug.Log(messge);
        Destroy(result);
    }
}
```

FFIVE

I figured out how to get the full-resolution CPU image in ARCore 1.8. I can now get the full camera resolution with CameraImageBytes.

Put this in your class variables:

```csharp
private ARCoreSession.OnChooseCameraConfigurationDelegate m_OnChoseCameraConfiguration = null;
```

Put this in Start():

```csharp
m_OnChoseCameraConfiguration = _ChooseCameraConfiguration;
ARSessionManager.RegisterChooseCameraConfigurationCallback(m_OnChoseCameraConfiguration);
ARSessionManager.enabled = false;
ARSessionManager.enabled = true;
```

Add this callback to the class:

```csharp
private int _ChooseCameraConfiguration(List<CameraConfig> supportedConfigurations)
{
    return supportedConfigurations.Count - 1;
}
```

Once you have added these, CameraImageBytes should return the camera's full resolution.

汪汪一只猫

For everyone who wants to try this with OpenCVForUnity:

```csharp
public Mat getCameraImage()
{
    // Use using to make sure that C# disposes of the CameraImageBytes afterwards
    using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
    {
        // If acquiring failed, return null
        if (!camBytes.IsAvailable)
        {
            Debug.LogWarning("camBytes not available");
            return null;
        }
        // To store a YUV_420_888 image, you need 1.5*pixelCount bytes.
        byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];
        // As CameraImageBytes keeps the Y, U and V data in three separate
        // arrays, we need to put them into a single array. This is done using
        // native pointers, which are considered unsafe in C#.
        unsafe
        {
            for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
            {
                YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
            }
            for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
            {
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
            }
        }
        // GCHandles help us "pin" the array in memory, so that we can
        // pass it to the OpenCV code.
        GCHandle pinnedArray = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
        IntPtr pointer = pinnedArray.AddrOfPinnedObject();
        // The YUV Mat is height*1.5 x width with one 8-bit channel; the output
        // is height x width with three 8-bit channels.
        Mat input = new Mat(camBytes.Height + camBytes.Height / 2, camBytes.Width, CvType.CV_8UC1);
        Mat output = new Mat(camBytes.Height, camBytes.Width, CvType.CV_8UC3);
        Utils.copyToMat(pointer, input);
        Imgproc.cvtColor(input, output, Imgproc.COLOR_YUV2RGB_NV12);
        pinnedArray.Free();
        return output;
    }
}
```

慕斯709654

It looks like you have already solved this. But for anyone who wants to combine AR with hand gesture recognition and tracking, try ManoMotion: https://www.manomotion.com/ - a free SDK that worked flawlessly as of 12/2020. Use the SDK Community Edition and download the ARFoundation version.