diff --git a/.clang-tidy b/.clang-tidy index 2ce4236e..edcfb787 100644 --- a/.clang-tidy +++ b/.clang-tidy @@ -29,7 +29,16 @@ Checks: > -bugprone-easily-swappable-parameters, -concurrency-mt-unsafe, -abseil-*, - -google-build-using-namespace + -google-build-using-namespace, + -misc-include-cleaner, + -performance-avoid-endl, + -misc-const-correctness, + -llvm-prefer-static-over-anonymous-namespace, + -boost-use-ranges, + -portability-template-virtual-member-function, + -cppcoreguidelines-avoid-do-while, + -performance-enum-size, + -readability-math-missing-parentheses, WarningsAsErrors: '*' HeaderFilterRegex: '.*' CheckOptions: diff --git a/README.md b/README.md index fde6fc89..edab6057 100644 --- a/README.md +++ b/README.md @@ -117,10 +117,10 @@ from the camera can be used. - [CreateDepthMap](https://github.com/zivid/zivid-cpp-samples/tree/master/source/Applications/Advanced/CreateDepthMap/CreateDepthMap.cpp) - Convert point cloud from a ZDF file to OpenCV format, extract depth map and visualize it. - [Downsample](https://github.com/zivid/zivid-cpp-samples/tree/master/source/Applications/Advanced/Downsample/Downsample.cpp) - Downsample point cloud from a ZDF file. + - [ExploreSettingsMetaData](https://github.com/zivid/zivid-cpp-samples/tree/master/source/Applications/Advanced/ExploreSettingsMetaData/ExploreSettingsMetaData.cpp) - Recursively iterates through all leaf parameters in a + Zivid camera’s settings. - [GammaCorrection](https://github.com/zivid/zivid-cpp-samples/tree/master/source/Applications/Advanced/GammaCorrection/GammaCorrection.cpp) - Capture 2D image with gamma correction. - [HandEyeCalibration](https://github.com/zivid/zivid-cpp-samples/tree/master/source/Applications/Advanced/HandEyeCalibration/HandEyeCalibration/HandEyeCalibration.cpp) - Perform Hand-Eye calibration. 
- - [MaskPointCloud](https://github.com/zivid/zivid-cpp-samples/tree/master/source/Applications/Advanced/MaskPointCloud/MaskPointCloud.cpp) - Mask point cloud from a ZDF file and convert to PCL - format, extract depth map and visualize it. - [ProjectAndFindMarker](https://github.com/zivid/zivid-cpp-samples/tree/master/source/Applications/Advanced/ProjectAndFindMarker/ProjectAndFindMarker.cpp) - Show a marker using the projector, capture a set of 2D images to find the marker coordinates (2D and 3D). - [ReadProjectAndCaptureImage](https://github.com/zivid/zivid-cpp-samples/tree/master/source/Applications/Advanced/ReadProjectAndCaptureImage/ReadProjectAndCaptureImage.cpp) - Read a 2D image from file and project it using the camera @@ -169,9 +169,9 @@ from the camera can be used. ## Installation 1. [Install Zivid - Software](https://support.zivid.com/latest//getting-started/software-installation.html) + Software](https://support.zivid.com/en/latest//camera/getting-started/software-installation.html) 2. [Download Zivid Sample - Data](https://support.zivid.com/latest//api-reference/samples/sample-data.html) + Data](https://support.zivid.com/en/latest//camera/api-reference/samples/sample-data.html) **Windows** @@ -188,7 +188,7 @@ git clone https://github.com/zivid/zivid-cpp-samples Configure the sample solution with CMake, open it in Visual Studio, build it, run it. For more information see [Configure C++ Samples With CMake and Build Them in Visual Studio in -Windows](https://support.zivid.com/latest/api-reference/samples/cpp/configure-cpp-samples-with-cmake-and-build-them-in-visual-studio-on-windows.html). +Windows](https://support.zivid.com/en/latest/camera/api-reference/samples/cpp/configure-cpp-samples-with-cmake-and-build-them-in-visual-studio-on-windows.html). **Ubuntu** @@ -218,7 +218,7 @@ respectively, to `cmake`: `-DUSE_EIGEN3=OFF`, `-DUSE_OPENCV=OFF`, `-DUSE_PCL=OFF`, `-DUSE_HALCON=OFF`. 
See [Configure C++ Samples With Optional -Dependencies](https://support.zivid.com/latest/api-reference/samples/cpp/configure-cpp-samples-with-optional-dependencies.html) +Dependencies](https://support.zivid.com/en/latest/camera/api-reference/samples/cpp/configure-cpp-samples-with-optional-dependencies.html) for instructions on how to install the optional dependencies and configure the samples to use them. @@ -235,12 +235,12 @@ Zivid offers two ways of interfacing with HALCON: 1. Through the Zivid SDK, utilizing the C++/C\# libraries available for HALCON. We provide samples for both - [C++](https://support.zivid.com/latest//api-reference/samples/cpp.html) + [C++](https://support.zivid.com/en/latest//camera/api-reference/samples/cpp.html) and - [C\#](https://support.zivid.com/latest//api-reference/samples/csharp.html). + [C\#](https://support.zivid.com/en/latest//camera/api-reference/samples/csharp.html). (**Recommended**) 2. Directly through a GenICam GenTL producer that comes with the [Zivid - Software](https://support.zivid.com/latest//getting-started/software-installation.html). + Software](https://support.zivid.com/en/latest//camera/getting-started/software-installation.html). Zivid and HALCON are compatible with Windows 10 and 11, and Ubuntu 20.04, 22.04, 24.04. 
@@ -257,17 +257,17 @@ To set up and use Zivid in one of these operating systems, please follow their respective instructions on the following pages: - [Install Zivid + HALCON for - Windows](https://support.zivid.com/latest/api-reference/samples/halcon/install-zivid-halcon-for-windows.html) + Windows](https://support.zivid.com/en/latest/camera/api-reference/samples/halcon/install-zivid-halcon-for-windows.html) - [Install Zivid + HALCON for - LINUX](https://support.zivid.com/latest/api-reference/samples/halcon/install-zivid-halcon-for-linux.html) + LINUX](https://support.zivid.com/en/latest/camera/api-reference/samples/halcon/install-zivid-halcon-for-linux.html) - [Create a HALCON "Hello World" - Program](https://support.zivid.com/latest/api-reference/samples/halcon/create-a-halcon-hello-world.html) + Program](https://support.zivid.com/en/latest/camera/api-reference/samples/halcon/create-a-halcon-hello-world.html) - [How to Run a HALCON - Sample](https://support.zivid.com/latest/api-reference/samples/halcon/how-to-run-a-halcon-sample.html) + Sample](https://support.zivid.com/en/latest/camera/api-reference/samples/halcon/how-to-run-a-halcon-sample.html) - [Debug in - HALCON](https://support.zivid.com/latest/api-reference/samples/halcon/halcon-debug.html) + HALCON](https://support.zivid.com/en/latest/camera/api-reference/samples/halcon/halcon-debug.html) - [HALCON Sample - Videos](https://support.zivid.com/latest/api-reference/samples/halcon/halcon-sample-videos.html) + Videos](https://support.zivid.com/en/latest/camera/api-reference/samples/halcon/halcon-sample-videos.html) The following HALCON versions have been tested and confirmed to work with Zivid cameras: @@ -280,9 +280,9 @@ We recommend using one of the HALCON versions we have tested. ## Support For more information about the Zivid cameras, please visit our -[Knowledge Base](https://support.zivid.com/latest). 
If you run into -issues please check out -[Troubleshooting](https://support.zivid.com/latest/support/troubleshooting.html). +[Knowledge Base](https://support.zivid.com/en/latest). If you run into +any issues please check out +[Troubleshooting](https://support.zivid.com/en/latest/camera/support/troubleshooting.html). ## License diff --git a/source/Applications/Advanced/ExploreSettingsMetaData/ExploreSettingsMetaData.cpp b/source/Applications/Advanced/ExploreSettingsMetaData/ExploreSettingsMetaData.cpp new file mode 100644 index 00000000..33f78815 --- /dev/null +++ b/source/Applications/Advanced/ExploreSettingsMetaData/ExploreSettingsMetaData.cpp @@ -0,0 +1,172 @@ + +/* +Recursively iterates through all leaf parameters in a Zivid camera’s settings. + +This sample walks the entire settings tree and inspects each leaf parameter. +For every parameter, it prints: + - the current value if explicitly set, + - otherwise the camera’s default value, + - and any available metadata (valid ranges or discrete allowed values) +*/ + +#include <Zivid/Experimental/SettingsInfo.h> +#include <Zivid/Zivid.h> +#include <iostream> + +namespace +{ + template<typename Node> + void printValidRange(const Zivid::CameraInfo &cameraInfo) + { + const auto range = Zivid::Experimental::SettingsInfo::validRange<Node>(cameraInfo); + + std::cout << " ValidRange: [" << Node{ range.min() } << " , " << Node{ range.max() } << "]\n"; + } + + template<typename Node> + void printValidValues(const Zivid::CameraInfo &cameraInfo) + { + const auto validValues = Zivid::Experimental::SettingsInfo::validValues<Node>(cameraInfo); + + std::cout << " ValidValues: "; + for(const auto &value : validValues) + { + std::cout << value << " "; + } + std::cout << "\n"; + } + + template<typename Node> + void inspectLeaf(const Node &node, const Zivid::CameraInfo &cameraInfo) + { + using DecayedNode = std::decay_t<Node>; + + if constexpr(DecayedNode::nodeType == Zivid::DataModel::NodeType::leafValue) + { + std::cout << "Parameter: " << DecayedNode::path << "\n"; + + if constexpr(Zivid::DataModel::IsOptional<DecayedNode>::value) + { + std::cout << " 
CurrentValue: "; + if(node.hasValue()) + { + std::cout << node.toString() << "\n"; + } + else + { + std::cout << "(unset)\n"; + + const auto defaultValue = Zivid::Experimental::SettingsInfo::defaultValue<DecayedNode>(cameraInfo); + std::cout << " DefaultValue: "; + std::cout << defaultValue.toString() << "\n"; + } + } + else + { + std::cout << " CurrentValue: " << node.toString() << "\n"; + } + + constexpr bool hasRange = Zivid::DataModel::HasValidRange<DecayedNode>::value; + constexpr bool hasValues = Zivid::DataModel::HasValidValues<DecayedNode>::value; + + if constexpr(hasRange) + { + printValidRange<DecayedNode>(cameraInfo); + } + + if constexpr(hasValues) + { + printValidValues<DecayedNode>(cameraInfo); + } + + if constexpr(!hasRange && !hasValues) + { + std::cout << " No predefined valid range or discrete values.\n"; + } + + std::cout << "\n"; + } + } + + // Recursively traverses nested data model leaves and lists + template<typename Node, typename LeafVisitor> + void traverseLeavesIntoNestedDataModelsAndLists(Node &node, const LeafVisitor &leafVisitor) + { + using DecayedNode = std::decay_t<Node>; + + if constexpr( + DecayedNode::nodeType == Zivid::DataModel::NodeType::group + || DecayedNode::nodeType == Zivid::DataModel::NodeType::leafDataModelList) + { + std::forward<Node>(node).forEach([&](auto &&childNode) { + traverseLeavesIntoNestedDataModelsAndLists(std::forward<decltype(childNode)>(childNode), leafVisitor); + }); + } + else if constexpr( + DecayedNode::nodeType == Zivid::DataModel::NodeType::leafValue + && Zivid::DataModel::IsNestedDataModelLeaf<DecayedNode>::value) + { + leafVisitor(node); + + if constexpr(Zivid::DataModel::IsOptional<DecayedNode>::value) + { + if(node.hasValue()) + { + traverseLeavesIntoNestedDataModelsAndLists(node.value(), leafVisitor); + } + } + else + { + traverseLeavesIntoNestedDataModelsAndLists(node.value(), leafVisitor); + } + } + else + { + leafVisitor(std::forward<Node>(node)); + } + } + + // Serves as the entry point to traverse a Settings object and print all leaves + template<typename SettingsType> + void traverseAndPrint(const SettingsType &settings, const Zivid::CameraInfo &cameraInfo) + { + 
traverseLeavesIntoNestedDataModelsAndLists(settings, [&](const auto &node) { inspectLeaf(node, cameraInfo); }); + } +} // namespace + +int main() +{ + try + { + Zivid::Application app; + + std::cout << "Connecting to camera...\n"; + auto camera = app.connectCamera(); + const auto cameraInfo = camera.info(); + + std::cout << "Camera model: " << cameraInfo.modelName() << "\n"; + std::cout << "Serial number: " << cameraInfo.serialNumber() << "\n"; + + // Default 2D Settings (mostly set) + auto defaultSettings2DWithAcquisition = + Zivid::Experimental::SettingsInfo::defaultValue<Zivid::Settings2D>(cameraInfo) + .copyWith( + Zivid::Settings2D::Acquisitions{ + Zivid::Experimental::SettingsInfo::defaultValue<Zivid::Settings2D::Acquisition>(cameraInfo) }); + + // Minimal 3D Settings (mostly unset) + Zivid::Settings settings; + settings.set(Zivid::Settings::Acquisitions{ Zivid::Settings::Acquisition{} }); + settings.set(Zivid::Settings::Color{ defaultSettings2DWithAcquisition }); + + std::cout << "---- Traversing Settings ----\n"; + traverseAndPrint(settings, cameraInfo); + } + catch(const std::exception &e) + { + std::cerr << "Error: " << e.what() << std::endl; + return EXIT_FAILURE; + } + + return EXIT_SUCCESS; +} diff --git a/source/Applications/Advanced/HandEyeCalibration/PoseConversions/PoseConversions.cpp b/source/Applications/Advanced/HandEyeCalibration/PoseConversions/PoseConversions.cpp index b44bb7f2..0be3dade 100644 --- a/source/Applications/Advanced/HandEyeCalibration/PoseConversions/PoseConversions.cpp +++ b/source/Applications/Advanced/HandEyeCalibration/PoseConversions/PoseConversions.cpp @@ -253,3 +253,4 @@ int main() return EXIT_SUCCESS; } + diff --git a/source/Applications/Advanced/MaskPointCloud/MaskPointCloud.cpp b/source/Applications/Advanced/MaskPointCloud/MaskPointCloud.cpp deleted file mode 100644 index 1b1642cb..00000000 --- a/source/Applications/Advanced/MaskPointCloud/MaskPointCloud.cpp +++ /dev/null @@ -1,226 +0,0 @@ -/* -Mask point cloud from a ZDF file and convert to PCL format, extract depth 
map and visualize it. - -This example shows how to mask a point cloud from a ZDF file and store the resulting point cloud in PCL format. -The ZDF file for this sample can be found under the main instructions for Zivid samples. -*/ - -#include <Zivid/Zivid.h> - -#include <opencv2/opencv.hpp> - -#include <pcl/point_cloud.h> -#include <pcl/point_types.h> -#include <pcl/visualization/pcl_visualizer.h> - -#include <chrono> -#include <thread> - -namespace -{ - void visualizePointCloud(const pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr &pointCloud) - { - auto viewer = pcl::visualization::PCLVisualizer("Viewer"); - - viewer.addPointCloud(pointCloud); - - viewer.setCameraPosition(0, 0, -100, 0, 0, 1000, 0, -1, 0); - - std::cout << "Press r to centre and zoom the viewer so that the entire cloud is visible" << std::endl; - std::cout << "Press q to exit the viewer application" << std::endl; - while(!viewer.wasStopped()) - { - viewer.spinOnce(100); - std::this_thread::sleep_for(std::chrono::milliseconds(100)); - } - } - - cv::Mat pointCloudToCvZ(const pcl::PointCloud<pcl::PointXYZRGB> &pointCloud) - { - // Getting min and max values for X, Y, Z images - float zMax = -1; - float zMin = 1000000; - for(size_t i = 0; i < pointCloud.height; i++) - { - for(size_t j = 0; j < pointCloud.width; j++) - { - zMax = std::max(zMax, pointCloud(j, i).z); - zMin = std::min(zMin, pointCloud(j, i).z); - } - } - - // Filling in OpenCV matrix with the cloud data - cv::Mat z(pointCloud.height, pointCloud.width, CV_8UC1, cv::Scalar(0)); // NOLINT(hicpp-signed-bitwise) - for(size_t i = 0; i < pointCloud.height; i++) - { - for(size_t j = 0; j < pointCloud.width; j++) - { - if(std::isnan(pointCloud(j, i).z)) - { - z.at<uchar>(i, j) = 0; - } - else - { - // If few points are captured resulting in zMin == zMax, this will throw an division-by-zero - // exception. 
- z.at<uchar>(i, j) = static_cast<uchar>((255.0F * (pointCloud(j, i).z - zMin) / (zMax - zMin))); - } - } - } - - // Applying color map - cv::Mat zColorMap; - cv::applyColorMap(z, zColorMap, cv::COLORMAP_VIRIDIS); - - // Setting invalid points (nan) to black - for(size_t i = 0; i < pointCloud.height; i++) - { - for(size_t j = 0; j < pointCloud.width; j++) - { - if(std::isnan(pointCloud(j, i).z)) - { - auto &zRGB = zColorMap.at<cv::Vec3b>(i, j); - zRGB[0] = 0; - zRGB[1] = 0; - zRGB[2] = 0; - } - } - } - return zColorMap; - } - - pcl::PointCloud<pcl::PointXYZRGB> maskPointCloud(const Zivid::PointCloud &pointCloud, const cv::Mat &mask) - { - const auto data = pointCloud.copyPointsXYZColorsRGBA_SRGB(); - const int height = data.height(); - const int width = data.width(); - - // Creating point cloud structure - pcl::PointCloud<pcl::PointXYZRGB> maskedPointCloud(width, height); - maskedPointCloud.is_dense = false; - maskedPointCloud.points.resize(height * width); - - // Copying data points within the mask. Rest is set to NaN - for(int i = 0; i < height; i++) - { - for(int j = 0; j < width; j++) - { - if(mask.at<uchar>(i, j) > 0) - { - maskedPointCloud(j, i).r = data(i, j).color.r; - maskedPointCloud(j, i).g = data(i, j).color.g; - maskedPointCloud(j, i).b = data(i, j).color.b; - maskedPointCloud(j, i).x = data(i, j).point.x; - maskedPointCloud(j, i).y = data(i, j).point.y; - maskedPointCloud(j, i).z = data(i, j).point.z; - } - else - { - maskedPointCloud(j, i).r = 0; - maskedPointCloud(j, i).g = 0; - maskedPointCloud(j, i).b = 0; - maskedPointCloud(j, i).x = NAN; - maskedPointCloud(j, i).y = NAN; - maskedPointCloud(j, i).z = NAN; - } - } - } - - return maskedPointCloud; - } - - void visualizeDepthMap(const pcl::PointCloud<pcl::PointXYZRGB> &pointCloud) - { - // Converting to Depth map in OpenCV format - cv::Mat zColorMap = pointCloudToCvZ(pointCloud); - // Visualizing Depth map - cv::namedWindow("Depth map", cv::WINDOW_AUTOSIZE); - cv::imshow("Depth map", zColorMap); - cv::waitKey(CI_WAITKEY_TIMEOUT_IN_MS); - } - - pcl::PointCloud<pcl::PointXYZRGB> 
convertToPCLPointCloud(const Zivid::PointCloud &pointCloud) - { - const auto data = pointCloud.copyData<Zivid::PointXYZColorRGBA>(); - - // Creating PCL point cloud structure - pcl::PointCloud<pcl::PointXYZRGB> pointCloudPCL; - pointCloudPCL.width = pointCloud.width(); - pointCloudPCL.height = pointCloud.height(); - pointCloudPCL.is_dense = false; - pointCloudPCL.points.resize(pointCloudPCL.width * pointCloudPCL.height); - - // Filling in point cloud data - for(size_t i = 0; i < pointCloudPCL.points.size(); ++i) - { - pointCloudPCL.points[i].x = data(i).point.x; // NOLINT(cppcoreguidelines-pro-type-union-access) - pointCloudPCL.points[i].y = data(i).point.y; // NOLINT(cppcoreguidelines-pro-type-union-access) - pointCloudPCL.points[i].z = data(i).point.z; // NOLINT(cppcoreguidelines-pro-type-union-access) - pointCloudPCL.points[i].r = data(i).color.r; // NOLINT(cppcoreguidelines-pro-type-union-access) - pointCloudPCL.points[i].g = data(i).color.g; // NOLINT(cppcoreguidelines-pro-type-union-access) - pointCloudPCL.points[i].b = data(i).color.b; // NOLINT(cppcoreguidelines-pro-type-union-access) - } - return pointCloudPCL; - } - -} // namespace - -int main() -{ - try - { - Zivid::Application zivid; - - std::string fileName = std::string(ZIVID_SAMPLE_DATA_DIR) + "/Zivid3D.zdf"; - std::cout << "Reading ZDF frame from file: " << fileName << std::endl; - const auto frame = Zivid::Frame(fileName); - - std::cout << "Getting point cloud from frame" << std::endl; - const auto pointCloud = frame.pointCloud(); - - const int pixelsToDisplay = 300; - std::cout << "Generating binary mask of central " << pixelsToDisplay << " x " << pixelsToDisplay << "pixels." 
- << std::endl; - const int height = pointCloud.height(); - const int width = pointCloud.width(); - const int heightMin = (height - pixelsToDisplay) / 2; - const int heightMax = (height + pixelsToDisplay) / 2; - const int widthMin = (width - pixelsToDisplay) / 2; - const int widthMax = (width + pixelsToDisplay) / 2; - cv::Mat mask = cv::Mat::zeros(height, width, CV_8U); - cv::rectangle( - mask, - cv::Point(widthMin, heightMin), - cv::Point(widthMax, heightMax), - cv::Scalar(255, 255, 255), - cv::FILLED); - - std::cout << "Converting to PCL point cloud" << std::endl; - const auto pointCloudPCL = convertToPCLPointCloud(pointCloud); - - std::cout << "Displaying point cloud before masking" << std::endl; - visualizePointCloud(pointCloudPCL.makeShared()); - - std::cout << "Displaying depth map before masking" << std::endl; - visualizeDepthMap(pointCloudPCL); - - std::cout << "Masking point cloud" << std::endl; - const pcl::PointCloud<pcl::PointXYZRGB> maskedPointCloudPCL = maskPointCloud(pointCloud, mask); - - std::cout << "Displaying point cloud after masking" << std::endl; - visualizePointCloud(maskedPointCloudPCL.makeShared()); - - std::cout << "Displaying depth map after masking" << std::endl; - visualizeDepthMap(maskedPointCloudPCL); - } - - catch(const std::exception &e) - { - std::cerr << "Error: " << Zivid::toString(e) << std::endl; - std::cout << "Press enter to exit." 
<< std::endl; - std::cin.get(); - return EXIT_FAILURE; - } - - return EXIT_SUCCESS; -} diff --git a/source/Applications/Advanced/MultiCamera/StitchByTransformationFromZDF/StitchByTransformationFromZDF.cpp b/source/Applications/Advanced/MultiCamera/StitchByTransformationFromZDF/StitchByTransformationFromZDF.cpp index c9c4f407..df74d8d3 100644 --- a/source/Applications/Advanced/MultiCamera/StitchByTransformationFromZDF/StitchByTransformationFromZDF.cpp +++ b/source/Applications/Advanced/MultiCamera/StitchByTransformationFromZDF/StitchByTransformationFromZDF.cpp @@ -21,7 +21,6 @@ namespace const std::vector<std::string> &zdfFileList, const std::vector<std::string> &transformationMatrixFilesList) { - std::string fileExtension; std::string serialNumber; Zivid::UnorganizedPointCloud stitchedPointCloud; diff --git a/source/Applications/Basic/FileFormats/ConvertZDF/ConvertZDF.cpp b/source/Applications/Basic/FileFormats/ConvertZDF/ConvertZDF.cpp index 928214a5..d10900c3 100644 --- a/source/Applications/Basic/FileFormats/ConvertZDF/ConvertZDF.cpp +++ b/source/Applications/Basic/FileFormats/ConvertZDF/ConvertZDF.cpp @@ -22,9 +22,6 @@ Available formats: namespace { - using ColorSpace = Zivid::Experimental::PointCloudExport::ColorSpace; - using namespace Zivid::Experimental::PointCloudExport::FileFormat; - std::string toLower(std::string str) { std::transform(str.begin(), str.end(), str.begin(), [](unsigned char c) { return std::tolower(c); }); @@ -104,38 +101,46 @@ namespace const std::filesystem::path &filePath, const std::vector<std::string> &fileFormats, bool linearRgb, - bool unordered) + bool unordered, + bool includeNormals) { for(const auto &format : fileFormats) { const auto fileNameWithExtension = filePath.parent_path() / (filePath.stem().string() + "." + format); - const auto colorSpace = linearRgb ? ColorSpace::linearRGB : ColorSpace::sRGB; + const auto colorSpace = linearRgb ? 
Zivid::Experimental::PointCloudExport::ColorSpace::linearRGB + : Zivid::Experimental::PointCloudExport::ColorSpace::sRGB; + const auto includeNorm = includeNormals ? Zivid::Experimental::PointCloudExport::IncludeNormals::yes + : Zivid::Experimental::PointCloudExport::IncludeNormals::no; std::cout << "Saving the frame to " << fileNameWithExtension << std::endl; if(format == "ply") { - const auto layout = unordered ? PLY::Layout::unordered : PLY::Layout::ordered; + const auto layout = unordered + ? Zivid::Experimental::PointCloudExport::FileFormat::PLY::Layout::unordered + : Zivid::Experimental::PointCloudExport::FileFormat::PLY::Layout::ordered; + Zivid::Experimental::PointCloudExport::exportFrame( - frame, PLY{ fileNameWithExtension.string(), layout, colorSpace }); + frame, + Zivid::Experimental::PointCloudExport::FileFormat::PLY{ + fileNameWithExtension.string(), layout, colorSpace, includeNorm }); } else if(format == "pcd") { - if(!unordered) - { - std::cout - << "NOTE: If you have configured the config file for PCD, points will be ordered. " - << "If not they will be unordered. See " - << "https://support.zivid.com/en/latest/reference-articles/point-cloud-structure-and-output-formats.html#organized-pcd-format" - << " for more information." << std::endl; - } + const auto layout = unordered + ? 
Zivid::Experimental::PointCloudExport::FileFormat::PCD::Layout::unorganized + : Zivid::Experimental::PointCloudExport::FileFormat::PCD::Layout::organized; Zivid::Experimental::PointCloudExport::exportFrame( - frame, PCD{ fileNameWithExtension.string(), colorSpace }); + frame, + Zivid::Experimental::PointCloudExport::FileFormat::PCD{ + fileNameWithExtension.string(), colorSpace, includeNorm, layout }); } else if(format == "xyz") { Zivid::Experimental::PointCloudExport::exportFrame( - frame, XYZ{ fileNameWithExtension.string(), colorSpace }); + frame, + Zivid::Experimental::PointCloudExport::FileFormat::XYZ{ fileNameWithExtension.string(), + colorSpace }); } else if(format == "csv" || format == "txt") { @@ -159,7 +164,7 @@ namespace if(linearRgb) { save2DImage( - frame.frame2D()->imageRGBA(), + frame.frame2D() ? frame.frame2D()->imageRGBA() : frame.pointCloud().copyImageRGBA(), frame.pointCloud().copyImageRGBA(), fileName, fileNamePointCloudResolution); @@ -167,7 +172,7 @@ namespace else { save2DImage( - frame.frame2D()->imageRGBA_SRGB(), + frame.frame2D() ? 
frame.frame2D()->imageRGBA_SRGB() : frame.pointCloud().copyImageRGBA_SRGB(), frame.pointCloud().copyImageRGBA_SRGB(), fileName, fileNamePointCloudResolution); @@ -187,6 +192,7 @@ int main(int argc, char **argv) bool linearRgb = false; bool unordered = false; bool showHelp = false; + bool includeNormals = false; const std::vector<std::string> formats3D = { "ply", "pcd", "xyz", "csv", "txt" }; const std::vector<std::string> formats2D = { "jpg", "png", "bmp" }; @@ -202,7 +208,9 @@ ) clipp::option("--linearRGB").set(linearRgb) % "Use linear RGB color space instead of sRGB for selected format(s)", clipp::option("--unordered").set(unordered) - % "Save point clouds as unordered instead of ordered (PLY, PCD)"); + % "Save point clouds as unordered instead of ordered (PLY, PCD)", + clipp::option("--includeNormals").set(includeNormals) + % "Whether to include normals in the exported 3D files (PLY, PCD)"); if(!clipp::parse(argc, argv, cli) || showHelp || inputPath.empty() || !contains(formats3D, formats3DSelected) || !contains(formats2D, formats2DSelected)) @@ -215,7 +223,11 @@ std::cout << clipp::documentation(cli) << "\n"; std::cout << "\nExample:\n"; std::cout << " ConvertZDF Zivid3D.zdf --3d ply xyz csv --2d jpg png\n"; - return showHelp ? 
EXIT_FAILURE : EXIT_SUCCESS; + if(showHelp) + { + return EXIT_SUCCESS; + } + throw std::runtime_error("Invalid command line arguments"); } const std::filesystem::path path(inputPath); @@ -261,7 +273,7 @@ int main(int argc, char **argv) { if(!formats3DSelected.empty()) { - convertTo3D(frame, filePath, formats3DSelected, linearRgb, unordered); + convertTo3D(frame, filePath, formats3DSelected, linearRgb, unordered, includeNormals); } if(!formats2DSelected.empty()) diff --git a/source/Applications/PointCloudTutorial.md b/source/Applications/PointCloudTutorial.md index e04f9aae..3b1969ac 100644 --- a/source/Applications/PointCloudTutorial.md +++ b/source/Applications/PointCloudTutorial.md @@ -2,7 +2,7 @@ Note\! This tutorial has been generated for use on Github. For original tutorial see: -[PointCloudTutorial](https://support.zivid.com/latest/academy/applications/point-cloud-tutorial.html) +[PointCloudTutorial](https://support.zivid.com/en/latest/camera/academy/applications/point-cloud-tutorial.html) @@ -27,7 +27,7 @@ tutorial see: ## Introduction This tutorial describes how to use Zivid SDK to work with [Point -Cloud](https://support.zivid.com/latest//reference-articles/point-cloud-structure-and-output-formats.html) +Cloud](https://support.zivid.com/en/latest//camera/reference-articles/point-cloud-structure-and-output-formats.html) data. ----- @@ -42,7 +42,7 @@ Tip: **Prerequisites** - Install [Zivid - Software](https://support.zivid.com/latest//getting-started/software-installation.html). + Software](https://support.zivid.com/en/latest//camera/getting-started/software-installation.html). - For Python: install [zivid-python](https://github.com/zivid/zivid-python#installation) @@ -116,7 +116,7 @@ const auto pointCloud = frame.pointCloud(); Point cloud contains XYZ, RGB, and SNR, laid out on a 2D grid. For more info check out [Point Cloud -Structure](https://support.zivid.com/latest//reference-articles/point-cloud-structure-and-output-formats.html). 
+Structure](https://support.zivid.com/en/latest//camera/reference-articles/point-cloud-structure-and-output-formats.html). The method `Zivid::Frame::pointCloud()` does not perform any copying from GPU memory. @@ -134,7 +134,7 @@ functions (section below) will block and wait for processing to finish before proceeding with the requested copy operation. For detailed explanation, see [Point Cloud Capture -Process](https://support.zivid.com/latest/academy/camera/point-cloud-capture-process.html). +Process](https://support.zivid.com/en/latest/camera/academy/camera/point-cloud-capture-process.html). ----- @@ -157,7 +157,7 @@ The unorganized point cloud can be extended with additional unorganized point clouds. ([go to -source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Applications/Advanced/MultiCamera/StitchByTransformationFromZDF/StitchByTransformationFromZDF.cpp#L46)) +source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Applications/Advanced/MultiCamera/StitchByTransformationFromZDF/StitchByTransformationFromZDF.cpp#L45)) ``` sourceCode cpp stitchedPointCloud.extend(currentPointCloud.transform(transformationMatrixZivid)); @@ -207,11 +207,15 @@ If you are only concerned about e.g. RGB color data of the point cloud, you can copy only that data to the CPU memory. 
([go to -source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Camera/Advanced/AllocateMemoryForPointCloudData/AllocateMemoryForPointCloudData.cpp#L73-L95)) +source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Camera/Advanced/AllocateMemoryForPointCloudData/AllocateMemoryForPointCloudData.cpp#L73-L99)) ``` sourceCode cpp std::cout << "Capturing frame" << std::endl; frame = camera.capture2D3D(settings); +if(!frame.frame2D().has_value()) +{ + throw std::runtime_error("Captured frame does not contain a 2D image."); +} std::cout << "Copying colors with Zivid API from GPU to CPU" << std::endl; auto colors = frame.frame2D().value().imageBGRA_SRGB(); @@ -261,10 +265,10 @@ cv::waitKey(CI_WAITKEY_TIMEOUT_IN_MS); ## Transform You may want to -[transform](https://support.zivid.com/latest//academy/applications/transform.html) +[transform](https://support.zivid.com/en/latest//camera/academy/applications/transform.html) the point cloud to change its origin from the camera to the robot base frame or, e.g., [scale the point cloud by transforming it from mm to -m](https://support.zivid.com/latest//academy/applications/transform/transform-millimeters-to-meters.html). +m](https://support.zivid.com/en/latest//camera/academy/applications/transform/transform-millimeters-to-meters.html). ([go to source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Applications/Advanced/HandEyeCalibration/UtilizeHandEyeCalibration/UtilizeHandEyeCalibration.cpp#L236)) @@ -299,7 +303,7 @@ Even the in-place API returns the transformed point cloud, so you can use it directly, as in the example below. 
([go to -source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Applications/Advanced/MultiCamera/StitchByTransformationFromZDF/StitchByTransformationFromZDF.cpp#L46)) +source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Applications/Advanced/MultiCamera/StitchByTransformationFromZDF/StitchByTransformationFromZDF.cpp#L45)) ``` sourceCode cpp stitchedPointCloud.extend(currentPointCloud.transform(transformationMatrixZivid)); @@ -310,7 +314,7 @@ stitchedPointCloud.extend(currentPointCloud.transform(transformationMatrixZivid) Sometimes you might not need a point cloud with as `high spatial resolution (High spatial resolution means more detail and less distance between points)` as given from the camera. You may then -[downsample](https://support.zivid.com/latest//academy/applications/downsampling.html) +[downsample](https://support.zivid.com/en/latest//camera/academy/applications/downsampling.html) the point cloud. ----- @@ -318,7 +322,7 @@ the point cloud. Note: > [Sampling -> (3D)](https://support.zivid.com/latest/reference-articles/settings/sampling.html) +> (3D)](https://support.zivid.com/en/latest/camera/reference-articles/settings/sampling.html) > describes a hardware-based sub-/downsample method that reduces the > resolution of the point cloud during capture while also reducing the > acquisition and capture time. @@ -398,7 +402,7 @@ const auto finalPointCloud = stitchedPointCloud.voxelDownsampled(0.5, 1); ## Normals Some applications require computing -[normals](https://support.zivid.com/latest//academy/applications/normals.html) +[normals](https://support.zivid.com/en/latest//camera/academy/applications/normals.html) from the point cloud. 
([go to @@ -454,7 +458,7 @@ visualizer.run(); ``` For more information, check out [Visualization -Tutorial](https://support.zivid.com/latest/academy/applications/visualization-tutorial.html), +Tutorial](https://support.zivid.com/en/latest/camera/academy/applications/visualization-tutorial.html), where we cover point cloud, color image, depth map, and normals visualization, with implementations using third party libraries. @@ -465,8 +469,8 @@ manipulate it, transform it, and visualize it. ## Version History -| SDK | Changes | -| ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| 2.16.0 | Added support for `Zivid::UnorganizedPointCloud`. `transformed` is added as a function to `Zivid::PointCloud` (also available in `Zivid::UnorganizedPointCloud`). | -| 2.11.0 | Added support for SRGB color space. | -| 2.10.0 | [:orphan:](https://support.zivid.com/latest/academy/camera/monochrome-capture.html) introduces a faster alternative to `downsample_point_cloud_tutorial`. | +| SDK | Changes | +| ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 2.16.0 | Added support for `Zivid::UnorganizedPointCloud`. `transformed` is added as a function to `Zivid::PointCloud` (also available in `Zivid::UnorganizedPointCloud`). | +| 2.11.0 | Added support for SRGB color space. | +| 2.10.0 | [:orphan:](https://support.zivid.com/en/latest/camera/academy/camera/monochrome-capture.html) introduces a faster alternative to `downsample_point_cloud_tutorial`. 
| diff --git a/source/CMakeLists.txt b/source/CMakeLists.txt index ef9cbdc5..85e0688f 100644 --- a/source/CMakeLists.txt +++ b/source/CMakeLists.txt @@ -37,8 +37,8 @@ set(SAMPLES Applications/Advanced/CaptureUndistort2D Applications/Advanced/CreateDepthMap Applications/Advanced/Downsample + Applications/Advanced/ExploreSettingsMetaData Applications/Advanced/GammaCorrection - Applications/Advanced/MaskPointCloud Applications/Advanced/ProjectAndFindMarker Applications/Advanced/ReadProjectAndCaptureImage Applications/Advanced/ReprojectPoints @@ -103,7 +103,6 @@ set(PCL_DEPENDING Capture2DAnd3D CaptureAndVisualizeNormals CaptureWritePCLVis3D - MaskPointCloud ReadPCLVis3D ) set(OpenCV_DEPENDING @@ -112,7 +111,6 @@ set(OpenCV_DEPENDING CaptureUndistort2D CreateDepthMap GammaCorrection - MaskPointCloud ProjectAndFindMarker ReprojectPoints ReadProjectAndCaptureImage @@ -169,7 +167,6 @@ set(Halcon_DEPENDING CaptureHalconViaGenICam CaptureHalconViaZivid ) - find_package(Zivid ${ZIVID_VERSION} COMPONENTS Core REQUIRED) find_package(Threads REQUIRED) diff --git a/source/Camera/Advanced/AllocateMemoryForPointCloudData/AllocateMemoryForPointCloudData.cpp b/source/Camera/Advanced/AllocateMemoryForPointCloudData/AllocateMemoryForPointCloudData.cpp index 4b1c0c20..14935a8e 100644 --- a/source/Camera/Advanced/AllocateMemoryForPointCloudData/AllocateMemoryForPointCloudData.cpp +++ b/source/Camera/Advanced/AllocateMemoryForPointCloudData/AllocateMemoryForPointCloudData.cpp @@ -73,6 +73,10 @@ int main() std::cout << "Capturing frame" << std::endl; frame = camera.capture2D3D(settings); + if(!frame.frame2D().has_value()) + { + throw std::runtime_error("Captured frame does not contain a 2D image."); + } std::cout << "Copying colors with Zivid API from GPU to CPU" << std::endl; auto colors = frame.frame2D().value().imageBGRA_SRGB(); diff --git a/source/Camera/Advanced/Capture2DAnd3D/Capture2DAnd3D.cpp b/source/Camera/Advanced/Capture2DAnd3D/Capture2DAnd3D.cpp index caaaa3e2..a8d923c5 
100644
--- a/source/Camera/Advanced/Capture2DAnd3D/Capture2DAnd3D.cpp
+++ b/source/Camera/Advanced/Capture2DAnd3D/Capture2DAnd3D.cpp
@@ -121,6 +121,10 @@ int main()
        const auto frame = camera.capture2D3D(settings);

        std::cout << "Getting BGRA image" << std::endl;
+        if(!frame.frame2D().has_value())
+        {
+            throw std::runtime_error("Captured frame does not contain a 2D image.");
+        }
        const auto image = frame.frame2D().value().imageBGRA_SRGB();
        const cv::Mat bgra(
            image.height(),
diff --git a/source/Camera/Advanced/CaptureHalconViaZivid/CaptureHalconViaZivid.cpp b/source/Camera/Advanced/CaptureHalconViaZivid/CaptureHalconViaZivid.cpp
index 6aa27fa4..c54a0b89 100644
--- a/source/Camera/Advanced/CaptureHalconViaZivid/CaptureHalconViaZivid.cpp
+++ b/source/Camera/Advanced/CaptureHalconViaZivid/CaptureHalconViaZivid.cpp
@@ -78,6 +78,10 @@ namespace
        const auto pointsXYZ = pointCloud.copyPointsXYZ();
        const auto normalsXYZ = pointCloud.copyNormalsXYZ();

+        if(!frame.frame2D().has_value())
+        {
+            throw std::runtime_error("Captured frame does not contain a 2D image.");
+        }
        const auto colorsRGBA = frame.frame2D().value().imageRGBA_SRGB();

        if(colorsRGBA.height() != height || colorsRGBA.width() != width)
diff --git a/source/Camera/Advanced/CaptureViaGenICam/CaptureViaGenICam.cpp b/source/Camera/Advanced/CaptureViaGenICam/CaptureViaGenICam.cpp
index ed37879a..f95d0814 100644
--- a/source/Camera/Advanced/CaptureViaGenICam/CaptureViaGenICam.cpp
+++ b/source/Camera/Advanced/CaptureViaGenICam/CaptureViaGenICam.cpp
@@ -58,6 +58,8 @@ namespace
    template<typename Function, typename... Ts>
    void checkedTLCall(Function &function, const std::string &message, Ts &&...ts)
    {
+        // Calling C API here, so passing void** without explicit cast is okay.
+        // NOLINTNEXTLINE(bugprone-multi-level-implicit-pointer-conversion)
        auto errorCode = function(std::forward<Ts>(ts)...);
        checkAndThrow(errorCode, message);
    }
diff --git a/source/Camera/Basic/Capture/Capture.cpp b/source/Camera/Basic/Capture/Capture.cpp
index 4d8d03e3..293e7139 100644
--- a/source/Camera/Basic/Capture/Capture.cpp
+++ b/source/Camera/Basic/Capture/Capture.cpp
@@ -24,6 +24,10 @@ int main()
        std::cout << "Capturing frame" << std::endl;
        const auto frame = camera.capture2D3D(settings);
+        if(!frame.frame2D().has_value())
+        {
+            throw std::runtime_error("Captured frame does not contain a 2D image.");
+        }
        const auto imageRGBA = frame.frame2D().value().imageRGBA_SRGB();
        const auto imageFile = "ImageRGB.png";
        std::cout << "Saving 2D color image (sRGB color space) to file: " << imageFile << std::endl;
diff --git a/source/Camera/Basic/CaptureTutorial.md b/source/Camera/Basic/CaptureTutorial.md
index 89c98851..a6135695 100644
--- a/source/Camera/Basic/CaptureTutorial.md
+++ b/source/Camera/Basic/CaptureTutorial.md
@@ -2,7 +2,7 @@

Note\! This tutorial has been generated for use on Github. For original
tutorial see:
-[CaptureTutorial](https://support.zivid.com/latest/academy/camera/capture-tutorial.html)
+[CaptureTutorial](https://support.zivid.com/en/latest/camera/academy/camera/capture-tutorial.html)

@@ -31,7 +31,7 @@ and 2D images.

**Prerequisites**

  - Install [Zivid
-    Software](https://support.zivid.com/latest//getting-started/software-installation.html).
+    Software](https://support.zivid.com/en/latest//camera/getting-started/software-installation.html).
  - For Python: install
    [zivid-python](https://github.com/zivid/zivid-python#installation)

@@ -108,7 +108,7 @@ As with all cameras there are settings that can be configured.
### Presets

The recommendation is to use
-[Presets](https://support.zivid.com/latest/reference-articles/presets-settings.html)
+[Presets](https://support.zivid.com/en/latest/camera/reference-articles/presets-settings.html)
available in Zivid Studio and as .yml files (see below). Presets are
designed to work well for most cases right away, making them a great
starting point. If needed, you can easily fine-tune the settings for
@@ -146,9 +146,9 @@ settings.save(settingsFile);

Another option is to configure settings manually. For more information
about what each setting does, please see [Camera
-Settings](https://support.zivid.com/latest/reference-articles/camera-settings.html).
+Settings](https://support.zivid.com/en/latest/camera/reference-articles/camera-settings.html).
Then, the next step is [Capturing High Quality Point
-Clouds](https://support.zivid.com/latest/academy/camera/capturing-high-quality-point-clouds.html)
+Clouds](https://support.zivid.com/en/latest/camera/academy/camera/capturing-high-quality-point-clouds.html).

#### Single 2D and 3D Acquisition - Default settings

@@ -346,7 +346,7 @@ const auto frame2D = camera.capture2D(settings);

We can now save our results.  ([go to
-source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Camera/Basic/Capture/Capture.cpp#L32-L34))
+source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Camera/Basic/Capture/Capture.cpp#L36-L38))

``` sourceCode cpp
const auto dataFile = "Frame.zdf";
@@ -358,17 +358,17 @@ frame.save(dataFile);

Tip:

> You can open and view the `Frame.zdf` file in [Zivid
-> Studio](https://support.zivid.com/latest//getting-started/studio-guide.html).
+> Studio](https://support.zivid.com/en/latest//camera/getting-started/studio-guide.html).

### Export

In the next code example, the point cloud is exported to the .ply
format.
For other exporting options, see [Point -Cloud](https://support.zivid.com/latest//reference-articles/point-cloud-structure-and-output-formats.html) +Cloud](https://support.zivid.com/en/latest//camera/reference-articles/point-cloud-structure-and-output-formats.html) for a list of supported formats. ([go to -source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Camera/Basic/Capture/Capture.cpp#L36-L38)) +source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Camera/Basic/Capture/Capture.cpp#L40-L42)) ``` sourceCode cpp const auto dataFilePLY = "PointCloud.ply"; @@ -456,10 +456,10 @@ const auto image2D = frame.frame2D().value().imageBGRA_SRGB(); ## File Camera A [file -camera](https://support.zivid.com/latest//academy/camera/file-camera.html) +camera](https://support.zivid.com/en/latest//camera/academy/camera/file-camera.html) allows you to experiment with the SDK without access to a physical camera. The file cameras can be found in [Sample -Data](https://support.zivid.com/latest/api-reference/samples/sample-data.html) +Data](https://support.zivid.com/en/latest/camera/api-reference/samples/sample-data.html) where there are multiple file cameras to choose from. ([go to @@ -499,7 +499,7 @@ settings.color() = Zivid::Settings::Color{ settings2D }; ``` You can read more about the file camera option in [File -Camera](https://support.zivid.com/latest/academy/camera/file-camera.html). +Camera](https://support.zivid.com/en/latest/camera/academy/camera/file-camera.html). ## Multithreading diff --git a/source/Camera/Basic/QuickCaptureTutorial.md b/source/Camera/Basic/QuickCaptureTutorial.md index cbf97b6a..05587a2e 100644 --- a/source/Camera/Basic/QuickCaptureTutorial.md +++ b/source/Camera/Basic/QuickCaptureTutorial.md @@ -2,7 +2,7 @@ Note\! This tutorial has been generated for use on Github. 
For original tutorial see:
-[QuickCaptureTutorial](https://support.zivid.com/latest/getting-started/quick-capture-tutorial.html)
+[QuickCaptureTutorial](https://support.zivid.com/en/latest/camera/getting-started/quick-capture-tutorial.html)

@@ -29,7 +29,7 @@ capture point clouds.

**Prerequisites**

  - Install [Zivid
-    Software](https://support.zivid.com/latest//getting-started/software-installation.html).
+    Software](https://support.zivid.com/en/latest//camera/getting-started/software-installation.html).
  - For Python: install
    [zivid-python](https://github.com/zivid/zivid-python#installation)

@@ -75,7 +75,7 @@ const auto frame = camera.capture2D3D(settings);

## Save

([go to
-source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Camera/Basic/Capture/Capture.cpp#L32-L34))
+source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Camera/Basic/Capture/Capture.cpp#L36-L38))

``` sourceCode cpp
const auto dataFile = "Frame.zdf";
@@ -84,7 +84,7 @@ frame.save(dataFile);
```

([go to
-source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Camera/Basic/Capture/Capture.cpp#L36-L38))
+source](https://github.com/zivid/zivid-cpp-samples/tree/master//source/Camera/Basic/Capture/Capture.cpp#L40-L42))

``` sourceCode cpp
const auto dataFilePLY = "PointCloud.ply";
@@ -92,7 +92,7 @@ frame.save(dataFilePLY);
```

For other exporting options, see [Point
-Cloud](https://support.zivid.com/latest//reference-articles/point-cloud-structure-and-output-formats.html)
+Cloud](https://support.zivid.com/en/latest//camera/reference-articles/point-cloud-structure-and-output-formats.html)
for a list of supported formats.

## Utilize

@@ -110,10 +110,10 @@ const auto data = pointCloud.copyData();

Tip:

1.  You can export Preset settings to YML from [Zivid
-    Studio](https://support.zivid.com/latest//getting-started/studio-guide.html)
+    Studio](https://support.zivid.com/en/latest//camera/getting-started/studio-guide.html)

2.
You can open and view the `Frame.zdf` file in [Zivid
-Studio](https://support.zivid.com/latest//getting-started/studio-guide.html).
+Studio](https://support.zivid.com/en/latest//camera/getting-started/studio-guide.html).

## Conclusion

This tutorial shows the most basic way to use the Zivid SDK to connect
diff --git a/source/Camera/InfoUtilOther/ZividBenchmark/ZividBenchmark.cpp b/source/Camera/InfoUtilOther/ZividBenchmark/ZividBenchmark.cpp
index e3a2d7d4..c30f06e0 100644
--- a/source/Camera/InfoUtilOther/ZividBenchmark/ZividBenchmark.cpp
+++ b/source/Camera/InfoUtilOther/ZividBenchmark/ZividBenchmark.cpp
@@ -921,6 +921,10 @@ namespace
        const std::vector<double> twoApertures{ 3.0, 3.0 };
        const auto settings2D3D = makeSettings(camera, twoApertures, twoExposureTimes, exposureTime, false, false);
        auto warmupFrame = camera.capture2D3D(settings2D3D);
+        if(!warmupFrame.frame2D().has_value())
+        {
+            throw std::runtime_error("Warmup frame does not contain 2D data");
+        }
        auto warmupFrame2D = warmupFrame.frame2D().value();

        copyDataTime(warmupFrame);
@@ -936,6 +940,10 @@ namespace
        for(size_t i = 0; i < numCopies; i++)
        {
            auto frame = camera.capture2D3D(settings2D3D);
+            if(!frame.frame2D().has_value())
+            {
+                throw std::runtime_error("Captured frame does not contain 2D data");
+            }
            auto frame2D = frame.frame2D().value();

            copyDataDurations[0].push_back(copyDataTime(frame));