Depth Anything 3 (mono_large)
C++ port (Windows, Linux, macOS; CUDA- and Metal-accelerated) of https://github.com/ByteDance-Seed/Depth-Anything-3.git.
Example Input & Outputs
| Inputs | Outputs |
|---|---|
| ![]() | ![]() |
Demo Code
1#include "blace_ai.h"
2#include <opencv2/opencv.hpp>
3
4// include the models you want to use
5#include "depth_anything_v3_v2_mono_large_v1_ALL_export_version_v26.h"
6
7using namespace blace;
8int main() {
9 workload_management::BlaceWorld blace;
10
11 // load image into op
12 auto exe_path = util::getPathToExe();
13 std::filesystem::path photo_path = exe_path / "butterfly.jpg";
14
15 auto img = CONSTRUCT_OP(ops::FromImageFileOp(photo_path.string()));
16
17 // construct model inference arguments
18 ml_core::InferenceArgsCollection infer_args;
19 infer_args.inference_args.backends = {
20 ml_core::TORCHSCRIPT_CUDA_FP16, ml_core::TORCHSCRIPT_MPS_FP16,
21 ml_core::TORCHSCRIPT_CUDA_FP32, ml_core::TORCHSCRIPT_MPS_FP32,
22 ml_core::ONNX_DML_FP32};
23
24 img = CONSTRUCT_OP(
25 ops::Interpolate2DOp(img, 700, 1288, ml_core::AREA, false, false));
26
27 // construct inference operation
28 auto infer_op = depth_anything_v3_v2_mono_large_v1_ALL_export_version_v26_run(
29 img, 0, infer_args, util::getPathToExe().string());
30
31 // normalize depth to zero-one range
32 auto result_depth = CONSTRUCT_OP(ops::NormalizeToZeroOneOP(infer_op));
33
34 auto out_file = exe_path / "depth_result.png";
35 result_depth =
36 CONSTRUCT_OP(ops::SaveImageOp(result_depth, out_file.string()));
37
38 // construct evaluator and evaluate to cv::Mat
39 computation_graph::GraphEvaluator evaluator(result_depth);
40
41 auto [return_code, cv_result] = evaluator.evaluateToRawMemory();
42
43 return 0;
44}
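The demo wires up a single graph for one image. Because every step is an op that can be reconstructed, the same pipeline can be looped over several inputs. The sketch below reuses only the calls from the demo above; the output naming scheme and the assumption that evaluating the graph triggers `SaveImageOp` to write the PNG are illustrative, not documented behavior.

```cpp
#include "blace_ai.h"
#include <filesystem>

// model header, as in the demo above
#include "depth_anything_v3_v2_mono_large_v1_ALL_export_version_v26.h"

using namespace blace;

int main() {
  workload_management::BlaceWorld blace;

  // same backend preferences as the demo (accelerated TorchScript only)
  ml_core::InferenceArgsCollection infer_args;
  infer_args.inference_args.backends = {
      ml_core::TORCHSCRIPT_CUDA_FP16, ml_core::TORCHSCRIPT_MPS_FP16,
      ml_core::TORCHSCRIPT_CUDA_FP32, ml_core::TORCHSCRIPT_MPS_FP32};

  auto exe_path = util::getPathToExe();
  for (const auto &entry : std::filesystem::directory_iterator(exe_path)) {
    if (entry.path().extension() != ".jpg")
      continue;

    // rebuild the graph per image, exactly as in the demo
    auto img = CONSTRUCT_OP(ops::FromImageFileOp(entry.path().string()));
    img = CONSTRUCT_OP(
        ops::Interpolate2DOp(img, 700, 1288, ml_core::AREA, false, false));
    auto depth = depth_anything_v3_v2_mono_large_v1_ALL_export_version_v26_run(
        img, 0, infer_args, exe_path.string());
    depth = CONSTRUCT_OP(ops::NormalizeToZeroOneOP(depth));

    // e.g. butterfly.jpg -> butterfly_depth.png (naming is illustrative)
    auto out_file = exe_path / (entry.path().stem().string() + "_depth.png");
    depth = CONSTRUCT_OP(ops::SaveImageOp(depth, out_file.string()));

    // evaluating the graph runs inference and, presumably, the save op
    computation_graph::GraphEvaluator evaluator(depth);
    auto [return_code, result] = evaluator.evaluateToRawMemory();
  }
  return 0;
}
```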
Tested on version v1.0.5 of the blace.ai SDK. It may also work on newer or older releases; check the blace.ai release notes for breaking changes.
Quickstart
- Download the blace.ai SDK and unzip it. In the bootstrap script `build_run_demos.ps1` (Windows) or `build_run_demos.sh` (Linux/macOS), set the `BLACE_AI_CMAKE_DIR` environment variable to the `cmake` folder inside the unzipped SDK, e.g. `export BLACE_AI_CMAKE_DIR="<unzip_folder>/package/cmake"`.
- Download the model payload(s) (`.bin` files) from below and place them in the same folder as the bootstrap scripts.
- Run the bootstrap script with `powershell build_run_demo.ps1` (Windows) or `sh build_run_demo.sh` (Linux and macOS). This will build and execute the demo.
Supported Backends
| Torchscript CPU | Torchscript CUDA FP16 * | Torchscript CUDA FP32 * | Torchscript MPS FP16 * | Torchscript MPS FP32 * | ONNX CPU FP32 | ONNX DirectML FP32 * |
|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ |
(*: Hardware Accelerated)
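The backend list set on `InferenceArgsCollection` in the demo is how one of the entries above is selected at runtime. A minimal sketch, assuming the list is tried in order until a supported backend is found; the accelerated enum values appear verbatim in the demo, while the CPU name is a guess:

```cpp
// Sketch: prefer the hardware-accelerated TorchScript backends marked above,
// with a hypothetical CPU fallback. TORCHSCRIPT_CPU_FP32 is an assumed enum
// name; check the blace.ai SDK headers for the real one.
ml_core::InferenceArgsCollection infer_args;
infer_args.inference_args.backends = {
    ml_core::TORCHSCRIPT_CUDA_FP16,  // NVIDIA GPU, half precision
    ml_core::TORCHSCRIPT_MPS_FP16,   // Apple Silicon via Metal, half precision
    ml_core::TORCHSCRIPT_CUDA_FP32,  // NVIDIA GPU, full precision
    ml_core::TORCHSCRIPT_MPS_FP32,   // Apple Silicon via Metal, full precision
    ml_core::TORCHSCRIPT_CPU_FP32};  // assumed CPU fallback
```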
Artifacts
| Torchscript Payload | Demo Project | Header |
|---|---|---|
|  |  |  |

