# Distillanydepth (default)

A C++ port (Windows, Linux, MacOS; CUDA and Metal accelerated) of https://github.com/Westlake-AGI-Lab/Distill-Any-Depth.
## Example Input & Outputs

| Inputs | Outputs |
|---|---|
| *(example input image omitted)* | *(depth-map output image omitted)* |
## Demo Code
```cpp
#include "blace_ai.h"

#include <filesystem>
#include <opencv2/opencv.hpp>

// include the models you want to use
#include "DistillAnyDepth_v1_default_v1_ALL_export_version_v25.h"

using namespace blace;

int main() {
  workload_management::BlaceWorld blace;

  // load image into op
  auto exe_path = util::getPathToExe();
  std::filesystem::path photo_path = exe_path / "butterfly.jpg";
  auto img_op = CONSTRUCT_OP(ops::FromImageFileOp(photo_path.string()));

  // construct model inference arguments
  ml_core::InferenceArgsCollection infer_args;
  infer_args.inference_args.backends = {
      ml_core::TORCHSCRIPT_CUDA_FP16, ml_core::TORCHSCRIPT_MPS_FP16,
      ml_core::TORCHSCRIPT_CUDA_FP32, ml_core::TORCHSCRIPT_MPS_FP32,
      ml_core::ONNX_DML_FP32,         ml_core::TORCHSCRIPT_CPU_FP32};

  // construct inference operation
  auto infer_op = DistillAnyDepth_v1_default_v1_ALL_export_version_v25_run(
      img_op, 0, infer_args, util::getPathToExe().string());

  // normalize depth to zero-one range
  auto result_depth = CONSTRUCT_OP(ops::NormalizeToZeroOneOP(infer_op));

  // write result to image file
  auto out_file = exe_path / "depth_result.png";
  result_depth =
      CONSTRUCT_OP(ops::SaveImageOp(result_depth, out_file.string()));

  // construct evaluator and evaluate
  computation_graph::GraphEvaluator evaluator(result_depth);
  auto eval_result = evaluator.evaluateToRawMemory();

  return 0;
}
```
Tested on version v0.9.96 of the blace.ai SDK. It might also work on newer or older releases (check the blace.ai release notes for breaking changes).
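The demo's graph pattern extends naturally to multiple inputs. The sketch below is a minimal variation, not part of the official demo: it reuses only the ops and the model header shown above and wraps them in a hypothetical `process_image` helper that writes one depth map per `.jpg` file next to the executable.

```cpp
#include "blace_ai.h"

#include <filesystem>

#include "DistillAnyDepth_v1_default_v1_ALL_export_version_v25.h"

using namespace blace;

// Hypothetical helper (not part of the SDK): builds and evaluates the same
// graph as the demo for a single image and writes "<name>_depth.png" next
// to the input file.
void process_image(const std::filesystem::path &photo_path,
                   const ml_core::InferenceArgsCollection &infer_args) {
  auto img_op = CONSTRUCT_OP(ops::FromImageFileOp(photo_path.string()));
  auto infer_op = DistillAnyDepth_v1_default_v1_ALL_export_version_v25_run(
      img_op, 0, infer_args, util::getPathToExe().string());
  auto depth = CONSTRUCT_OP(ops::NormalizeToZeroOneOP(infer_op));
  auto out_file = photo_path.parent_path() /
                  (photo_path.stem().string() + "_depth.png");
  depth = CONSTRUCT_OP(ops::SaveImageOp(depth, out_file.string()));
  computation_graph::GraphEvaluator evaluator(depth);
  evaluator.evaluateToRawMemory();
}

int main() {
  workload_management::BlaceWorld blace;

  // CPU-only backend for simplicity; see the demo above for the full
  // accelerated backend list.
  ml_core::InferenceArgsCollection infer_args;
  infer_args.inference_args.backends = {ml_core::TORCHSCRIPT_CPU_FP32};

  // process every .jpg file next to the executable
  for (auto &entry :
       std::filesystem::directory_iterator(util::getPathToExe())) {
    if (entry.path().extension() == ".jpg")
      process_image(entry.path(), infer_args);
  }
  return 0;
}
```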
## Quickstart
- Download the blace.ai SDK and unzip it. In the bootstrap script `build_run_demo.ps1` (Windows) or `build_run_demo.sh` (Linux/MacOS), set the `BLACE_AI_CMAKE_DIR` environment variable to the `cmake` folder inside the unzipped SDK, e.g. `export BLACE_AI_CMAKE_DIR="<unzip_folder>/package/cmake"`.
- Download the model payload(s) (`.bin` files) from below and place them in the same folder as the bootstrap scripts.
- Run the bootstrap script with `powershell build_run_demo.ps1` (Windows) or `sh build_run_demo.sh` (Linux and MacOS). This will build and execute the demo.
## Supported Backends
| Torchscript CPU | Torchscript CUDA FP16 * | Torchscript CUDA FP32 * | Torchscript MPS FP16 * | Torchscript MPS FP32 * | ONNX CPU FP32 | ONNX DirectML FP32 * |
|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ |
(*: Hardware Accelerated)
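The `backends` vector in the demo reads like an ordered preference list (accelerated FP16 backends first, `TORCHSCRIPT_CPU_FP32` as the final fallback), though the source does not spell out the fallback semantics. If FP16 precision is a concern, a minimal sketch, assuming the same `InferenceArgsCollection` type and enum values as in the demo, is to restrict the list to FP32 backends while keeping acceleration where available:

```cpp
// Prefer accelerated FP32 backends, fall back to CPU FP32. Enum values are
// taken from the demo code above.
blace::ml_core::InferenceArgsCollection infer_args;
infer_args.inference_args.backends = {blace::ml_core::TORCHSCRIPT_CUDA_FP32,
                                      blace::ml_core::TORCHSCRIPT_MPS_FP32,
                                      blace::ml_core::TORCHSCRIPT_CPU_FP32};
```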
## Artifacts

| Torchscript Payload | Demo Project | Header |
|---|---|---|
| *(download link omitted)* | *(download link omitted)* | *(download link omitted)* |