# Retinexformer (default)
C++ port of https://github.com/caiyuanhao1998/Retinexformer.git for Windows, Linux, and macOS, with CUDA and Metal acceleration.
## Example Input & Outputs

| Inputs | Outputs |
|---|---|
| ![]() | ![]() |
## Demo Code
```cpp
#include <filesystem>

#include <opencv2/opencv.hpp>

#include "blace_ai.h"

// include the models you want to use
#include "retinexformer_v1_default_v1_ALL_export_version_v25.h"

using namespace blace;

int main() {
  ::workload_management::BlaceWorld blace;

  // load image into op
  auto exe_path = util::getPathToExe();
  std::filesystem::path photo_path = exe_path / "dark_kitchen.png";
  auto input_img = CONSTRUCT_OP(ops::FromImageFileOp(photo_path.string()));

  // construct model inference arguments
  ml_core::InferenceArgsCollection infer_args;
  infer_args.inference_args.backends = {
      ml_core::TORCHSCRIPT_CUDA_FP16, ml_core::TORCHSCRIPT_MPS_FP16,
      ml_core::TORCHSCRIPT_CUDA_FP32, ml_core::TORCHSCRIPT_MPS_FP32,
      ml_core::ONNX_DML_FP32,         ml_core::TORCHSCRIPT_CPU_FP32};

  // construct inference operation
  auto infer_op = retinexformer_v1_default_v1_ALL_export_version_v25_run(
      input_img, 0, infer_args, util::getPathToExe().string());

  // construct evaluator and evaluate to cv::Mat
  computation_graph::GraphEvaluator evaluator(infer_op);
  auto [return_code, cv_result] = evaluator.evaluateToCVMat();

  // scale to [0, 255] for writing to disk
  cv_result *= 255.;

  // save to disk and return
  auto out_file = exe_path / "illuminated_kitchen.png";
  cv::imwrite(out_file.string(), cv_result);

  return 0;
}
```
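The `evaluateToCVMat()` call returns the result as a float image, which the demo scales by 255 before writing. As an alternative, converting explicitly to 8-bit avoids relying on `cv::imwrite`'s handling of float data. A minimal sketch, assuming the evaluator's output is a float `cv::Mat` normalized to [0, 1] (which the demo's `*= 255.` suggests):

```cpp
#include <opencv2/opencv.hpp>

// Convert the evaluator's float output to an 8-bit image before saving.
// Assumption: the result is a float cv::Mat normalized to [0, 1], as the
// demo's `cv_result *= 255.` scaling implies.
cv::Mat to_8bit(const cv::Mat &float_result) {
  cv::Mat out_8u;
  float_result.convertTo(out_8u, CV_8U, 255.0); // scale and saturate-cast
  return out_8u;
}
```

`cv::Mat::convertTo` with `CV_8U` keeps the channel count and saturates values outside [0, 255], so slight over- or under-shoots in the model output are clipped rather than wrapped.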
Tested on version v0.9.96 of the blace.ai SDK. It might also work on newer or older releases (check the blace.ai release notes for breaking changes).
## Quickstart
- Download the blace.ai SDK and unzip it. In the bootstrap script `build_run_demos.ps1` (Windows) or `build_run_demos.sh` (Linux/macOS), set the `BLACE_AI_CMAKE_DIR` environment variable to the `cmake` folder inside the unzipped SDK, e.g. `export BLACE_AI_CMAKE_DIR="<unzip_folder>/package/cmake"`.
- Download the model payload(s) (`.bin` files) from below and place them in the same folder as the bootstrap scripts.
- Run the bootstrap script with `powershell build_run_demos.ps1` (Windows) or `sh build_run_demos.sh` (Linux and macOS). This will build and execute the demo.
## Supported Backends
| Torchscript CPU | Torchscript CUDA FP16 * | Torchscript CUDA FP32 * | Torchscript MPS FP16 * | Torchscript MPS FP32 * | ONNX CPU FP32 | ONNX DirectML FP32 * |
|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ |
(*: Hardware Accelerated)
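The backend list set on `infer_args` in the demo above doubles as a preference order: hardware-accelerated FP16 backends come first, with `TORCHSCRIPT_CPU_FP32` as the final fallback (treating the list as ordered is an assumption based on the demo's arrangement). A sketch of a CPU-only configuration for machines without CUDA or Metal:

```cpp
#include "blace_ai.h"

using namespace blace;

// Build inference arguments restricted to the CPU backend, e.g. for
// reproducible results on machines without CUDA or Metal support.
// Assumption: listing a single backend forces the runtime to use it.
ml_core::InferenceArgsCollection cpu_only_args() {
  ml_core::InferenceArgsCollection infer_args;
  infer_args.inference_args.backends = {ml_core::TORCHSCRIPT_CPU_FP32};
  return infer_args;
}
```

The same pattern applies for pinning any other supported backend from the table above.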
## Artifacts
| Torchscript Payload | Demo Project | Header |
|---|---|---|
| ![](https://img.shields.io/badge/...) ![](https://img.shields.io/badge/...) | ![](https://img.shields.io/badge/...) | ![](https://img.shields.io/badge/...) |