# CoordFill (default)

C++ port (Windows, Linux, macOS; CUDA and Metal accelerated) of https://github.com/NiFangBaAGe/CoordFill.git.
## Example Inputs & Outputs

| Inputs | Outputs |
|---|---|
| ![]() ![]() | ![]() |
## Demo Code
```cpp
#include "blace_ai.h"
#include <filesystem>
#include <opencv2/opencv.hpp>

// include the models you want to use
#include "coordfill_v7_default_v1_ALL_export_version_v26.h"

using namespace blace;

int main() {
  ::workload_management::BlaceWorld blace;

  // load image into op
  auto exe_path = util::getPathToExe();
  std::filesystem::path image_path = exe_path / "example.png";
  auto world_tensor_orig_img =
      CONSTRUCT_OP(ops::FromImageFileOp(image_path.string()));

  // load mask and reduce it to a single channel
  std::filesystem::path mask_path = exe_path / "example_mask.png";
  auto world_tensor_orig_mask =
      CONSTRUCT_OP(ops::FromImageFileOp(mask_path.string()));
  world_tensor_orig_mask =
      CONSTRUCT_OP(ops::ToColorOp(world_tensor_orig_mask, ml_core::R));

  // interpolate image to a size consumable by the model
  auto interpolated_img = CONSTRUCT_OP(ops::Interpolate2DOp(
      world_tensor_orig_img, 640, 640, ml_core::BICUBIC, false, true));

  // interpolate mask to a size consumable by the model
  auto interpolated_mask = CONSTRUCT_OP(ops::Interpolate2DOp(
      world_tensor_orig_mask, 640, 640, ml_core::BICUBIC, false, true));

  // construct model inference arguments
  ml_core::InferenceArgsCollection infer_args;
  infer_args.inference_args.backends = {
      ml_core::TORCHSCRIPT_CUDA_FP16, ml_core::TORCHSCRIPT_MPS_FP16,
      ml_core::TORCHSCRIPT_CUDA_FP32, ml_core::TORCHSCRIPT_MPS_FP32,
      ml_core::ONNX_DML_FP32, ml_core::TORCHSCRIPT_CPU_FP32};

  // construct inference operation
  auto infer_op = coordfill_v7_default_v1_ALL_export_version_v26_run(
      interpolated_img, interpolated_mask, 0, infer_args,
      util::getPathToExe().string());

  // write result to image file
  auto out_file = exe_path / "filled_image.png";
  infer_op = CONSTRUCT_OP(ops::SaveImageOp(infer_op, out_file.string()));

  // construct evaluator and evaluate
  computation_graph::GraphEvaluator evaluator(infer_op);
  auto eval_result = evaluator.evaluateToRawMemory();

  return 0;
}
```
Tested on version v1.0.3 of the blace.ai SDK. It may also work on newer or older releases (check the blace.ai release notes for breaking changes).
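The demo reduces the RGB mask to a single channel via `ops::ToColorOp(..., ml_core::R)` before resizing it. Assuming that op simply keeps the red channel of the mask (an assumption about blace.ai internals, not documented here), the reduction can be sketched in plain C++:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: keep only the red channel of an interleaved 8-bit RGB buffer,
// mimicking what a ToColorOp(..., ml_core::R) selection would conceptually do.
// Illustrative plain C++ only, not the blace.ai implementation.
std::vector<std::uint8_t> extractRedChannel(const std::vector<std::uint8_t>& rgb,
                                            int width, int height) {
  std::vector<std::uint8_t> r(static_cast<std::size_t>(width) * height);
  for (std::size_t px = 0; px < r.size(); ++px) {
    r[px] = rgb[px * 3];  // R is the first value of each RGB triple
  }
  return r;
}
```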
## Quickstart

- Download the blace.ai SDK and unzip it. In the bootstrap script `build_run_demos.ps1` (Windows) or `build_run_demos.sh` (Linux/macOS), set the `BLACE_AI_CMAKE_DIR` environment variable to the `cmake` folder inside the unzipped SDK, e.g. `export BLACE_AI_CMAKE_DIR="<unzip_folder>/package/cmake"`.
- Download the model payload(s) (`.bin` files) from below and place them in the same folder as the bootstrap scripts.
- Then run the bootstrap script with `powershell build_run_demos.ps1` (Windows) or `sh build_run_demos.sh` (Linux/macOS).

This will build and execute the demo.
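For orientation, the `BLACE_AI_CMAKE_DIR` variable is presumably consumed by the demo's CMake configuration. A minimal sketch of such a setup, with an assumed package name and link target (both hypothetical, check the SDK's actual CMake files):

```cmake
cmake_minimum_required(VERSION 3.18)
project(coordfill_demo CXX)

# assumption: BLACE_AI_CMAKE_DIR points at the SDK's cmake folder,
# which is made discoverable by adding it to CMAKE_PREFIX_PATH
list(APPEND CMAKE_PREFIX_PATH $ENV{BLACE_AI_CMAKE_DIR})

find_package(blace_ai REQUIRED)        # hypothetical package name
add_executable(coordfill_demo main.cpp)
target_link_libraries(coordfill_demo PRIVATE blace_ai)  # hypothetical target
```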
## Supported Backends
| Torchscript CPU | Torchscript CUDA FP16 * | Torchscript CUDA FP32 * | Torchscript MPS FP16 * | Torchscript MPS FP32 * | ONNX CPU FP32 | ONNX DirectML FP32 * |
|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ |
(*: Hardware Accelerated)
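The `backends` list in the demo appears to be ordered by preference, so the first backend usable on the host machine wins (a hedged reading of the demo, not a documented guarantee). That selection logic amounts to:

```cpp
#include <functional>
#include <optional>
#include <string>
#include <vector>

// Sketch of first-available-wins backend selection, assuming the list is a
// priority order as the demo suggests. Backend names are illustrative only.
std::optional<std::string> pickBackend(
    const std::vector<std::string>& preferred,
    const std::function<bool(const std::string&)>& isAvailable) {
  for (const auto& backend : preferred) {
    if (isAvailable(backend)) return backend;  // first supported entry wins
  }
  return std::nullopt;  // no usable backend on this host
}
```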
## Artifacts
| Torchscript Payload | Demo Project | Header |
|---|---|---|