
LLMDet (default)

C++ (Windows, Linux, macOS / CUDA and Metal accelerated) port of https://github.com/iSEE-Laboratory/LLMDet.git.

Example Input & Outputs

Input: a photo of a street scene. Output: the same image with rectangles drawn around the detected cars.

Demo Code

#include "blace_ai.h"
#include <opencv2/opencv.hpp>

// include the models you want to use
#include "llmdet_v1_default_v1_ALL_export_version_v26.h"

using namespace blace;
int main() {
  workload_management::BlaceWorld blace;

  // load image into op
  auto exe_path = util::getPathToExe();
  std::filesystem::path photo_path = exe_path / "street.jpg";
  auto world_tensor_orig =
      CONSTRUCT_OP(ops::FromImageFileOp(photo_path.string()));

  // construct model inference arguments
  ml_core::InferenceArgsCollection infer_args;
  infer_args.inference_args.backends = {
      ml_core::TORCHSCRIPT_CUDA_FP16, ml_core::TORCHSCRIPT_MPS_FP16,
      ml_core::TORCHSCRIPT_CUDA_FP32, ml_core::TORCHSCRIPT_MPS_FP32,
      ml_core::ONNX_DML_FP32,         ml_core::TORCHSCRIPT_CPU_FP32};

  blace::ops::OpP text_op = CONSTRUCT_OP(blace::ops::FromTextOp("car"));
  blace::ops::OpP thres = CONSTRUCT_OP(blace::ops::FromFloatOp(0.25));
  blace::ops::OpP multiple = CONSTRUCT_OP(blace::ops::FromBoolOp(true));

  // construct inference operation, returns a (B,6) tensor with 6 elems per
  // detection: left, top, right, bottom, input width, input height
  auto bounding_boxes = llmdet_v1_default_v1_ALL_export_version_v26_run(
      world_tensor_orig, text_op, thres, multiple, 0, infer_args,
      util::getPathToExe().string());

  // remove width and height
  bounding_boxes = CONSTRUCT_OP(blace::ops::IndexOp(
      bounding_boxes,
      blace::ml_core::BlaceIndexVec{blace::ml_core::Slice(),
                                    blace::ml_core::Slice(0, 4)}));
  auto image_with_rectangles = CONSTRUCT_OP(blace::ops::DrawRectangles(
      world_tensor_orig, bounding_boxes, 50, 200, 50, 6));

  // write result to image file
  auto out_file = exe_path / "image_with_rectangles.png";
  image_with_rectangles =
      CONSTRUCT_OP(ops::SaveImageOp(image_with_rectangles, out_file.string()));

  // construct evaluator and evaluate
  computation_graph::GraphEvaluator evaluator(image_with_rectangles);
  auto eval_result = evaluator.evaluateToRawMemory();

  return 0;
}

Tested on version v1.0.5 of the blace.ai SDK. It may also work on newer or older releases (check the blace.ai release notes for breaking changes).

Quickstart

  1. Download the blace.ai SDK and unzip it. In the bootstrap script build_run_demos.ps1 (Windows) or build_run_demos.sh (Linux/macOS), set the BLACE_AI_CMAKE_DIR environment variable to the cmake folder inside the unzipped SDK, e.g. export BLACE_AI_CMAKE_DIR="<unzip_folder>/package/cmake".
  2. Download the model payload(s) (.bin files) from below and place them in the same folder as the bootstrap scripts.
  3. Run the bootstrap script with
    powershell build_run_demos.ps1 (Windows) or
    sh build_run_demos.sh (Linux and macOS).
    This builds and executes the demo.

Supported Backends

  Torchscript CPU
  Torchscript CUDA FP16 *
  Torchscript CUDA FP32 *
  Torchscript MPS FP16 *
  Torchscript MPS FP32 *
  ONNX CPU FP32
  ONNX DirectML FP32 *

(*: Hardware Accelerated)

Artifacts

  Torchscript Payload
  Demo Project
  Header

License