
External module "types/opencv/dnn"

Index

Type aliases

Backend

Backend: any

See Net::setPreferableBackend.

Target

Target: any

See Net::setPreferableTarget.

Variables

Const DNN_BACKEND_DEFAULT

DNN_BACKEND_DEFAULT: Backend

DNN_BACKEND_DEFAULT equals DNN_BACKEND_INFERENCE_ENGINE if OpenCV is built with Intel's Inference Engine library, or DNN_BACKEND_OPENCV otherwise.

Const DNN_BACKEND_HALIDE

DNN_BACKEND_HALIDE: Backend

Const DNN_BACKEND_INFERENCE_ENGINE

DNN_BACKEND_INFERENCE_ENGINE: Backend

Const DNN_BACKEND_OPENCV

DNN_BACKEND_OPENCV: Backend

Const DNN_BACKEND_VKCOM

DNN_BACKEND_VKCOM: Backend

Const DNN_TARGET_CPU

DNN_TARGET_CPU: Target

Const DNN_TARGET_FPGA

DNN_TARGET_FPGA: Target

Const DNN_TARGET_MYRIAD

DNN_TARGET_MYRIAD: Target

Const DNN_TARGET_OPENCL

DNN_TARGET_OPENCL: Target

Const DNN_TARGET_OPENCL_FP16

DNN_TARGET_OPENCL_FP16: Target

Const DNN_TARGET_VULKAN

DNN_TARGET_VULKAN: Target

Functions

NMSBoxes

  • NMSBoxes(bboxes: any, scores: any, score_threshold: any, nms_threshold: any, indices: any, eta?: any, top_k?: any): void
  • Parameters

    • bboxes: any

      a set of bounding boxes to apply NMS.

    • scores: any

      a set of corresponding confidences.

    • score_threshold: any

      a threshold used to filter boxes by score.

    • nms_threshold: any

      a threshold used in non maximum suppression.

    • indices: any

      the kept indices of bboxes after NMS.

    • Optional eta: any

      a coefficient in the adaptive threshold formula: $\text{nms\_threshold}_{i+1} = \eta \cdot \text{nms\_threshold}_i$.

    • Optional top_k: any

      if >0, keep at most top_k picked indices.

    Returns void

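
    Greedy non-maximum suppression itself is easy to illustrate. Below is a self-contained TypeScript sketch of the algorithm the parameters above describe; it is an illustration, not opencv.js's implementation, and the [x, y, width, height] box layout is an assumption (eta and top_k are omitted for brevity):

    ```typescript
    // Intersection-over-union of two axis-aligned [x, y, width, height] boxes.
    function iou(a: number[], b: number[]): number {
      const x1 = Math.max(a[0], b[0]);
      const y1 = Math.max(a[1], b[1]);
      const x2 = Math.min(a[0] + a[2], b[0] + b[2]);
      const y2 = Math.min(a[1] + a[3], b[1] + b[3]);
      const inter = Math.max(0, x2 - x1) * Math.max(0, y2 - y1);
      const union = a[2] * a[3] + b[2] * b[3] - inter;
      return union > 0 ? inter / union : 0;
    }

    // Greedy NMS: visit boxes in descending score order, keep a box only if
    // it overlaps every already-kept box by at most nmsThreshold.
    function nmsBoxes(
      bboxes: number[][],
      scores: number[],
      scoreThreshold: number,
      nmsThreshold: number,
    ): number[] {
      const order = bboxes
        .map((_, i) => i)
        .filter((i) => scores[i] > scoreThreshold) // drop low-confidence boxes
        .sort((i, j) => scores[j] - scores[i]);    // highest score first
      const keep: number[] = [];
      for (const i of order) {
        if (keep.every((k) => iou(bboxes[i], bboxes[k]) <= nmsThreshold)) {
          keep.push(i);
        }
      }
      return keep; // the kept indices of bboxes, as in the `indices` output
    }
    ```

    Lowering nmsThreshold makes suppression more aggressive; raising scoreThreshold filters out more boxes before suppression even starts.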

blobFromImage

  • blobFromImage(image: InputArray, scalefactor?: double, size?: any, mean?: any, swapRB?: bool, crop?: bool, ddepth?: int): Mat
  • blobFromImage(image: InputArray, blob: OutputArray, scalefactor?: double, size?: any, mean?: any, swapRB?: bool, crop?: bool, ddepth?: int): void
  • If crop is true, the input image is resized so that one side after resizing equals the corresponding dimension in size and the other side is equal or larger; then a crop from the center is performed. If crop is false, a direct resize without cropping and without preserving the aspect ratio is performed.

    A 4-dimensional [Mat] in NCHW dimensions order.

    Parameters

    • image: InputArray

      input image (with 1-, 3- or 4-channels).

    • Optional scalefactor: double

      multiplier for image values.

    • Optional size: any

      spatial size for output image

    • Optional mean: any

      scalar with mean values which are subtracted from channels. Values are intended to be in (mean-R, mean-G, mean-B) order if image has BGR ordering and swapRB is true.

    • Optional swapRB: bool

      flag indicating that the first and last channels of a 3-channel image should be swapped.

    • Optional crop: bool

      flag indicating whether the image will be cropped after resizing.

    • Optional ddepth: int

      Depth of output blob. Choose CV_32F or CV_8U.

    Returns Mat

  • This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

    Parameters

    • image: InputArray
    • blob: OutputArray
    • Optional scalefactor: double
    • Optional size: any
    • Optional mean: any
    • Optional swapRB: bool
    • Optional crop: bool
    • Optional ddepth: int

    Returns void
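
    After resizing, blobFromImage essentially subtracts the mean, scales the values, optionally swaps the R and B channels, and repacks interleaved HWC pixels into a planar NCHW blob. A minimal TypeScript sketch of that repacking step (resize/crop omitted; the function name, the interleaved 3-channel input layout, and the simplified order of operations are assumptions, not opencv.js internals):

    ```typescript
    // Pack one interleaved H*W*3 image into a 1x3xHxW planar Float32Array,
    // applying per-channel mean subtraction and a scale factor.
    // `mean` is indexed by the *output* channel, matching the note above that
    // mean values follow the post-swap channel order when swapRB is true.
    function packNCHW(
      pixels: number[],       // H*W*3 interleaved values (e.g. BGR bytes)
      h: number,
      w: number,
      scalefactor = 1.0,
      mean: number[] = [0, 0, 0],
      swapRB = false,
    ): Float32Array {
      const blob = new Float32Array(3 * h * w);
      for (let y = 0; y < h; y++) {
        for (let x = 0; x < w; x++) {
          for (let c = 0; c < 3; c++) {
            const src = swapRB ? 2 - c : c; // channel read from the image
            blob[c * h * w + y * w + x] =
              (pixels[(y * w + x) * 3 + src] - mean[c]) * scalefactor;
          }
        }
      }
      return blob; // batch dimension N=1 is implicit here
    }
    ```

    blobFromImages does the same for a batch, stacking one such 3xHxW block per input image along the first (N) dimension.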

blobFromImages

  • blobFromImages(images: InputArrayOfArrays, scalefactor?: double, size?: Size, mean?: any, swapRB?: bool, crop?: bool, ddepth?: int): Mat
  • blobFromImages(images: InputArrayOfArrays, blob: OutputArray, scalefactor?: double, size?: Size, mean?: any, swapRB?: bool, crop?: bool, ddepth?: int): void
  • If crop is true, each input image is resized so that one side after resizing equals the corresponding dimension in size and the other side is equal or larger; then a crop from the center is performed. If crop is false, a direct resize without cropping and without preserving the aspect ratio is performed.

    A 4-dimensional [Mat] in NCHW dimensions order.

    Parameters

    • images: InputArrayOfArrays

      input images (all with 1-, 3- or 4-channels).

    • Optional scalefactor: double

      multiplier for images values.

    • Optional size: Size

      spatial size for output image

    • Optional mean: any

      scalar with mean values which are subtracted from channels. Values are intended to be in (mean-R, mean-G, mean-B) order if image has BGR ordering and swapRB is true.

    • Optional swapRB: bool

      flag indicating that the first and last channels of a 3-channel image should be swapped.

    • Optional crop: bool

      flag indicating whether the images will be cropped after resizing.

    • Optional ddepth: int

      Depth of output blob. Choose CV_32F or CV_8U.

    Returns Mat

  • This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

    Parameters

    • images: InputArrayOfArrays
    • blob: OutputArray
    • Optional scalefactor: double
    • Optional size: Size
    • Optional mean: any
    • Optional swapRB: bool
    • Optional crop: bool
    • Optional ddepth: int

    Returns void

getAvailableBackends

  • getAvailableBackends(): any

getAvailableTargets

  • getAvailableTargets(be: Backend): any

imagesFromBlob

  • imagesFromBlob(blob_: any, images_: OutputArrayOfArrays): any
  • Parameters

    • blob_: any

      4 dimensional array (images, channels, height, width) in floating point precision (CV_32F) from which you would like to extract the images.

    • images_: OutputArrayOfArrays

      array of 2D Mat containing the images extracted from the blob in floating point precision (CV_32F). They are neither normalized nor mean-added. The number of returned images equals the first dimension of the blob (batch size). Every image has a number of channels equal to the second dimension of the blob (depth).

    Returns any
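
    Conceptually this is the reverse of the NCHW packing performed by blobFromImage(s). A self-contained sketch of that inverse, splitting a planar blob back into interleaved per-image pixel arrays (the function name and flat-array representation are illustrative, not opencv.js's API):

    ```typescript
    // Split a planar N x C x H x W blob into N interleaved H*W*C images.
    // The first blob dimension gives the batch size, the second the channel
    // count, mirroring the description above.
    function unpackBlob(
      blob: Float32Array,
      n: number,
      c: number,
      h: number,
      w: number,
    ): Float32Array[] {
      const plane = h * w; // elements per channel plane
      const images: Float32Array[] = [];
      for (let i = 0; i < n; i++) {
        const img = new Float32Array(plane * c);
        for (let ch = 0; ch < c; ch++) {
          for (let p = 0; p < plane; p++) {
            // pixel p of channel plane ch in image i -> interleaved slot
            img[p * c + ch] = blob[(i * c + ch) * plane + p];
          }
        }
        images.push(img);
      }
      return images;
    }
    ```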

readNet

  • readNet(model: any, config?: any, framework?: any): Net
  • readNet(framework: any, bufferModel: uchar, bufferConfig?: uchar): uchar

readNetFromCaffe

  • readNetFromCaffe(prototxt: any, caffeModel?: any): Net
  • readNetFromCaffe(bufferProto: uchar, bufferModel?: uchar): uchar
  • readNetFromCaffe(bufferProto: any, lenProto: size_t, bufferModel?: any, lenModel?: size_t): Net
  • [Net] object.

    Parameters

    • prototxt: any

      path to the .prototxt file with text description of the network architecture.

    • Optional caffeModel: any

      path to the .caffemodel file with learned network.

    Returns Net

  • [Net] object.

    Parameters

    • bufferProto: uchar

      buffer containing the content of the .prototxt file

    • Optional bufferModel: uchar

      buffer containing the content of the .caffemodel file

    Returns uchar

  • This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

    [Net] object.

    Parameters

    • bufferProto: any

      buffer containing the content of the .prototxt file

    • lenProto: size_t

      length of bufferProto

    • Optional bufferModel: any

      buffer containing the content of the .caffemodel file

    • Optional lenModel: size_t

      length of bufferModel

    Returns Net

readNetFromDarknet

  • readNetFromDarknet(cfgFile: any, darknetModel?: any): Net
  • readNetFromDarknet(bufferCfg: uchar, bufferModel?: uchar): uchar
  • readNetFromDarknet(bufferCfg: any, lenCfg: size_t, bufferModel?: any, lenModel?: size_t): Net
  • Network object that is ready to do forward; throws an exception in failure cases.

    [Net] object.

    Parameters

    • cfgFile: any

      path to the .cfg file with text description of the network architecture.

    • Optional darknetModel: any

      path to the .weights file with learned network.

    Returns Net

  • [Net] object.

    Parameters

    • bufferCfg: uchar

      A buffer containing the content of a .cfg file with text description of the network architecture.

    • Optional bufferModel: uchar

      A buffer containing the content of a .weights file with learned network.

    Returns uchar

  • [Net] object.

    Parameters

    • bufferCfg: any

      A buffer containing the content of a .cfg file with text description of the network architecture.

    • lenCfg: size_t

      Number of bytes to read from bufferCfg

    • Optional bufferModel: any

      A buffer containing the content of a .weights file with learned network.

    • Optional lenModel: size_t

      Number of bytes to read from bufferModel

    Returns Net

readNetFromModelOptimizer

  • readNetFromModelOptimizer(xml: any, bin: any): Net
  • [Net] object. Networks imported from Intel's Model Optimizer are launched in Intel's Inference Engine backend.

    Parameters

    • xml: any

      XML configuration file with network's topology.

    • bin: any

      Binary file with trained weights.

    Returns Net

readNetFromONNX

  • readNetFromONNX(onnxFile: any): Net
  • readNetFromONNX(buffer: any, sizeBuffer: size_t): Net
  • readNetFromONNX(buffer: uchar): uchar
  • Network object that is ready to do forward; throws an exception in failure cases.

    Parameters

    • onnxFile: any

      path to the .onnx file with text description of the network architecture.

    Returns Net

  • Network object that is ready to do forward; throws an exception in failure cases.

    Parameters

    • buffer: any

      memory address of the first byte of the buffer.

    • sizeBuffer: size_t

      size of the buffer.

    Returns Net

  • Network object that is ready to do forward; throws an exception in failure cases.

    Parameters

    • buffer: uchar

      in-memory buffer that stores the ONNX model bytes.

    Returns uchar

readNetFromTensorflow

  • readNetFromTensorflow(model: any, config?: any): Net
  • readNetFromTensorflow(bufferModel: uchar, bufferConfig?: uchar): uchar
  • readNetFromTensorflow(bufferModel: any, lenModel: size_t, bufferConfig?: any, lenConfig?: size_t): Net
  • [Net] object.

    Parameters

    • model: any

      path to the .pb file with binary protobuf description of the network architecture

    • Optional config: any

      path to the .pbtxt file that contains text graph definition in protobuf format. The resulting Net object is built from the text graph, using weights from the binary protobuf, which makes the result more flexible.

    Returns Net

  • [Net] object.

    Parameters

    • bufferModel: uchar

      buffer containing the content of the pb file

    • Optional bufferConfig: uchar

      buffer containing the content of the pbtxt file

    Returns uchar

  • This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

    Parameters

    • bufferModel: any

      buffer containing the content of the pb file

    • lenModel: size_t

      length of bufferModel

    • Optional bufferConfig: any

      buffer containing the content of the pbtxt file

    • Optional lenConfig: size_t

      length of bufferConfig

    Returns Net

readNetFromTorch

  • readNetFromTorch(model: any, isBinary?: bool, evaluate?: bool): Net
  • [Net] object.

    The ASCII mode of the Torch serializer is preferable, because binary mode makes extensive use of the C long type, which has a different bit-length on different systems. The file to be loaded must contain the serialized object of the network being imported. Try to eliminate custom objects from the serialized data to avoid import errors.

    List of supported layers (i.e. object instances derived from Torch nn.Module class):

    nn.Sequential, nn.Parallel, nn.Concat, nn.Linear, nn.SpatialConvolution, nn.SpatialMaxPooling, nn.SpatialAveragePooling, nn.ReLU, nn.TanH, nn.Sigmoid, nn.Reshape, nn.SoftMax, nn.LogSoftMax

    Also some equivalents of these classes from cunn, cudnn, and fbcunn may be successfully imported.

    Parameters

    • model: any

      path to the file, dumped from Torch by using torch.save() function.

    • Optional isBinary: bool

      specifies whether the network was serialized in ASCII mode or binary.

    • Optional evaluate: bool

      specifies the testing phase of the network. If true, it's similar to the evaluate() method in Torch.

    Returns Net

readTensorFromONNX

  • readTensorFromONNX(path: any): Mat

readTorchBlob

  • readTorchBlob(filename: any, isBinary?: bool): Mat
  • This function has the same limitations as [readNetFromTorch()].

    Parameters

    • filename: any
    • Optional isBinary: bool

    Returns Mat

shrinkCaffeModel

  • shrinkCaffeModel(src: any, dst: any, layersTypes?: any): void
  • A shrunk model has no original float32 weights, so it can't be used in the original Caffe framework anymore. However, the data layout follows NVIDIA's Caffe fork, so the resulting model may be used there.

    Parameters

    • src: any

      Path to the original model from the Caffe framework, containing single-precision floating-point weights (usually with .caffemodel extension).

    • dst: any

      Path to destination model with updated weights.

    • Optional layersTypes: any

      Set of layer types whose parameters will be converted. By default, only the weights of Convolutional and Fully-Connected layers are converted.

    Returns void
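
    The heart of such a shrink step is converting each float32 weight to IEEE 754 half precision. A minimal bit-level sketch of that conversion (round-toward-zero, subnormal results flushed to signed zero; this is an illustration of the format change, not OpenCV's implementation):

    ```typescript
    // Convert a JS number (narrowed through float32) to its 16-bit
    // IEEE 754 half-precision bit pattern. Truncates the low 13 mantissa
    // bits and flushes subnormal results to +/-0 for simplicity.
    function f32ToF16Bits(val: number): number {
      const f = new Float32Array([val]);
      const x = new Uint32Array(f.buffer)[0]; // raw float32 bits
      const sign = (x >>> 16) & 0x8000;
      const exp32 = (x >>> 23) & 0xff;
      const mant32 = x & 0x007fffff;
      if (exp32 === 0xff) return sign | 0x7c00 | (mant32 ? 0x200 : 0); // Inf / NaN
      const exp16 = exp32 - 127 + 15; // re-bias exponent: 8-bit -> 5-bit
      if (exp16 >= 0x1f) return sign | 0x7c00; // overflow -> +/-Inf
      if (exp16 <= 0) return sign;             // underflow -> +/-0
      return sign | (exp16 << 10) | (mant32 >>> 13);
    }
    ```

    Half precision halves the storage per weight but narrows the representable range to roughly +/-65504 and keeps only about 11 bits of significand, which is usually acceptable for inference-only weights.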

writeTextGraph

  • writeTextGraph(model: any, output: any): void
  • To reduce output file size, trained weights are not included.

    Parameters

    • model: any

      A path to binary network.

    • output: any

      A path to output text file to be created.

    Returns void

Generated using TypeDoc