
External module "types/opencv/imgproc_filter"

Index

Type aliases

MorphShapes

MorphShapes: any

MorphTypes

MorphTypes: any

SpecialFilter

SpecialFilter: any

Variables

Const FILTER_SCHARR

FILTER_SCHARR: SpecialFilter

Const MORPH_BLACKHAT

MORPH_BLACKHAT: MorphTypes

"black hat" \\[\\texttt{dst} = \\mathrm{blackhat} ( \\texttt{src} , \\texttt{element} )= \\mathrm{close} ( \\texttt{src} , \\texttt{element} )- \\texttt{src}\\]

Const MORPH_CLOSE

MORPH_CLOSE: MorphTypes

a closing operation \\[\\texttt{dst} = \\mathrm{close} ( \\texttt{src} , \\texttt{element} )= \\mathrm{erode} ( \\mathrm{dilate} ( \\texttt{src} , \\texttt{element} ))\\]

Const MORPH_CROSS

MORPH_CROSS: MorphShapes

a cross-shaped structuring element: \\[E_{ij} = \\fork{1}{if i=\\texttt{anchor.y} or j=\\texttt{anchor.x}}{0}{otherwise}\\]

Const MORPH_DILATE

MORPH_DILATE: MorphTypes

Const MORPH_ELLIPSE

MORPH_ELLIPSE: MorphShapes

an elliptic structuring element, that is, a filled ellipse inscribed into the rectangle Rect(0, 0, esize.width, esize.height)

Const MORPH_ERODE

MORPH_ERODE: MorphTypes

Const MORPH_GRADIENT

MORPH_GRADIENT: MorphTypes

a morphological gradient \\[\\texttt{dst} = \\mathrm{morph\\_grad} ( \\texttt{src} , \\texttt{element} )= \\mathrm{dilate} ( \\texttt{src} , \\texttt{element} )- \\mathrm{erode} ( \\texttt{src} , \\texttt{element} )\\]

Const MORPH_HITMISS

MORPH_HITMISS: MorphTypes

"hit or miss" .- Only supported for CV_8UC1 binary images. A tutorial can be found in the documentation

Const MORPH_OPEN

MORPH_OPEN: MorphTypes

an opening operation \\[\\texttt{dst} = \\mathrm{open} ( \\texttt{src} , \\texttt{element} )= \\mathrm{dilate} ( \\mathrm{erode} ( \\texttt{src} , \\texttt{element} ))\\]

Const MORPH_RECT

MORPH_RECT: MorphShapes

Const MORPH_TOPHAT

MORPH_TOPHAT: MorphTypes

"top hat" `\[\texttt{dst} = \mathrm{tophat} ( \texttt{src} , \texttt{element} )= \texttt{src}

  • \mathrm{open} ( \texttt{src} , \texttt{element} )\]`

Functions

GaussianBlur

  • GaussianBlur(src: InputArray, dst: OutputArray, ksize: Size, sigmaX: double, sigmaY?: double, borderType?: int): void
  • The function convolves the source image with the specified Gaussian kernel. In-place filtering is supported.

    [sepFilter2D], [filter2D], [blur], [boxFilter], [bilateralFilter], [medianBlur]

    Parameters

    • src: InputArray

      input image; the image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    • dst: OutputArray

      output image of the same size and type as src.

    • ksize: Size

      Gaussian kernel size. ksize.width and ksize.height can differ but they both must be positive and odd. Or, they can be zeros and then they are computed from sigma.

    • sigmaX: double

      Gaussian kernel standard deviation in X direction.

    • Optional sigmaY: double

      Gaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to sigmaX; if both sigmas are zeros, they are computed from ksize.width and ksize.height, respectively (see getGaussianKernel for details); to fully control the result regardless of possible future modification of this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY.

    • Optional borderType: int

      pixel extrapolation method, see BorderTypes

    Returns void
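
    Example: a minimal usage sketch, not part of the generated reference. It assumes an OpenCV.js-style runtime where these bindings are exposed on a global cv object (for example, opencv.js already loaded in the page) and hypothetical canvas ids 'inputCanvas' and 'outputCanvas':

    // Blur the contents of a canvas with a 5x5 Gaussian kernel.
    const src = cv.imread('inputCanvas');          // RGBA cv.Mat read from a <canvas>
    const dst = new cv.Mat();
    const ksize = new cv.Size(5, 5);               // width and height must be positive and odd
    cv.GaussianBlur(src, dst, ksize, 0, 0, cv.BORDER_DEFAULT); // zero sigmas: computed from ksize
    cv.imshow('outputCanvas', dst);
    src.delete(); dst.delete();                    // Emscripten Mats must be freed manually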

Laplacian

  • Laplacian(src: InputArray, dst: OutputArray, ddepth: int, ksize?: int, scale?: double, delta?: double, borderType?: int): void
  • The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator:

    \\[\\texttt{dst} = \\Delta \\texttt{src} = \\frac{\\partial^2 \\texttt{src}}{\\partial x^2} + \\frac{\\partial^2 \\texttt{src}}{\\partial y^2}\\]

    This is done when ksize > 1. When ksize == 1, the Laplacian is computed by filtering the image with the following $3 \\times 3$ aperture:

    \\[\\vecthreethree {0}{1}{0}{1}{-4}{1}{0}{1}{0}\\]

    [Sobel], [Scharr]

    Parameters

    • src: InputArray

      Source image.

    • dst: OutputArray

      Destination image of the same size and the same number of channels as src .

    • ddepth: int

      Desired depth of the destination image.

    • Optional ksize: int

      Aperture size used to compute the second-derivative filters. See getDerivKernels for details. The size must be positive and odd.

    • Optional scale: double

      Optional scale factor for the computed Laplacian values. By default, no scaling is applied. See getDerivKernels for details.

    • Optional delta: double

      Optional delta value that is added to the results prior to storing them in dst .

    • Optional borderType: int

      Pixel extrapolation method, see BorderTypes

    Returns void
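
    Example (same assumptions as the GaussianBlur sketch above); converting to grayscale keeps the second-derivative response easy to inspect:

    const src = cv.imread('inputCanvas');
    const dst = new cv.Mat();
    cv.cvtColor(src, src, cv.COLOR_RGBA2GRAY, 0);                  // single-channel input
    cv.Laplacian(src, dst, cv.CV_8U, 1, 1, 0, cv.BORDER_DEFAULT);  // ksize=1 uses the 3x3 aperture above
    cv.imshow('outputCanvas', dst);
    src.delete(); dst.delete();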

Scharr

  • Scharr(src: InputArray, dst: OutputArray, ddepth: int, dx: int, dy: int, scale?: double, delta?: double, borderType?: int): void
  • The function computes the first x- or y- spatial image derivative using the Scharr operator. The call

    \\[\\texttt{Scharr(src, dst, ddepth, dx, dy, scale, delta, borderType)}\\]

    is equivalent to

    \\[\\texttt{Sobel(src, dst, ddepth, dx, dy, FILTER_SCHARR, scale, delta, borderType)} .\\]

    [cartToPolar]

    Parameters

    • src: InputArray

      input image.

    • dst: OutputArray

      output image of the same size and the same number of channels as src.

    • ddepth: int

      output image depth, see combinations

    • dx: int

      order of the derivative x.

    • dy: int

      order of the derivative y.

    • Optional scale: double

      optional scale factor for the computed derivative values; by default, no scaling is applied (see getDerivKernels for details).

    • Optional delta: double

      optional delta value that is added to the results prior to storing them in dst.

    • Optional borderType: int

      pixel extrapolation method, see BorderTypes

    Returns void
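
    Example (same assumptions as above), computing the first x-derivative with the 3x3 Scharr kernel:

    const src = cv.imread('inputCanvas');
    const dst = new cv.Mat();
    cv.cvtColor(src, src, cv.COLOR_RGBA2GRAY, 0);
    cv.Scharr(src, dst, cv.CV_8U, 1, 0, 1, 0, cv.BORDER_DEFAULT);  // dx=1, dy=0
    cv.imshow('outputCanvas', dst);
    src.delete(); dst.delete();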

Sobel

  • Sobel(src: InputArray, dst: OutputArray, ddepth: int, dx: int, dy: int, ksize?: int, scale?: double, delta?: double, borderType?: int): void
  • In all cases except one, the $\\texttt{ksize} \\times \\texttt{ksize}$ separable kernel is used to calculate the derivative. When $\\texttt{ksize = 1}$, the $3 \\times 1$ or $1 \\times 3$ kernel is used (that is, no Gaussian smoothing is done). ksize = 1 can only be used for the first or the second x- or y- derivatives.

    There is also the special value ksize = [FILTER_SCHARR] (-1) that corresponds to the $3\\times3$ Scharr filter that may give more accurate results than the $3\\times3$ Sobel. The Scharr aperture is

    \\[\\vecthreethree{-3}{0}{3}{-10}{0}{10}{-3}{0}{3}\\]

    for the x-derivative, or transposed for the y-derivative.

    The function calculates an image derivative by convolving the image with the appropriate kernel:

    \\[\\texttt{dst} = \\frac{\\partial^{xorder+yorder} \\texttt{src}}{\\partial x^{xorder} \\partial y^{yorder}}\\]

    The Sobel operators combine Gaussian smoothing and differentiation, so the result is more or less resistant to the noise. Most often, the function is called with ( xorder = 1, yorder = 0, ksize = 3) or ( xorder = 0, yorder = 1, ksize = 3) to calculate the first x- or y- image derivative. The first case corresponds to a kernel of:

    \\[\\vecthreethree{-1}{0}{1}{-2}{0}{2}{-1}{0}{1}\\]

    The second case corresponds to a kernel of:

    \\[\\vecthreethree{-1}{-2}{-1}{0}{0}{0}{1}{2}{1}\\]

    [Scharr], [Laplacian], [sepFilter2D], [filter2D], [GaussianBlur], [cartToPolar]

    Parameters

    • src: InputArray

      input image.

    • dst: OutputArray

      output image of the same size and the same number of channels as src .

    • ddepth: int

      output image depth, see combinations; in the case of 8-bit input images it will result in truncated derivatives.

    • dx: int

      order of the derivative x.

    • dy: int

      order of the derivative y.

    • Optional ksize: int

      size of the extended Sobel kernel; it must be 1, 3, 5, or 7.

    • Optional scale: double

      optional scale factor for the computed derivative values; by default, no scaling is applied (see getDerivKernels for details).

    • Optional delta: double

      optional delta value that is added to the results prior to storing them in dst.

    • Optional borderType: int

      pixel extrapolation method, see BorderTypes

    Returns void
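
    Example (same assumptions as above), computing the first x- and y-derivatives with a 3x3 kernel:

    const src = cv.imread('inputCanvas');
    const gradX = new cv.Mat();
    const gradY = new cv.Mat();
    cv.cvtColor(src, src, cv.COLOR_RGBA2GRAY, 0);
    cv.Sobel(src, gradX, cv.CV_8U, 1, 0, 3, 1, 0, cv.BORDER_DEFAULT);
    cv.Sobel(src, gradY, cv.CV_8U, 0, 1, 3, 1, 0, cv.BORDER_DEFAULT);
    cv.imshow('outputCanvas', gradX);              // display the x-derivative
    src.delete(); gradX.delete(); gradY.delete();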

bilateralFilter

  • bilateralFilter(src: InputArray, dst: OutputArray, d: int, sigmaColor: double, sigmaSpace: double, borderType?: int): void
  • The function applies bilateral filtering to the input image. bilateralFilter can reduce unwanted noise very well while keeping edges fairly sharp. However, it is very slow compared to most filters.

    Sigma values: For simplicity, you can set the 2 sigma values to be the same. If they are small (< 10), the filter will not have much effect, whereas if they are large (> 150), they will have a very strong effect, making the image look "cartoonish".

    Filter size: Large filters (d > 5) are very slow, so it is recommended to use d=5 for real-time applications, and perhaps d=9 for offline applications that need heavy noise filtering.

    This filter does not work in place.

    Parameters

    • src: InputArray

      Source 8-bit or floating-point, 1-channel or 3-channel image.

    • dst: OutputArray

      Destination image of the same size and type as src .

    • d: int

      Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace.

    • sigmaColor: double

      Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.

    • sigmaSpace: double

      Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace.

    • Optional borderType: int

      border mode used to extrapolate pixels outside of the image, see BorderTypes

    Returns void
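
    Example (same assumptions as above); the RGBA canvas image is first converted to 3-channel RGB because the filter accepts only 1- or 3-channel input:

    const src = cv.imread('inputCanvas');
    const dst = new cv.Mat();
    cv.cvtColor(src, src, cv.COLOR_RGBA2RGB, 0);   // drop the alpha channel
    cv.bilateralFilter(src, dst, 9, 75, 75, cv.BORDER_DEFAULT);
    cv.imshow('outputCanvas', dst);
    src.delete(); dst.delete();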

blur

  • blur(src: InputArray, dst: OutputArray, ksize: Size, anchor?: Point, borderType?: int): void
  • The function smooths an image using the kernel:

    \\[\\texttt{K} = \\frac{1}{\\texttt{ksize.width*ksize.height}} \\begin{bmatrix} 1 & 1 & 1 & \\cdots & 1 & 1 \\\\ 1 & 1 & 1 & \\cdots & 1 & 1 \\\\ \\hdotsfor{6} \\\\ 1 & 1 & 1 & \\cdots & 1 & 1 \\\\ \\end{bmatrix}\\]

    The call blur(src, dst, ksize, anchor, borderType) is equivalent to boxFilter(src, dst, src.type(), anchor, true, borderType).

    [boxFilter], [bilateralFilter], [GaussianBlur], [medianBlur]

    Parameters

    • src: InputArray

      input image; it can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    • dst: OutputArray

      output image of the same size and type as src.

    • ksize: Size

      blurring kernel size.

    • Optional anchor: Point

      anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.

    • Optional borderType: int

      border mode used to extrapolate pixels outside of the image, see BorderTypes

    Returns void
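
    Example (same assumptions as above), a 5x5 normalized box blur:

    const src = cv.imread('inputCanvas');
    const dst = new cv.Mat();
    cv.blur(src, dst, new cv.Size(5, 5), new cv.Point(-1, -1), cv.BORDER_DEFAULT);
    cv.imshow('outputCanvas', dst);
    src.delete(); dst.delete();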

boxFilter

  • boxFilter(src: InputArray, dst: OutputArray, ddepth: int, ksize: Size, anchor?: Point, normalize?: bool, borderType?: int): void
  • The function smooths an image using the kernel:

    \\[\\texttt{K} = \\alpha \\begin{bmatrix} 1 & 1 & 1 & \\cdots & 1 & 1 \\\\ 1 & 1 & 1 & \\cdots & 1 & 1 \\\\ \\hdotsfor{6} \\\\ 1 & 1 & 1 & \\cdots & 1 & 1 \\end{bmatrix}\\]

    where

    \\[\\alpha = \\fork{\\frac{1}{\\texttt{ksize.width*ksize.height}}}{when \\texttt{normalize=true}}{1}{otherwise}\\]

    Unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use [integral].

    [blur], [bilateralFilter], [GaussianBlur], [medianBlur], [integral]

    Parameters

    • src: InputArray

      input image.

    • dst: OutputArray

      output image of the same size and type as src.

    • ddepth: int

      the output image depth (-1 to use src.depth()).

    • ksize: Size

      blurring kernel size.

    • Optional anchor: Point

      anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.

    • Optional normalize: bool

      flag, specifying whether the kernel is normalized by its area or not.

    • Optional borderType: int

      border mode used to extrapolate pixels outside of the image, see BorderTypes

    Returns void
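
    Example (same assumptions as above); with normalize=true this is equivalent to blur with the same ksize:

    const src = cv.imread('inputCanvas');
    const dst = new cv.Mat();
    cv.boxFilter(src, dst, -1, new cv.Size(5, 5), new cv.Point(-1, -1), true, cv.BORDER_DEFAULT);
    cv.imshow('outputCanvas', dst);
    src.delete(); dst.delete();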

buildPyramid

  • buildPyramid(src: InputArray, dst: OutputArrayOfArrays, maxlevel: int, borderType?: int): void
  • The function constructs a vector of images and builds the Gaussian pyramid by recursively applying pyrDown to the previously built pyramid layers, starting from dst[0]==src.

    Parameters

    • src: InputArray

      Source image. Check pyrDown for the list of supported types.

    • dst: OutputArrayOfArrays

      Destination vector of maxlevel+1 images of the same type as src. dst[0] will be the same as src. dst[1] is the next pyramid layer, a smoothed and down-sized src, and so on.

    • maxlevel: int

      0-based index of the last (the smallest) pyramid layer. It must be non-negative.

    • Optional borderType: int

      Pixel extrapolation method, see BorderTypes (BORDER_CONSTANT isn't supported)

    Returns void

dilate

  • dilate(src: InputArray, dst: OutputArray, kernel: InputArray, anchor?: Point, iterations?: int, borderType?: int, borderValue?: any): void
  • The function dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken: \\[\\texttt{dst} (x,y) = \\max _{(x',y'): \\, \\texttt{element} (x',y') \\ne0 } \\texttt{src} (x+x',y+y')\\]

    The function supports the in-place mode. Dilation can be applied several ( iterations ) times. In case of multi-channel images, each channel is processed independently.

    [erode], [morphologyEx], [getStructuringElement]

    Parameters

    • src: InputArray

      input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    • dst: OutputArray

      output image of the same size and type as src.

    • kernel: InputArray

      structuring element used for dilation; if element=Mat(), a 3 x 3 rectangular structuring element is used. Kernel can be created using getStructuringElement.

    • Optional anchor: Point

      position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.

    • Optional iterations: int

      number of times dilation is applied.

    • Optional borderType: int

      pixel extrapolation method, see BorderTypes

    • Optional borderValue: any

      border value in case of a constant border

    Returns void
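
    Example (same assumptions as above), dilating with a 5x5 elliptic structuring element:

    const src = cv.imread('inputCanvas');
    const dst = new cv.Mat();
    const kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, new cv.Size(5, 5));
    cv.dilate(src, dst, kernel, new cv.Point(-1, -1), 1,
              cv.BORDER_CONSTANT, cv.morphologyDefaultBorderValue());
    cv.imshow('outputCanvas', dst);
    src.delete(); dst.delete(); kernel.delete();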

erode

  • erode(src: InputArray, dst: OutputArray, kernel: InputArray, anchor?: Point, iterations?: int, borderType?: int, borderValue?: any): void
  • The function erodes the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the minimum is taken:

    \\[\\texttt{dst} (x,y) = \\min _{(x',y'): \\, \\texttt{element} (x',y') \\ne0 } \\texttt{src} (x+x',y+y')\\]

    The function supports the in-place mode. Erosion can be applied several ( iterations ) times. In case of multi-channel images, each channel is processed independently.

    [dilate], [morphologyEx], [getStructuringElement]

    Parameters

    • src: InputArray

      input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    • dst: OutputArray

      output image of the same size and type as src.

    • kernel: InputArray

      structuring element used for erosion; if element=Mat(), a 3 x 3 rectangular structuring element is used. Kernel can be created using getStructuringElement.

    • Optional anchor: Point

      position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.

    • Optional iterations: int

      number of times erosion is applied.

    • Optional borderType: int

      pixel extrapolation method, see BorderTypes

    • Optional borderValue: any

      border value in case of a constant border

    Returns void
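
    Example (same assumptions as above), applying two erosion passes with a 3x3 rectangular element:

    const src = cv.imread('inputCanvas');
    const dst = new cv.Mat();
    const kernel = cv.getStructuringElement(cv.MORPH_RECT, new cv.Size(3, 3));
    cv.erode(src, dst, kernel, new cv.Point(-1, -1), 2,    // iterations = 2
             cv.BORDER_CONSTANT, cv.morphologyDefaultBorderValue());
    cv.imshow('outputCanvas', dst);
    src.delete(); dst.delete(); kernel.delete();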

filter2D

  • filter2D(src: InputArray, dst: OutputArray, ddepth: int, kernel: InputArray, anchor?: Point, delta?: double, borderType?: int): void
  • The function applies an arbitrary linear filter to an image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values according to the specified border mode.

    The function does actually compute correlation, not the convolution:

    \\[\\texttt{dst} (x,y) = \\sum _{ \\stackrel{0\\leq x' < \\texttt{kernel.cols},}{0\\leq y' < \\texttt{kernel.rows}} } \\texttt{kernel} (x',y')* \\texttt{src} (x+x'- \\texttt{anchor.x} ,y+y'- \\texttt{anchor.y} )\\]

    That is, the kernel is not mirrored around the anchor point. If you need a real convolution, flip the kernel using [flip] and set the new anchor to (kernel.cols - anchor.x - 1, kernel.rows - anchor.y - 1).

    The function uses the DFT-based algorithm in case of sufficiently large kernels (~11 x 11 or larger) and the direct algorithm for small kernels.

    [sepFilter2D], [dft], [matchTemplate]

    Parameters

    • src: InputArray

      input image.

    • dst: OutputArray

      output image of the same size and the same number of channels as src.

    • ddepth: int

      desired depth of the destination image, see combinations

    • kernel: InputArray

      convolution kernel (or rather a correlation kernel), a single-channel floating point matrix; if you want to apply different kernels to different channels, split the image into separate color planes using split and process them individually.

    • Optional anchor: Point

      anchor of the kernel that indicates the relative position of a filtered point within the kernel; the anchor should lie within the kernel; default value (-1,-1) means that the anchor is at the kernel center.

    • Optional delta: double

      optional value added to the filtered pixels before storing them in dst.

    • Optional borderType: int

      pixel extrapolation method, see BorderTypes

    Returns void
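
    Example (same assumptions as above), sharpening with a hand-built 3x3 correlation kernel:

    const src = cv.imread('inputCanvas');
    const dst = new cv.Mat();
    const kernel = cv.matFromArray(3, 3, cv.CV_32FC1,
        [0, -1, 0, -1, 5, -1, 0, -1, 0]);          // center-weighted sharpening kernel
    cv.filter2D(src, dst, -1, kernel, new cv.Point(-1, -1), 0, cv.BORDER_DEFAULT);
    cv.imshow('outputCanvas', dst);
    src.delete(); dst.delete(); kernel.delete();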

getDerivKernels

  • getDerivKernels(kx: OutputArray, ky: OutputArray, dx: int, dy: int, ksize: int, normalize?: bool, ktype?: int): void
  • The function computes and returns the filter coefficients for spatial image derivatives. When ksize=FILTER_SCHARR, the Scharr $3 \\times 3$ kernels are generated (see [Scharr]). Otherwise, Sobel kernels are generated (see [Sobel]). The filters are normally passed to [sepFilter2D].

    Parameters

    • kx: OutputArray

      Output matrix of row filter coefficients. It has the type ktype .

    • ky: OutputArray

      Output matrix of column filter coefficients. It has the type ktype .

    • dx: int

      Derivative order in respect of x.

    • dy: int

      Derivative order in respect of y.

    • ksize: int

      Aperture size. It can be FILTER_SCHARR, 1, 3, 5, or 7.

    • Optional normalize: bool

      Flag indicating whether to normalize (scale down) the filter coefficients or not. Theoretically, the coefficients should have the denominator $=2^{ksize*2-dx-dy-2}$. If you are going to filter floating-point images, you are likely to use the normalized kernels. But if you compute derivatives of an 8-bit image, store the results in a 16-bit image, and wish to preserve all the fractional bits, you may want to set normalize=false .

    • Optional ktype: int

      Type of filter coefficients. It can be CV_32F or CV_64F .

    Returns void

getGaborKernel

  • getGaborKernel(ksize: Size, sigma: double, theta: double, lambd: double, gamma: double, psi?: double, ktype?: int): Mat
  • For more details about Gabor filter equations and parameters, see the Gabor filter literature.

    Parameters

    • ksize: Size

      Size of the filter returned.

    • sigma: double

      Standard deviation of the Gaussian envelope.

    • theta: double

      Orientation of the normal to the parallel stripes of a Gabor function.

    • lambd: double

      Wavelength of the sinusoidal factor.

    • gamma: double

      Spatial aspect ratio.

    • Optional psi: double

      Phase offset.

    • Optional ktype: int

      Type of filter coefficients. It can be CV_32F or CV_64F .

    Returns Mat

getGaussianKernel

  • getGaussianKernel(ksize: int, sigma: double, ktype?: int): Mat
  • The function computes and returns the $\\texttt{ksize} \\times 1$ matrix of Gaussian filter coefficients:

    \\[G_i= \\alpha *e^{-(i-( \\texttt{ksize} -1)/2)^2/(2* \\texttt{sigma}^2)},\\]

    where $i=0..\\texttt{ksize}-1$ and $\\alpha$ is the scale factor chosen so that $\\sum_i G_i=1$.

    Two such generated kernels can be passed to sepFilter2D. Those functions automatically recognize smoothing kernels (a symmetrical kernel with sum of weights equal to 1) and handle them accordingly. You may also use the higher-level GaussianBlur.

    [sepFilter2D], [getDerivKernels], [getStructuringElement], [GaussianBlur]

    Parameters

    • ksize: int

      Aperture size. It should be odd ( $\\texttt{ksize} \\mod 2 = 1$ ) and positive.

    • sigma: double

      Gaussian standard deviation. If it is non-positive, it is computed from ksize as sigma = 0.3*((ksize-1)*0.5 - 1) + 0.8.

    • Optional ktype: int

      Type of filter coefficients. It can be CV_32F or CV_64F .

    Returns Mat

getStructuringElement

  • getStructuringElement(shape: int, ksize: Size, anchor?: Point): Mat
  • The function constructs and returns the structuring element that can be further passed to [erode], [dilate] or [morphologyEx]. But you can also construct an arbitrary binary mask yourself and use it as the structuring element.

    Parameters

    • shape: int

      Element shape that could be one of MorphShapes

    • ksize: Size

      Size of the structuring element.

    • Optional anchor: Point

      Anchor position within the element. The default value $(-1, -1)$ means that the anchor is at the center. Note that only the shape of a cross-shaped element depends on the anchor position. In other cases the anchor just regulates how much the result of the morphological operation is shifted.

    Returns Mat

medianBlur

  • medianBlur(src: InputArray, dst: OutputArray, ksize: int): void
  • The function smoothes an image using the median filter with the $\\texttt{ksize} \\times \\texttt{ksize}$ aperture. Each channel of a multi-channel image is processed independently. In-place operation is supported.

    The median filter uses [BORDER_REPLICATE] internally to cope with border pixels, see [BorderTypes]

    [bilateralFilter], [blur], [boxFilter], [GaussianBlur]

    Parameters

    • src: InputArray

      input 1-, 3-, or 4-channel image; when ksize is 3 or 5, the image depth should be CV_8U, CV_16U, or CV_32F; for larger aperture sizes, it can only be CV_8U.

    • dst: OutputArray

      destination array of the same size and type as src.

    • ksize: int

      aperture linear size; it must be odd and greater than 1, for example: 3, 5, 7 ...

    Returns void
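
    Example (same assumptions as above), a 5x5 median filter, which is effective against salt-and-pepper noise:

    const src = cv.imread('inputCanvas');
    const dst = new cv.Mat();
    cv.medianBlur(src, dst, 5);                    // ksize must be odd and greater than 1
    cv.imshow('outputCanvas', dst);
    src.delete(); dst.delete();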

morphologyDefaultBorderValue

  • morphologyDefaultBorderValue(): Scalar

morphologyEx

  • morphologyEx(src: InputArray, dst: OutputArray, op: int, kernel: InputArray, anchor?: Point, iterations?: int, borderType?: int, borderValue?: any): void
  • The function [cv::morphologyEx] can perform advanced morphological transformations using erosion and dilation as basic operations.

    Any of the operations can be done in-place. In case of multi-channel images, each channel is processed independently.

    [dilate], [erode], [getStructuringElement]

    The number of iterations is the number of times the erosion or dilation operation will be applied. For instance, an opening operation ([MORPH_OPEN]) with two iterations is equivalent to applying successively: erode -> erode -> dilate -> dilate (and not erode -> dilate -> erode -> dilate).

    Parameters

    • src: InputArray

      Source image. The number of channels can be arbitrary. The depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    • dst: OutputArray

      Destination image of the same size and type as source image.

    • op: int

      Type of a morphological operation, see MorphTypes

    • kernel: InputArray

      Structuring element. It can be created using getStructuringElement.

    • Optional anchor: Point

      Anchor position within the kernel. Negative values mean that the anchor is at the kernel center.

    • Optional iterations: int

      Number of times erosion and dilation are applied.

    • Optional borderType: int

      Pixel extrapolation method, see BorderTypes

    • Optional borderValue: any

      Border value in case of a constant border. The default value has a special meaning.

    Returns void
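
    Example (same assumptions as above), a morphological opening (erosion followed by dilation):

    const src = cv.imread('inputCanvas');
    const dst = new cv.Mat();
    const kernel = cv.getStructuringElement(cv.MORPH_RECT, new cv.Size(5, 5));
    cv.morphologyEx(src, dst, cv.MORPH_OPEN, kernel, new cv.Point(-1, -1), 1,
                    cv.BORDER_CONSTANT, cv.morphologyDefaultBorderValue());
    cv.imshow('outputCanvas', dst);
    src.delete(); dst.delete(); kernel.delete();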

pyrDown

  • pyrDown(src: InputArray, dst: OutputArray, dstsize?: any, borderType?: int): void
  • By default, the size of the output image is computed as Size((src.cols+1)/2, (src.rows+1)/2), but in any case, the following conditions should be satisfied:

    \\[\\begin{array}{l} | \\texttt{dstsize.width} *2-src.cols| \\leq 2 \\\\ | \\texttt{dstsize.height} *2-src.rows| \\leq 2 \\end{array}\\]

    The function performs the downsampling step of the Gaussian pyramid construction. First, it convolves the source image with the kernel:

    \\[\\frac{1}{256} \\begin{bmatrix} 1 & 4 & 6 & 4 & 1 \\\\ 4 & 16 & 24 & 16 & 4 \\\\ 6 & 24 & 36 & 24 & 6 \\\\ 4 & 16 & 24 & 16 & 4 \\\\ 1 & 4 & 6 & 4 & 1 \\end{bmatrix}\\]

    Then, it downsamples the image by rejecting even rows and columns.

    Parameters

    • src: InputArray

      input image.

    • dst: OutputArray

      output image; it has the specified size and the same type as src.

    • Optional dstsize: any

      size of the output image.

    • Optional borderType: int

      Pixel extrapolation method, see BorderTypes (BORDER_CONSTANT isn't supported)

    Returns void
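
    Example (same assumptions as above); Size(0, 0) requests the default output size:

    const src = cv.imread('inputCanvas');
    const dst = new cv.Mat();
    cv.pyrDown(src, dst, new cv.Size(0, 0), cv.BORDER_DEFAULT);  // roughly halves each dimension
    cv.imshow('outputCanvas', dst);
    src.delete(); dst.delete();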

pyrMeanShiftFiltering

  • pyrMeanShiftFiltering(src: InputArray, dst: OutputArray, sp: double, sr: double, maxLevel?: int, termcrit?: TermCriteria): void
  • The function implements the filtering stage of meanshift segmentation, that is, the output of the function is the filtered "posterized" image with color gradients and fine-grain texture flattened. At every pixel (X,Y) of the input image (or down-sized input image, see below) the function executes meanshift iterations, that is, the pixel (X,Y) neighborhood in the joint space-color hyperspace is considered:

    \\[(x,y): X- \\texttt{sp} \\le x \\le X+ \\texttt{sp} , Y- \\texttt{sp} \\le y \\le Y+ \\texttt{sp} , ||(R,G,B)-(r,g,b)|| \\le \\texttt{sr}\\]

    where (R,G,B) and (r,g,b) are the vectors of color components at (X,Y) and (x,y), respectively (though, the algorithm does not depend on the color space used, so any 3-component color space can be used instead). Over the neighborhood the average spatial value (X',Y') and average color vector (R',G',B') are found and they act as the neighborhood center on the next iteration:

    \\[(X,Y) \\leftarrow (X',Y'), \\quad (R,G,B) \\leftarrow (R',G',B').\\]

    After the iterations are over, the color components of the initial pixel (that is, the pixel from where the iterations started) are set to the final value (the average color at the last iteration):

    \\[I(X,Y) \\leftarrow (R^*,G^*,B^*)\\]

    When maxLevel > 0, the Gaussian pyramid of maxLevel+1 levels is built, and the above procedure is run on the smallest layer first. After that, the results are propagated to the larger layer and the iterations are run again only on those pixels where the layer colors differ by more than sr from the lower-resolution layer of the pyramid. That makes boundaries of color regions sharper. Note that the results will actually be different from the ones obtained by running the meanshift procedure on the whole original image (i.e. when maxLevel==0).

    Parameters

    • src: InputArray

      The source 8-bit, 3-channel image.

    • dst: OutputArray

      The destination image of the same format and the same size as the source.

    • sp: double

      The spatial window radius.

    • sr: double

      The color window radius.

    • Optional maxLevel: int

      Maximum level of the pyramid for the segmentation.

    • Optional termcrit: TermCriteria

      Termination criteria: when to stop meanshift iterations.

    Returns void

pyrUp

  • pyrUp(src: InputArray, dst: OutputArray, dstsize?: any, borderType?: int): void
  • By default, the size of the output image is computed as Size(src.cols*2, src.rows*2), but in any case, the following conditions should be satisfied:

    \\[\\begin{array}{l} | \\texttt{dstsize.width} -src.cols*2| \\leq ( \\texttt{dstsize.width} \\mod 2) \\\\ | \\texttt{dstsize.height} -src.rows*2| \\leq ( \\texttt{dstsize.height} \\mod 2) \\end{array}\\]

    The function performs the upsampling step of the Gaussian pyramid construction, though it can actually be used to construct the Laplacian pyramid. First, it upsamples the source image by injecting even zero rows and columns and then convolves the result with the same kernel as in pyrDown multiplied by 4.

    Parameters

    • src: InputArray

      input image.

    • dst: OutputArray

      output image. It has the specified size and the same type as src .

    • Optional dstsize: any

      size of the output image.

    • Optional borderType: int

      Pixel extrapolation method, see BorderTypes (only BORDER_DEFAULT is supported)

    Returns void

sepFilter2D

  • sepFilter2D(src: InputArray, dst: OutputArray, ddepth: int, kernelX: InputArray, kernelY: InputArray, anchor?: Point, delta?: double, borderType?: int): void
  • The function applies a separable linear filter to the image. That is, first, every row of src is filtered with the 1D kernel kernelX. Then, every column of the result is filtered with the 1D kernel kernelY. The final result shifted by delta is stored in dst .

    [filter2D], [Sobel], [GaussianBlur], [boxFilter], [blur]

    Parameters

    • src: InputArray

      Source image.

    • dst: OutputArray

      Destination image of the same size and the same number of channels as src .

    • ddepth: int

      Destination image depth, see combinations

    • kernelX: InputArray

      Coefficients for filtering each row.

    • kernelY: InputArray

      Coefficients for filtering each column.

    • Optional anchor: Point

      Anchor position within the kernel. The default value $(-1,-1)$ means that the anchor is at the kernel center.

    • Optional delta: double

      Value added to the filtered results before storing them.

    • Optional borderType: int

      Pixel extrapolation method, see BorderTypes

    Returns void
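
    Example (same assumptions as above, and assuming your build exposes the sepFilter2D binding), smoothing with the same 1D averaging kernel applied per row and then per column:

    const src = cv.imread('inputCanvas');
    const dst = new cv.Mat();
    const k = cv.matFromArray(1, 3, cv.CV_32FC1, [1 / 3, 1 / 3, 1 / 3]);
    cv.sepFilter2D(src, dst, -1, k, k, new cv.Point(-1, -1), 0, cv.BORDER_DEFAULT);
    cv.imshow('outputCanvas', dst);
    src.delete(); dst.delete(); k.delete();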

spatialGradient

  • spatialGradient(src: InputArray, dx: OutputArray, dy: OutputArray, ksize?: int, borderType?: int): void
  • Equivalent to calling:

    Sobel( src, dx, CV_16SC1, 1, 0, 3 );
    Sobel( src, dy, CV_16SC1, 0, 1, 3 );

    [Sobel]

    Parameters

    • src: InputArray

      input image.

    • dx: OutputArray

      output image with first-order derivative in x.

    • dy: OutputArray

      output image with first-order derivative in y.

    • Optional ksize: int

      size of Sobel kernel. It must be 3.

    • Optional borderType: int

      pixel extrapolation method, see BorderTypes

    Returns void

sqrBoxFilter

  • sqrBoxFilter(src: InputArray, dst: OutputArray, ddepth: int, ksize: Size, anchor?: Point, normalize?: bool, borderType?: int): void
  • For every pixel $ (x, y) $ in the source image, the function calculates the sum of squares of those neighboring pixel values which overlap the filter placed over the pixel $ (x, y) $.

    The unnormalized square box filter can be useful in computing local image statistics such as the local variance and standard deviation around the neighborhood of a pixel.

    [boxFilter]

    Parameters

    • src: InputArray

      input image

    • dst: OutputArray

      output image of the same size and type as src

    • ddepth: int

      the output image depth (-1 to use src.depth())

    • ksize: Size

      kernel size

    • Optional anchor: Point

      kernel anchor point. The default value of Point(-1, -1) denotes that the anchor is at the kernel center.

    • Optional normalize: bool

      flag, specifying whether the kernel is to be normalized by its area or not.

    • Optional borderType: int

      border mode used to extrapolate pixels outside of the image, see BorderTypes

    Returns void
