ovl.visions.vision module

class ovl.visions.vision.Vision(detector: ovl.detectors.detector.Detector = None, threshold: ovl.thresholds.threshold.Threshold = None, morphological_functions: List[function] = None, target_filters: List[function] = None, director: ovl.directions.director.Director = None, width=320, height=240, connection: ovl.connections.connection.Connection = None, camera: Union[int, str, ovl.camera.camera.Camera, cv2.VideoCapture, Any] = None, camera_settings: ovl.camera_.camera_settings.CameraSettings = None, image_filters: List[function] = None, ovl_camera: bool = False, haar_classifier: str = None)[source]

Bases: object

Vision object represents a computer vision pipeline. The pipeline consists of 4 main stages:

  1. processing - ‘apply_all_image_filters’, which uses a list of image_filter functions
  2. detection - ‘detect’, which comes from Detector objects and detects objects (contours, bounding rectangles, etc.)
  3. filtering - ‘apply_target_filters’, which uses filter functions such as contour filters
  4. conversion & usage - ‘direct’, which comes from Director objects

Each functionality can be used to easily create a complex yet modular pipeline.

Additional capabilities and tuning options are:

  1. image filters (blurs, rotations, cropping)
  2. morphological functions
  3. Ovl color HSV calibration
  4. camera handling
  5. connection clean-up and sending

Vision can also be used as a part of a more complex pipeline.

MultiVision can contain multiple vision objects and switch between pipelines, allowing for very versatile logic that can fit multiple needs.

AmbientVision is another option, which uses 2 different Vision objects and alternates between the two.
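
For orientation, here is a minimal sketch of a single-Vision pipeline built from the constructor parameters in the signature above. The ovl.Color HSV threshold shown is an assumed helper, and the two-value unpacking of detect assumes its default return described later in this section; substitute the thresholds, target filters and directors your version of ovl provides:

    import ovl

    # Assumed HSV threshold helper - replace with the threshold your ovl version provides
    yellow = ovl.Color([20, 100, 100], [35, 255, 255])

    vision = ovl.Vision(
        threshold=yellow,      # stage 2: detection by color threshold
        target_filters=[],     # stage 3: filtering (add your filter functions here)
        image_filters=[],      # stage 1: processing (blurs, crops, rotations...)
        camera=0,              # local USB camera
        width=320,
        height=240,
    )

    while True:
        image = vision.get_image()                      # capture + apply image filters
        targets, filtered_image = vision.detect(image)  # detect + apply target filters
        # with a director and a connection configured on the Vision:
        # directions = vision.get_directions(targets, filtered_image)
        # vision.send(directions)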

apply_image_filters(image: numpy.ndarray) → numpy.ndarray[source]

Applies all given image filters to the given image. This is used to apply various image filters to your image in a pipeline, such as blurs, image cropping, contrasting, sharpening, rotations, translations, etc.

Parameters:image – the image that the image filters should be applied to (numpy array)
Returns:the image with the filters applied
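
A short sketch of calling apply_image_filters directly, assuming plain callables that take and return an image are accepted as image filters and that the other constructor arguments may be omitted for this isolated use (your ovl version may also provide ready-made image filter helpers):

    import cv2
    import numpy as np
    import ovl

    def blur(image: np.ndarray) -> np.ndarray:
        # illustrative image filter: a light gaussian blur
        return cv2.GaussianBlur(image, (5, 5), 0)

    def grayscale(image: np.ndarray) -> np.ndarray:
        # illustrative image filter: convert BGR to grayscale
        return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    vision = ovl.Vision(image_filters=[blur, grayscale])

    image = np.zeros((240, 320, 3), dtype=np.uint8)   # any BGR image, e.g. from cv2.imread
    filtered = vision.apply_image_filters(image)
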
apply_target_filter(filter_function, contours, verbose=False)[source]

Applies a filter function to the contour list. This is used to remove contours that do not match the desired features.

NOTE: Vision.detect is mainly used for full object detection and filtering, refer to it for common use of Vision

Parameters:
  • filter_function – Filter functions are functions that receive a list of contours and apply some sort of filter to them, removing the ones that don’t fit the limit given by the filter. For example, straight_rectangle_filter removes contours that are not rectangles parallel to the frame of the picture.
  • contours – the contours on which the filter should be applied (list of numpy.ndarrays)
  • verbose – if True, prints out information about the filtering process (useful for debugging)
Returns:

returns the output of the filter function.
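
As an illustration of what a filter function might look like, the sketch below defines a hypothetical minimum-area filter and applies it through apply_target_filter; the exact contract of filter functions (for example, whether they also return fit ratios) depends on your ovl version:

    import cv2
    import numpy as np
    import ovl

    def min_area_filter(contours, minimum_area=100):
        # hypothetical filter: keep only contours above a minimum pixel area
        return [contour for contour in contours if cv2.contourArea(contour) >= minimum_area]

    vision = ovl.Vision(target_filters=[min_area_filter])

    binary = np.zeros((240, 320), dtype=np.uint8)
    cv2.rectangle(binary, (50, 50), (150, 150), 255, -1)   # a synthetic blob to detect
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    filtered = vision.apply_target_filter(min_area_filter, list(contours), verbose=True)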

apply_target_filters(targets: List[numpy.ndarray], verbose=False) → Tuple[List[numpy.ndarray], List[float]][source]

Applies all of the filters to a list of contours, one after the other: the first filter is applied and its output is passed to the second filter, and so on.

Parameters:
  • targets – list of targets (numpy arrays or bounding boxes) to be filtered
  • verbose – prints out information about filtering process if true (useful for debugging)
Returns:

the filtered targets and a list of all of the ratios given by the filter functions, in order.
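
Continuing the sketch above, several filter functions can be chained through the constructor’s target_filters list; the two-value unpacking follows the Tuple[List[numpy.ndarray], List[float]] return type documented here:

    # min_area_filter and contours as in the previous sketch; largest_first is
    # another hypothetical filter that sorts the surviving contours by area
    def largest_first(contours):
        return sorted(contours, key=cv2.contourArea, reverse=True)

    vision = ovl.Vision(target_filters=[min_area_filter, largest_first])
    filtered_targets, ratios = vision.apply_target_filters(list(contours), verbose=True)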

camera_setup(source=0, image_width=None, image_height=None, ovl_camera=False)[source]

Opens the camera reference and sets the given width and height for all images taken

Parameters:
  • image_width – the width of the images to be taken; 0 does not set a width
  • image_height – the height of the images to be taken; 0 does not set a height
  • source – the source from which to open the camera: a string for network connections, an int for local USB connections
  • ovl_camera – whether the camera object should be an ovl.Camera
Returns:

the camera object, also sets self.camera to the object.
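
A small sketch of opening the camera after construction; per the signature above, source may be a local device index or a network stream string (the URL shown is a placeholder):

    # vision: an ovl.Vision constructed as in the pipeline sketch at the top of
    # this page, but without passing camera= to the constructor

    # local USB camera at index 0, frames resized to 320x240
    vision.camera_setup(source=0, image_width=320, image_height=240)

    # or a network stream, wrapped in ovl.Camera (placeholder URL)
    # vision.camera_setup(source="http://example.local:1181/stream.mjpg", ovl_camera=True)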

detect(image, verbose=False, *args, **kwargs)[source]

This is the function that performs processing, detection and filtering on a given image, essentially passing the image through the detection-related part of the pipeline.

detect applies image filters, detects objects in the filtered image (using the passed/created detector object) and finally applies all of the target_filters to the detected targets.

args and kwargs are passed on to the detect method of the detector object

Parameters:
  • verbose – passes verbose to apply_target_filters, which prints out information about the target filtering.
  • image – image in which the vision should detect an object
Returns:

the contours and the filtered image (and the ratios, if return_ratios is true)
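
A sketch of calling detect on an image you already have, rather than through get_image; the unpacking assumes the default two-value return described above, and ovl.Color is an assumed threshold helper:

    import numpy as np
    import ovl

    vision = ovl.Vision(threshold=ovl.Color([20, 100, 100], [35, 255, 255]))  # assumed helper

    image = np.zeros((240, 320, 3), dtype=np.uint8)   # any BGR frame, e.g. from cv2.imread
    targets, filtered_image = vision.detect(image, verbose=True)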

get_directions(contours: List[numpy.ndarray], image: numpy.ndarray, sorter=None)[source]

Calculates the directions, based on contours found in the given image

Parameters:
  • contours – final contours after filtering
  • image – the image from which to find the contours
  • sorter – optional parameter, applies a sorter on the given contours
Returns:

the output of the director function (typically a string); its length depends on the director function
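
Continuing the detect sketch above, a sketch of converting the filtered targets into directions, assuming the Vision was constructed with a director; the sorter shown is a hypothetical largest-area-first sort:

    import cv2

    def largest_first(contours):
        # hypothetical sorter: largest contour first
        return sorted(contours, key=cv2.contourArea, reverse=True)

    # vision, targets and filtered_image as in the detect sketch above,
    # with a director passed to the Vision constructor
    directions = vision.get_directions(targets, filtered_image, sorter=largest_first)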

get_image() → numpy.ndarray[source]

Gets an image from self.camera and applies image filters

Returns:the image, or False if it failed to get one
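
A small sketch of grabbing a frame and checking the documented failure value:

    # vision: the ovl.Vision from the pipeline sketch above (constructed with camera=0)
    image = vision.get_image()
    if image is False:
        raise RuntimeError("failed to read a frame from the camera")
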
send(data: Any, *args, **kwargs) → Any[source]

Sends data to the destination using self.connection

Parameters:
  • data – The data to send to the Connection
  • args – any other arguments for the send function in your connection
  • kwargs – any other named arguments for the connection object
Returns:

Depends on the connection object used; returns its result
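
For completeness, a sketch of the send call at the end of a loop iteration, assuming the Vision was constructed with a connection; any extra arguments are forwarded to that connection’s own send:

    # directions as returned by get_directions in the sketches above;
    # requires a connection= argument to have been passed to the Vision
    result = vision.send(directions)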

send_to_location(data: Any, network_location: ovl.connections.network_location.NetworkLocation, *args, **kwargs)[source]

A function that sends data to a specific NetworkLocation

Parameters:
  • data – the data to be sent
  • network_location – information used to send the data to a specific ‘location’ in the network
Returns:

Depends on the connection object

target_amount

The wanted amount of targets, determined by self.director (0, None or math.inf if there is no limit, 1 if one target is wanted, etc.)