Configuring foreground settings for object detection scenarios - KiwiVision™ 4.7.1 | Security Center 5.11.2.0

You can choose from different learning models, depending on the scene you want to analyze, so that changes in the scene are properly detected.

What you should know

  • Each learning model is optimized for different types of scenes. The scene background is analyzed and learned so that differences in the scene are detected. When you change the learning model or the motion analysis mode, or click Relearn, the background is relearned. The relearning process can take up to a few minutes, depending on your configuration and learning models.
  • You can enable the Show scene background or the Show motion analysis option to visualize the sensitivity of your configuration. Detected differences from the background are highlighted in blue if they are minor, in green if they are moderate, or in red if they are severe. For a rough illustration of how this grading works, see the sketch at the end of this section.
NOTE: Depending on which scenario type you are configuring, some settings might be unavailable or hidden if Advanced mode is disabled.
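
Background learning and the colored overlay are handled by KiwiVision itself; nothing has to be coded. Purely as a rough analogy, the following Python sketch uses OpenCV's MOG2 background subtractor (an assumption, not the product's algorithm) to show how a background model can be learned from live video and how detected differences can be graded from minor to severe. The file name and thresholds are hypothetical.

```python
# Illustrative sketch only: OpenCV's MOG2 subtractor stands in for KiwiVision's
# learning models; the file name and thresholds are assumptions.
import cv2

capture = cv2.VideoCapture("scene.mp4")     # hypothetical recording of the scene
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500,        # loosely comparable to a continuous learning period
    varThreshold=16,    # sensitivity of "difference from the learned background"
    detectShadows=False,
)

while True:
    ok, frame = capture.read()
    if not ok:
        break

    foreground = subtractor.apply(frame)            # 0 = background, 255 = foreground
    background = subtractor.getBackgroundImage()    # current learned scene background

    # Grade each foreground pixel by how far it is from the learned background,
    # loosely mirroring the minor (blue), moderate (green), and severe (red) overlay.
    diff = cv2.absdiff(frame, background).max(axis=2)
    overlay = frame.copy()
    overlay[(foreground > 0) & (diff < 40)] = (255, 0, 0)                  # minor (blue, BGR)
    overlay[(foreground > 0) & (diff >= 40) & (diff < 90)] = (0, 255, 0)   # moderate (green)
    overlay[(foreground > 0) & (diff >= 90)] = (0, 0, 255)                 # severe (red)

    cv2.imshow("Motion analysis (illustrative)", overlay)
    if cv2.waitKey(1) == 27:    # press Esc to stop
        break

capture.release()
cv2.destroyAllWindows()
```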

Procedure

  1. From the Config Tool home page, open the Video task.
  2. From the area view, select a camera that you applied an analytics scenario to.
  3. Click Video analytics > Security monitoring, and then click the scenario you want to configure.
  4. Click the Foreground tab.
    The background is learned using the default learning model.
  5. In the Scene background section, select a learning model (a conceptual sketch of what each model compares follows this procedure):
    Color-based model
    This model is designed for indoor scenarios and thermal cameras. It continuously learns the colors of the scene and compares them to the current frame.
    Edge-based model
    This model is designed for outdoor scenarios. It continuously learns the edges of objects in the scene and compares them to the current frame.
    Startup learning period
    Specifies the duration for which the scene is analyzed when the learning model is changed or when you click Relearn.
    NOTE: The initial scene should not contain any moving objects in the foreground during the startup learning period because the object detection model will learn to consider them as part of the background.

    Continuous learning period
    Specifies the duration for which the scene is analyzed continuously in the background.
    Reference image
    This model is designed for detecting changes in empty areas with unchanging lighting, such as in an elevator. To ensure that the scene you want is learned, click Relearn to manually capture the reference image when the scene is empty. The current frame is compared to this reference image.
  6. In the Motion analysis section, select the type of changes that you want to detect (a conceptual sketch of these modes follows this procedure):
    Detect all changes
    People and objects that are detected are considered part of the foreground, and can trigger alerts, whether or not they are moving.
    NOTE: When this option is selected, the Show motion analysis option is not available.
    Detect stationary objects
    Only people or objects that remain still for the specified Stationary object time period are considered part of the foreground, and can trigger alerts.
  7. Click Apply.
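
The learning models are selected entirely in Config Tool; no code is involved. As a conceptual aid only, the following sketch approximates with OpenCV (an assumption, not KiwiVision internals) what each model compares: pixel colors, object edges, or a fixed reference image captured with Relearn. The function names and Canny thresholds are hypothetical.

```python
# Illustrative only: these helpers approximate what each learning model compares.
# OpenCV and the threshold values are assumptions, not KiwiVision internals.
import cv2

def color_difference(frame, learned_background):
    # Color-based model idea: compare the colors of the current frame
    # to the continuously learned scene background.
    return cv2.absdiff(frame, learned_background)

def edge_difference(frame, learned_background):
    # Edge-based model idea: compare object edges, which are less sensitive
    # to the lighting changes typical of outdoor scenes.
    frame_edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 100, 200)
    background_edges = cv2.Canny(cv2.cvtColor(learned_background, cv2.COLOR_BGR2GRAY), 100, 200)
    return cv2.absdiff(frame_edges, background_edges)

def reference_difference(frame, reference_image):
    # Reference image idea: compare the current frame to a single reference
    # captured with Relearn while the scene was empty.
    return cv2.absdiff(frame, reference_image)
```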
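
Similarly, the motion analysis modes are configuration choices rather than code. The sketch below (all names and the 30-second value are hypothetical) illustrates the difference between them: with Detect all changes every detected person or object counts as foreground, while with Detect stationary objects an object only counts after it has been still for the configured Stationary object time.

```python
# Illustrative sketch only: the names and the timing rule are assumptions used
# to contrast the two motion analysis modes.
import time

STATIONARY_OBJECT_TIME = 30.0    # seconds; analogous to the Stationary object time setting

first_seen_still = {}            # object_id -> time at which the object stopped moving

def counts_as_foreground(object_id, is_moving, mode="detect_stationary_objects"):
    """Return True if a detected object counts as foreground and may trigger alerts."""
    if mode == "detect_all_changes":
        return True              # every detected person or object counts, moving or not

    # Detect stationary objects: count the object only after it has been
    # still for the full stationary object time.
    now = time.monotonic()
    if is_moving:
        first_seen_still.pop(object_id, None)   # the clock restarts when the object moves
        return False
    first_seen_still.setdefault(object_id, now)
    return now - first_seen_still[object_id] >= STATIONARY_OBJECT_TIME
```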