1 Introduction

This is the user manual for the VCAcore Video Analytics system. This manual describes how to set up and configure the video analytics to detect events of interest while minimising false alerts.

The VCAcore Video Analytics system is available on a number of platforms:

The functionality and UI are designed to be consistent across these platforms; any platform-specific differences are highlighted where necessary throughout this manual.

The menu on the left-hand side provides shortcuts to the major topic areas. Alternatively, see the Getting Started topic for the essentials necessary to get started rapidly.

2 Getting Started

This user guide documents each topic in detail, and each topic is accessible via the menu. However, to get started quickly, the essential topics are listed below.

2.1 Fundamentals

2.2 User Credentials

Note that the default username and password for the VCAcore platform are:

2.3 Advanced Topics

Once the basic settings are configured, the following advanced topics may be relevant:

3 Installing VCAcore

Installation instructions for the various platforms supported by VCAcore vary slightly and are outlined below.

3.1 VCAserver (Windows 10)

3.1.1 Installation

VCAserver is installed on Windows machines as a service. As such, once installation is complete, the VCAcore service must be started and managed using the Windows service manager.

The configuration file for VCAcore is stored in: C:\VCACore

Though not recommended, VCAserver can also be launched as an application from the command line. In this case the configuration file is stored in: C:\Users\USERNAME\AppData\Local\vca-cored

VCAserver comes as two installation packages:

By default the setup package will configure the deep-learning features to run on the CPU (i.e. GPU Additions will not be installed). However, VCAcore does support GPU acceleration for the deep-learning features with the following requirements:

  1. An NVIDIA GPU with CUDA Compute Capability 3.5 or higher

  2. NVIDIA's CUDA Toolkit 9.0 must be installed on the server.

Once these hardware and software requirements are satisfied, ensure that the GPU Additions components are also installed.

Important notes:

  1. If no NVIDIA GPU hardware is installed, then the GPU Additions components must not be installed, as this will prevent VCAcore from running.

  2. Likewise, if an NVIDIA GPU with CUDA Compute Capability 3.5 or higher is present and the GPU Additions components have been installed, but CUDA 9.0 is not present, this too will prevent VCAcore from running.

  3. VCAserver is developed and tested against Windows 10. Although the application may run on other versions of Windows, support is limited to this version only.

3.1.2 Upgrading

Periodically new versions of VCAcore will be released with new features; please check for software updates at support.vcatechnology.com.

When upgrading VCAserver, simply run the installation packages as above. This will overwrite the existing version. When upgrading, the existing config file will be persisted and used with the new version. This applies to any version of VCAserver (v1.0.1 or greater).

3.1.3 Downgrading

If you wish to downgrade VCAserver to an earlier version, the existing version must first be uninstalled using the Windows control panel. Next, install the desired version of VCAserver using the appropriate installation packages.

Note: Downgrading will not remove your configuration. However, if the configuration format differs between the versions you are installing, your existing configuration may not work, and you will need to delete it and reconfigure.

3.2 VCAserver (Linux)

3.2.1 Installation

VCAserver is installed as an application.

VCAserver on Linux comes as a single archive file containing three .sh scripts, which handle the installation of the VCAcore components. The installation method for each .sh file is the same; however, depending on user needs and hardware configuration, not all will be required.

Once the archive has been downloaded to a location suitable for the installation of VCAserver, navigate to the folder and unpack the archive into the three installation scripts.

Next, run the first .sh script: sh VCA-Core-VERSION_NUMBER-vca_core.sh (e.g. sh VCA-Core-1.0.2-vca_core.sh). This first installation script is required for any VCAserver install.

The licensing terms and conditions page will be presented. Press the 'Enter' key to navigate through the subsequent pages of the T&Cs. Finally, the user will be presented with a choice to accept the licence: type 'Y' and press the 'Enter' key.

Next, a choice of installation directories will be presented. Make a note of the target directory to which the files are unpacked.

If the Deep-Learning components of VCAcore are required, the two additional installation scripts must also be run. This is completed in the same way as above, ensuring that the same installation directory is selected in both instances.

VCAcore supports GPU acceleration for the deep-learning features with the following requirements:

  1. An NVIDIA GPU with CUDA Compute Capability 3.5 or higher
  2. NVIDIA's CUDA Toolkit 9.0 must be installed on the server.
  3. Current NVIDIA graphics drivers must be installed for the GPU; this can be checked by ensuring nvidia-smi runs and that its output correlates with the installed GPU.

Once all required components are installed via the .sh scripts, the VCAcore licensing daemon must be started with Super User privileges, allowing the daemon to scan the system hardware (subsequent launches of the daemon do not require Super User privileges):

sudo ./bin/vca-daemon-cli -v

Once the VCAcore licensing daemon has been run once it may be closed and the VCAcore application started and subsequently accessed via the user interface.

./bin/vca-cored

The configuration file for VCAcore is stored in: /home/USERNAME/.config/vca-cored

Important notes:

  1. If no NVIDIA GPU hardware is installed, then the GPU Additions components must not be installed, as this will prevent VCAcore from running.

  2. Likewise, if an NVIDIA GPU with CUDA Compute Capability 3.5 or higher is present and the GPU Additions components have been installed, but CUDA 9.0 is not present, this too will prevent VCAcore from running.

  3. VCAserver is developed and tested against Ubuntu 16.04 / 18.04. Although the application may run on different Linux distributions, support is limited to these versions only.

3.2.2 Upgrading

Periodically new versions of VCAcore will be released with new features; please check for software updates at support.vcatechnology.com.

When upgrading VCAserver, simply run the installation packages as above. This will overwrite the existing version.

When upgrading, the existing config file will be persisted and used with the new version. This applies to any version of VCAserver (v1.0.1 or greater).

3.2.3 Downgrading

If you wish to downgrade VCAserver to an earlier version, the existing version must first be removed from the installation directory. Next, install the desired version of VCAserver using the appropriate installation packages.

Note: Downgrading will not remove your configuration. However, if the configuration format differs between the versions you are installing, your existing configuration may not work, and you will need to delete it and reconfigure.

3.3 VCAbridge

VCAbridge will always be shipped with a version of VCAcore pre-installed. Check that the software running on your platform is fully up to date and upgrade if necessary from the system settings page.

3.3.1 Upgrading

Periodically new versions of VCAcore will be released with new features; please check for software updates at support.vcatechnology.com.

For VCAbridge, upgrade the firmware from the system settings page.

When updating on any platform, the existing config file will be persisted and used with the new version. This applies to any version of VCAbridge (v0.4.11 or greater).

3.3.2 System BIOS

It is also recommended that the BIOS of the unit be updated to the most recent version where possible to leverage any additional security and stability improvements provided by the motherboard manufacturer. The BIOS can be accessed by pressing F2 when first booting the VCAbridge. Within the interface the model number is provided allowing the appropriate firmware and upgrade instructions to be downloaded from the manufacturer website.

3.3.3 Upgrading VCAbridge From 0.3.xx to 1.0.xx

When upgrading the firmware of a VCAbridge device running version 0.3.xx, an interim upgrade to 0.4.11 is required to preserve the configuration file. As such, the following steps must be performed:

Important note: The export section needs to be completed before upgrading the system.

3.3.3.1 Exporting the old configuration

In order to export the current configuration, the ExportConfigurationTool needs to be downloaded from here: http://www.vcatechnology.com/support/downloads.

The name of the application is ExportConfigurationTool as shown in this image:

3.3.3.2 Run the ExportConfigurationTool

Run the ExportConfigurationTool and fill in the information requested:

Once the process has successfully completed, two JSON files will be generated in the same folder as the application:

  * A backup of the current configuration of the VCA Bridge (the one that contains backup in the name)
  * The upgraded configuration to be used in the import process (the one that contains updated in the name)

3.3.3.3 Importing the configuration

Once the ExportConfigurationTool has generated the updated version of the JSON file, firmware version 0.4.10 can be installed on the VCA Bridge.

3.3.3.4 After installing the new package

To start the import process go to the Settings page in the VCA Bridge. Then find the Configuration section and use the Import Configuration button to start the import process.

VCA Bridge will show the following dialog:

Click the Browse button and select the updated JSON file that was generated by the ExportConfigurationTool.

Then, the import process will start. Do not close the window or refresh the page while the following message is shown:

Once this process is complete, version 1.0.x can be installed on the VCA Bridge.

4 Device Discovery

The device discovery tool can be used to locate VCAbridge devices on the network.

Locate the VCAbridge device on the network and select the corresponding entry in the list in the discovery tool. A number of operations can then be performed by clicking the appropriate button:

The discovery tool is available from the VCA website www.vcatechnology.com

4.1 Next Steps

Learn more about Navigation or go back to the Getting Started guide.

5 Navigation

This topic provides a general overview of the VCAcore configuration user interface elements and controls.

The VCA user interface features a persistent navigation bar displayed at the top of the window.

There are a number of elements in this navigation bar, each is described below:

5.2 Side Menu

Clicking the icon displays the side navigation menu:

Every page in the VCA user interface is accessible through the side menu. The icon next to a menu item indicates that the item has sub-items and can be expanded.

Items in the side menu are automatically expanded to reflect the current location within the web application.

5.3 Settings Page

The settings page displays a number of links to various configuration pages:

5.4 Next Steps

Learn more about Activation or go back to the Getting Started guide.

6 Activation

To create sources and take advantage of the VCAcore analytics a licence is required.

In many cases VCAcore on the VCAbridge platform is pre-activated in the factory and further activation is only necessary to enable additional functionality. For VCAserver, an activation code, linked to your hardware configuration, will be provided by the reseller.

Additional features can be activated by applying a new activation code to the device. Each license is only valid for a specific device and each device is uniquely identified by a hardware code.

To manage activation and hardware codes, navigate to the license settings page:

6.1 Steps to Activate Additional Functionality

6.2 More Information

For more information on the complete range of additional features available, please visit VCA Technology

7 Sources

Sources are user configured inputs to the VCAcore system, which include video sources and non-video sources (e.g. digital inputs). The Edit Sources page allows users to add/remove sources and configure existing sources.

Common Properties:

7.1 Video Sources

Video sources are automatically linked with a channel when added. The number of video sources available is dependent on the user's license.

7.1.1 File

File sources stream video from a local sample file embedded within the VCAcore firmware.

Properties:

7.1.2 Rtsp

RTSP sources stream video from remote RTSP sources such as IP cameras and encoders.

Properties:

7.1.3 Milestone

Milestone sources stream video from a Milestone XProtect VMS server.

Properties:

7.1.4 Supported Video Sources

The range of video sources supported by VCAcore is always growing; the current list of supported codecs is given below:

When using an RTSP stream as a source, please ensure it is encoded with one of the supported compression formats. Likewise, when using a file as a source, please note that VCAcore is compatible with many video file containers (.mp4, .avi etc.) but the video file itself must be encoded with one of the above supported compression formats.

7.2 Other Sources

Various non-video sources are available to the user. Once added, these sources can then be assigned to Actions.

7.2.1 Interval

Interval sources can be used to generate events periodically, e.g. a heartbeat to check that the device is still running.

Properties:

7.2.2 Digital Input

If digital input hardware is available, the available inputs will be shown in the list of other sources.

Properties:

7.2.3 Armed

The Armed source generates an event when the system becomes armed.

7.2.4 Disarmed

The Disarmed source generates an event when the system becomes disarmed. Note that any actions that this source is assigned to must be set to Always Trigger, otherwise the action will not be triggered due to the system being disarmed.

8 Actions

Actions are user configured outputs which can be triggered by a variety of events that occur within VCAcore.

Common Properties:

8.1 Event Sources

Any action can have multiple event sources assigned to it. Once an event source is assigned to an action, any event of that type will trigger the action. Available event sources are grouped by Video Source and include VCA analytics events, customer-defined logical rules (with the 'Can Trigger Actions' box checked), loss of signal events, and any configured Digital Input or Interval sources.

8.2 Action Types

8.2.1 Tcp

The TCP action sends data to a remote TCP server when triggered. The format of the body is configurable with a mixture of plain text and Tokens, which are substituted with event-specific values at the time an event is generated.

See the Tokens topic for full details about the token system and example templates.
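To verify what a TCP action delivers during setup, a simple listener can be run on the remote server to print whatever VCAcore sends. The Python sketch below is only a minimal testing aid, not part of VCAcore; the port (5000) and the assumption that the configured body is UTF-8 text are illustrative and should be matched to your own action configuration.

  import socket

  # Minimal TCP listener for testing a VCAcore TCP action.
  # Assumes the action is configured to send to this host on port 5000.
  HOST, PORT = "0.0.0.0", 5000

  with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
      server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      server.bind((HOST, PORT))
      server.listen()
      print(f"Listening on {HOST}:{PORT}")
      while True:
          conn, addr = server.accept()
          with conn:
              data = conn.recv(65536)
              # Print the received body; tokens will already have been
              # substituted with event-specific values by VCAcore.
              print(f"Event from {addr[0]}: {data.decode('utf-8', errors='replace')}")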

8.2.2 Email

The email action sends events in pre- or user-configured formats to remote email servers.

8.2.3 Http

The HTTP action sends a text/plain HTTP request to a remote endpoint when triggered. The URL, HTTP headers and message body are all configurable with a mixture of plain text and Tokens, which are substituted with event-specific values at the time an event is generated. Additionally, snapshots from the camera can be sent as a multipart/form-data request, with the configured snapshots included as image/jpeg parts.

See the Tokens topic for full details about the token system and example templates.
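To check what an HTTP action delivers, a basic receiver can log the request line, headers and body. The following Python sketch uses only the standard library and is purely for testing; the port and path are assumptions for illustration and should match the URL configured in the action.

  from http.server import BaseHTTPRequestHandler, HTTPServer

  class EventReceiver(BaseHTTPRequestHandler):
      def do_POST(self):
          # Read the request body (text/plain or multipart/form-data).
          length = int(self.headers.get("Content-Length", 0))
          body = self.rfile.read(length)
          print(f"POST {self.path}")
          print(f"Content-Type: {self.headers.get('Content-Type')}")
          print(body[:500])  # print the start of the payload
          self.send_response(200)
          self.end_headers()

  # Assumes the HTTP action is configured with a URL such as
  # http://<this-host>:8080/vca-event
  HTTPServer(("0.0.0.0", 8080), EventReceiver).serve_forever()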

8.2.4 Digital Output

A digital output is a logical representation of a digital output hardware channel. To configure the properties of a physical digital output channel, such as activation time, refer to the Digital IO page.

8.2.5 Milestone

The Milestone XProtect Event Server action sends VCA events to a Milestone XProtect VMS Event Server. For more details, refer to the Milestone XProtect topic.

8.2.6 Arm

The Arm action sets the device state to armed when triggered.

8.2.7 Disarm

The Disarm action sets the device state to disarmed when triggered.

8.3 Arm/Disarm State

The Arm/Disarm functionality provides a means of disabling/enabling all of the configured actions. For example, users may wish to disable all actions when activity is normal and expected (e.g. during normal working hours) and re-enable the actions at times when activity is not expected.

The Arm/Disarm state can be toggled manually by clicking the icon in the Navigation Bar or by using the Arm or Disarm actions.

9 Digital IO

VCAcore supports digital input and output hardware for interfacing with third party systems. Digital inputs can be used as triggers for events in VCAcore, and digital outputs can be triggered by a VCAcore or a system event.

Configuration of the digital inputs and outputs consists of three tasks:

9.1 Digital Output Device Configuration

From the Settings Page select Edit Digital Outputs to access the digital IO device configuration page.

The digital output device configuration page contains a section for each digital output device. Note that the number of digital output channels available depends on the specific hardware device in use.

9.1.1 Digital Outputs

Digital outputs can be triggered by a range of analytics event sources. Each digital output channel has the following properties:

Default State DO Inactive DO Active
Normally Open Open (Low) Closed (High)
Normally Closed Closed (High) Open (Low)

Once the digital output hardware has been configured, digital output hardware channels must be assigned to Actions.

9.2 Digital IO Connections

On a VCAbridge device, a number of built-in digital IO channels are provided. Different models support different numbers of IO channels. Refer to the quick start guide that came with the device for details of the digital IO connector pinout. The quick start guides are also available from the VCA support portal.

9.3 Sources and Actions

In order for digital IO channels to interact with VCAcore and system events, Sources must be created for digital inputs and Actions for digital outputs.

9.4 Digital Input Mode

See the system settings page for more digital input configuration options.

10 ONVIF support

VCAcore has inbuilt support for a subset of ONVIF profile S endpoints. To date these provide the following functions using the ONVIF interface:

More detail on each ONVIF function is given below. Screenshots are provided using ONVIF Device Manager; the exact interface varies from application to application.

Note: ONVIF Device Manager is a third-party, open-source Windows application, available from the ONVIF Device Manager project.

10.1 Discovery

ONVIF device discovery retrieves information about the ONVIF enabled device including the following data:

The above image shows the ONVIF Device Manager's Identification interface with a VCAbridge running on 192.168.1.34

10.2 Events

The ONVIF events service allows a third-party application to pull a list of events from the VCAcore platform. When a pull request is made, the last 100 events triggered within VCAcore are returned. An event is defined as any logical rule (with Can Trigger Actions enabled) or Other Source (such as interval or DI) which triggers. Importantly, neither the logical rule nor the Other Source has to be configured with an action to be included within the ONVIF event service cache.

The following fields are currently included for each event.

Property Description
start_time The start time of the event
end_time The end time of the event
id The id of the event
name The user-specified name of the event
type The type of the event
category The category of the event

The above image shows the ONVIF Device Manager's Events interface with a VCAbridge running on 192.168.1.34, where the data component of each event is populated with the above properties.

10.3 ONVIF User Management

The ONVIF functions such as device discovery or events are secured using user credentials.

The default credentials for ONVIF access are:

Please note that these credentials are separate from the VCAcore platform credentials; changing one has no impact on the other. The ONVIF password can only be changed using ONVIF user management. The process for this will vary depending on the ONVIF implementation.

Only a single user, with the username admin, is supported within VCAcore.

If you require more information on ONVIF profiles, please refer to the ONVIF documentation.

10.4 Next Steps

Learn more about Channels or go back to the Getting Started guide.

11 Channels

11.1 View Channels Page

Once a channel has been configured with a valid input it can be viewed on the View Channels page. A thumbnail (or an error message) is displayed for each configured channel.

Click a thumbnail to view the channel and configure VCAcore related settings.

11.2 Channel Pages

After clicking on a channel, a full view of the channel's video stream is displayed along with any configured zones, counters and rules.

A tab with an icon is displayed on the right hand side of the page. Click this to open the channel settings menu.

11.2.1 Channel Settings Menu

This menu contains links for configuring various aspects of the channel:

11.3 Next Steps

Once a channel has been configured, zones and rules can be configured to detect specific scenarios.

12 Zones

Zones are the detection areas on which VCAcore rules operate. In order to detect a specific behaviour, a zone must be configured to specify the area where a rule applies.

12.1 Adding a Zone

Zones can be added in multiple ways:

12.2 The Context Menu

Right-clicking or tap-holding (on mobile devices) displays a context menu which contains commands specific to the current context.

The possible actions from the context menu are:

12.3 Positioning Zones

To change the position of a zone, click and drag the zone to a new position. To change the shape of a zone, drag the nodes to create the required shape. New nodes can be added by double-clicking on the edge of the zone or clicking the add node icon from the context menu.

12.4 Zone Specific Settings

The zone configuration menu contains a range of zone-specific configuration parameters:

12.5 Deleting a Zone

Zones can be deleted in the following ways:

12.6 Next Steps

Once a zone has been configured, rules can be applied to detect specific scenarios. See Rule Configuration for more information.

13 Logical Rules

VCAcore's logical rules are used to detect specific events in a video stream. VCAcore's logical rules engine uses two overarching concepts to detect events:

Using these concepts, it is simple to build configurations which are used to trigger actions. These can be simple rules attached to zones or more complex configurations whereby rules can be combined or enhanced using the logical rules. The overarching goal of the logical rules is to help prevent erroneous alerts from being generated, by providing functions to filter out unwanted behaviour from triggering an action.

More detail on the differences between these two concepts is outlined below:

13.1 Basic Inputs

The basic input or rule can only be used to trigger an action or as an input to another rule. An input to a rule can be thought of as the input condition required to trigger the logical operators.

The complete list of basic inputs is:

13.2 Conditional Inputs

The conditional input is one that cannot trigger an action on its own. It requires the input of another rule or logical rule to be meaningful. An example of this is the AND rule. The AND rule requires two inputs to compare in order to function.

The complete list of conditional input rules is:

13.3 General Concepts

13.3.1 Object Display

As rules are configured they are applied to the channel in real time allowing feedback on how they work. Objects which have triggered a rule are annotated with a bounding box and a trail. Objects can be rendered in two states:

As seen below, when an event is raised, the default settings render details of the event in the lower half of the video stream. Object class annotations in this example are generated through calibration-based object classification.

13.3.2 Object Trails

The trail shows the history of where the object has been. Depending on the calibration the trail can be drawn from the centroid or the mid-bottom point of the object. (See Advanced Settings for more information).

13.3.3 Trail Importance

The trail is important for determining how a rule is triggered. The intersection of the trail point with a zone or line determines whether a rule is triggered or not. The following image illustrates this point: the blue vehicle's trail intersects with the detection zone and is rendered in red. Conversely, while the white vehicle intersects the detection zone, its trail does not (yet) intersect and hence it has not triggered the rule and is rendered in yellow.
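The following Python sketch illustrates this concept (it is not VCAcore's implementation): a rule is considered triggered only when the object's trail point, rather than its bounding box, falls inside the zone polygon.

  def point_in_zone(point, zone):
      """Ray-casting test: is the trail point inside the zone polygon?"""
      x, y = point
      inside = False
      n = len(zone)
      for i in range(n):
          x1, y1 = zone[i]
          x2, y2 = zone[(i + 1) % n]
          # Count crossings of a horizontal ray cast from the point.
          if (y1 > y) != (y2 > y):
              x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
              if x < x_cross:
                  inside = not inside
      return inside

  zone = [(0, 0), (10, 0), (10, 10), (0, 10)]   # hypothetical detection zone
  trail_point = (4, 5)                           # e.g. mid-bottom point of the object
  print(point_in_zone(trail_point, zone))        # True -> rule would trigger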

13.4 Logical Rules Configuration

Logical Rules can be configured on a per channel basis by opening the video of that channel, and clicking either the channel sub-menu on the left or the right-side channel settings menu. Configuration is possible in two forms: the docked mode, in which both the rules and the video stream are visible, or the expanded view, in which a graph representation is provided to visualise the way the rules are connected.

Initially, the logical rules page opens in the 'docked' mode, alongside the live video stream.

The user may click on the expand button to switch to the expanded view. Please note that the logical rules graph is only visible in the expanded view.

In the expanded view, the user can add rules, and use the Rules Editor to connect the rules to one another. The graph on the right hand side updates in real time to reflect the user's changes.

13.4.1 Creating a Logical Rule

The first steps to defining a logical rule are to add the initial basic inputs, configure the rule parameters and link them to a zone. Click the add button and select the desired rule from the drop-down menu.

To delete a rule, click the corresponding delete icon. Please note that rules of any type cannot be deleted if they serve as an input to another rule; in this case the other rule must be deleted first.

13.5 Rule Types (Basic Inputs)

Below is a list of the currently supported rules, along with a detailed description of each.

13.5.1 Presence

A rule which fires an event when an object is first detected in a particular zone.

Note: The Presence rule encapsulates a variety of different behaviours; for example, the Presence rule will trigger in the same circumstances as an Enter or Appear rule. The choice of which rule is most appropriate will be dependent on the scenario.

13.5.1.1 Graph View

13.5.1.2 Form View

13.5.1.3 Configuration

Property Description Default Value
Name A user-specified name for this rule "Presence #"
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Zone The zone this rule is associated with None

13.5.2 Direction

The direction rule detects objects moving in a specific direction. Configure the direction and acceptance angle by moving the arrows on the direction control widget. The primary direction is indicated by the large central arrow. The acceptance angle is the angle between the two smaller arrows.

Objects that travel in the configured direction (within the limits of the acceptance angle), through a zone or over a line, trigger the rule and raise an event.

The following image illustrates how the white car moving in the configured direction triggers the rule whereas the other objects do not.

13.5.2.1 Graph View

13.5.2.2 Form View

13.5.2.3 Configuration

Property Description Default Value
Name A user-specified name for this rule "Direction #"
Angle Primary direction angle, 0 - 359. 0 references up. 0
Acceptance Allowed variance each side of primary direction that will still trigger rule 0
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Zone The zone this rule is associated with None

13.5.3 Stopped

The stopped rule detects objects which are stationary inside a zone for longer than the specified amount of time. The stopped rule requires a zone to be selected before the amount of time can be configured.

Note: The stopped rule does not detect abandoned objects. It only detects objects which have moved at some point and then become stationary.

13.5.3.1 Graph View

13.5.3.2 Form View

13.5.3.3 Configuration

Property Description Default Value
Name A user-specified name for this rule "Stopped #"
Zone The zone this rule is associated with None
Time Period of time before a stopped object triggers the rule 0
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active

13.5.4 Enter and Exit

The enter rule detects when objects enter a zone. In other words, when objects cross from the outside of a zone to the inside of a zone.

Conversely, the exit rule detects when an object leaves a zone: when it crosses the border of a zone from the inside to the outside.

Note: Enter and exit rules differ from appear and disappear rules, as follows:

13.5.4.1 Graph View

13.5.4.2 Form View

13.5.4.3 Configuration Enter

Property Description Default Value
Name A user-specified name for this rule "Enter #"
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Zone The zone this rule is associated with None

13.5.4.4 Configuration Exit

Property Description Default Value
Name A user-specified name for this rule "Exit #"
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Zone The zone this rule is associated with None

13.5.5 Appear and Disappear

The appear rule detects objects that start being tracked within a zone, e.g. a person who appears in the scene from a doorway.

Conversely, the disappear rule detects objects that stop being tracked within a zone, e.g. a person who exits the scene through a doorway.

Note: The appear and disappear rules differ from the enter and exit rules as detailed in the enter and exit rule descriptions.

13.5.5.1 Graph View

13.5.5.2 Form View

13.5.5.3 Configuration Appear

Property Description Default Value
Name A user-specified name for this rule "Appear #"
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Zone The zone this rule is associated with None

13.5.5.4 Configuration Disappear

Property Description Default Value
Name A user-specified name for this rule "Disappear #"
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Zone The zone this rule is associated with None

13.5.6 Abandoned and Removed Object

The abandoned and removed object rule triggers when an object has either been left within a defined zone, e.g. a person leaving a bag on a train platform, or removed from a defined zone. The abandoned rule has a duration property which defines the amount of time an object must have been abandoned or removed before the rule triggers.

Below is a sample scenario where a bag is left in a defined zone resulting in the rule triggering.

Below is a similar example scenario where the bag is removed from the defined zone resulting in the rule triggering.

Note: The algorithm used for abandoned and removed object detection is the same in each case, and therefore cannot differentiate between objects which have been abandoned or removed. This arises because the algorithm only analyses how blocks of pixels change with respect to a background model which is constructed over time.

13.5.6.1 Graph View

13.5.6.2 Form View

13.5.6.3 Configuration

Property Description Default Value
Name A user-specified name for this rule "Abandoned #"
Zone The zone this rule is associated with None
Duration Period of time an object must have been abandoned or removed before the rule triggers 0
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active

13.5.7 Speed

The speed rule detects objects that are moving within the range of speeds defined by a lower and upper boundary. The default speed rule is not attached to a zone and will generate alerts on a channel for any object moving within the defined speeds.

Note: The channel must be calibrated in order for the speed filter to be available.

Commonly this rule is combined with a presence rule via an AND logical rule; an example rule graph is provided to illustrate this below. The following image illustrates how such a rule combination triggers on the car moving at 52 km/h, while the person moving at 12 km/h falls outside the configured range (50-200 km/h) and thus does not trigger the rule.

13.5.7.1 Graph View

13.5.7.2 Form View

13.5.7.3 Configuration

Property Description Default Value
Name A user-specified name for this rule "Speed #"
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Min Speed The minimum speed (km/h) an object must be going to trigger the rule 0
Max Speed The maximum speed (km/h) an object can be going to trigger the rule 0

13.5.7.4 Typical Logical Rule Combination

The below example logical rule checks whether an object triggering the presence rule Presence 4, attached to zone Centre, is also travelling between 50 and 200 km/h as specified by the speed rule Speed 3.

Only the AND rule Centre Zone Speed is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions.

13.5.8 Tailgating

The tailgating rule detects objects which cross through a zone or over a line within quick succession of each other.

In this example, object 1 is about to cross a detection line. Another object (object 2) is following closely behind. The tailgating detection threshold is set to 5 seconds. That is, any object crossing the line within 5s of an object having already crossed the line will trigger the object tailgating rule.

Object 2 crosses the line within 5 seconds of object 1. This triggers the tailgating filter and raises an event.

13.5.8.1 Graph View

13.5.8.2 Form View

13.5.8.3 Configuration

Property Description Default Value
Name A user-specified name for this rule "Tailgating #"
Zone The zone this rule is associated with None
Duration Maximum amount of time between first and second object entering a zone to trigger the rule 0
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active

13.5.9 Object Classification Filter

The object classification filter provides the ability to filter out objects, which trigger a logical rule, if they are not classified as a certain class (e.g. person, vehicle). By default the object classification filter requires a defined zone. When attached to a zone in this way any object interaction (enter, exit, appear, disappear) will trigger the rule.

Typically the object classification filter would be combined with one or more other logical rules to prevent unwanted objects from triggering an alert; an example rule graph is provided to illustrate this below.

The previous image illustrates how an object classification filter configured with the Person class includes only Person objects. The vehicle in the zone is filtered out since the Vehicle class is not selected in the filter list.

Note: the channel must be calibrated for the object classification filter to be available.

13.5.9.1 Graph View

13.5.9.2 Form View

13.5.9.3 Configuration

Property Description Default Value
Name A user-specified name for this rule "Object Filter #"
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Zone The zone this rule is associated with None
Filter The object class allowed to trigger an alert None

13.5.9.4 Typical Logical Rule Combination

The below example logical rule checks whether the object triggering the stopped rule Stopped (1 Sec), attached to zone Centre, is also classified as a Person as specified by the Object Filter Object Filter Person.

Only the AND rule Stopped Person is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions.

13.5.10 Counter

Counters can be configured to count the number of times a rule is triggered, for example the number of people crossing a line. The counter rule is designed to be utilised in two ways:

More than one rule can be assigned to any of a counter's three inputs. This allows, for example, the occupancy of two presence rules to be reflected in a single counter, or more than one entrance / exit gate to be reflected in a single counter; an example rule graph is provided to illustrate this below.

Broadly speaking, a single counter should not be used for both purposes (occupancy and increment / decrement).

13.5.10.1 Positioning Counters

When added, a counter object is visualised on the video stream as seen below. The counter can be repositioned by grabbing the 'handle' beneath the counter name and moving the counter to the desired location.

13.5.10.2 Graph View

13.5.10.3 Form View

13.5.10.4 Configuration

Property Description Default Value
Name A user-specified name for this rule "Counter #"
Increment The rule which, when triggered, will add one to the counter None
Decrement The rule which, when triggered, will subtract one from the counter None
Occupancy Sets counter to current number of the rule's active triggers* None
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Reset Counter A button allowing the counter value to be reset to 0 None

* E.g. if a presence rule is set as the occupancy target and two objects are currently triggering that presence rule, the counter will show the value of '2'.
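A conceptual sketch of the counter behaviour is given below (illustrative only, not VCAcore code): increment and decrement inputs adjust a running total, whereas an occupancy input overwrites the value with the number of currently active triggers.

  class Counter:
      """Conceptual model of a VCAcore counter rule."""
      def __init__(self):
          self.value = 0

      def on_increment(self):          # e.g. an Enter rule fires
          self.value += 1

      def on_decrement(self):          # e.g. an Exit rule fires
          self.value -= 1

      def on_occupancy(self, active_triggers):
          # e.g. a Presence rule with two objects currently inside its zone
          self.value = active_triggers

      def reset(self):
          self.value = 0

  c = Counter()
  c.on_increment(); c.on_increment(); c.on_decrement()
  print(c.value)        # 1
  c.on_occupancy(2)
  print(c.value)        # 2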

13.5.10.5 Typical Logical Rule Combination

The below counter example increments a counter based on two enter rules, Enter Center and Enter Top, attached to the zones Center and Top respectively; this means that when either of these enter rules triggers, the counter will be incremented by 1. The counter also decrements based on the exit rule Exit, which subtracts 1 from the counter each time an object exits the zone Centre.

Only the counter rule Counter is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions. In this case an action using this rule as a source will trigger every time the counter changes.

13.5.11 Counting Line

A counting line is a detection filter optimized for directional object counting (e.g. people or vehicles) in busier detection scenarios. Examples of such applications may include:

In some scenes, such as entrances with a camera installed overhead, the counting line will typically generate a more accurate count than the aforementioned counters connected to a presence rule.

An event is generated every time an object crosses the line in the configured direction. If multiple objects cross the line together, multiple corresponding events are generated. These events can be directly used to trigger actions if the Can Trigger Actions property is checked.

Counting lines are attached to zones configured with a Line shape. See Zones for more information. If a counting line is configured with a zone not defined with a Line shape, the zone property will be automatically changed (it will not be possible to change the zone shape back until the counting line stops referencing the zone in question).

Counting lines have a specified direction indicated by the arrow in the UI (direction A or B); the direction of this arrow is governed by the configured zone. Each instance of the rule counts in a single direction. To count in both directions a second counting line rule must be added to the same zone with the opposite direction selected. An example rule graph of a two-way counting line configured with a counter is provided to illustrate this below.

NOTE: The maximum number of counting line filters that can be applied per video channel is 5.

13.5.11.1 Calibrating the Counting Line

In order to generate accurate counts, the counting line requires calibration. Unlike the object tracking engine, this cannot be performed at a general level for the whole scene using the 3D Calibration tool. This is because the counting line is not always placed on the ground plane; it may be placed at any orientation at any location in the scene. For example, a counting line could be configured vertically with a side-on camera view.

Instead of the 3D calibration tool, the counting line has its own calibration setting. Two bars equidistant from the centre of the line represent the width of the expected object. This allows the counting line to reject noise and also count multiple objects.

To calibrate the counting line:

13.5.11.2 Counting Line Calibration Feedback

To enable the user to more accurately configure the calibration for the counting line, the widths of detected objects are displayed as an overlay next to the counting line when objects pass over it. By default this display option is enabled. However, if it does not appear, ensure that the option is enabled on the Burnt-in Annotation settings.

The calibration feedback is rendered as black and white lines on either side of the counting line on the Zones configurations page. Each line represents an object detected by the counting algorithm. The width of the line shows the width of the object detected by the line. The last few detections are displayed for each direction with the latest one appearing closest to the counting line.

Each detection is counted as a number of objects based on the current width calibration. This is displayed as follows:

Using the feedback from the calibration feedback annotation, the width calibration can be fine tuned to count the correct sized objects and filter out spurious detections.
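As an illustration of the idea only (the exact algorithm used by VCAcore is not documented here), a detection whose measured width is roughly a multiple of the calibrated single-object width can be counted as that many objects, while much narrower detections are rejected as noise:

  def objects_in_detection(detected_width, calibrated_width):
      """Estimate how many calibrated-width objects one detection represents.

      Illustrative sketch: widths much smaller than the calibrated width
      are treated as noise and ignored.
      """
      if detected_width < 0.5 * calibrated_width:
          return 0                       # too narrow: likely noise
      return max(1, round(detected_width / calibrated_width))

  print(objects_in_detection(45, 50))    # 1 -> a single person
  print(objects_in_detection(95, 50))    # 2 -> two people crossing together
  print(objects_in_detection(10, 50))    # 0 -> rejected as noise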

13.5.11.3 Shadow Filter

The counting line features a shadow filter which is designed to remove the effects of object shadows affecting the counting algorithm. Shadows can cause inaccurate counting results by making an object appear larger than its true size or by joining two or more objects together. If shadows are causing inaccurate counting, the shadow filter should be enabled by selecting the Shadow Filter check box for the line. It is recommended that the shadow filter only be enabled when shadows are present because the algorithm can mistake certain parts of an object for shadows and this may lead to worse counting results. This is especially the case for objects that have little contrast compared to the background (e.g. people wearing black coats against a black carpet).

13.5.11.4 Graph View

13.5.11.5 Form View

13.5.11.6 Configuration

Property Description Default Value
Name A user-specified name for this rule "Line_Counter #"
Zone The zone this rule is associated with None
Direction Enable counting in the 'A' or 'B' direction (one direction per counting line) None
Enable Width Calibration Width calibration to allow more accurate counting None
Width Width calibration value 0
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active

13.5.11.7 Typical Logical Rule Combination

The below example has two line counters, Line_Counter A and Line_Counter B attached to the zone Center Line each with differing directions selected. Line_Counter A is configured to increment the counter, whilst Line_Counter B is configured to decrement the counter value.

Only the counter rule Counter is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions. In this case an action using this rule as a source will trigger every time the counter changes.

13.6 Logical Rule Types (Conditional Inputs)

Below is a list of the currently supported logical rules, along with a detailed description of each.

13.6.1 And

A logical operator that combines two rules and only fires events if both inputs are true.

13.6.1.1 Graph View

13.6.1.2 Form View

13.6.1.3 Configuration

Property Description Default Value
Name A user-specified name for this rule "And #"
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Input A The first input None
Input B The second input None
Per Target Fire one event per tracked object Active

If we consider a scene with two presence rules, connected to two separate zones, connected by an AND rule, the table below explains the behaviour of the Per Target property. Note that object here refers to a tracked object, as detected by the VCA tracking engine.

State Per Target Outcome
Object A in Input A, Object B in input B On Two events generated, one for each object
Object A in Input A, Object B in input B Off Only one event generated

Additionally, it is important to note that when per target is switched off, if the rule fires, it will not fire again until it is 'reset', i.e. until the AND condition is no longer true.

13.6.2 Or

A logical operator that combines two rules and fires events if either input is true.

13.6.2.1 Graph View

13.6.2.2 Form View

13.6.2.3 Configuration

Property Description Default Value
Name A user-specified name for this rule "Or #"
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Input A The first input None
Input B The second input None
Per Target Fire one event per tracked object Active

If we consider a scene with two presence rules, connected to two separate zones, connected by an OR rule, the table below explains the behaviour of the Per Target property.

State Per Target Outcome
Object A in Input A, Object B in input B On Two events generated, one for each object
No object in Input A, Object B in input B On Only one event generated (for Object B)
Object A in Input A, No object in input B On Only one event generated (for Object A)
Object A in Input A, Object B in input B Off Only one event generated
No object in Input A, Object B in input B Off Only one event generated
Object A in Input A, No object in input B Off Only one event generated

Additionally, it is important to note that when per target is switched off, if the rule fires, it will not fire again until it is 'reset', i.e. until the OR condition is no longer true.

13.6.3 Previous

A logical operator that triggers for input events which were active at some point in a past window of time. This window is defined as the period between the current time and the specified interval before the current time (set by the interval parameter value).
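Conceptually, the Previous operator keeps a short history of its input and reports true if the input was active at any point within the look-back window. A simplified Python sketch (not VCAcore code) is:

  import time

  class Previous:
      """Simplified model of the Previous logical operator."""
      def __init__(self, interval_ms=1000):
          self.interval = interval_ms / 1000.0
          self.activations = []          # timestamps at which the input was active

      def input_active(self):
          self.activations.append(time.time())

      def is_true(self):
          now = time.time()
          # Keep only activations inside the look-back window.
          self.activations = [t for t in self.activations if now - t <= self.interval]
          return bool(self.activations)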

13.6.3.1 Graph View

13.6.3.2 Form View

13.6.3.3 Configuration

Property Description Default Value
Name A user-specified name for this rule "Previous #"
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Input The input rule None
Per Target Fire one event per tracked object Active
Interval The time in milliseconds 1000ms

13.6.4 Continuously

A logical operator that fires events when its input has occurred continuously for a user-specified time.

13.6.4.1 Graph View

13.6.4.2 Form View

13.6.4.3 Configuration

Property Description Default Value
Name A user-specified name for this rule "Continuously #"
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Input The input rule None
Per Target Fire one event per tracked object. See description below for more details Active
Interval The time in milliseconds 1000ms

Considering a scene with one zone, a presence rule associated with that zone, and a Continuously rule attached to that presence rule, when the Per Target property is on, the rule will generate an event for each tracked object that is continuously present in the zone. When it is off, only one event will be generated by the rule, even if there are multiple tracked objects within the zone. Additionally, when Per Target is off, the rule will only generate events when there is a change of state - i.e. the rule condition changes from true to false or vice versa. When Per Target is off, the state will change when:

13.7 Logical Rule Examples

13.7.1 Dwell Rule

A dwell rule triggers when an object has remained in a zone for a specified amount of time. The dwell rule can be easily implemented as a logical rule by combining the 'Presence' and 'Continuously' rules. The 'Presence' rule is added first and linked to a zone, followed by the 'Continuously' rule, which is linked to the 'Presence' rule. The interval parameter of the 'Continuously' rule is the time the object has to remain in the zone before an event is triggered. The graph of a dwell rule is as follows:

A simple use-case for this rule is an area where loitering is prohibited. The dwell rule can be used to detect this behaviour.

In the following image the person has been in the zone for longer than 5 seconds, whereas the vehicle has not. Hence the person generates an event but the vehicle does not.
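For clarity, the behaviour of the Presence and Continuously combination can be modelled as a simple per-object timer. The Python sketch below is only an illustration of the logic, not VCAcore code; the 5 second interval matches the example above.

  class Dwell:
      """Illustrative dwell check: Presence combined with Continuously."""
      def __init__(self, interval_s=5.0):
          self.interval = interval_s
          self.entered_at = {}           # object id -> time it entered the zone

      def update(self, object_id, in_zone, now):
          if not in_zone:
              self.entered_at.pop(object_id, None)
              return False
          start = self.entered_at.setdefault(object_id, now)
          return (now - start) >= self.interval   # True -> dwell event for this object

  d = Dwell(interval_s=5.0)
  print(d.update("person-1", True, now=0.0))   # False: just entered
  print(d.update("person-1", True, now=6.0))   # True: present for longer than 5 s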

13.7.2 Double-knock Rule

The 'double-knock' logical rule triggers when an object enters a zone, having previously entered another defined zone within a set period of time. The time interval on the 'Previous' rule in the graph determines how much time can elapse between the object entering the first and then the second zone. The graph for a double-knock logical rule is as follows:

The rule may be interpreted as follows: 'An object is in Zone 2, and was previously in Zone 1 in the last 1000 milliseconds'. This rule can be used as a robust way to detect entry into an area. Since the object has to enter two zones in a specific order, it has the ability to eliminate false positives that may arise from a simple Presence rule.
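A compact sketch of the composition follows (again, only an illustration of the logic, not VCAcore code): presence in Zone 2 combined with previous presence in Zone 1 within the configured window.

  def double_knock(events, window_ms=1000):
      """events: list of (timestamp_ms, zone) presence events for one tracked object.

      Returns True if the object is present in 'Zone 2' and was present in
      'Zone 1' within the preceding window.
      """
      zone2_times = [t for t, z in events if z == "Zone 2"]
      if not zone2_times:
          return False
      t2 = max(zone2_times)
      return any(z == "Zone 1" and 0 <= t2 - t <= window_ms for t, z in events)

  track = [(200, "Zone 1"), (900, "Zone 2")]
  print(double_knock(track))    # True: Zone 1 then Zone 2 within 1000 ms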

13.7.3 Presence in A or B

This rule triggers when an object is present in either Zone A or Zone B. Its graph is as follows:

A typical use case for this rule is having multiple areas where access is prohibited, but the areas cannot be easily covered by a single zone. Two zones can be created, associated with two separate presence rules, and they can then be combined using an Or rule.

13.8 Usage notes

13.9 Next Steps

Learn more about Calibration.

14 Calibration

Camera calibration is required in order for VCAcore to classify objects into different object classes. Once a channel has been calibrated, VCAcore can infer real-world object properties such as speed, height and area and classify objects accordingly.

Camera calibration is split into the following sub-topics:

14.1 Enabling Calibration

By default calibration is disabled. To enable calibration on a channel, check the Enable Calibration checkbox.

14.2 Calibration Controls

The calibration page contains a number of elements to assist with calibrating a channel as easily as possible. Each is described below.

14.2.1 3D Graphics Overlay

During the calibration process, the features in the video image need to be matched with a 3D graphics overlay. The 3D graphics overlay consists of a green grid that represents the ground plane. Placed on the ground plane are a number of 3D mimics (people-shaped figures) that represent the dimensions of a person with the current calibration parameters. The calibration mimics are used for verifying the size of a person in the scene and are 1.8 metres tall.

The mimics can be moved around the scene to line up with people (or objects of a known height, comparable to a person).

14.2.2 Mouse Controls

The calibration parameters can be adjusted with the mouse as follows:

  - Click and drag the ground plane to change the camera tilt angle.
  - Use the mouse wheel to adjust the camera height.
  - Drag the slider to change the vertical field of view.

Note: The sliders in the control panel can also be used to adjust the camera tilt angle and height.

14.2.3 Control Panel Items

The control panel (shown on the right hand side in the image above) contains the following controls:

14.2.4 Context Menu Items

Right-clicking the mouse (or tap-and-hold on a tablet) on the grid displays the context menu:

Performing the same action on a mimic displays the mimic context menu:

The possible actions from the context menu are:

14.3 Calibrating a Channel

Calibrating a channel is necessary in order to estimate object parameters such as height, area, speed and classification. If the height, tilt angle and vertical field of view corresponding to the installation are known, these can simply be entered as parameters in the appropriate fields in the control panel.

If however, these parameters are not explicitly known this section provides a step-by-step guide to calibrating a channel.

14.3.1 Step 1: Find People in the Scene

Find some people, or some people-sized objects in the scene. Try to find a person near the camera, and a person further away from the camera. It is useful to use the play/pause control to pause the video so that the mimics can be accurately placed. Place the mimics on top of or near the people:

14.3.2 Step 2: Enter the Camera Vertical Field of View

Determining the correct vertical field of view is important for an accurate calibration. The following table shows pre-calculated values for vertical field of view for different sensor sizes.

Focal Length(mm) 1 2 3 4 5 6 7 8 9 10 15 20 30 40 50
CCD Size (in) CCD Height(mm)
1/6" 1.73 82 47 32 24 20 16 14 12 11 10 7
1/4" 2.40 100 62 44 33 27 23 19 17 15 14 9 7
1/3.6" 3.00 113 74 53 41 33 28 24 21 19 12 11 9 6
1/3.2" 3.42 119 81 59 46 38 32 27 24 21 16 13 10 7
1/3" 3.60 122 84 62 48 40 33 29 25 23 20 14 10 7 5
1/2.7" 3.96 126 89 67 53 43 37 32 28 25 22 15 11 8 6
1/2" 4.80 135 100 77 62 51 44 38 33 30 27 18 14 9 7 5
1/1.8" 5.32 139 106 83 67 56 48 42 37 33 30 20 15 10 8 6
2/3" 6.60 118 95 79 67 58 50 45 40 37 25 19 13 9 8
1" 9.60 135 116 100 88 77 69 62 56 51 35 27 18 14 11
4/3" 13.50 132 119 107 97 88 80 74 68 48 37 25 19 15

If the table does not contain the relevant parameters, the vertical FOV can be estimated by viewing the extremes of the image at the top and bottom. Note that without the correct vertical FOV, it may not be possible to get the mimics to match people at different positions in the scene.
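The values in the table are consistent with the standard pinhole-camera relationship between sensor height and focal length, so missing combinations can be computed directly. The short Python sketch below shows the calculation; values are rounded to whole degrees to match the table.

  import math

  def vertical_fov_degrees(sensor_height_mm, focal_length_mm):
      """Vertical field of view from sensor (CCD) height and lens focal length."""
      return math.degrees(2 * math.atan(sensor_height_mm / (2 * focal_length_mm)))

  # Example: a 1/3" sensor (CCD height 3.60 mm) with a 4 mm lens
  print(round(vertical_fov_degrees(3.60, 4)))   # 48, as in the table above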

14.3.3 Step 3: Enter the Camera Height

If the camera height is known, type it in directly. If the height is not known, estimate it as accurately as possible and type it in directly.

14.3.4 Step 4: Adjust the Tilt Angle and Camera Height

Adjust the camera tilt angle (and height if necessary) until both mimics are approximately the same size as a real person at that position in the scene. Click and drag the ground plane to change the tilt angle and use the mouse wheel or control panel to adjust the camera height.

The objective is to ensure that mimics placed at various locations on the grid line up with people or people-sized objects in the scene.

Once the parameters have been adjusted, the object annotation will reflect the changes and classify the objects accordingly.

14.3.5 Step 5: Verify the Setup

Once the scene is calibrated, drag or add mimics to different locations in the scene and verify they appear at the same size/height as a real person would. Validate that the height and area reported by the VCAcore annotation look approximately correct. Note that the burnt-in annotation settings in the control panel can be used to enable and disable the different types of annotation.

Repeat step 4 until the calibration is acceptable.

Tip: If it all goes wrong and the mimics disappear or get lost due to an odd configuration, select one of the preset configurations to restore the configuration to normality.

14.4 Advanced Calibration Parameters

The advanced calibration parameters allow the ground plane to be panned and rolled without affecting the camera calibration parameters. This can be useful to visualize the calibration setup if the scene has pan or roll with respect to the camera.

Note: the pan and roll advanced parameters only affect the orientation of the 3D ground plane so that it can be more conveniently aligned with the video scene; they do not affect the calibration parameters.

14.5 Next Steps

Once the channel has been calibrated, the Classification Settings can be configured.

15 Classification

VCAcore can perform object classification once the camera has been calibrated. The object classification is based on properties extracted from the object including object area and speed. VCAcore comes pre-loaded with the most common object classes, and in most cases these will not need to be modified. In some situations it might be desirable to change the classification parameters, or add new object classes.

Each of the UI elements are described below:

To add a new classification group, click the Add Classifier button.

15.1 Object Classification

Objects are classified according to how their calibrated properties match the classification groups. Each classification group specifies a speed range and an area range. Objects which fall within both ranges of speed and area will be classified as being an object of the corresponding class.

Note: If multiple classes contain overlapping speed and area ranges then object classification may be ambiguous (since an object will match more than one class). In this case the actual classification is not specified and may be any one of the overlapping classes.
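
For example, a set of classification groups might look like the following (the class names, ranges and units here are illustrative only and should be tuned for each site via the classification UI):

Class    Area range    Speed range
Person   0.1 - 2 m²    0 - 20 km/h
Vehicle  2 - 20 m²     0 - 200 km/h

An object with a calibrated area of 1.5 m² travelling at 5 km/h falls within the Person ranges only, and would therefore be classified as a Person.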

15.2 Deep-Learning Filter

VCAcore also supports classification through the use of the deep-learning filter. For an outline of its functionality and how it interacts with object classification on calibrated scenes, please see Deep-Learning.

15.3 Next Steps

Learn more about Tamper Detection.

16 Tamper Detection

The Tamper Detection module is intended to detect camera tampering events such as bagging, de-focusing and moving the camera. This is achieved by detecting large persistent changes in the image.

16.1 Enabling Tamper Detection

To enable tamper detection click the Enabled checkbox.

16.2 Advanced Tamper Detection Settings

The advanced tamper detection settings allow adjustment of the thresholds for the area of the image that must change, and the length of time it must remain changed, before a tamper event is triggered.

If false alarms are a problem, the duration and/or area should be increased so that large transient changes, such as a close object temporarily obscuring the camera, do not cause false alarms.
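
For example, with an area threshold of 40% and a duration of 20 seconds (illustrative values; the actual parameter names and defaults are shown in the advanced settings panel), a person briefly passing close to the lens would not trigger a tamper event, whereas a bag placed over the camera for longer than 20 seconds would.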

16.3 Notification

When camera tampering is detected, a tamper event is generated. This event is transmitted through any output elements as well as being displayed in the video stream:

16.4 Next Steps

Learn more about Scene Change Detection.

17 Scene Change Detection

The scene change detection module resets the tracking algorithm when it detects a large persistent change in the image. This prevents the tracking engine from detecting image changes as tracked objects which could be potential sources of false alarms.

The kinds of changes the scene change detection module detects are as follows:

17.1 Scene Change Settings

There are 3 options for the scene change detection mode:

17.1.1 Automatic

This is the default setting and automatically uses the recommended parameters. Use the automatic setting unless scene change detection is causing difficulties.

17.1.2 Disabled

Scene change detection is disabled.

Note that when the scene change detection is disabled, gross changes in the image will not be detected. For example, if a truck parks in front of the camera the scene change will not be detected and false events may occur as a result.

17.1.3 Manual

Allows user configuration of the scene change detection algorithm parameters.

If automatic mode is triggering in situations where it's not desired (e.g. it's too sensitive, or not sensitive enough) then the parameters can be adjusted to manually control the behaviour.

In the manual mode the following settings are available:

When both the time and area thresholds are exceeded the scene is considered to have changed and will be reset.

If false scene change detections are a problem, the time and/or area should be increased so that large transient changes such as a close object temporarily obscuring the camera do not cause false scene change detections.
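
For example, with the area threshold set to 50% and the time threshold set to 10 seconds (illustrative values only), a truck parking in front of the camera and obscuring most of the view for longer than 10 seconds would cause the scene to be re-learnt, whereas a pedestrian passing close to the lens for a second or two would not.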

17.2 Notification

When a scene change is detected, the scene is re-learnt and a message is displayed in the event log and annotated on the video.

17.3 Next Steps

Learn more about Burnt-in Annotation.

18 Burnt-in Annotation

The Burnt-in Annotation setting allows the VCAcore annotation to be burnt in to the raw video stream. Annotations can include tracked objects, counters and system messages.

18.1 Burnt-in Annotation Settings

The burnt-in annotation settings control which portions of the VCAcore metadata (objects, events, etc) are rendered into the video stream.

Note: to display object parameters such as speed, height, area and classifications, the channel must first be calibrated.

18.2 Display Event Log

Check the Display Event Log option to show the event log in the lower portion of the image.

18.3 Display Zones

Check the Display Zones option to show the outline of any configured zones.

18.4 Display Objects

Check the Display Objects option to show the bounding boxes of tracked objects. Objects which are not in an alarmed state are rendered in yellow. Objects rendered in red are in an alarmed state (i.e. they have triggered a rule).

18.4.1 Object Speed

Check the Object Speed option to show the object speed.

18.4.2 Object Height

Check the Object Height option to show the object height.

18.4.3 Object Area

Check the Object Area option to show object area.

18.4.4 Object Classification

Check the Object Classification option to show the object classification.

18.5 Display Line Counters

Check the Display Line Counters option to display the line counter calibration feedback information. See the Rules topic for more information.

18.6 Display Counters

Check the Display Counters option to display the counter names and values. See the Counters topic for more information.

18.6.1 System Messages

System messages (e.g. 'Learning Scene') are currently always rendered into the video stream.

18.7 Next Steps

Learn more about Advanced Settings.

19 Advanced Settings

In most installations, the default VCAcore configuration will suffice. However, in some cases, better performance can be achieved with modified parameters. The Advanced settings page allows configuration of the advanced VCAcore parameters.

19.1 Parameters

19.1.1 Alarm Holdoff Time

The Alarm Holdoff Time defines the minimum time between successive alarms generated by the same object triggering the same rule. To explain this concept, consider the following diagram where no Alarm Holdoff Time is configured:

In this detection scenario, the person enters the zone 3 times. At each point an alarm is raised, resulting in a total of 3 alarms. With the Alarm Holdoff Time configured, it's possible to prevent re-triggering of the same rule for the same object within the configured time period.

Consider the same scenario, but with an Alarm Holdoff Time of 5 seconds configured:

In this case, an alarm is not raised when the person enters the zone for the second time, because the time elapsed since the last alarm of the same type for that object is less than the Alarm Holdoff Time. When the person re-enters the zone for a third time, the elapsed time since the previous alarm of the same type for that object is greater than the Alarm Holdoff Time, and a new alarm is generated. In essence, the Alarm Holdoff Time can be configured to prevent multiple alarms being generated by an object loitering on the edge of a zone. Without an Alarm Holdoff Time configured, this scenario would cause so-called "alarm chatter".

The default setting for the Alarm Holdoff Time is 5 seconds.
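
As a concrete illustration with the default 5 second holdoff, suppose the same object triggers the same rule at 0, 3 and 9 seconds: alarms are raised at 0 and 9 seconds, but the trigger at 3 seconds is suppressed because less than 5 seconds have elapsed since the previous alarm of the same type for that object.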

19.1.2 Stationary Object Hold-on Time

The Stationary Object Hold-on Time defines the amount of time that an object will be tracked by the engine once it becomes stationary. Since objects which become stationary must be "merged" into the scene after some finite time, the tracking engine will forget about objects that have become stationary after the Stationary Object Hold-on Time.

The default setting is 60 seconds.

19.1.3 Minimum Tracked Object Size

The Minimum Tracked Object Size defines the size of the smallest object that will be considered for tracking.

For most applications the default setting of 10 is recommended. In some situations, where extra sensitivity is required, the value can be manually specified. While lower values allow the engine to track smaller objects, they may increase the susceptibility to false detections.

19.1.4 Camera Shake Cancellation

Enabling Camera Shake Cancellation stabilises the video stream before the analytics process runs. This can be useful where the camera is installed on a pole or unstable platform and subject to sway or shake.

It's recommended to only enable this option when camera shake is expected in the installation scenario.

19.1.5 Detection Point of Tracked Objects

For every tracked object, a point is used to determine the object's position, and evaluate whether it intersects a zone and triggers a rule. This point is called the detection point.

There are 3 modes that define the detection point relative to the object:

19.1.5.1 Automatic

In automatic mode, the detection point is automatically set based on how the channel is configured. It selects 'Centroid' if the camera is calibrated overhead, or 'Midbottom' if the camera is calibrated side-on or uncalibrated.

19.1.5.2 Centroid

In this mode, the detection point is forced to be the centroid of the object.

19.1.5.3 Midbottom

In this mode, the detection point is forced to be the middle of the bottom edge of the tracked object. Normally this is the ground contact point of the object (where the object intersects the ground plane).

19.1.6 Loss Of Signal Emit Interval

The Loss Of Signal Emit Interval defines the amount of time between event emissions when a channel loses signal to its source.

The default setting is 1 second.

19.2 Next Steps

Learn more about the System Settings.

20 Deep-Learning

The deep-learning page allows the user to enable and configure the optional deep-learning modules in VCAcore.

20.1 Deep-Learning (DL) Filter

The deep-learning filter is an optional module which uses a deep-learning classification engine to filter out false alarms. The deep-learning filter requires neither the source input to be calibrated nor the object classifier to be configured. With the filter enabled, any events generated by the currently configured rules will be passed into the deep-learning filter for verification. If the filter classifies the object which generated the event as background (not a person or vehicle), the event will be filtered out and any attached actions will not be triggered.

When the filter is enabled, the classification (with confidence percentage) will be shown in the event log in the burnt-in annotations:

The classification data from the deep-learning filter can also be accessed via the template tokens.

When using VCAbridge the deep-learning filter comes pre-loaded within the firmware (v1.0.1 or higher) and when enabled runs on the CPU. In order to use the deep-learning filter on VCAserver, the deep-learning filter add-on must be installed. On VCAserver the filter can optionally use GPU acceleration, which requires that the GPU additions add-on is also installed. GPU acceleration requires a NVIDIA GPU with CUDA Compute Capability 3.5 or higher and CUDA 9.0 must be installed.

Without GPU acceleration, enabling the filter on multiple channels which are generating a high volume of events (more than 1 per second) may result in poor performance of the system.

Please note, as the deep-learning filter is trained to detect people and vehicles, if custom object classes have been configured in the object classifier, the deep-learning filter may erroneously filter those alerts out. In these cases, use of the deep-learning filter is not recommended.

20.1.1 Class Parameters

Each of the possible object classifications has additional parameters:

21 System Settings

The system settings page facilitates administration of system-level settings such as network configuration and authentication.

On the VCAbridge platform, additional network configuration settings, system time and other platform specific settings are also provided.

21.1 Network Settings

The network configuration of the device can be changed in the network settings configuration section:

21.2 Authentication Settings

The VCA system can be protected against unauthorised access by enabling authentication. By default, authentication is enabled and the default credentials must be entered when accessing the device for the first time. Authentication applies to all functions including the web interface and API, RTSP server and discovery interfaces.

21.2.1 Enabling Authentication

Click the Enable button to enable authentication.

The password must be confirmed before authentication can be enabled in order to prevent the user being locked out of the device.

21.2.2 Changing the Password

Click the Change Password button to change the password.

Enter the new password, and confirm the current password in order to apply the changes.

WARNING: If the password is forgotten, the device will not be accessible. The only way to recover access to a device without a valid password is to perform a physical reset as described in the Forgotten Password section.

21.2.3 Disabling Authentication

Click the Disable button to disable authentication and allow users to access the device without entering a password. The password is required to disable authentication.

21.2.4 Default Credentials

The default credentials are as follows:

21.2.5 Forgotten Password

If a system becomes inaccessible due to a lost password, the only way to recover access to the device is to delete the configuration file VCAcore is using. This process differs between platforms:

21.3 VCAbridge Specific Settings

The following settings are specific to the VCAbridge platform.

21.3.1 Network Settings (cont.)

21.3.2 Time Settings

The system time settings of the VCAbridge device can be changed in the time settings configuration section:

21.3.3 Digital Input

If digital inputs are available, the input sensors can be configured in two different modes:

21.3.4 System Information

The system information section shows the device up-time (how long the device has been running without restarting):

21.3.5 Power Settings

The power settings section supports device maintenance functions:

21.3.6 Software Upgrade

22 Template Tokens

VCAcore can be set up to perform a specific action when an analytic event occurs. Examples include sending an email, TCP or HTTP message to a server.

VCAcore allows templated messages to be written for email, TCP and HTTP actions which are automatically filled in with the metadata for the event. This allows the details of the event to be specified in the message that the action sends, e.g. the location of the object, type of event, etc.

22.1 Syntax

The templating system uses mustache, which is widely used and well-documented online.

A brief overview of the templating syntax will be provided here.

Templated messages can be written by using tokens in the message body. For example:

Hello {{name}}!

is a template with a name token. When the template is processed, the event metadata is checked to see if it has a name entry. If it does, the {{name}} token is replaced with the name of the event. If it isn't present, the token will be replaced with blank space.

If an event with the name Presence occurs, the processed template will be Hello Presence! but if it doesn't have a name, it will be Hello !

Some tokens may also have sub-properties which can be accessed as follows:

It happened at {{start.hours}}!

22.1.1 Conditionals

Tokens can also be evaluated as boolean values, allowing simple conditional statements to be written:

{{#some_property}}Hello, world!{{/some_property}}

In this example, if some_property is present in the event metadata, then "Hello, world!" will appear in the message. Otherwise, nothing will be added to the message.

If some_property is a boolean, then its value will determine whether or not the conditional is entered. If some_property is an array property, it will only evaluate as true if the array is not empty.

22.1.2 Arrays

Finally, tokens can also be arrays which can be iterated over. For example:

{{#object_array}}
{{name}} is here!
{{/object_array}}

This template will iterate through each item in object_array and print its name, if it has a name property. For example, the array [{"name": "Bob"}, {"name": "Alice"}, {"name": "Charlie"}] will result in the following output:

Bob is here!
Alice is here!
Charlie is here!

22.2 List of tokens

Lower case names represent tokens that can be used with the {{token}} syntax. Upper case names represent boolean or array properties that should be used with the {{#token}}...{{/token}} syntax.

22.2.1 {{name}}

The name of the event

22.2.2 {{id}}

The unique id of the event

22.2.3 {{type.string}}

The type of the event. This is usually the type of rule that triggered the event
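
For example, the template line:

Event type: {{type.string}}

might render as Event type: presence for an event triggered by a presence rule (the exact string depends on the rule type).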

22.2.4 {{type.name}}

This is a boolean property that allows conditionals to be performed on the given type name.

For example, to print something only for events of type "presence":

{{#type.presence}}My text{{/type.presence}}

22.2.5 {{start}}

The start time of the event. It has the following subproperties:

The iso8601 property is a date string in the ISO 8601 format.

The offset property is the timezone offset.
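
For example, using the iso8601 sub-property:

Start time: {{start.iso8601}}

might render as:

Start time: 2017-04-21T10:09:42+00:00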

22.2.6 {{end}}

The end time of the event. Same properties as start

22.2.7 {{host}}

The hostname of the device that generated the event

22.2.8 {{#Channel}}{{id}}{{/Channel}}

The id of the channel that the event occurred on

22.2.9 {{#Zone}}

An array of the zones associated with the event.

Sub-properties:

Example:

{{#Zone}}
id: {{id}}
name: {{name}}
channel:{{channel}}
colour: ({{colour.r}}, {{colour.g}}, {{colour.b}}, {{colour.a}})
{{/Zone}}

22.2.10 {{#Rule}}

An array of the rules associated with the event.

Sub-properties:

Example:

{{#Rule}}
id: {{id}}
name: {{name}}
type:{{type}}
{{/Rule}}

22.2.11 {{#Object}}

An array of the objects that triggered the event.

Sub-properties:

Example:

{{#Object}}
id: {{id}}
Top left corner: ({{outline.rect.top_left.x}}, {{outline.rect.top_left.y}})
{{/Object}}

22.2.12 {{outline}}

The bounding box outline of an object or zone

Sub-properties:

22.2.13 {{#CountingLine}}

An array of line counter counts.

Sub-properties:

Example:

{{#CountingLine}}
rule_id: {{rule_id}}
calibration width: {{width}}
position: {{position}}
count: {{count}}
direction: {{direction}}
{{/CountingLine}}

22.2.14 {{#Counter}}

An array of counter counts.

Sub-properties:

Example:

{{#Counter}}
id: {{id}}
name: {{name}}
count: {{value}}
{{/Counter}}

22.2.15 {{#Tamper}}

A boolean that is true if a camera tamper has been detected

Example:

{{#Tamper}}The camera has been tampered with!{{/Tamper}}

22.2.16 {{#Area}}

The estimated area of the object. This token is a property of the object token. It is only produced if calibration is enabled.

Sub-properties:

Example:

{{#Object}}{{#Area}}{{value}}{{/Area}}{{/Object}}

22.2.17 {{#Height}}

The estimated height of the object. This token is a property of the object token. It is only produced if calibration is enabled.

Sub-properties:

Example:

{{#Object}}{{#Height}}{{value}}{{/Height}}{{/Object}}

22.2.18 {{#GroundPoint}}

The estimated position of the object. This token is a property of the object token. It is only produced if calibration is enabled.

Sub-properties:

Example:

{{#Object}}{{#GroundPoint}}Position: ({{value.x}}, {{value.y}}){{/GroundPoint}}{{/Object}}

22.2.19 {{#Speed}}

The estimated speed of the object. This token is a property of the object token. It is only produced if calibration is enabled.

Sub-properties:

Example:

{{#Object}}{{#Speed}}{{value}}{{/Speed}}{{/Object}}

22.2.20 {{#Classification}}

The classification of the object. This token is a property of the object token. It is only produced if calibration is enabled.

Sub-properties:

Example:

{{#Object}}{{#Classification}}{{value}}{{/Classification}}{{/Object}}

22.2.21 {{#DLClassification}}

The classification generated by the Deep-Learning Filter. The filter must be enabled in order to produce this token, but calibration is not required.

Sub-properties:

Example:

{{#DLClassification}}
Class: {{class}}
Confidence: {{confidence}}
{{/DLClassification}}

22.3 Examples

The following is an example of a template using most of the available tokens:

Event #{{id}}: {{name}}
Event type: {{type}}
Start time (ISO 8601 format): {{start.iso8601}}
End time:
day: {{end.day}}
time: {{end.hour}}:{{end.minutes}}:{{end.seconds}}.{{end.microseconds}}
Device: {{host}}
Channel: {{#Channel}}{{id}}{{/Channel}}
{{#type.presence}}
{{#Object}}
Object ID: {{id}}
{{#Classification}}Object Classification: {{value}}{{/Classification}}
{{#Height}}Object Height: {{value}}m{{/Height}}
Object bounding box: [
  ({{outline.rect.top_left.x}}, {{outline.rect.top_left.y}}),
  ({{outline.rect.bottom_right.x}}, {{outline.rect.top_left.y}}),
  ({{outline.rect.bottom_right.x}}, {{outline.rect.bottom_right.y}}),
  ({{outline.rect.top_left.x}}, {{outline.rect.bottom_right.y}})
]
{{/Object}}
{{/type.presence}}

{{#Counter}}
Counter triggered.
id: {{id}}
name: {{name}}
count: {{count}}
{{/Counter}}

{{#LineCounter}}
rule_id: {{rule_id}}
calibration width: {{width}}
position: {{position}}
count: {{count}}
direction: {{direction}}
{{/LineCounter}}

In this example, the object information is only printed for events of type "presence".

This template might result in the following message:

Event #350: My Bad Event
Event type: presence
Start time (ISO 8601 format): 2017-04-21T10:09:42+00:00
End time:
day: 21
time: 10:09:42.123456
Device: mysecretdevice
Channel: 0

Object ID: 1
Object Classification: Person
Object Height: 1.8m
Object bounding box: [
  (16000, 30000),
  (32000, 30000),
  (32000, 0),
  (16000, 0)
]

Counter triggered.
id: 10
name: My Counter
count: 1

rule_id: 350
calibration width: 1
position: 1
count: 1
direction: 0

23 RTSP Server

VCAcore supports an RTSP server that streams annotated video in RTSP format.

The RTSP URL for channels on a VCA device is as follows:

rtsp://<device ip>:554/channels/<channel id>
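
For example, for a device at 192.168.0.10 (an illustrative address) and channel 0, the annotated stream can be opened in any RTSP-capable player, such as VLC:

rtsp://192.168.0.10:554/channels/0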

24 Sureview Immix

VCAcore supports the notification of events with annotated snapshots and streaming of real-time annotated video to Sureview Immix.

24.1 Prerequisites

The following ports need to be accessible on the VCA device (i.e. a VCAbridge or an instance of VCAserver) from the Immix server:

24.2 Limitations

24.3 Immix Configuration

24.3.1 Add VCA Device

The first step is to add the VCA device.

In the Immix site configuration tab, click Manage Devices and Alarms, then Add Device:

On the Add Device page, set the following options:

24.3.2 Add Camera

Once the device has been added, channels from the VCA device can be added.

Note: Immix currently supports only one VCA channel per device. To support more channels, simply add more devices.

Click the Cameras tab and Add a Camera to add a new channel:

On the Camera Details page set the following options:

24.3.3 Setting the Input in Immix

In order to set the Input value correctly in Immix, the following steps should be followed:

Channel Id in VCA Input in Immix
0 1
1 2
2 3
5 6
100 101

The reason that the Immix Input is 1 higher than the VCA channel Id is that Immix uses 1-based inputs but VCA uses 0-based channel Ids.
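
In other words: Input in Immix = Channel Id in VCA + 1.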

24.3.4 Retrieve a Summary

Generating a summary provides a single document with all of the details necessary to configure the VCA device. Click the Summary tab and a PDF report is created:

Make a note of the email addresses highlighted in red. These email addresses need to be entered in the VCA device configuration (see next section).

24.4 VCAcore Configuration

Once a device and camera are configured in Immix, the email addresses generated as part of the summary need to be added to the VCAcore configuration.

VCAcore notifies Immix of events via email, so each channel configured for Immix needs to have an email action configured. For more details on how to configure Actions or Sources see the corresponding topics.

24.4.1 Add an Email Action

Add an Email action with the following configuration:

Once this is done, add the correct source to the email action.

24.4.2 Event Type Mappings

The event types reported in the VCAcore interface are slightly different to the event types reported in the Immix client. The events are mapped as follows:

Event in VCA Event in Immix
Presence Object Detected
Enter Object Entered
Exit Object Exited
Appear Object Appeared
Disappear Object Disappeared
Stopped Object Stopped
Dwell Object Dwell
Direction Object Direction
Speed Object Speed
Tailgating Tailgaiting
Tamper Tamper Alarm

25 Verifier

VCAcore supports the notification of events with annotated snapshots to Verifier.

25.1 Prerequisites

A Verifier account is required; please contact your VCA sales representative for details.

25.2 VCAcore Configuration

Once you have a Verifier account set up, an email action using the Verifier template must be configured for VCAcore to send data to Verifier. See below (or Actions) for more details on how to configure a Verifier email action within VCAcore.

25.2.1 Add an Email Action

Add an Email action with the following configuration:

Once this is done, add the correct source to the email action.

26 Troubleshooting

26.1 Error when starting the VCAcore service on Windows 10

26.1.1 Issue

When starting VCAcore as a service, you get an error similar to this:

26.1.2 Solution

The most common cause of this error is one of the ports VCAcore is trying to bind to is already in use by another service or application. To fix this you can manually specify different ports:

  1. Open the Services window.
  2. Find the VCA Core service listing, right-click it and select Properties.
  3. In the Start Parameters field, insert -p 8989, where 8989 is the port you want VCAcore's webserver to bind to. Once defined, click the Start button to start the service with these settings.

After doing this once, the service should start correctly and the -p 8989 text will disappear. The setting is saved to the VCAcore configuration, so VCAcore will continue to run on port 8989 until it is changed again.

Another useful Start Parameter to be aware of allows the definition of the port for VCAcore's RTSP server. In this case insert --rtsp-port 1234 in the Start Parameters field, where 1234 is the desired port for VCAcore's RTSP server. This can be done at the same time as the webserver port.
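
For example, to change both ports at once, the Start Parameters field could contain (the ports shown are illustrative):

-p 8989 --rtsp-port 1234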

26.2 Error when starting vca-cored application on Ubuntu

26.2.1 Issue

When starting VCAcore the following message is displayed Error attaching rtsp server

26.2.2 Solution

Typically, if the VCAserver application does not start, it is due to a port conflict: VCAcore and another application are trying to bind to the same port. The conflict can be on either the web server serving the VCAcore UI or the RTSP server. To fix this, specify different ports manually when starting the application (-p for the web server, --rtsp-port for the RTSP server). Once VCAcore starts successfully with these parameters, the configuration file persists the changes, so the parameters do not need to be specified each time VCAcore is started.

./vca-cored -p 8989 --rtsp-port 1234

26.3 Service Unavailable, Your GStreamer installation is missing a plug-in

26.3.1 Issue

When adding a source to VCAcore you are presented with the following message in the View Sources page:

26.3.2 Solution

This error will be presented when the source input into VCAcore is encoded with an unsupported codec. Please ensure that the source being added is encoded with one of VCAcore's Supported Video Sources.

26.4 No Option to Backup a VCAserver Configuration File (Windows or Linux)

26.4.1 Issue

Within VCAserver the UI option to backup or restore a configuration file is not provided.

26.4.2 Solution

As VCAserver is designed to run on open platforms, access to the VCAcore configuration file is readily available. The locations of the configuration files are provided in the Installing VCAcore section of the manual. These files can be copied in and out of these default locations, allowing for the backup and restoring of configurations where required.

Please note that configuration files are not currently compatible between different installs of VCAcore.