VCA Documentation

v2.3.2

1 Introduction

This is the user manual for the VCAserver video analytics framework developed by VCA Technology. This manual describes how to set up and configure the video analytics to detect events of interest, whilst minimising false alerts.

VCAserver is available as a server application for Windows 11/Server, Ubuntu 22.04 (x86) or Ubuntu 22.04 (ARMv8).

See Getting Started for the essentials necessary to get VCAserver configured and start generating metadata.

2 Getting Started

This user guide documents each topic in detail, and each topic is accessible via the menu. However, to get started quickly, the essential topics are listed below.

2.1 Fundamentals

2.2 User Credentials

Note that the default username and password for the VCAcore platform are:

2.3 Advanced Topics

Once the basic settings are configured, the following advanced topics may be relevant:

3 Prerequisites

For the purposes of this document, it is assumed that VCAserver will be installed on a dedicated hardware platform.

3.1 Hardware

The hardware specifications for a given system will depend on the intended number of video channels to be processed, as well as which trackers and algorithms will be run on those channels. Some initial guidelines are provided below:

3.1.1 x86

3.2 Software

As the possible combinations of operating system, drivers and hardware are so varied, software requirements are based on the configurations used internally for testing.

3.2.1 Environment

To ensure a host system is ready to run the analytics, it is advised that the following checks are made.
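
The example output below is indicative of one such check on a system fitted with a Hailo-8 accelerator; output of this form is typically produced by the HailoRT identification utility (e.g. hailortcli fw-control identify), and the exact fields and values will vary with the installed hardware and firmware.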

Executing on device: 0000:01:00.0
Identifying board
Control Protocol Version: 2
Firmware Version: 4.18.0 (release,app,extended context switch buffer)
Logger Version: 0
Board Name: Hailo-8
Device Architecture: HAILO8
Serial Number: <N/A>
Part Number: <N/A>
Product Name: <N/A>

4 Installing VCAserver

Installation instructions for the various platforms supported by VCAcore vary slightly and are outlined below.

4.1 VCAserver (Windows 10)

VCAserver is installed as a service called VCA core and can be managed using the Windows service manager.

The configuration for VCAcore is stored in: C:\VCACore

The VCAserver MSI package installs the base analytics engine, interface and deep learning models.

4.1.1 Changing Ports

The VCAserver installation can be modified to reconfigure the Web UI and the Recovery service ports. Navigate to Apps & Features within the Windows settings and select Modify on the VCA-Core entry.

Select Change, enter the desired ports and proceed through installation to apply the changes.

4.2 VCAserver (Ubuntu 18.04)

VCAserver is installed as a systemd service; as such, when installation is complete the VCAcore service will be started automatically. The VCAcore service can be managed using the systemd service manager, e.g. sudo systemctl restart vca-core.service
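
The commands below are a minimal sketch of typical service management operations, assuming the unit name vca-core.service used in the example above; adjust the unit name to match your installation.

sudo systemctl status vca-core.service     # check whether the service is running
sudo systemctl stop vca-core.service       # stop the VCAcore service
sudo systemctl start vca-core.service      # start the VCAcore service
sudo journalctl -u vca-core.service -f     # follow the service logs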

When installed, the VCAserver configuration is stored in: /var/opt/VCA-Core/

VCAserver on Linux comes as a single archive file containing an .sh script, which handles the installation of the VCAcore components. Once the archive has been downloaded, navigate to the download folder and unpack the installation script from the archive:

Change the file attributes to allow the script to run: chmod +x ./VCA-Core-**VERSION_NUMBER**-vca_core.sh (e.g. chmod +x ./VCA-Core-1.5.0-vca_core.sh)

Next, run the .sh script: sudo ./VCA-Core-**VERSION_NUMBER**-vca_core.sh (e.g. sudo ./VCA-Core-1.5.0-vca_core.sh).

VCAserver should be installed as a system service. The install directory is fixed to /opt/VCA-Core/ and the installer will request the desired ports for VCAserver’s manager and web servers. During installation it is possible to run the Prebuild engines optimisation step. This runs the model optimisation as part of the install, ensuring all models are ready to run when installation is finished. Depending on the GPU configuration this could take a long time.
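
As an illustrative sketch only (using version 1.5.0 as in the examples above), a complete installation might therefore look like the following; the installer will prompt for the ports and the optional Prebuild engines step.

chmod +x ./VCA-Core-1.5.0-vca_core.sh      # allow the installation script to run
sudo ./VCA-Core-1.5.0-vca_core.sh          # run the installer and follow the prompts
sudo systemctl status vca-core.service     # confirm the service started once installation completes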

Important note: VCAserver is developed and tested against Windows 10 and Ubuntu 18.04. Although the application may run on other versions of Windows or Linux, support is limited to these versions only.

4.3 Upgrading VCAserver

Periodically, new versions of VCAserver will be released with new features; please check for software updates on the VCA Support page.

When upgrading VCAserver, back up the system configuration (System -> Configuration), uninstall or delete the existing VCAserver version and run the new installation package as described above.
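
On Linux, a file-level copy of the configuration directory can serve as an additional safeguard before upgrading; this is a suggested precaution only, assuming the default configuration location /var/opt/VCA-Core/ noted earlier.

sudo systemctl stop vca-core.service                     # stop the service before copying
sudo cp -r /var/opt/VCA-Core /var/opt/VCA-Core.backup    # keep a copy of the existing configuration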

When the upgrade is complete, the configuration is preserved and upgraded to work with the new version.

4.3.1 Downgrading VCAserver

Downgrading to a previous version of VCAserver is not supported. If a previous version is required, the existing installation and configuration must be deleted before the desired version is installed. Windows systems will raise an error during install in this case.

5 VCAcore Service

VCAserver, on both Windows and Ubuntu, has a management service built in. This utility provides control of the VCAserver application via a Web UI, enabling simple remote management.

The recovery service is always running and by default is accessible at: http://[HOST_COMPUTER_IP]:9090/.

The recovery service provides a range of functionality.

5.1 Info

Details of the logs and configuration locations for the currently running instance of VCAserver.

A failure count is provided which will keep track of the number of times the VCAcore application has restarted.

Lastly, the current status of the VCAserver application is also provided.

5.2 Managing VCAserver

The main function of the VCAcore Service is to manage the VCAserver application. By default VCAserver is always running. To stop the application, press the Stop button, which will allow the service to perform additional management tasks.

Once stopped, the VCAcore Service is able to erase data and settings, resetting the VCAserver application and configuration back to a default state. An option is also available to download the log files.

Lastly to restart the VCAserver application click the Restart button.

6 Navigation

This topic provides a general overview of the VCAserver configuration user interface.

The VCA user interface features a persistent navigation bar displayed at the top of the window.

There are a number of elements in this navigation bar, each of which is described below:

6.2 Side Menu

Clicking the icon displays the side navigation menu:

Every page in the VCA user interface is accessible through the side menu. The icon next to a menu item indicates that the item has sub-items and can be expanded.

Items in the side menu are automatically expanded to reflect the current location within the web application.

6.3 Settings Page

The settings page displays a number of links to various configuration pages:

7 Licensing

To create sources and take advantage of the VCAserver analytics, each channel will require a license. There are a number of different license types available; a Video Source can be assigned a specific license type based on the required features.

Licensing is either managed by a License Server or via Cloud Licensing. A License Server is user managed, and supports perpetual license solutions either on the host system running VCAserver or across the network. Cloud Licensing is an externally managed service, allowing for subscription-based licensing models, and requires VCAserver to have an active internet connection.

7.1 Management

To manage licensing, navigate to the license settings page. This interface allows the user to define which licensing Method to use and the settings associated with each.

Both methods expose a pool of available licenses which VCAserver can use with configured Video Sources. A pool of licenses can be made up of a range of different license packs with different license types and available amounts. For each license pack, the total number of channels and the currently assigned channels is provided. The assigned channel count takes into account all instances of VCAserver using a license from this pack. Additionally, the features available to the license type are also shown.

When a license is assigned, it cannot be used by another channel or instance of VCAserver. Both the License Server and Cloud Licensing manage multiple instances of VCAserver using licenses simultaneously.

7.1.1 Switching License Method

The licensing Method can be switched from Cloud to Local (or vice-versa) at any time:

7.2 License Server

A License Server links perpetual license packs to a Hardware GUID, which is a unique identifier specific to the physical hardware the License Server is running on. The License Server generates the Hardware GUID only on physical (non-virtualised) systems. On virtualised systems a Hardware GUID will not be available and the following message is displayed.

Once a VCAserver is connected to a License Server, the license pool associated with that License Server will be shown. The License Server will either be running:

To configure the License Server settings the following options are provided:

7.2.1 Adding Licenses to the License Pool

The Activation Key field can be utilised in two ways to add a license to the License Server’s pool:

  1. Entering a pre-validated Activation Key for this Hardware GUID:

  2. Entering an Activation Token (requires the system accessing the Licensing Settings page to be connected to the internet):

On new installations, before a user is able to add sources, the License Server will need a license added to the license pool.

In the case of an upgrade, or on systems that have run the License Server before, the system will persist the licenses in the pool.

7.2.2 Removing Licenses from the License Pool

Licenses can also be deleted from the License Server’s pool (in the case of expired evaluation licenses):

7.2.3 Changing VCAserver’s License Server

The License Server used by VCAserver can be switched at any time:

7.3 Cloud Licensing (Cloud)

Once a valid API Key is provided and the connection to the Cloud Licensing Server is established, the license pool associated with that API key will be shown.

When using Cloud Licensing the license pool available to VCAserver is managed using a cloud portal.

On new installations, before a user is able to add sources, the Cloud Licensing account will need a license added to the license pool.

7.4 Losing Connection to a Licensing Method

VCAserver will lose connection to its Licensing method in certain situations:

VCAserver has a 5 day grace period, allowing the analytics to continue to process in the absence of a License method. Additionally, an action can be configured to generate an event in this situation. After this time, analytics will stop processing and no events or metadata will be generated.

7.4.1 Reconnection

When VCAserver’s connection to a License Server or Cloud Licensing is re-established, and the license pool of the License Server or Cloud Licensing has not changed, VCAserver will reconnect and check out licenses for the channels that were using them previously.

If the license pool has changed during the downtime, or if a configuration is imported to VCAserver which specifies a different License Server or Cloud Licensing account with a different license pool, VCAserver will attempt to assign licenses to the configured Video Sources, if available. If for any reason a previously configured license is not available, a checkout failed message will be seen on the View Channels page, and a review of the Video Sources may be required to ensure that all channels are correctly licensed.

If a different License method is to be used, or if a configuration is imported to VCAserver which specifies a License method that is no longer available, then follow the guidelines for Switching License Method.

7.5 More Information

For more information on the complete range of additional features available, please visit VCA Technology.

8 Sources

Sources are user configured inputs to VCAserver, which include video sources and non-video sources (e.g. digital inputs). The Edit Sources page allows users to add/remove sources and configure existing sources.

Common Properties:

8.1 Video Sources

Video sources are automatically linked with a channel when added. A preview is provided of the video source showing snapshots of the video content or any warnings. The number of video sources which can be added to the system is dependent on the user’s license. A list of the currently available license types (e.g. Pro) and the number of those licenses used is provided (e.g. 2 / 16).

License selection allows for a specific license type to be associated with a channel. Licenses can be changed on a video source at any time. However, once a channel is configured with rules and functions linked to a particular license type, changing the license type for that channel is not advised.

8.1.1 File

File sources enable the streaming of video from a file located in a test-clips folder on the host machine. The folder is a subdirectory of the default data location:

Any video files located in this folder will be presented in the File drop-down menu. Please note that when files are added to this folder, the web interface will need to be refreshed for the files to appear in the drop-down menu.

Properties:

8.1.2 RTSP

The RTSP source streams video from remote RTSP servers such as IP cameras and encoders. The minimum frame rate required for good quality tracking is 15fps. The suggested resolution for these RTSP streams is 480p or greater.

Note: resolutions greater than 480p will result in greater CPU resource usage and may not always result in greater tracking accuracy.
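
As a purely illustrative example, an RTSP source URL typically takes the form rtsp://username:password@192.168.1.10:554/stream1; the address, credentials and stream path shown here are hypothetical and will depend on the camera or encoder being used.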

Properties:

8.1.3 Supported Video Sources

The range of video codecs supported by VCAserver is given below:

Note: where supported, the following H.264 profiles can be decoded using hardware acceleration: ConstrainedBaseline, Main, High, MultiviewHigh and StereoHigh

When using an RTSP stream as a source please ensure it is encoded with one of the supported compression formats. Likewise, when using a file as a source please note that VCAserver is compatible with many video file containers (.mp4, .avi etc.) but the video file itself must be encoded with one of the above supported compression formats.

8.2 Other Sources

Various non-video sources are available to the user. Once added, these sources can then be assigned to Actions and, in certain cases, referenced in the Rules.

8.2.1 Interval

Interval sources can be used to generate events periodically, e.g. a heartbeat to check that the device is still running.

Properties:

8.2.2 Digital Input

If digital input hardware is available, the inputs will be shown in the list of other sources.

Properties:

8.2.3 Armed

The Armed source generates an event when the system becomes armed.

8.2.4 Disarmed

The Disarmed source generates an event when the system becomes disarmed. Note that any actions that this source is assigned to must be set to Always Trigger, otherwise the action will not be triggered due to the system being disarmed.

8.2.5 License Server (Lost)

The License Server source generates an event when VCAserver’s connection to its License Server changes. An event is generated both when a connection is lost and when it is restored. The Event type token (e.g. {{type.string}}) can be used to identify the type of connection event being generated.

Properties:

8.2.6 HTTP

The HTTP source creates an arbitrary REST API endpoint with a state variable that can be set true or false. This creates a virtual Digital Input which third party systems can enable or disable. The HTTP source can be referenced by the Source Filter in a rule graph.
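
As a hedged illustration, a third party system would typically set the state with a simple HTTP request against the endpoint created for the source; the commands below use placeholders only, as the actual host, port, endpoint path, method and payload are defined by your VCAserver installation and the source's configuration.

curl -X POST "http://[HOST_COMPUTER_IP]:[PORT]/[HTTP_SOURCE_ENDPOINT]" -d "true"    # hypothetical request: set the virtual input on
curl -X POST "http://[HOST_COMPUTER_IP]:[PORT]/[HTTP_SOURCE_ENDPOINT]" -d "false"   # hypothetical request: set the virtual input off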

Properties:

8.2.7 Schedule

The Schedule source allows the definition of a schedule determining when the source is either on or off. The Schedule source can be referenced by the Source Filter in a rule graph. Additionally, the Schedule source can be used to directly control the armed state of VCAserver.

Properties:

8.2.8 System

The System source generates an event when the selected system resource goes above the user-defined threshold. The source can be configured either to continue sending events at a set interval whilst the resource remains above the threshold, or to send a single event each time the threshold is reached.

Properties:

9 Channels

9.1 View Channels Page

The View Channels page displays a preview of each configured channel along with any event messages.

Click a thumbnail to view the channel and configure VCAserver related settings. Click the plus icon to go to the add video source page.

9.2 Channel Pages

After clicking on a channel, a full view of the channel’s video stream is displayed along with any configured zones, counters and rules, with the channel settings menu open.

If the settings menu is closed, a tab with an icon is displayed on the right hand side of the page. Click this to reopen the channel settings menu.

9.2.1 Channel Settings Menu

This menu contains various useful links for configuring various aspects of the channel:

10 Trackers

VCAserver supports a number of tracker technologies for use with a configured channel of video. The available trackers are listed below:

Under the Trackers menu item is a drop-down menu option for Tracking Engine, from which one of the available trackers can be selected.

10.1 Initialisation

When a tracker is selected by the user, an initialisation phase will be required. This will vary based on the selected tracker.

Once initialised, VCAserver will begin analysing the video stream with the selected tracker. Settings specific to that tracker will also be displayed below the tracker engine selection option.

Regardless of the tracker selected, any tracked object can be passed through the available rules. However, in some cases, certain rules or algorithms will only be available with a specific tracker. For example, the abandoned and removed object rules are only available with the Object Tracker.

10.2 Universal Settings

Some settings are universal across all trackers; these are outlined below:

10.2.1 Loss Of Signal Emit Interval

The Loss Of Signal Emit Interval defines the amount of time between emissions when a channel loses signal to its source.

The default setting is 1 second.

10.2.2 Tamper Detection

The Tamper Detection module is intended to detect camera tampering events such as bagging, de-focusing and moving the camera. This is achieved by detecting large persistent changes in the image.

10.2.2.1 Enabling Tamper Detection

To enable tamper detection click the Enabled checkbox.

10.2.2.2 Advanced Tamper Detection Settings

In the advanced tamper detection settings, it is possible to change the thresholds for the area of the image which must be changed and the length of time it must be changed for before the tamper event is triggered.

If false alarms are a problem, the duration and/or area should be increased, so that large transient changes such as close objects temporarily obscuring the camera do not cause false alarms.

10.2.2.3 Notification

When a tamper event is detected, a tamper event is generated. This event is transmitted through any output elements as well as being displayed in the video stream:

10.2.3 Calibration Filtering

Calibration filtering is a tool that prevents very large or very small objects from being tracked and causing false alarms.

In the above example, a small object with an estimated height of 0.3m and an area of 0.3 sqm is removed by the calibration filter.

This can also improve situations where the Object Tracker detects large areas of motion caused by lighting changes, or where a Deep Learning tracker recognises very large or very small features as a valid object. An object is defined as large or small based on the metadata produced when Calibration is enabled. When Calibration Filtering is enabled an object is valid when it meets all of the following criteria:

If any of the above criteria is not met, the object will no longer appear as a tracked object. Filtered Objects can be visualised using the Burnt-In-Annotations.

To enable calibration filtering click the Enabled checkbox. Calibration must be enabled on the channel and properly configured to ensure valid objects are not removed.

10.3 Object Tracker

The Object Tracker is a motion based detection engine. Based on changes detected in the image, the algorithm separates the image into foreground and background, tracking any foreground object that is moving above a set threshold. The Object Tracker has the following settings:

10.3.1 Deep Learning Filter (Object Tracker)

Enables the Deep Learning Filter to analyse any detected objects.

The default setting is off.

10.3.2 Stationary Object Hold-on Time

The Stationary Object Hold-on Time defines the amount of time an object will be tracked by the engine once it becomes stationary. Since objects which become stationary must be “merged” into the scene after some finite time, the tracking engine will forget about objects that have become stationary after the Stationary Object Hold-on Time.

The default setting is 60 seconds.

10.3.3 Abandoned / Removed Object Threshold

This threshold defines the amount of time an object must be classed as abandoned or removed before an Abandoned / Removed rule will trigger.

The default setting is 5 seconds.

10.3.4 Minimum and Maximum Tracked Object Size

The Minimum and Maximum Tracked Object Size defines the size limits of the object that will be considered for tracking.

For most applications, the default settings are recommended. In some situations, where more specificity is required, the values can be manually specified. Changing these values allows the engine to track smaller or larger objects, but may increase the susceptibility to false detections.

10.3.5 Object Tracker Sensitivity

The Object Tracker Sensitivity value allows the object tracker to be tuned to ignore movement below a certain threshold. Combined with the foreground pixels burnt-in annotation, which visualises the areas of the scene where the object tracker is detecting movement, this value can be adjusted to filter out environmental noise.

The default setting is Medium High.

10.3.6 Scene Change Detection (Object Tracker)

Learn more about Scene Change Detection.

10.3.7 Detection Point of Tracked Objects

For every tracked object, a point is used to determine the object’s position, and evaluate whether it intersects a zone and triggers a rule. This point is called the detection point.

There are 3 modes that define the detection point relative to the object:

10.3.7.1 Automatic

In automatic mode, the detection point is automatically set based on how the channel is configured. It selects ‘Centroid’ if the camera is calibrated overhead, or ‘Mid-bottom’ if the camera is calibrated side-on or not calibrated.

10.3.7.2 Centroid

In this mode, the detection point is forced to be the centroid of the object.

10.3.7.3 Mid-bottom

In this mode, the detection point is forced to be the middle of the bottom edge of the tracked object. Normally this is the ground contact point of the object (where the object intersects the ground plane).

10.3.8 Tamper Detection (Object Tracker)

Learn more about Tamper Detection.

10.3.9 Calibration Filtering (Object Tracker)

Learn more about Calibration Filtering.

10.3.10 Loss Of Signal Emit Interval (Object Tracker)

See Loss Of Signal Emit Interval.

10.4 Deep Learning People Tracker

The Deep Learning People tracker tracks people in dense and busy scenes.

The Deep Learning People Tracker is based on the detection of a person’s head and shoulders, providing the location of a person in the field of view even when large parts of their body are occluded. See Deep Learning Requirements for hardware requirements for this algorithm.

The Deep Learning People Tracker has the following settings:

10.4.1 Tamper Detection (DLPT)

Learn more about Tamper Detection.

10.4.2 Calibration Filtering (DLPT)

Learn more about Calibration Filtering.

10.4.3 Loss Of Signal Emit Interval (DLPT)

See Loss Of Signal Emit Interval.

10.5 Deep Learning Skeleton Tracker

The Deep Learning Skeleton tracker tracks people in situations where the camera field of view is relatively close.

The Deep Learning Skeleton Tracker is based on Pose Estimation technology, providing the location of a person in the field of view as well as additional key point metadata on the parts of the body. See Deep Learning Requirements for hardware requirements for this algorithm.

The Deep Learning Skeleton Tracker has the following settings:

10.5.1 Tamper Detection (DLST)

Learn more about Tamper Detection.

10.5.2 Calibration Filtering (DLST)

Learn more about Calibration Filtering.

10.5.3 Loss Of Signal Emit Interval (DLST)

See Loss Of Signal Emit Interval.

10.6 Deep Learning Object Tracker

The Deep Learning Object Tracker is designed for accurate detection and tracking of people, vehicles and key objects in challenging environments where motion based tracking methods would struggle. The list of objects detected by the Deep Learning Object Tracker is given below:

Class Name | Description
person | A person, or a tracked object with a person present (e.g. a bicycle)
motorcycle | A motorcycle
bicycle | A bicycle
bus | A bus
car | A car
van | A van, including mini-vans and mini-buses
truck | A truck, including lorries / commercial work vehicles and buses / coaches
forklift | A forklift truck
bag | A backpack or holdall

The Deep Learning Object Tracker is based on a classification and detection model, providing the location of an object in the field of view. See Deep Learning Requirements for hardware requirements for this algorithm.

The Deep Learning Object Tracker has the following settings:

10.6.1 Stationary Object Filtering (DLOT)

See Stationary Hold On Time.

In addition to the Stationary Hold On Time, a Require Initial Movement setting is available, which will prevent objects that have not moved from being tracked.

10.6.2 Detection Point of Tracked Objects (DLOT)

See Detection Point of Tracked Objects.

10.6.3 Tamper Detection (DLOT)

Learn more about Tamper Detection.

10.6.4 Calibration Filtering (DLOT)

Learn more about Calibration Filtering.

10.6.5 Loss Of Signal Emit Interval (DLOT)

See Loss Of Signal Emit Interval.

10.7 Deep Learning Fisheye Tracker

The Deep Learning Fisheye Tracker tracks people in fisheye camera views.

Note: The Deep Learning Fisheye Tracker only works on fisheye video streams which have not been dewarped.

The Deep Learning Fisheye Tracker uses a deep learning segmentation method, providing the location of a person in the field of view even when large parts of their body are occluded. See Deep Learning Requirements for hardware requirements for this algorithm.

The Deep Learning Fisheye Tracker has the following settings:

10.7.1 Stationary Object Filtering (DLFT)

See Stationary Hold On Time.

In addition to the Stationary Hold On Time, a Require Initial Movement setting is available, which will prevent objects that have not moved from being tracked.

10.7.2 Detection Point of Tracked Objects (DLFT)

See Detection Point of Tracked Objects.

10.7.3 Tamper Detection (DLFT)

Learn more about Tamper Detection.

10.7.4 Calibration Filtering (DLFT)

Learn more about Calibration Filtering.

10.7.5 Loss Of Signal Emit Interval (DLFT)

See Loss Of Signal Emit Interval.

10.8 Hand Object Interaction Tracker

The Hand Object Interaction (HOI) Tracker is designed for the detection of hands and the objects they hold. The HOI tracker requires a top-down and relatively close field of view to detect optimally. The list of objects detected by the Hand Object Interaction Tracker is given below:

Class Name | Description
hand | A hand
object | An object being held by a hand object

The Hand Object Interaction Tracker is based on a classification and detection model, providing the location of an object in the field of view. See Deep Learning Requirements for hardware requirements for this algorithm.

The Hand Object Interaction Tracker has the following settings:

10.8.1 Detection Point of Tracked Objects (HOI)

See Detection Point of Tracked Objects.

10.8.2 Tamper Detection (HOI)

Learn more about Tamper Detection.

10.8.3 Calibration Filtering (HOI)

Learn more about Calibration Filtering.

10.8.4 Loss Of Signal Emit Interval (HOI)

See Loss Of Signal Emit Interval.

11 Zones

Zones are the detection areas on which VCAserver rules operate. In order to detect a specific behaviour, a zone must be configured to specify the area where a rule applies.

11.1 Adding a Zone

Zones can be added in multiple ways:

11.2 The Context Menu

Right-clicking or tap-holding (on mobile devices) displays a context menu which contains commands specific to the current context.

The possible actions from the context menu are:

11.3 Positioning Zones

To change the position of a zone, click and drag the zone to a new position. To change the shape of a zone, drag the nodes to create the required shape. New nodes can be added by double-clicking on the edge of the zone or clicking the add node icon from the context menu.

11.4 Zone Specific Settings

The zone configuration menu contains a range of zone-specific configuration parameters:

11.5 Deleting a Zone

Zones can be deleted in the following ways:

12 Rules

VCAserver’s rules are used to detect specific events in a video stream. There are three rule types which can be utilised to detect events and trigger actions:

Within VCAserver, rule configurations can be as simple as an individual basic input attached to a zone and used to trigger an action. Alternatively, rules can be combined into more complex logical configurations using conditional rules and filters. The overarching goal of the rules in VCAserver is to help eliminate erroneous alerts by providing functions to prevent unwanted behaviour from triggering an action.

More detail on the differences between these concepts is outlined below:

12.1 Basic Inputs

A basic input or rule can only be used to trigger an action or as an input to another rule type. Basic inputs always require a zone, and potentially some additional parameters. A basic input can be used on its own to trigger an action, although it is often used as an input to other filters or conditional rules.

The complete list of basic inputs is:

12.2 Filters

A filter cannot trigger an action on its own as it requires another basic input, filter or conditional rule to trigger. An example of this is the Object Filter.

The complete list of filters is:

12.3 Conditional Rules

A conditional rule, like a filter, is one that cannot trigger an action on its own. It requires the input of another basic input, conditional rule or filter to be meaningful. An example of this is the AND rule, which requires two inputs to compare in order to function.

The complete list of conditional rules is:

12.4 General Concepts

12.4.1 Object Display

As rules are configured, they are applied to the channel in real time, providing feedback on how they work. Objects which have triggered a rule are annotated with a bounding box and a trail. Objects can be rendered in two states:

As seen below, when an event is raised, the default settings render details of the event in the lower half of the video stream. Object class annotations in this example are generated through calibrated classification.

12.4.2 Object Trails

The trail shows the history of where the object has been. Depending on the calibration the trail can be drawn from the centroid or the mid-bottom point of the object. (See Detection Point of Tracked Objects for more information).

12.4.3 Trail Importance

The trail is important for determining how a rule is triggered. The intersection of the trail point with a zone or line determines whether a rule is triggered or not. The following image illustrates this point: the blue vehicle’s trail intersects with the detection zone and is rendered in red. Conversely, while the white vehicle intersects the detection zone, its trail does not (yet) intersect and hence it has not triggered the rule and is rendered in yellow.

12.5 Rules Configuration

Rules are configured on a per-channel basis by opening the rules menu when viewing the channel. Configuration is possible in two forms: the docked mode, in which both the rules and the video stream are visible, or the expanded view, in which a graph representation is provided to visualise the way the rules are connected.

The rules page opens in the ‘docked’ mode, alongside the live video stream.

The user may click on the expand button to switch to the expanded view. Please note that the rules graph is only visible in the expanded view.

In the expanded view, the user can add rules, and use the Rules Editor to connect the rules to one another. The graph on the right hand side updates in real time to reflect the user’s changes.

12.5.1 Event Retrigger Time

The Event Retrigger Time allows the specification of a period of time during which a rule triggered by the same object cannot generate multiple events. This prevents scenarios where an object crossing the boundary of a zone multiple times within the period could trigger a configured rule repeatedly.

This setting takes into account the object triggering the rule, ensuring events from new objects triggering the same rule are not suppressed. Only rules with Can Trigger Actions enabled will be affected by this setting.
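
For example, with an Event Retrigger Time of 30 seconds, a single object crossing a zone boundary three times within that window generates only one event, while a different object crossing during the same window still generates its own event.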

12.5.2 Adding Rules

The first step in defining a rule configuration is to add the basic inputs, configure the respective parameters and link each to a zone. Click the button and select the desired rule from the drop-down menu.

To delete a rule, click the corresponding delete icon. Please note that rules of any type cannot be deleted if they serve as an input to another rule. In this case the other rule must be deleted first.

12.6 Basic Inputs

Below are the currently supported basic inputs, along with a detailed description of each.

12.6.1 Presence

A rule which fires an event when an object is first detected in a particular zone.

Note: The Presence rule encapsulates a variety of different behaviours; for example, the Presence rule will trigger in the same circumstances as an Enter and an Appear rule. The choice of which rule is most appropriate will be dependent on the scenario.

12.6.1.1 Graph View

12.6.1.2 Form View

12.6.1.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Presence #”
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active
Zone | The zone this rule is associated with | None

12.6.2 Fight

A rule which fires when fight behaviour is detected in the field of view for longer than the specified duration.

Note: Fight does not require a zone and runs independently of the tracker. Enabling this algorithm, by adding this rule, will impact channel capacity, as the algorithm runs in addition to the channel’s selected tracker.

12.6.2.1 Graph View

12.6.2.2 Form View

12.6.2.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Fight #”
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active
Duration | Period of time before a fight triggers the rule | 0.75
Threshold | Confidence threshold before a fight is detected | 95
Continuous Threshold | Minimum persistent confidence threshold required for duration | 50

12.6.3 Direction

The direction rule detects objects moving in a specific direction. Configure the direction and acceptance angle by moving the arrows on the direction control widget. The primary direction is indicated by the large central arrow. The acceptance angle is the angle between the two smaller arrows.

Objects that travel in the configured direction (within the limits of the acceptance angle), through a zone or over a line, trigger the rule and raise an event.

The following image illustrates how the white car, moving in the configured direction, triggers the rule whereas the other objects do not.

Note: Direction is calculated as the vector between the oldest history point of a tracked object (the end of the yellow trail) and the point of intersection with a zone/line. This can lead to some unexpected behaviour; see the two examples below:

The Directional Crossing rule would avoid both of these scenarios.

12.6.3.1 Graph View

12.6.3.2 Form View

12.6.3.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Direction #”
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active
Zone | The zone this rule is associated with | None
Angle | Primary direction angle, 0 - 359. 0 references up. | 0
Acceptance | Allowed variance each side of primary direction that will still trigger rule | 0

12.6.4 Directional Crossing

The directional crossing rule is designed to reduce false alarms common with simple line crossing use cases. Directional Crossing is designed for use with a zone rather than a line, and adds a number of additional checks for an object as it enters as well as exits that zone.

For an object to trigger the Directional Crossing rule it must:

Configure the direction and acceptance angle by moving the arrows on the direction control widget. The primary direction is indicated by the large central arrow. The acceptance angle is the angle between the two smaller arrows.

The following image illustrates how the white car, moving in the configured direction, triggers the rule whereas the other objects do not.

12.6.4.1 Graph View

12.6.4.2 Form View

12.6.4.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Directional #”
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active
Zone | The zone this rule is associated with | None
Angle | Primary direction angle, 0 - 359. 0 references up. | 0
Acceptance | Allowed variance each side of primary direction that will still trigger rule | 0
Classes | The object classes allowed to trigger an alert | None

12.6.5 Dwell

A dwell rule triggers when an object has remained in a zone for a specified amount of time. The interval parameter defines the time the object has to remain in the zone before an event is triggered.

The following image illustrates how the person, detected in the zone, is highlighted red as they have dwelt in the zone for the desired period of time. The two vehicles have not been present in the zone for long enough yet to trigger the dwell rule.

12.6.5.1 Graph View

12.6.5.2 Form View

12.6.5.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Dwell #”
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active
Zone | The zone this rule is associated with | None
Interval | Period of time (in seconds) | 1 to 86400

12.6.6 Stopped

The stopped rule detects objects which are stationary inside a zone for longer than the specified amount of time. The stopped rule requires a zone to be selected before being able to configure an amount of time.

Note: The stopped rule does not detect abandoned objects. It only detects objects which have moved at some point and then become stationary.

12.6.6.1 Graph View

12.6.6.2 Form View

12.6.6.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Stopped #”
Zone | The zone this rule is associated with | None
Interval | Period of time before a stopped object triggers the rule | 1 to 60 seconds
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active

12.6.7 Enter and Exit

The enter rule detects when objects enter a zone. In other words, when objects cross from the outside of a zone to the inside of a zone.

Conversely, the exit rule detects when an object leaves a zone: when it crosses the border of a zone from the inside to the outside.

Note: Enter and exit rules differ from appear and disappear rules, as follows:

12.6.7.1 Graph View

12.6.7.2 Form View

12.6.7.3 Configuration Enter

Property | Description | Default Value
Name | A user-specified name for this rule | “Enter #”
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active
Zone | The zone this rule is associated with | None

12.6.7.4 Configuration Exit

Property | Description | Default Value
Name | A user-specified name for this rule | “Exit #”
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active
Zone | The zone this rule is associated with | None

12.6.8 Fall

The Fall rule detects when an object classified as a Person, by either the Deep Learning People Tracker, Deep Learning Skeleton Tracker or Deep Learning Object Tracker, is in the fallen state.

When the Fall rule is added to a channel configuration, the fall detection algorithm begins to run on any object detected as a person, which will have a GPU overhead proportional to the number of people detected in the scene.

Fall detection accuracy is reliant on continuing to track a person in the unusual orientations brought about by a fall. As such, it is advised to use the Deep Learning Skeleton Tracker, as it is better able to detect and track people in this fallen state. Interruptions in tracking a fallen person will prevent the fall detection algorithm from running whilst they are in that fallen state, and could result in missed events.

12.6.8.1 Graph View

12.6.8.2 Form View

12.6.8.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Fall #”
Zone | The zone this rule is associated with | None
Duration | Period of time an object must have been fallen before the rule triggers | 1 to 60 seconds
Confidence Threshold | The algorithm confidence (as a percentage) required to trigger the rule | 0
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active

12.6.9 Hands Up

The Hands Up rule detects when an object classified as a Person, by the Deep Learning Skeleton Tracker, has their hands up.

When the Hands Up rule is added to a channel configuration, the Hands Up detection algorithm begins to run in the background on any detected person. Classification of Hands Up is based on the skeleton key point metadata generated by the Deep Learning Skeleton Tracker. Currently this rule is only available when using the Deep Learning Skeleton Tracker.

12.6.9.1 Graph View

12.6.9.2 Form View

12.6.9.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Hands Up #”
Zone | The zone this rule is associated with | None
Duration | Period of time a person must have their hands up before the rule triggers | 1 to 60 seconds
Confidence Threshold | The algorithm confidence (as a percentage) required to trigger the rule | 0
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active

12.6.10 Appear and Disappear

The appear rule detects objects that start being tracked within a zone, e.g. a person who appears in the scene from a doorway.

Conversely, the disappear rule detects objects that stop being tracked within a zone, e.g. a person who exits the scene through a doorway.

Note: The appear and disappear rules differ from the enter and exit rules as detailed in the enter and exit rule descriptions.

12.6.10.1 Graph View

12.6.10.2 Form View

12.6.10.3 Configuration Appear

Property | Description | Default Value
Name | A user-specified name for this rule | “Appear #”
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active
Zone | The zone this rule is associated with | None

12.6.10.4 Configuration Disappear

Property | Description | Default Value
Name | A user-specified name for this rule | “Disappear #”
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active
Zone | The zone this rule is associated with | None

12.6.11 Abandoned and Removed Object

The abandoned and removed object rule triggers when an object has either been left within a defined zone (e.g. a person leaving a bag on a train platform) or removed from a defined zone. The rule has a duration property which defines the amount of time an object must have been abandoned, or removed, for the rule to trigger.

Below is a sample scenario where a bag is left in a defined zone resulting in the rule triggering.

Below is a similar example scenario where the bag is removed from the defined zone resulting in the rule triggering.

Note: The algorithm used for abandoned and removed object detection is the same in each case, and therefore cannot differentiate between objects which have been abandoned or removed. This is because the algorithm only analyses how blocks of pixels change over time with respect to a background model. Note: The algorithm used for abandoned and removed object detection will only work when the Object Tracker is selected under Trackers.

12.6.11.1 Graph View

12.6.11.2 Form View

12.6.11.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Abandoned #”
Zone | The zone this rule is associated with | None
Duration | Period of time an object must have been abandoned or removed before the rule triggers | 0
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active

12.6.12 Tailgating

The tailgating rule detects objects which cross through a zone or over a line within quick succession of each other.

In this example, object 1 is about to cross a detection line. Another object (object 2) is following closely behind. The tailgating detection threshold is set to 5 seconds. That is, any object crossing the line within 5 seconds of an object having already crossed the line will trigger the object tailgating rule.

Object 2 crosses the line within 5 seconds of object 1. This triggers the tailgating filter and raises an event.

12.6.12.1 Graph View

12.6.12.2 Form View

12.6.12.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Tailgating #”
Zone | The zone this rule is associated with | None
Duration | Maximum amount of time between first and second object entering a zone to trigger the rule | 1 to 60 seconds
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active

12.7 Filters

Below is a list of the currently supported filters, along with a detailed description of each.

When filters are used to trigger an action the rule type property is propagated from the filter input. For example, if the input to the speed filter is a presence rule, then actions generated as a result of the speed filter will have a presence event type.

12.7.1 Accessory Filter

The accessory filter provides a way to check if a given person, which has triggered an input, is wearing, or not wearing, a particular accessory.

Due to the use cases associated with accessory detection, an Accessory Filtering Type is required to differentiate between a person with the detected accessory (Present), a person classified as not wearing the accessory (Not Present) and someone not yet evaluated. In the latter case the Accessory Filter will not generate an event, as a decision has not yet been made.

Classification of Accessory is based on the skeleton key point metadata generated by the Deep Learning Skeleton Tracker. Currently this rule is only available when using the Deep Learning Skeleton Tracker.

Commonly, this rule is combined with a presence rule; an example rule graph is provided below to illustrate this. The following image illustrates how such a rule combination triggers on people not detected wearing a high-visibility vest, while those wearing a high-visibility vest do not trigger the rule.

12.7.1.1 Graph View

12.7.1.2 Form View

12.7.1.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Accessory #”
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active
Input | The input rule | None
Filtered Accessory | The type of accessory the rule will check for | High Vis Vest
Confidence Threshold | The algorithm confidence required to trigger the filter | 60
Acc. Filtering Type | Specifies if the rule should trigger if accessory is present or not | Present

12.7.1.4 Typical Logical Rule Combination

The logical rule example below checks if an object triggering the presence rule Presence Rule, attached to zone Work Area, is not detected as wearing a Hi-Vis Vest.

Only the Accessory Filter is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions. In this example, any action generated by the accessory filter will have the event type Presence.

12.7.2 Speed Filter

The speed filter provides a way to check whether an object which has triggered an input is moving within the range of speeds defined by a lower and upper boundary.

Note: The channel must be calibrated in order for the speed filter to be available.

Commonly, this rule is combined with a presence rule; an example rule graph is provided below to illustrate this. The following image illustrates how such a rule combination triggers on the car moving at 52 km/h, but the person moving at 12 km/h falls outside the configured range (25-100 km/h) and thus does not trigger the rule.

12.7.2.1 Graph View

12.7.2.2 Form View

12.7.2.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Speed #”
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active
Input | The input rule | None
Min Speed | The minimum speed (km/h) an object must be going to trigger the rule | 0
Max Speed | The maximum speed (km/h) an object can be going to trigger the rule | 0

12.7.2.4 Typical Logical Rule Combination

The logical rule example below checks if an object triggering the presence rule Presence Rule attached to zone Centre, is also travelling between 25 and 100 km/h as specified by the speed rule Speed Filter 25-100 km/h.

Only the Speed Filter is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions. In this example, any action generated by the speed filter will have the event type Presence.

12.7.3 Object Filter

The object filter provides the ability to filter out objects which trigger a rule if they are not classified as a certain class (e.g. person, vehicle). The available classes which can be used to filter depend on which tracker is currently selected. In cases where the class is assigned via a deep learning model (DLF, DLOT, DLPT), the confidence threshold can also be used to further filter out objects whose class the model is not confident about. If a channel running the Object Tracker is both calibrated and has the Deep Learning Filter enabled, the Object Filter will default to the Deep Learning Filter classification options.

The object classification filter must be combined with another rule (or rules) to prevent unwanted objects from triggering an alert; an example rule graph is provided below to illustrate this.

The previous image illustrates how the object classification filter, configured with the Vehicle class, only triggers on Vehicle objects. The person in the zone is filtered out since the Person class is not selected in the filter list.

Note: when using the Object Tracker, the channel must be calibrated for the object classification filter to be available.

12.7.3.1 Graph View

12.7.3.2 Form View

12.7.3.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Object Filter #”
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active
Input | The input rule | None
Classes | The object classes allowed to trigger an alert | None
Confidence Threshold | The algorithm confidence required to trigger the filter | 10

12.7.3.4 Typical Logical Rule Combination

The logical rule example below checks if the object triggering the presence rule Presence Rule attached to zone Centre, is also classified as a Vehicle as specified by the Object Filter Vehicle Filter.

Only the Object Filter is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions. In this example, any action generated by the Object Filter will have the event type Presence.

12.7.4 Colour Filter

The colour filter rule utilises the Colour Signature algorithm, providing the ability to filter out objects based on an object’s colour components.

The Colour Signature algorithm groups the pixel colours of an object. When a Colour Filter rule is added to a channel, any object that is tracked by VCAserver will also have its pixels grouped into 10 colours. By default this information is added to VCAserver’s metadata, available as tokens, via the SSE metadata service or that channel’s RTSP metadata stream.

The colour filter allows you to select one or more of these colour bins, and will trigger only if the subject object contains one or more of those selected colours.

The below image shows an example tracked object with the colour signature annotations enabled. Here the top four colours which make up more than 5% of the object are represented by the colour swatch attached to the object; in this case, a person wearing high-visibility safety clothing is being tracked in the scene. Here the colour filter is set to trigger on Yellow, detecting the person but ignoring the shadow.

Typically, the colour filter would be combined with another rule (or rules) to prevent unwanted objects from triggering an alert; an example rule graph is provided below to illustrate this.

The previous image illustrates how the colour filter prevents objects, which do not contain the specified colours, from generating an event. In this case only the person generates an event but not the train line.

Note: the channel must have the Colour Signature enabled for the colour filter to work.

12.7.4.1 Graph View

12.7.4.2 Form View

12.7.4.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Colour Filter #”
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active
Input | The input rule | None
Colours | The colours allowed to trigger an alert | All Unchecked

12.7.4.4 Typical Logical Rule Combination

The logical rule example below checks if the object triggering the presence rule Train line attached to zone Centre, also contains the colour Yellow as one of the top four colours by percentage.

Only the Colour filter is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions.

12.7.5 Retrigger Filter

The Retrigger Filter acts as an event pass through, which only generates an event if the input has not fired previously within the defined interval.

Typically, the Retrigger Filter would be applied at the end of a rule combination to prevent duplicate alarms being sent; this provides more granular control than the Event Retrigger Time option. Events produced by the Retrigger Filter will have the event type of the input rule.

12.7.5.1 Graph View

12.7.5.2 Form View

12.7.5.3 Configuration

Property | Description | Default Value
Name | A user-specified name for this rule | “Retrigger #”
Can Trigger Actions | Specifies whether events generated by this rule trigger actions | Active
Input | The input rule | None
Interval | Period in which the input event cannot generate another event | 3

12.7.5.4 Typical Logical Rule Combination

The logical rule example below takes as input the presence rule Object Waiting attached to zone Waiting Area, and will generate an event a maximum of once every 3 seconds, assuming the presence rule had objects regularly triggering it.

Only the Retrigger filter is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions. This will then limit any associated actions from generating messages more than once every three seconds. Additionally in this case, the event generated by the Retrigger filter will have the event type Presence.

12.7.6 Other Source Filter

The other source filter provides the ability to use Other Sources to filter an input rule in a rule graph. The other source filter will only trigger an event in cases when the selected other source evaluates as on, whilst the input rule triggers an event.

Valid Other Sources and the valid on scenario are outlined in the table below:

Other Source Type | on Condition | off Condition
HTTP | The observable state is set true | The observable state is set false
Schedule | The current system clock falls into a scheduled ‘on’ period | The current system clock falls into a scheduled ‘off’ period

Typically the other source filter would be used to prevent a rule (or rules) from firing if an external requirement is not met. For example, using a Schedule source with the source filter only triggers events if the input rule fires during set periods of time. Alternatively, using an HTTP source would only trigger an event when the input rule triggers and the HTTP source state is set to true. An example rule graph is provided below to illustrate this.

12.7.6.1 Graph View

12.7.6.2 Form View

12.7.6.3 Configuration

Property Description Default Value
Name A user-specified name for this rule “Other Source #”
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Input The input rule None
Source The other source None

12.7.6.4 Typical Logical Rule Combination

The logical rule example below will only generate an event if the current system time falls within an on period defined in the source Schedule Source, and the input rule Presence Centre, attached to zone Zone 0, triggers an event.

Only the other source filter is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions. In this example, any action generated by the other source filter will have the event type Presence.

12.8 Conditional Rule Types

Below is a list of the currently supported conditional rules, along with a detailed description of each.

12.8.1 And

A logical operator that combines two rules and only fires events if both inputs are true.

12.8.1.1 Graph View

12.8.1.2 Form View

12.8.1.3 Configuration

Property Description Default Value
Name A user-specified name for this rule “And #”
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Input A The first input None
Input B The second input None
Per Target Fire one event per tracked object Active

If we consider a scene with two presence rules, connected to two separate zones, connected by an AND rule, the table below explains the behaviour of the Per Target property. Note that object here refers to a tracked object, as detected by the VCA tracking engine.

State Per Target Outcome
Object A in Input A, Object B in input B On Two events generated, one for each object
Object A in Input A, Object B in input B Off Only one event generated

Additionally, it is important to note that if the rule fires when Per Target is switched off, it will not fire again until it is ‘reset’, i.e. until the AND condition is no longer true.

12.8.2 Continuously

A logical operator that fires events when its input has occurred continuously for a user-specified time.

12.8.2.1 Graph View

12.8.2.2 Form View

12.8.2.3 Configuration

Property Description Default Value
Name A user-specified name for this rule “Continuously #”
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Input The input rule None
Per Target Fire one event per tracked object. See description below for more details Active
Interval The time (in milliseconds) for which the input must be continuously true 1

Considering a scene with a Presence rule associated with a zone and a Continuously rule attached to that Presence rule, when the Per Target property is on, the rule will generate an event for each tracked object that is continuously present in the zone. When it is off, only one event will be generated by the rule, even if there are multiple tracked objects within the zone. Additionally, when Per Target is off, the rule will only generate events when there is change of state, i.e. the rule condition changes from true to false or vice versa. When Per Target is off, the state will change when:

12.8.3 Counter

Counters can be configured to count the number of times a rule is triggered. For example, the number of people crossing a line. The counter rule is designed to be utilised in two ways:

More than one rule can be assigned to any of a counter’s three inputs. This allows, for example, the occupancy of two presence rules to be reflected in a single counter, or more than one entrance / exit gate to reflect in a single counter. An example rule graph is provided to illustrate this below.

Broadly speaking, a single counter should not be used for both purposes (occupancy and increment / decrement).

The Counter’s Threshold Operator allows the user to limit when a counter generates an event. Based on the selected behaviour and a defined Threshold Value, the counter can be configured to only send events in specific scenarios. Threshold Operators include:

The Counter’s Reset allows another Rule or selected Other Source(s) to reset the counter to 0. An example use case could be to zero out counters at the end of the day. Any Basic Input, Filter or Conditional rule, as well as the HTTP and Schedule Other Sources, can be used to trigger the Counter’s reset.

12.8.3.1 Positioning Counters

When added, a counter object is visualised on the video stream as seen below. The counter can be repositioned by grabbing the ‘handle’ beneath the counter name and moving the counter to the desired location.


12.8.3.2 Graph View

12.8.3.3 Form View

12.8.3.4 Configuration

Property Description Default Value
Name A user-specified name for this rule “Counter #”
Increment The rule which, when triggered, will add one to the counter None
Decrement The rule which, when triggered, will subtract one from the counter None
Occupancy Sets counter to current number of the rule’s active triggers None
Reset Resets the count to 0 when the assigned rule or other source triggers None
Threshold Operator Defines when a Counter will trigger events based on the threshold None
Threshold Value The value used by the Threshold Operator to define the behaviour 0
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Reset Counter A button allowing the counter value to be reset to 0 None

For example, if a Presence rule is set as the occupancy target and two objects are currently triggering that Presence rule, the counter will show the value of 2.

12.8.3.5 Typical Logical Rule Combination

The counter example below increments a counter based on two enter rules, Enter Centre and Enter Top, attached to the zones Centre and Top respectively; when either of these enter rules triggers, the counter is incremented by 1. The counter also decrements based on the exit rule Exit, which subtracts 1 from the counter each time an object exits the zone Centre. The Threshold Operator and Threshold Value limit the counter to only generate events when the count is more than 20.

Only the counter rule Counter is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions. In this case an action using this rule as a source will trigger every time the counter changes.

12.8.4 Not

A logical operator that generates an event when the input rule becomes false.

12.8.4.1 Graph View

12.8.4.2 Form View

12.8.4.3 Configuration

Property Description Default Value
Name A user-specified name for this rule “Not #”
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Input The input rule None

12.8.5 Or

A logical operator that combines two rules and fires events if either input is true.

12.8.5.1 Graph View

12.8.5.2 Form View

12.8.5.3 Configuration

Property Description Default Value
Name A user-specified name for this rule “Or #”
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Input A The first input None
Input B The second input None
Per Target Fire one event per tracked object Active

If we consider a scene with two Presence rules connected to two separate zones, connected by an OR rule, the table below explains the behaviour of the Per Target property.

State Per Target Outcome
Object A in Input A, Object B in input B On Two events generated, one for each object
No object in Input A, Object B in input B On Only one event generated (for Object B)
Object A in Input A, No object in input B On Only one event generated (for Object A)
Object A in Input A, Object B in input B Off Only one event generated
No object in Input A, Object B in input B Off Only one event generated
Object A in Input A, No object in input B Off Only one event generated

Additionally, it is important to note that if the rule fires when Per Target is switched off, it will not fire again until it is ‘reset’, i.e. until the OR condition is no longer true.

12.8.6 Previous

A logical operator that triggers for input events which were active at some point in a past window of time. This window is defined as between the current time and the period before the current time (specified by the Interval value).

12.8.6.1 Graph View

12.8.6.2 Form View

12.8.6.3 Configuration

Property Description Default Value
Name A user-specified name for this rule “Previous #”
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Input The input rule None
Per Target Fire one event per tracked object Active
Interval The length of the past window (in milliseconds) 1

12.8.7 Repeatedly

A logical operator that triggers when an input rule is triggered a set number of times within a defined period. The Duration period is a window of time computed from every input event. For example, if a Repeatedly rule is configured to generate an event when the input triggers three times in eight seconds, and that input rule triggers four times in eight seconds, the Repeatedly rule will trigger after the third input trigger and again after the fourth. This is because the first three triggers (events 1-3) fired within an 8 second window, and the second set (events 2-4) also occurred within its own 8 second window.

The Per Target option specifies that it must be the same tracked object that triggers the input.
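
The sliding-window behaviour described above can be illustrated with the following sketch (not VCAserver code), which fires whenever the most recent input triggers all fall within the configured duration:

from collections import deque

class RepeatedlyRule:
    def __init__(self, duration_seconds, events_to_trigger):
        self.duration = duration_seconds
        self.required = events_to_trigger
        self.history = deque()  # timestamps of recent input triggers

    def on_input_trigger(self, timestamp):
        self.history.append(timestamp)
        # Discard triggers that fall outside the window ending at this trigger.
        while timestamp - self.history[0] > self.duration:
            self.history.popleft()
        # Fire when the window contains the required number of triggers.
        return len(self.history) >= self.required

rule = RepeatedlyRule(duration_seconds=8, events_to_trigger=3)
# Input triggers at 1, 3, 5 and 7 seconds: fires on the third and fourth triggers,
# matching the example above.
print([rule.on_input_trigger(t) for t in (1, 3, 5, 7)])  # [False, False, True, True]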

12.8.7.1 Graph View

12.8.7.2 Form View

12.8.7.3 Configuration

Property Description Default Value
Name A user-specified name for this rule “Repeatedly #”
Can Trigger Actions Specifies whether events generated by this rule trigger actions Active
Input The input rule None
Duration The time in which the Number of Events to Trigger must fire 3
Number of Events to Trigger The number of times the input is required to trigger 4
Per Target Specifies if the input needs to be triggered by the same object Inactive

12.9 Combined Rule Examples

12.9.1 Double-knock Rule

The ‘double-knock’ logical rule triggers when an object enters a zone, having previously entered another defined zone within a set period of time. The interval on the Previous rule determines how much time can elapse between the object entering the first and second zones. The graph for a double-knock logical rule is as follows:

The rule may be interpreted as follows: ‘An object is in Zone 2, and was previously in Zone 1 within the last 1000 milliseconds’. This rule can be used as a robust way to detect entry into an area. Since the object has to enter two zones in a specific order, it can eliminate false positives that may arise from a simple Presence rule.

12.9.2 Presence in A or B

This rule triggers when an object is present in either Zone A or Zone B. Its graph is as follows:

A typical use case for this rule is having multiple areas where access is prohibited, but the areas cannot be easily covered by a single zone. Two zones can be created, associated with two separate Presence rules, and they can then be combined using an Or rule.

12.10 Usage notes

13 Actions

Actions are user configured outputs which can be triggered by a variety of events that occur within VCAserver.

Common Properties:

13.1 Event Sources

Any action can have multiple event sources assigned to it. Once an event source is assigned to an action, any event generated by that source will trigger the action. Available event sources are grouped by video source and include user-defined logical rules (with the Can Trigger Actions box checked), loss of signal events, and any configured Digital Input, Armed, Disarmed or Interval sources.

13.2 Action Types

13.2.1 TCP

The TCP action sends data to a remote TCP server when triggered. The format of the body is configurable with a mixture of plain text and Tokens, which are substituted with event-specific values at the time an event is generated.

See the Tokens topic for full details about the token system and example templates.
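
As a simple illustration, a TCP message body might mix plain text with tokens documented in the Template Tokens topic; the exact fields chosen here are only an example:

Event {{id}} ({{name}}) of type {{type.string}}
started at {{start.iso8601}} on channel {{#Channel}}{{id}}{{/Channel}}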

13.2.2 Email

The email action sends events in pre- or user-configured formats to remote email servers.

See the Tokens topic for full details about the token system and example templates.

It is not advised to enable Send Snapshots on an action linked to a loss of signal event source. If the signal has been lost, snapshots will not be gathered until the signal is restored, delaying the action.

13.2.3 HTTP

The HTTP action sends a text/plain HTTP or HTTPS request to a remote endpoint when triggered. The URL, HTTP headers and message body are all configurable with a mixture of plain text and Tokens, which are substituted with event-specific values at the time an event is generated. Additionally, snapshots from the camera can be sent as a multipart/form-data request with the configured snapshots included as image/jpeg parts. HTTP actions are sent using the HTTP/1.1 standard.

See the Tokens topic for full details about the token system and example templates.

It is not advised to enable Send Snapshots on an action linked to a loss of signal event source. If the signal has been lost, snapshots will not be gathered until the signal is restored, delaying the action.
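
For illustration, an HTTP action body could be templated as JSON using the same token system. The field names below are arbitrary examples, and the receiving endpoint must expect whatever structure is configured:

{
  "event": "{{name}}",
  "type": "{{type.string}}",
  "start": "{{start.iso8601}}",
  "channel": "{{#Channel}}{{id}}{{/Channel}}"
}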

13.2.4 Digital Output

A digital output is a logical representation of a digital output hardware channel. To configure the properties of a physical digital output channel, such as activation time, refer to the Digital IO page.

13.2.5 Arm

The Arm action sets the device state to armed when triggered.

13.2.6 Disarm

The Disarm action sets the device state to disarmed when triggered.

13.3 Arm/Disarm State

The Arm/Disarm functionality provides a means of disabling/enabling all of the configured actions. For example, users may wish to disable all actions when activity is normal and expected (e.g. during normal working hours) and re-enable the actions at times when activity is not expected.

The Arm/Disarm state can be toggled manually by clicking the icon in the Navigation Bar or by using the Arm or Disarm actions.

14 Calibration

Camera calibration is required in order for VCAserver to classify objects into different object classes. Once a channel has been calibrated, VCAserver can infer real-world object properties such as speed, height and area and classify objects accordingly.

Camera calibration is split into the following sub-topics:

14.1 Enabling Calibration

By default calibration is disabled. To enable calibration on a channel, check the Enable Calibration checkbox.

14.2 Calibration Controls

The calibration page contains a number of elements to assist with calibrating a channel as easily as possible. Each is described below.

14.2.1 3D Graphics Overlay

During the calibration process, the features in the video image need to be matched with a 3D graphics overlay. The 3D graphics overlay consists of a green grid that represents the ground plane. Placed on the ground plane are a number of 3D mimics (people-shaped figures) that represent the dimensions of a person with the current calibration parameters. The calibration mimics are used for verifying the size of a person in the scene and are 1.8 metres tall.

The mimics can be moved around the scene to line up with people (or objects which are of a known, comparable height) to a person.

14.2.2 Mouse Controls

The calibration parameters can be adjusted with the mouse as follows:
- Click and drag the ground plane to change the camera tilt angle.
- Use the mouse wheel to adjust the camera height.
- Drag the slider to change the vertical field of view.

Note: The sliders in the control panel can also be used to adjust the camera tilt angle and height.

14.2.3 Control Panel Items

The control panel (shown on the right hand side in the image above) contains the following controls:

14.2.4 Context Menu Items

Right-clicking the mouse (or tap-and-hold on a tablet) on the grid displays the context menu:

Performing the same action on a mimic displays the mimic context menu:

The possible actions from the context menu are:

14.3 Calibrating a Channel

Calibrating a channel is necessary in order to estimate object parameters such as height, area, speed and classification. If the height, tilt angle and vertical field of view corresponding to the installation are known, these can simply be entered as parameters in the appropriate fields in the control panel.

If however, these parameters are not explicitly known this section provides a step-by-step guide to calibrating a channel.

14.3.1 Step 1: Find People in the Scene

Find some people, or some people-sized objects in the scene. Try to find a person near the camera, and a person further away from the camera. It is useful to use the play/pause control to pause the video so that the mimics can be accurately placed. Place the mimics on top of or near the people:

14.3.2 Step 2: Enter the Camera Vertical Field of View

Determining the correct vertical field of view is important for an accurate calibration. The following table shows pre-calculated vertical field of view values for different sensor sizes and focal lengths.

CCD Size (in) CCD Height (mm) Vertical FOV (degrees) at Focal Length (mm): 1 2 3 4 5 6 7 8
1/6" 1.73 82 47 32 24 20 16 14 12
1/4" 2.40 100 62 44 33 27 23 19 17
1/3.6" 3.00 113 74 53 41 33 28 24 21
1/3.2" 3.42 119 81 59 46 38 32 27 24
1/3" 3.60 122 84 62 48 40 33 29 25
1/2.7" 3.96 126 89 67 53 43 37 32 28
1/2" 4.80 135 100 77 62 51 44 38 33
1/1.8" 5.32 139 106 83 67 56 48 42 37
2/3" 6.60 118 95 79 67 58 50 45
1" 9.60 135 116 100 88 77 69 62
4/3" 13.50 132 119 107 97 88 80
CCD Size (in) CCD Height (mm) Vertical FOV (degrees) at Focal Length (mm): 9 10 15 20 30 40 50
1/6" 1.73 11 10 7
1/4" 2.40 15 14 9 7
1/3.6" 3.00 19 12 11 9 6
1/3.2" 3.42 21 16 13 10 7
1/3" 3.60 23 20 14 10 7 5
1/2.7" 3.96 25 22 15 11 8 6
1/2" 4.80 30 27 18 14 9 7 5
1/1.8" 5.32 33 30 20 15 10 8 6
2/3" 6.60 40 37 25 19 13 9 8
1" 9.60 56 51 35 27 18 14 11
4/3" 13.50 74 68 48 37 25 19 15

If the table does not contain the relevant parameters, the vertical FOV can be estimated by viewing the extremes of the image at the top and bottom. Note that without the correct vertical FOV, it may not be possible to get the mimics to match people at different positions in the scene.
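
The values in the table follow the standard pinhole relationship between sensor height and focal length. The short sketch below is not part of VCAserver, but can be used to calculate the vertical FOV for sensor sizes or focal lengths not listed:

import math

def vertical_fov_degrees(sensor_height_mm, focal_length_mm):
    # Pinhole model: the sensor half-height subtends half the vertical FOV.
    return math.degrees(2 * math.atan(sensor_height_mm / (2 * focal_length_mm)))

# A 1/3" sensor (3.60 mm high) with a 4 mm lens gives roughly 48 degrees,
# matching the corresponding entry in the table above.
print(round(vertical_fov_degrees(3.60, 4)))  # 48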

14.3.3 Step 3: Enter the Camera Height

If the camera height is known, type it in directly. If the height is not known, estimate it as accurately as possible and type it in directly.

14.3.4 Step 4: Adjust the Tilt Angle and Camera Height

Adjust the camera tilt angle (and height if necessary) until both mimics are approximately the same size as a real person at that position in the scene. Click and drag the ground plane to change the tilt angle and use the mouse wheel or control panel to adjust the camera height.

The objective is to ensure that mimics placed at various locations on the grid line up with people or people-sized objects in the scene.

Once the parameters have been adjusted, the object annotation will reflect the changes and classify the objects accordingly.

14.3.5 Step 5: Verify the Setup

Once the scene is calibrated, drag or add mimics to different locations in the scene and verify they appear at the same size/height as a real person would. Validate that the height and area reported by the VCAserver annotation look approximately correct. Note that the burnt-in annotation settings in the control panel can be used to enable and disable the different types of annotation.

Repeat step 4 until the calibration is acceptable.

Tip: If it all goes wrong and the mimics disappear or get lost due to an odd configuration, select one of the preset configurations to restore the configuration to normality.

14.4 Advanced Calibration Parameters

The advanced calibration parameters allow the ground plane to be panned and rolled without affecting the camera calibration parameters. This can be useful to visualize the calibration setup if the scene has pan or roll with respect to the camera.

Note: the pan and roll advanced parameters only affect the orientation of the 3D ground plane so that it can be more conveniently aligned with the video scene; they do not actually affect the calibration parameters.

15 Classification

VCAserver can determine a moving object’s class using either deep learning models or properties extracted from the object in a calibrated scene.

Both methods of classification are applied through the use of the Object Filter rule, which evaluates an object against its predicted class and filters it out if needed.

15.1 Object Classification

Once a camera view has been calibrated, each detected object in that view will have a number of properties extracted including object area and speed.

VCAserver’s object classification performs classification by comparing these properties to a set of configurable object classifiers. VCAserver comes pre-loaded with the most common object classifiers, and in most cases these will not need to be modified.

Channels running the Deep Learning People Tracker or the Deep Learning Object Tracker cannot be calibrated. Therefore, Object Classification is not available when these trackers are selected.

15.1.1 Configuration

In some situations it might be desirable to change the classifier parameters, or add new object classifiers. The classification menu can be used to make these changes.

Each of the UI elements are described below:

To add a new classifier, click the Add Classifier button.

Calibration must be enabled on each channel where object classification is to be used. If it is not enabled, any rules that include an object filter will not trigger.

15.1.2 Classification (Object)

Objects are classified according to how their calibrated properties match the classifiers. Each classifier specifies a speed range and an area range. Objects which fall within both ranges of speed and area will be classified as being an object of the corresponding class.

Note: If multiple classes contain overlapping speed and area ranges then object classification may be ambiguous, since an object will match more than one class. In this case the actual classification is not specified and may be any one of the overlapping classes.

The classification data from object classification can be accessed via template tokens.
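
As an illustration of the range matching described above, the sketch below (not the actual VCAserver implementation, and with hypothetical classifier values) shows how an object’s calibrated speed and area could be checked against a set of classifiers:

def classify(speed, area, classifiers):
    # Each classifier is a named speed range and area range; the object takes
    # the first class whose ranges both contain its measured values.
    for c in classifiers:
        if c["min_speed"] <= speed <= c["max_speed"] and c["min_area"] <= area <= c["max_area"]:
            return c["name"]
    return None  # unclassified if no ranges match

# Hypothetical classifier set, for illustration only.
classifiers = [
    {"name": "Person",  "min_speed": 0, "max_speed": 10, "min_area": 0.1, "max_area": 2.0},
    {"name": "Vehicle", "min_speed": 0, "max_speed": 50, "min_area": 2.0, "max_area": 30.0},
]
print(classify(speed=4, area=1.2, classifiers=classifiers))  # Person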

15.2 Deep Learning Filter

The Deep Learning Filter is a deep learning solution designed to validate objects tracked by the Object Tracker.

When enabled, as soon as a moving object is detected, it will be evaluated by the Deep learning filter and a classification and confidence level returned.

The model will return one of the following classes:

The classification data from the deep learning filter can also be accessed via template tokens.

The Deep Learning Filter can use GPU acceleration, see Deep Learning Requirements for hardware requirements.

Without GPU acceleration, the Deep Learning Filter will use the CPU. Enabling the Deep Learning Filter on multiple channels which generate a high volume of events (more than one per second) may result in poor performance of the system and is not advised.

15.3 Deep Learning People Tracker

By the nature of the Deep Learning People Tracker’s detection methodology, every tracked object is, by definition, classified as Person. The Deep Learning People Tracker will not track an object unless it is classified as a Person. Additionally, no calibration is required for the tracker’s classification to work.

See Deep Learning Requirements for hardware requirements for this algorithm.

15.4 Deep Learning Object Tracker

By the nature of the Deep Learning Object Tracker’s detection methodology, every tracked object is, by definition, classified as one of the following classes:

The Deep Learning Object Tracker will not track an object unless it is classified as one of the above classes. Additionally, no calibration is required for the tracker’s classification to work.

See Deep Learning Requirements for hardware requirements for this algorithm.

15.5 Object Classification and the Deep Learning Filter

The Deep Learning Filter, Deep Learning Object Tracker and Deep Learning People Tracker do not require the source input to have been calibrated or the object classifier to be configured. Similarly, the settings of the Deep Learning Filter are entirely independent of Object Classification.

Classification methods (deep learning based or object based) are designed to be used independently. However, Object Classification can be used in tandem with the Deep Learning Filter when an appropriate rule graph is constructed, and in that case care should be taken. For example, as the Deep Learning Filter is trained to detect specific objects, if custom object classes have been configured in the object classifier, e.g. small animal, the Deep Learning Filter may erroneously filter those alerts out, as small animal is not a class it is trained to recognise. In these cases, use of the Deep Learning Filter is not recommended.

16 Burnt-in Annotation

Burnt-in Annotations allow VCAserver metadata to be overlaid on to the raw video stream. The burnt-in annotation settings control which VCAserver metadata (objects, events, etc) is rendered into the video stream.

Note:

16.1 Display Event Log

Check the Display Event Log option to show the event log in the lower portion of the image.

16.2 Display System Messages

Check the Display System Messages option to show the system messages associated with Learning Scene and Tamper.

16.3 Display Zones

Check the Display Zones option to show the outline of any configured zones.

16.4 Display Line Counters

Check the Display Line Counters option to display the line counter calibration feedback information. See the Rules topic for more information.

16.5 Display Counters

Check the Display Counters option to display the counter names and values. See the Counters topic for more information.

16.6 Display Deep Learning Classification

Check the Display DL Classification option to show the class and confidence of objects evaluated by a deep learning model.

16.7 Display Colour Signature

Check the Display Colour Signature option to show the current top four colours, of a possible ten, found in a given bounding box.

16.8 Display Tracker Internal State

Check the Display Tracker Internal State option to visualise additional tracker annotations. These can be used to better understand how a tracker works and provide more information to configure rules. The additional annotations will change depending on the currently selected tracker:

16.9 Display Faces

Check the Display Faces option to show the bounding boxes of detected faces. Face detection is only available when the Deep Learning People Tracker is used.

16.10 Display Objects

Check the Display Objects option to show the bounding boxes of tracked objects. Objects which are not in an alarmed state are rendered in yellow. Objects rendered in red are in an alarmed state (i.e. they have triggered a rule).

16.10.1 Display Only Alarmed Objects

Check the Display only alarmed objects option to show only bounding boxes of objects which have triggered a rule.

16.10.2 Object Speed

Check the Object Speed option to show the object speed.

16.10.3 Object Height

Check the Object Height option to show the object height.

16.10.4 Object Area

Check the Object Area option to show object area.

16.10.5 Object Classification

Check the Object Class option to show the object classification.

17 Scene Change Detection

The scene change detection module resets the object tracking algorithm when it detects a large, persistent change in the image. This prevents the tracking engine from detecting image changes as tracked objects, which could be potential sources of false alarms.

The kinds of changes the scene change detection module detects are as follows:

17.1 Scene Change Settings

There are 3 options for the scene change detection mode:

17.1.1 Automatic

This is the default setting and will automatically use the recommended settings. It is recommended to use the automatic setting unless the scene change detection is causing difficulties.

17.1.2 Disabled

Scene change detection is disabled.

Note that when the scene change detection is disabled, gross changes in the image will not be detected. For example, if a truck parks in front of the camera the scene change will not be detected and false events may occur as a result.

17.1.3 Manual

Allows user configuration of the scene change detection algorithm parameters.

If automatic mode is triggering in situations where it’s not desired (e.g. it’s too sensitive, or not sensitive enough), then the parameters can be adjusted to manually control the behaviour.

In the manual mode the following settings are available:

When both the time and area thresholds are exceeded the scene is considered to have changed and will be reset.

If false scene change detections are a problem, the time and/or area should be increased so that large transient changes, such as a close object temporarily obscuring the camera, do not cause false scene change detections.
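
The manual-mode behaviour described above can be sketched as follows (illustrative only, not VCAserver code); the scene is reset only once the changed portion of the image has exceeded the area threshold continuously for longer than the time threshold:

class SceneChangeDetector:
    def __init__(self, area_threshold_percent, time_threshold_seconds):
        self.area_threshold = area_threshold_percent
        self.time_threshold = time_threshold_seconds
        self.exceeded_since = None

    def update(self, changed_area_percent, timestamp):
        # Return True when the scene should be re-learnt.
        if changed_area_percent < self.area_threshold:
            self.exceeded_since = None  # transient change: reset the timer
            return False
        if self.exceeded_since is None:
            self.exceeded_since = timestamp
        return timestamp - self.exceeded_since >= self.time_threshold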

17.2 Notification

When a scene change is detected, the scene is re-learnt and a message is displayed in the event log and annotated on the video.

18 Video Preview

The video preview menu provides information on the channel view currently open.

18.1 Video Status

The Video Status is a UI overlay that presents real-time information on the channel view currently open. Importantly, this is not a burnt-in annotation; the provided information is not visible in the RTSP stream for this channel.

Statistics included in this overlay are:

19 System Settings

The system settings page facilitates administration of system-level settings such as network configuration and authentication.

19.1 Network Settings

The network configuration of the device can be changed in the network settings configuration section:

19.2 SSL

The VCAserver web server, which hosts the UI, the REST API and the SSE metadata streams, is unencrypted HTTP by default. To secure these connections, SSL can be enabled, allowing for a self-managed, end-to-end encrypted connection between your browser and the back-end services.

Once a certificate (.pem) and key (.key) file have been uploaded, the web server will switch to HTTPS and provide a link to the new URL for the user to follow.
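
For testing purposes, a suitable self-signed certificate and key pair can be generated with OpenSSL. The command below is only an example (the subject name is a placeholder); production systems would normally use a certificate issued by a trusted authority:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout vca.key -out vca.pem -subj "/CN=vcaserver.local"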

19.3 Host IP Settings

On specific platforms, the network settings for the host system are exposed to allow configuration of the network devices.

19.4 System Information

The system information section shows the Uptime of VCAserver (how long the application has been running without restarting) as well as the device CPU and Memory usage:

19.5 GPU Devices

The GPU devices section shows information on all the detected graphics processing units. Name and vendor information are provided for reference, with the current temperature, overall utilisation and memory usage:

These values, combined with the system information, can be used to determine if the current configuration is overly stressing the available hardware.

19.6 Authentication Settings

VCAserver can be protected against unauthorised access by enabling authentication. By default, authentication is enabled and the default credentials must be entered when accessing the device for the first time. Authentication applies to all functions including the web interface and API, RTSP server and discovery interfaces.

19.6.1 Enabling Authentication

Click the Enable button to enable authentication.

The password must be confirmed before authentication can be enabled, in order to prevent the user being locked out of the device.

19.6.2 Changing the Password

Click the Change Password button to change the password.

Enter the new password, and confirm the current password in order to apply the changes.

WARNING: If the password is forgotten, the device will not be accessible. The only way to recover access to a device without a valid password is to perform a physical reset, as described in the Forgotten Password section.

19.6.3 Disabling Authentication

Click the Disable button to disable authentication and allow users to access the device without entering a password. The password is required to disable authentication.

19.6.4 Default Credentials

The default credentials are as follows:

19.6.5 Forgotten Password

If a system becomes inaccessible due to a lost password, the only way to recover access to the device is to delete the configuration file VCAserver is using. This process differs between platforms:

19.6.6 Configuration

Under configuration, buttons to allow the management of VCAserver’s configuration are provided:

Current version information is also provided.

19.6.7 Metadata

VCAserver produces metadata accessible through various APIs and also through the actions’ token system. One aspect of that metadata is the X and Y coordinates of objects in a camera view.

Under the metadata section, aspects of this metadata can be defined:

19.6.8 ONVIF

An internal ONVIF service allows VCAserver’s RTSP video and compliant event data to be accessed using the ONVIF standards.

19.6.9 Digital Input

If digital inputs are available, the input sensors can be configured in two different modes:

20 Support

The support page provides a repository for tools which can be utilised to help debug issues.

20.1 Logs

The Logs section provides a list of download links to the currently available logs. Logs are user-readable text files which log VCAserver messages. These logs can be submitted to the VCA Technology support staff to help resolve issues. New log files are created when VCAserver is started. If a log file reaches a certain size then it will be split into separate files.

The list of log files can be reloaded using the Reload Logging data button. Only a limited number of files can be stored with the oldest being replaced if that storage limit is met.

20.1.1 Minimum Log Level

The minimum log level defines the granularity of log entries based on a hierarchy of logging messages. Each selected level will include messages from the levels above it, where Fatal has the fewest and only the most severe messages, and Trace includes every log message available.

Logging Level
Fatal
Error
Warning
Info
Debug
Trace

20.2 Diagnostics

The Diagnostics section provides a list of download links to the currently available crash dumps. These crash dumps can be submitted to the VCA Technology support staff to provide more in-depth system state information.

The list of core dump files can be reloaded using the Reload Logging data button. Only a limited number of files can be stored, with the oldest being replaced if that storage limit is met.

21 Template Tokens

VCAserver can be set up to perform a specific action when an analytic event occurs. Examples include sending an email, TCP or HTTP message to a server.

VCAserver allows templated messages to be written for email, TCP and HTTP actions which are automatically filled in with the metadata for the event. This allows the details of the event to be specified in the message that the action sends, e.g. the location of the object, type of event, etc.

21.1 Syntax

The templating system uses mustache, which is widely used and well-documented online. A brief overview of the templating syntax will be provided here. Templated messages can be written by using tokens in the message body. For example:

Hello {{name}}!

is a template with a name token. When the template is processed, the event metadata is checked to see if it has a name entry. If it does, the {{name}} token is replaced with the name of the event. If it isn’t present, the token will be replaced with blank space.

If an event with the name Presence occurs, the processed template will be Hello Presence! but if it doesn’t have a name, it will be Hello !

Some tokens may also have sub-properties which can be accessed as follows:

It happened at {{start.hours}}!

21.1.1 Conditionals

Tokens can also be evaluated as boolean values, allowing simple conditional statements to be written:

{{#some_property}}Hello, world!{{/some_property}}

In this example, if some_property is present in the event metadata, then “Hello, world!” will appear in the message. Otherwise, nothing will be added to the message.

If some_property is a boolean, then its value will determine whether or not the conditional is entered. If some_property is an array property, it will only evaluate as true if the array is not empty.

21.1.2 Arrays

Finally, tokens can also be arrays which can be iterated over. For example:

{{#object_array}}
{{name}} is here!
{{/object_array}}

This template will iterate through each item in object_array and print its name, if it has a name property. For example, the array [{"name": "Bob"}, {"name": "Alice"}, {"name": "Charlie"}] will result in the following output:

Bob is here!
Alice is here!
Charlie is here!

21.2 List of tokens

Lower case names represent tokens that can be used with the {{token}} syntax. Upper case names represent boolean or array properties that should be used with the {{#token}}...{{/token}} syntax.

21.2.1 {{#Armed}}{{armed}}{{/Armed}}

The armed state of VCAserver. It has the following sub-properties:

Example:

{{#Armed}}
Armed State: {{armed}}
{{/Armed}}

21.2.2 {{name}}

The name of the event

21.2.3 {{id}}

The unique id of the event

21.2.4 {{type.string}}

The type of the event. This is usually the type of rule that triggered the event

21.2.5 {{type.name}}

This is a boolean property that allows conditionals to be performed on the given type name.

For example, to print something only for events of type “presence”:

{{#type.presence}}My text{{/type.presence}}

21.2.6 {{start}}

The start time of the event. It has the following sub-properties:

The iso8601 property is a date string in the ISO 8601 format.

The offset property is the time zone offset.

21.2.7 {{end}}

The end time of the event. Same properties as {{start}}

21.2.8 {{host}}

The hostname of the device that generated the event

21.2.9 {{ip}}

The IP address of the device that generated the event

21.2.10 {{#Channel}}{{id}}{{/Channel}}

Properties of the channel that the event occurred on. It has the following sub-properties:

Example:

{{#Channel}}
Channel ID: {{id}}
Channel Name: {{name}}
{{/Channel}}

21.2.11 {{#Channel}}{{name}}{{/Channel}}

The name of the channel that the event occurred on

21.2.12 {{#Zone}}

An array of the zones associated with the event. It has the following sub-properties:

Example:

{{#Zone}}
id: {{id}}
name: {{name}}
channel:{{channel}}
colour: ({{colour.r}}, {{colour.g}}, {{colour.b}}, {{colour.a}})
{{/Zone}}

21.2.13 {{#Object}}

An array of the objects that triggered the event. It has the following sub-properties:

Example:

{{#Object}}
id: {{id}}
width: {{width}}
height: {{height}}
Top left corner: ({{outline.rect.top_left.x}}, {{outline.rect.top_left.y}})
{{/Object}}

21.2.14 {{outline}}

The bounding box outline of an object or zone. It has the following sub-properties:

Using a combination of these four coordinates, any corner of an object’s bounding box can be obtained.
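
For example, the top-right corner of an object’s bounding box can be printed by combining the x coordinate of the bottom-right corner with the y coordinate of the top-left corner:

{{#Object}}Top right corner: ({{outline.rect.bottom_right.x}}, {{outline.rect.top_left.y}}){{/Object}}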

21.2.15 {{#CountingLine}}

An array of line counter counts. It has the following sub-properties:

Example:

{{#CountingLine}}
rule_id: {{rule_id}}
calibration width: {{width}}
position: {{position}}
count: {{count}}
direction: {{direction}}
{{/CountingLine}}

21.2.16 {{#Counter}}

An array of counter counts. It has the following sub-properties:

Example:

{{#Counter}}
id: {{id}}
name: {{name}}
count: {{value}}
{{/Counter}}

21.2.17 {{#Area}}

The estimated area of the object. This token is a property of the object token. It is only produced if calibration is enabled. It has the following sub-properties:

Example:

{{#Object}}{{#Area}}{{value}}{{/Area}}{{/Object}}

21.2.18 {{#CalibratedPosition}}

The estimated position relative to the camera. value.x is the estimated distance (+/-) from the centre of the calibration grid in meters, where 0 is the centre of the grid. value.y is the estimated distance from the camera in meters, where 0 is the camera position. This token is a property of the object token. It is only produced if calibration is enabled. It has the following sub-properties:

Example:

{{#Object}}{{#CalibratedPosition}}
X: {{value.x}} 
Y: {{value.y}}
{{/CalibratedPosition}}{{/Object}}

21.2.19 {{#Classification}}

The classification of the object. This token is a property of the object token. It is only produced if calibration is enabled. It has the following sub-properties:

Example:

{{#Object}}{{#Classification}}{{value}}{{/Classification}}{{/Object}}

21.2.20 {{#DLClassification}}

The classification generated by a deep learning model (e.g. Deep Learning Filter or Deep Learning Object Tracker). This token is a property of the object token. The algorithm must be enabled in order to produce this token, but calibration is not required. It has the following sub-properties:

Example:

{{#Object}}{{#DLClassification}}
Class: {{class}}
Confidence: {{confidence}}
{{/DLClassification}}{{/Object}}

21.2.21 {{#GroundPoint}}

The estimated position of the object. This token is a property of the object token. It is only produced if calibration is enabled. It has the following sub-properties:

Example:

{{#Object}}{{#GroundPoint}}Position: ({{value.x}}, {{value.y}}){{/GroundPoint}}{{/Object}}

21.2.22 {{#Height}}

The estimated height of the object. This token is a property of the object token. It is only produced if calibration is enabled. It has the following sub-properties:

Example:

{{#Object}}{{#Height}}{{value}}{{/Height}}{{/Object}}

21.2.23 {{#Pixels}}

The total pixels (px) contained within the tracked object’s bounding box. This value is relative to the channel’s input source resolution, e.g. the value will change for the same object in the same scene if the resolution is different. This token is a property of the object token. It has the following sub-properties:

Example:

{{#Object}}{{#Pixels}}{{value}}{{/Pixels}}{{/Object}}

21.2.24 {{#Speed}}

The estimated speed of the object. This token is a property of the object token. It is only produced if calibration is enabled. It has the following sub-properties:

Example:

{{#Object}}{{#Speed}}{{value}}{{/Speed}}{{/Object}}

21.2.25 {{#Text}}

The text data detected and associated with an object. This token is a property of the object token. It has the following sub-properties:

Example:

{{#Object}}{{#Text}}
Object Text: {{value}}
{{/Text}}{{/Object}}

21.2.26 {{#ColourSignature}}

The colour signature of the object. This token is a property of the object token. It has the following sub-properties:

Example:

{{#Object}}{{#ColourSignature}}
{{#colours}}
Colour: {{colour_name}}, Proportion: {{proportion}}
{{/colours}}
{{/ColourSignature}}{{/Object}}

21.2.27 {{#SegmentedColourSignature}}

The colour signature for each segment of a Person. This token is a property of the object token. It is only produced if the object has a classification of Person. It has the following sub-properties:

Example:

{{#Object}}
{{#SegmentedColourSignature}}
{{#segments.torso}}
Torso:
{{#colours}}
Colour: {{colour_name}}, Proportion: {{proportion}}
{{/colours}}
{{/segments.torso}}
{{#segments.legs}}
Legs:
{{#colours}}
Colour: {{colour_name}}, Proportion: {{proportion}}
{{/colours}}
{{/segments.legs}}
{{/SegmentedColourSignature}}
{{/Object}}

21.3 Examples

The following is an example of a template using most of the available tokens:

Event #{{id}}: {{name}}
Event type: {{type.string}}
Start time (ISO 8601 format): {{start.iso8601}}
End time:
day: {{end.day}}
time: {{end.hour}}:{{end.minutes}}:{{end.seconds}}.{{end.microseconds}}
Device: {{host}}
Channel: {{#Channel}}{{id}}{{/Channel}}
{{#type.presence}}
{{#Object}}
Object ID: {{id}}
{{#Classification}}Object Classification: {{value}}{{/Classification}}
{{#Height}}Object Height: {{value}}m{{/Height}}
Object bounding box: [
  ({{outline.rect.top_left.x}}, {{outline.rect.top_left.y}}),
  ({{outline.rect.bottom_right.x}}, {{outline.rect.top_left.y}}),
  ({{outline.rect.bottom_right.x}}, {{outline.rect.bottom_right.y}}),
  ({{outline.rect.top_left.x}}, {{outline.rect.bottom_right.y}})
]
{{/Object}}
{{/type.presence}}

{{#Counter}}
Counter triggered.
id: {{id}}
name: {{name}}
count: {{count}}
{{/Counter}}

{{#LineCounter}}
rule_id: {{rule_id}}
calibration width: {{width}}
position: {{position}}
count: {{count}}
direction: {{direction}}
{{/LineCounter}}

In this example, the object information is only printed for events of type “presence”.

This template might result in the following message:

Event #350: My Bad Event
Event type: presence
Start time (ISO 8601 format): 2017-04-21T10:09:42+00:00
End time:
day: 21
time: 10:09:42.123456
Device: mysecretdevice
Channel: 0

Object ID: 1
Object Classification: Person
Object Height: 1.8m
Object bounding box: [
  (16000, 30000),
  (32000, 30000),
  (32000, 0),
  (16000, 0)
]

Counter triggered.
id: 10
name: My Counter
count: 1

rule_id: 350
calibration width: 1
position: 1
count: 1
direction: 0

22 RTSP Server

VCAserver supports an RTSP server that streams annotated video in RTSP format.

The RTSP URL for channels on a VCA device is as follows:

rtsp://<device ip>:8554/channels/<channel id>
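
The stream can be opened in any RTSP-capable player, for example using ffplay; the IP address and channel id below are illustrative only:

ffplay rtsp://192.168.0.100:8554/channels/0

If authentication is enabled, the player will also need to supply the VCAserver credentials.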

23 Sureview Immix

VCAserver supports the notification of events with annotated snapshots and streaming of real-time annotated video to Sureview Immix.

23.1 Prerequisites

The following ports need to be accessible on the VCA device (i.e. a VCAbridge or an instance of VCAserver) from the Immix server:

23.2 Limitations

23.3 Immix Configuration

23.3.1 Add VCA Device

The first step is to add the VCA device.

In the Immix site configuration tab, click Manage Devices and Alarms, then Add Device:

On the Add Device page, set the following options:

23.3.2 Add Camera

Once the device has been added, channels from the VCA device can be added.

Note: Immix currently supports only one VCA channel per device. To support more channels, simply add more devices.

Click the Cameras tab and Add a Camera to add a new channel:

On the Camera Details page set the following options:

23.3.3 Setting the Input in Immix

In order to set the Input value correctly in Immix, the following steps should be followed:

CHANNEL ID

Channel Id in VCA Input in Immix
0 1
1 2
2 3
5 6
100 101

The reason that the Immix Input is 1 higher than the VCA channel Id is that Immix uses one-based inputs but VCA uses zero-based channel Ids.

23.3.4 Retrieve a Summary

Generating a summary provides a single document with all of the details necessary to configure the VCA device. Click the Summary tab and a PDF report is created:

Make a note of the email addresses highlighted in red. These email addresses need to be entered in the VCA device configuration (see next section).

23.4 VCAserver Configuration

Once a device and camera are configured in Immix, the email addresses generated as part of the summary need to be added to the VCAserver configuration.

VCAserver notifies Immix of events via email, so each channel configured for Immix needs to have an email action configured. For more details on how to configure Actions or Sources see the corresponding topics.

23.4.1 Add an Email Action

Add an Email action with the following configuration:

Once this is done, add the correct source to the email action.

23.4.2 Event Type Mappings

The event types reported in the VCAserver interface are slightly different to the event types reported in the Immix client. The events are mapped as follows:

Event in VCA Event in Immix
Presence Object Detected
Enter Object Entered
Exit Object Exited
Appear Object Appeared
Disappear Object Disappeared
Stopped Object Stopped
Dwell Object Dwell
Direction Object Direction
Speed Object Speed
Tailgating Tailgating
Tamper Tamper Alarm

24 ONVIF Support

VCAserver has inbuilt support for a subset of ONVIF profile S and Profile M endpoints. To date, these provide the following functions using the ONVIF interface:

More detail on each ONVIF function is given below. Screenshots are provided using the ONVIF Device Manager; the implementation varies from application to application.

24.1 Discovery

ONVIF device discovery retrieves information about the ONVIF enabled device including the following data:

The above image shows the ONVIF Device Manager’s Identification interface with a VCAserver instance running on 192.168.0.23 (authentication disabled). Some of these Identification Variables are configurable; see Redistribution for more information.

24.2 RTSP Streams

VCAserver supports an RTSP server that streams annotated video in RTSP format. These streams are also discoverable through ONVIF.

24.3 Events

The ONVIF events service allows a third-party application to pull a list of events from the VCAserver platform. An event is defined as any logical rule (with Can Trigger Actions enabled) or Other Source, such as Interval or DI, which triggers. Neither the logical rule nor the Other Source has to be configured with an action to be included within the ONVIF event service cache.

The above image shows the ONVIF Device Manager’s Events interface with a VCAserver instance running on 192.168.50.65 (authentication disabled), where the data component of each event is populated with the above properties.

24.4 Object Metadata

The ONVIF metadata service allows the streaming of object metadata from the VCAserver platform. Currently supported object metadata include: Bounding box, ground point, object classification and speed.

24.5 Unsupported Features

Due to the nature of VCAserver as an application, a number of mandatory Profile M and S features are not supported.

24.5.1 Creating / Removing / Modifying Profiles and Configurations (Profile M and S)

A media profile and the relevant pre-defined configurations are provided for each channel that is configured in VCAserver. This profile configuration is defined by the channel source and is therefore not configurable.

24.5.2 Create/Delete Users or Change Password (Profile S)

VCAserver only supports a single user so user creation is not supported. Modifying the password is not currently possible via ONVIF.

24.5.3 Change Network Settings (Profile S)

It is currently possible to get the network information via ONVIF, but not to make changes.

24.6 ONVIF Device Manager

Device Manager is a third-party, open source Windows application available at ONVIF Device Manager. Due to the age of the application, only basic discovery is supported unless authentication in the VCAserver UI is disabled, in which case RTSP streams and events will also be visible within ODM.

If you require more information on ONVIF profiles please refer to the ONVIF documentation.