Tuesday, January 31, 2023


AXIS Q3819-PVE Panoramic Detailed Camera Review

The AXIS Q3819-PVE panoramic day/night camera, featuring 14MP resolution and a 180-degree angle of view, has four 1/2.5-inch sensors with a combined resolution of 8192 x 1728 pixels at 30ips, along with Axis Forensic WDR and a minimum illumination of 0.02 lux in colour.

AXIS Q3819-PVE Panoramic’s 4 camera heads feature relatively long fixed focal lengths of 5.89mm with an aperture of F1.88, which is quite fast considering the focal length, and the 180-degree wide views have a vertical angle of view of 38 degrees, making this camera ideal for large areas. However, it’s going to be a narrow vertical plane in our jam-packed street application. Other features include PTRZ, Zipstream compression with H.264 baseline, high and main, H.265 compression and Motion JPEG.


This is a robust unit – the casing is rated IP66/IP67 and NEMA 4X against water and dust, and IK10-rated for impact-resistance. The cast aluminium body has a polycarbonate hard coated clear dome with a dehumidifying membrane and the camera has an operating range of -40 to 65C. It’s not light for a dome, either, as I discover when shoehorning its 2.4kg bulk onto the Magic Arm and draping it over the balcony. A key piece of operational functionality is built-in motors for remote pan, tilt and roll. These aren’t just handy for tweaking angles of view, they help with quirky installations, too.

There’s Axis Zipstream, H.264 and H.265 (I stick with H.264), and Motion JPEG, and bitrate is configurable. It’s possible to mount 2 cameras back-to-back for a complete 360-degree overview using the AXIS T94V01C Dual Camera Mount and that would be epic, though displaying it might stretch a standard video wall.

Image Settings

Image settings include saturation, contrast, brightness, sharpness, advanced WDR imaging of up to 120 dB depending on scene, white balance, day/night threshold, exposure mode, compression, dynamic text and image overlay, orientation aid, exposure control, noise reduction, fine tuning of behaviour at low light, and polygon privacy masking. There’s audio input/output, with audio encoding options including 24-bit LPCM, AAC-LC 8/16/32/48 kHz, G.711 PCM 8 kHz, G.726 ADPCM 8 kHz, Opus 8/16/48 kHz, and 44.1 kHz AAC-LC and LPCM.

On the security side of the AXIS Q3819-PVE, there’s password protection, IP address filtering, HTTPS encryption, IEEE 802.1X (EAP-TLS) network access control, digest authentication, user access log, centralized certificate management, brute force delay protection, signed firmware, protection of cryptographic keys with FIPS 140-2 certified TPM 2.0 module, and Axis Edge Vault with Axis device ID.

AXIS Q3819-PVE Panoramic Camera

AXIS Object Analytics is comprehensive for humans and vehicles, with trigger conditions including line crossing and object in area, with up to 10 scenarios. There’s metadata visualized with colour-coded bounding boxes, polygon include/exclude areas, perspective configuration, and ONVIF motion alarm event. Applications include AXIS Object Analytics, AXIS Fence Guard, AXIS Motion Guard, AXIS Loitering Guard, AXIS Video Motion Detection, active tampering alarm, audio detection, advanced gatekeeper, and gatekeeper.


Ubiquiti 16-Port POE Switch

Power is PoE – IEEE 802.3at Type 2 Class 4 with a typical draw of 12W, maxing at 22.5W with the integrated heater activated. There’s microSD/microSDHC/microSDXC and NAS support. Operating temperature range is -40C to 65C, which is outstanding. Dimensions are 170 x 195mm as tested here – with the weather shield it’s 221 x 206mm. Recommended mounting height is 4 metres – we are at about 3.5 metres in this application. I have Zipstream set to low. We’re also using a different switch – it’s a 16-port Ubiquiti with PoE on all ports.

Download the latest Axis manual for the Q3819-PVE

Test Driving AXIS Q3819-PVE

My first impression of this AXIS Q3819-PVE camera is its massive angle of view. It really reaches from one end of the street to the other through 180 degrees. It’s stupendous and it takes a long time to get used to it. This is one of those cameras that makes me hate the little BenQ 1920 x 1080-pixel monitor.

I need a video wall to appreciate what I’ve got going on here – you could fill 4 large monitors with this view. And while angle of view is strong, depth of field is right behind it. You’d expect DoF to be strong when you’ve got 4 camera heads, each offering reasonably high resolution, but I’m still surprised by the fine detail of its performance.

For a narrow street view like this, the vertical angle of view I have to play with is a little shallow at 38 degrees – this AXIS Q3819-PVE camera is better suited to larger areas, but it still does an amazing job. Over the far side of the road, I can see people’s lower legs going by if I want to see most of the near-side pavement, but it’s possible to tilt and get one or the other – this comes down to my application – depth of field opposite the camera is only 20 metres. There’s a rectilinear curve in the centre of the image on the near side but it’s giving me more of the scene I need.

Image A
Image B

Something I’m able to do every time I return to my workstation is survey the entire street at a glance. It’s an amazing ability and the stitched image is very seamless, with uniformity of camera performance also strong.

I find myself thinking about the quality of performance at the edges, but they are not edges of sensor and lens in the way we would think of them, thanks to those 4 camera heads. Operationally, I’m able to see all the way down the north end of Bellevue Street, where traffic is moving on Albion Street, and all the way down the south end to Foveaux Street – that’s variable levels of situational awareness from 200 metres end to end.

Image B – Variable levels of situational awareness from 200 metres end to end.

On one screen I can see people sitting at the pub. I can see people moving in broad contact with each other at 70 metres, and I can see vehicles turning a corner while pedestrians cross the street. Something to bear in mind here is that I’ve got this performance across the entire angle of view, all the time.

Simultaneously, I see someone leave the pub, while another person walks into the university building to the left and someone is paying for a parking ticket 40 metres away at the opposite end of the street, and there’s a bicycle going by on the far side of the road, while a car approaches from the opposite end. This situational awareness is just so impressive – the AXIS Q3819-PVE will really please operators in applications that suit it.

Image A
Image B – Balanced optical performance.

Meanwhile, the optical performance of the camera is nicely balanced – colour rendition, sharpness, resistance to noise – these are all very good. As the day transitions into night the image holds together well. Faces and plates aren’t easy but there’s no difficulty working out what sort of car you’re looking at – the make, the model, the accessories. There’s also no excessive blooming or flaring. I note some motion blur with this camera when vehicles move at right angles but that’s to be expected.

Image A
Image B – Transitioning into lower light.

Monochrome Performance

Monochrome performance of the AXIS Q3819-PVE is solid – tone is contributing to sharpness, depth of field is excellent. There are no dark holes in the scene. When a headlight points straight at the camera I don’t have any blooming or smearing, though I do see a wee bit of blooming around a streetlight in the distance – perhaps this is a coma aberration.

By now the shutter speed will have come back to its minimum – it’s set for 1/25 of a second. I try for face recognition with light almost gone but can’t snare it, even quite close to the lens when people are moving. But I’ve got a lot of detail in terms of clothing, accessories and footwear and ascertaining if one person is masked and another is not.
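As a rough back-of-envelope sketch (my own arithmetic, not an Axis figure), the smear you’d expect from a 1/25s shutter falls out of the panoramic pixel density, assuming the 8192-pixel width is spread evenly over the 180-degree arc:

```python
import math

def motion_blur_px(speed_mps, exposure_s, h_pixels=8192, fov_deg=180, distance_m=10):
    """Estimate how many pixels a moving subject smears across during
    one exposure, given panoramic pixel density at distance_m."""
    px_per_m = h_pixels / (math.radians(fov_deg) * distance_m)  # pixels per metre of arc
    return speed_mps * exposure_s * px_per_m

# Walking pace (about 1.4 m/s) at 10 metres with a 1/25s shutter
print(round(motion_blur_px(1.4, 1 / 25), 1))
```

At walking pace that works out to roughly 15 pixels of smear across a face at 10 metres, which is consistent with missing faces on moving people in low light.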

Image A
Image B – Examples of motion blur.

Overall, motion blur is well handled by the AXIS Q3819-PVE, and I don’t see too much in the way of tone mapping – no one’s dragging a tail. There’s not much in the way of amplification artefacts. It’s a busy night and I am missing nothing on this street. From the point of view of monitoring and investigations, there’s full context and I can see where people are in relation to each other. That’s great supporting evidence, though you’d need some tighter angles of view from cameras at choke points for face recognition.

Switching back to colour, there’s some colour casting from the low-pressure sodium lamps at either end of the street but generally, this is a relatively low contrast night-time image. It’s not overwrought, the reds are a little dark, but I should point out again that it’s sub 4-lux at the lens. It’s about quarter to seven in the evening and deep winter here in Sydney, so light is scarce. Looking at vehicles going past I’m not getting plates but I’m getting everything else, colour, model, make. I try to snare a plate close to the lens but can’t. Regardless, I can see it’s a silver Mercedes SUV.

Image A
Image B – AXIS Q3819-PVE – Monochrome vs colour and depth of field.

Depth of Field

As before, I’ve got strong depth of field all the way from Albion to Foveaux Streets (see the map image below). Looking at the colour image, I think that maybe monochrome gives me slightly better sharpness deep into the scene, but the colour image is still good, and it pays off in other ways with extra detail. Something I’ve been doing throughout the test is dipping into applications, setting loitering alarms and line crossing alarms. While no one activated the loitering detection zone I set up around the uni entryway, it was a simple process and it’s nice to know you have video analytics bases covered in Axis Camera Companion.

From Albion to Foveaux Streets – A distance of 186m.
Image A
Image B – Video analytics – AXIS Q3819-PVE

Next morning, I spend some time checking out WDR performance – this is the only time I notice any variation in the stitched images – in this case the camera on the far left has a slightly different hue as it wrestles with 65,000 lux. This should come as no particular surprise. Once full sun shifts out of the frame, the stitched image returns to its composite ways.

During this part of the test, I focus on the near-side pavement and again enjoy that epic angle of view. You just don’t miss a single thing with this camera, and I can’t help wishing I could take it down to Darling Harbour to try it out on a bigger scene. Something else I find in full daylight is that I am able to get court-admissible faces and static plates to about 10-12 metres from the lens – closer is better. This comes down to the fact that the 8192 x 1728-pixel resolution is being spread across the entire scene.
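A quick pixel-density sketch (my arithmetic, not an Axis specification) shows why identification drops off beyond 10-12 metres – a commonly used rule of thumb for identification-grade imagery is around 250 pixels per metre:

```python
import math

def pixels_per_metre(h_pixels, fov_deg, distance_m):
    """Horizontal pixel density at distance_m, treating the panoramic
    view as an arc spanning fov_deg degrees."""
    arc_length_m = math.radians(fov_deg) * distance_m
    return h_pixels / arc_length_m

# AXIS Q3819-PVE: 8192 pixels spread across 180 degrees
for d in (5, 10, 12, 20):
    print(f"{d:>2} m: {pixels_per_metre(8192, 180, d):6.1f} px/m")
```

At 10 metres the density is roughly 260 px/m – right around the identification threshold – and it falls off inversely with distance, which lines up with the 10-12 metre result in this test.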


The AXIS Q3819-PVE Panoramic camera is a relatively specific surveillance tool that’s designed for applications in public surveillance and on large sites where operators need full situational awareness to facilitate fast and fully informed decisions. The camera’s ability to deliver awareness across 180 degrees is epic, day and night. The camera is rugged – it’s designed to mix it in the toughest environments. It would be as happy dockside monitoring the unloading of a ship as it would be installed under an eave monitoring a public square.

Fundamentally, the AXIS Q3819-PVE is a story-teller, with the ability to fill in knowledge gaps in a way tight angles of view never can. It’s only a slight exaggeration to say it ensures operators miss absolutely nothing in a scene, allowing them to track multiple incidents in real time on a single camera, while delivering zero-latency updates to security or law enforcement teams on the ground. If you need a wide view, especially in live monitoring applications, this camera comes highly recommended.


LSC Celebrates 2 Years With RISCO


LSC celebrates 2 years distributing RISCO solutions.

LSC Celebrates 2 Years With RISCO – National security distributor LSC is celebrating 2 years supporting electronic security manufacturer, RISCO.

“We’re proud to be Australia’s most trusted national distributor of RISCO intrusion products,” said Edward Manyan, product manager at LSC.

“Over the past 2 years, we have invested heavily in both our TechED training and technical support services to improve installer confidence and knowledge of the RISCO range.”

In the past 12 months, LSC has rolled out 2 popular RISCO courses – the classroom-based LightSYS+ course and the fully remote WiComm Pro course.

“These courses have been designed to be engaging as well as professional,” said Manyan. “The positive feedback we’ve received from customers has been validating for our team, and we look forward to delivering more training throughout the year ahead.”

LSC will kick off its RISCO training in the next few weeks, with the LightSYS+ course scheduled to take place on 14 February at its distribution centre in Melbourne.

You can contact estraining@lsc.com.au for more information about this upcoming course or find out more about LSC here.

More news from SEN.


New TVT 2MP Wireless Doorbell


New TVT 2MP wireless doorbell released by CSM.

New TVT 2MP Wireless Doorbell – TVT 2MP Wireless Doorbell is a stand-alone wireless, app-connected video doorbell that features a door release with several unlock methods.

The doorbell allows home and small business owners to answer and unlock their doors from anywhere using TVT’s Australian-based app and to receive push notifications whenever someone is at their door.

The 2MP IR camera and supplemental white light illuminator offer visibility over the front door, and the doorbell also offers intelligent analysis including face detection and face capture. Opening options include fob, app (remote) or mobile (NFC) door release.

If you want more info on The TVT Video Doorbell or any product from the range of awesome TVT CCTV solutions, contact or visit your local CSM branch for more details.

See the datasheet here or see more news from SEN.

New TVT 2MP Wireless Doorbell Features

* Max. resolution: 2MP (1920×1080)
* Wide field of view achieving doorway security monitoring
* Access control function
* Noise suppression and echo cancellation
* Visual intercom function: 2-way remote communication between doorbell/mobile app
* Support door opening by swiping card or mobile app
* Support IR and white LED lights
* Support tampering alarm and door contact alarm
* 2.4G Wi-Fi
* Built-in micro SD card slot, up to 256GB
* Intelligent Analysis: video exception detection, face detection, face capture, etc.


Technical Security Solutions Apprentice

Technical Security Solutions Apprentice sought.

Technical Security Solutions Offers Apprenticeship.

Technical Security Solutions Apprentice – Murarrie, Queensland-based security integrator Technical Security Solutions is seeking a full-time apprentice.

The TSS team wants a keen and enthusiastic applicant, who will learn all aspects of the electronic security industry including CCTV, access control systems, intercom systems, and alarm systems.

Technical Security Solutions provides services and support to many businesses and organisations in the greater Brisbane area. Day-to-day duties in this role would include cabling and installation of CCTV and security devices, commuting between various locations, liaising with customers and site managers, and providing assistance to TSS technical staff.


The successful candidate needs a current driver’s licence, a commitment to safe work practices and an interest in technology and IT, and must be computer literate, double vaccinated (mandatory), and available for an immediate start.

The successful candidate will receive training and experience leading to a trade certificate in a new and emerging field with strong career prospects. If this technical security solutions apprentice role sounds like the opportunity for you, please do not hesitate to apply here.

More news from SEN.


Less Lethal Use Of Force

CSNSW seeks less lethal use of force.

Less Than Lethal Use Of Force Sought By NSW Prisons.

Less Lethal Use Of Force – Corrective Services New South Wales (CSNSW) is investigating alternative use of force applications in its correctional environments and external operations.

Corrective Services New South Wales (CSNSW) is seeking advice about less lethal and less-than-lethal options that may be available to provide increased safety and security, personal protection in the effective management of non-compliant inmates in custody, and prevention of inmate escape while in transit in the community.

DCJNSW is undertaking this research into less lethal use of force to enhance its understanding of technologies available in the market to provide additional use of force options for the agency and its divisions, including Corrective Services NSW.

This RFI can only be accessed and responded to via the department’s end-to-end procurement system, Procurement Central. Any submission lodged outside of Procurement Central will not be considered.


If you require assistance with registering on Procurement Central, you may contact customer support in Australia on 02 8074 8627.

Corrective Services New South Wales (CSNSW) is responsible for the state’s prisons and a range of programs for managing offenders in the community. The state has 36 prisons, 33 run by CSNSW and 3 privately operated. CSNSW has 11,500 staff and an annual budget of $A2.2 billion.

This tender closes on February – you can find out more here, or see more news from SEN here.


Smart Sensors Teach AI


Smart sensors can teach themselves AI models.

Smart Sensors Teach AI – A new technique enables on-device training of machine-learning models on edge devices like microcontrollers in sensors and IoT devices, which have very limited memory. This could allow edge devices to continually learn from new data, eliminating data privacy issues, while enabling user customization, and enhancing AI capabilities of security and automation systems.

Microcontrollers, miniature computers that can run simple commands, are the basis for billions of IoT connected devices, including sensors. But cheap, low-power microcontrollers have extremely limited memory and no operating system, making it challenging to train artificial intelligence models that work independently from central computing resources.

Training a machine-learning model on an intelligent edge device allows it to adapt to new data and make better predictions. However, the training process requires so much memory that it is typically done using powerful computers at a data centre, before the model is deployed on a device. This is more costly and raises privacy issues, since user data must be sent to a central server.

To address this problem, researchers at MIT and the MIT-IBM Watson AI Lab developed a new technique that lets smart sensors train AI models themselves, on-device, using less than a quarter of a megabyte of memory. Other training solutions designed for connected devices can use more than 500 megabytes of memory, greatly exceeding the 256-kilobyte capacity of most microcontrollers.

The intelligent algorithms and framework the researchers developed reduce the amount of computation required to train a model, which makes the process faster and more memory efficient. Their technique can be used to train a machine-learning model on a microcontroller in a matter of minutes.

This technique also preserves privacy by keeping data on the device, which could be especially beneficial when data are sensitive, such as in medical applications. It also could enable customization of a model based on the needs of users. Moreover, the framework preserves or improves the accuracy of the model when compared to other training approaches.

“Our study enables IoT devices to not only perform inference but also continuously update the AI models to newly collected data, paving the way for lifelong on-device learning,” said Song Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT-IBM Watson AI Lab, and senior author of the paper.

“The low resource utilization makes deep learning more accessible and can have a broader reach, especially for low-power edge devices.”

A common type of machine-learning model is known as a neural network. Loosely based on the human brain, these models contain layers of interconnected nodes, or neurons, that process data to complete a task, such as recognizing people in photos. The model must be trained first, which involves showing it millions of examples so it can learn the task. As it learns, the model increases or decreases the strength of the connections between neurons, which are known as weights.
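The weight-adjustment idea can be shown with a toy single-weight example (a generic gradient-descent illustration, not the researchers’ code):

```python
def train_step(w, x, target, lr=0.1):
    """One gradient-descent update: nudge the weight to reduce squared error."""
    pred = w * x                      # forward pass
    grad = 2 * (pred - target) * x    # gradient of (pred - target)^2 w.r.t. w
    return w - lr * grad              # strengthen or weaken the connection

w = 0.0
for _ in range(50):
    w = train_step(w, x=1.0, target=2.0)
print(round(w, 3))  # converges towards 2.0
```

A real network repeats this over millions of weights at once, which is exactly where the memory pressure described below comes from.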


The model may undergo hundreds of updates as it learns, and the intermediate activations must be stored during each round. In a neural network, activations are the intermediate results of each layer. Because there may be millions of weights and activations, training a model requires much more memory than running a pre-trained model, Han explains.

Han and his collaborators employed 2 algorithmic solutions to make the training process more efficient and less memory-intensive. The first, known as sparse update, uses an algorithm that identifies the most important weights to update at each round of training. The algorithm starts freezing the weights one at a time until it sees the accuracy dip to a set threshold, then it stops. The remaining weights are updated, while the activations corresponding to the frozen weights don’t need to be stored in memory.
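The greedy freezing loop described above might be sketched like this (an illustrative outline, not the MIT implementation; `evaluate` is a hypothetical stand-in for a validation-accuracy check):

```python
def select_layers_to_freeze(layers, evaluate, acc_threshold):
    """Freeze layers one at a time until accuracy dips below the
    threshold; frozen layers need no stored activations in training."""
    frozen = []
    for layer in layers:
        frozen.append(layer)
        if evaluate(frozen) < acc_threshold:
            frozen.pop()   # undo the freeze that hurt accuracy too much
            break
    return frozen

# Toy stand-in: accuracy starts at 95% and drops 2 points per frozen layer.
toy_eval = lambda frozen: 0.95 - 0.02 * len(frozen)
print(select_layers_to_freeze(["conv1", "conv2", "conv3", "fc"], toy_eval, 0.90))
```

The payoff is in the last line of the docstring: every layer that stays frozen is a layer whose activations never have to sit in the microcontroller’s tiny memory.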

“Updating the whole model is very expensive because there are a lot of activations, so people tend to update only the last layer, but as you can imagine, this hurts the accuracy. For our method, we selectively update those important weights and make sure the accuracy is fully preserved,” Han said.

Their second solution involves quantized training and simplifying the weights, which are typically 32 bits. An algorithm rounds the weights so they are only 8 bits, through a process known as quantization, which cuts the amount of memory for both training and inference. Inference is the process of applying a model to a dataset and generating a prediction. Then the algorithm applies a technique called quantization-aware scaling (QAS), which acts like a multiplier to adjust the ratio between weight and gradient, to avoid any drop in accuracy that may come from quantized training.
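Symmetric 8-bit quantization of this kind can be sketched as follows (an illustration of the rounding step only – QAS itself, the gradient rescaling, is not shown):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric quantization: map 32-bit floats to int8 plus one scale."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the 8-bit representation."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.003, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
err = np.max(np.abs(w - dequantize(q, scale)))
print(q.tolist(), float(err))  # 4x smaller storage, small rounding error
```

Each weight now occupies one byte instead of four, at the cost of a rounding error bounded by half the scale – which is the accuracy loss QAS is designed to claw back.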


The researchers developed a system, called a tiny training engine, that can run these algorithmic innovations on a simple microcontroller that lacks an operating system. This system changes the order of steps in the training process, so more work is completed in the compilation stage, before the model is deployed on the edge device.

“We push a lot of the computation, such as auto-differentiation and graph optimization, to compile time,” Han explained. “We also aggressively prune the redundant operators to support sparse updates. Once at runtime, we have much less workload to do on the device.”

Their optimization only required 157 kilobytes of memory to train a machine-learning model on a microcontroller, whereas other techniques designed for lightweight training would still need between 300 and 600 megabytes.

They tested their framework by training a computer vision model to detect people in images. After only 10 minutes of training, it learned to complete the task successfully. Their method was able to train a model more than 20 times faster than other approaches.

Now that they have demonstrated the success of these techniques for computer vision models, the researchers want to apply them to language models and different types of data, such as time-series data. At the same time, they want to use what they’ve learned to shrink the size of larger models without sacrificing accuracy, which could help reduce the carbon footprint of training large-scale machine-learning models.

This on-device learning work is funded by the National Science Foundation, the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, Amazon, Intel, Qualcomm, Ford Motor Company, and Google.

More news from SEN.


Gallagher Launches New Training Badges


Gallagher Has Launched A New Training Badge Program.

Gallagher Launches New Training Badges – Gallagher has partnered with digital credential provider Credly, to award verified badges to Australian and Papua New Guinean channel partners and end-users who complete Gallagher training courses.

Gallagher training badges will allow awardees to easily share and showcase their skills, capabilities, and achievements digitally. All badges are verified by Gallagher Security and contain information describing training course participants’ qualifications and the process required to earn them.


Gallagher digital badges are earned by completing one of Gallagher’s participating channel partner or end-user training courses.

“Representing your skills as a badge gives you a way to share your abilities online in a way that is simple, trusted, and can be easily verified in real time,” said Gallagher’s training manager for Australia, Danielle Mitchell.

“We’ve teamed up with Credly to launch the digital badges because we believe it provides additional value to Gallagher training course participants through being able to demonstrate to employers and peers concrete evidence of what trainees had to do to earn their credential and what they’re now capable of.

Gallagher launches new training badges.

“We hope that recipients who have completed a Gallagher training course will let people know about their new badge by proudly sharing it on LinkedIn, Facebook, Twitter, via email, and embedding it in their online resume, personal website, and email signatures.”

Digital badges work as an electronic representation of achievement that is visual, available online and embedded with metadata that provides context, meaning, and the result of an activity. They are easy to manage, verified in real-time, and able to be shared online.

Credly is a comprehensive global solution for recognising skills, capabilities, and achievements. The technology Credly uses is based on the Open Badge Standards maintained by IMS Global, enabling users to manage, share and verify their competencies digitally.

For further information on Gallagher’s digital badge programme click here.

More news from SEN.


New DAS Branch At Tullamarine


DAS opening new branch At Tullamarine.

New DAS Branch At Tullamarine – DAS will relocate its Victorian branch currently in Coburg to Tullamarine on February 1.


The new branch will be located at 7 Elata Drive, Tullamarine and much like its new Seven Hills site, DAS Tullamarine will be complete with a revamped interior and layout for an easier in-store experience.

With more stock to browse in the showroom, upgraded shelving and informative display pods, purchasing products in DAS branches will be easier than ever. And of course, their friendly representatives will be in the Tullamarine branch to assist with project requirements.

The last day of trading at DAS Coburg will be January 31. In-branch opening specials are also running during the first 2 weeks of the Tullamarine opening. The DAS VIC team cannot wait to see you at their new site – directions to DAS Tullamarine can be found here.

More news from SEN.


Multicom Joins SecTech 2023


Multicom Systems has joined SecTech Roadshow 2023.


Multicom Joins SecTech 2023 – Multicom Systems has joined SecTech Roadshow 2023, which takes Australia’s latest security products and technologies to 5 Australian cities each May.

Multicom Systems’ alarm communicators add redundancy to existing alarm systems and alarm delivery using fixed and wireless transmission technologies. The Multicom network offers faster transmission speeds via a dedicated fibre link on the Telstra and Optus networks with scalable data capacity, and sends alarm transmissions via 4G, Cat M1, NB-IoT (5G) and IP.

SecTech 2023 Roadies now include Multicom Systems, U-Prox Security & Safety, GSA Systems, SCSI, EKA Cyberlock and Smartlock Digital, Gallagher, JCI, Uniview, Alarm.com & Openeye, TP-Link, Bluechip Infotech, Hikvision, ASSA ABLOY, C.R. Kennedy, Stentofon, Dicker Data DAS, VSP, BGW Technologies, Allegion, Dahua, Video Alarm Technologies, Nedap, LSC, ISCS, S1 Monitoring, Network Optix, and Secusafe.


Now in its 8th year, SecTech Roadshow is a touring tradeshow that covers 5 Australian state capitals over 2 weeks and draws 2500 high-quality attendees. The compact size and local venues make SecTech Roadshow the perfect opportunity for installers and end users to get face time with leading suppliers and manufacturers and their latest solutions in a vibrant half-day.

SecTech exhibitors deliver show crates to us, and we’ll get them to each city, then unload the trucks and help with setup and pack-up. With no stand larger than 6 x 3m and no built stands allowed, no company dominates the floor – new products and quality face time with old and new customers are the key attractions at SecTech – same as they have always been.

SecTech trucks into Brisbane on May 2 at the RICC, Sydney on May 4 at the Hordern Pavilion, Melbourne on May 9 at the MCEC, Adelaide on May 11 at the Royal Adelaide Showgrounds and Perth on May 16 at Crown Perth.

Call Monique Keatinge on 61 2 9280 4425 or contact her here to book your space! There’s more information about SecTech here and more news from SEN here.


BGW Technologies Launching New Branch


BGW Technologies launches new branch in South Australia.


BGW Technologies Launching New Branch – BGW Technologies’ significant growth has created demand for the opening of a new branch in Adelaide, South Australia, in March.

Paul Amato, state manager for BGWT SA, said that while the existing branch was in Beverley, BGW Technologies’ large new branch will be located more conveniently in Welland.

“We have had tremendous growth over the last 5 years, and we simply had to move to a new premises to allow us to support our suppliers and service our ever-growing customer base,” Amato said.

“The new property is purpose-built for our needs and includes a customer-friendly welcome area, a large warehouse, a project staging area, a large training and demonstration area on the 1st floor, and 5 customer parking spots with loads of free street parking.

“We will provide more information including the address and our grand opening date later in February,” Amato said.

More about BGW Technologies here and more news from SEN.

Automating A Warm Secure Welcome

Automating a warm, secure welcome with automated visitor management.

How can organisations create safe working environments that people want to visit, asks Ian Holmes*.

For many, visitor management systems are the answer. Visitor management systems are becoming ever more commonplace as part of facility management strategies for government buildings, workplaces, schools and medical facilities. When combined with self-service technology, these systems become an important first step in a visitor’s journey, offering a pleasant and frictionless experience for customers, contractors and employees.

Safety First — The Problem of Human Error

Yesterday’s reception areas ran entirely on human power. A visitor would enter the building and head to security or a welcome desk, where they would tell the attendant or receptionist the purpose of their visit and provide identification. That attendant would then verify the visitor with the building occupant and type the visitor’s information into the system to create a visitor pass.

Unfortunately, humans are prone to error. Information can easily be keyed incorrectly, the building employee could be hard to find, and the visit may be unexpected. With the possibility of dozens of visitors arriving at once, each needing access to different areas to connect with different people, a receptionist or security attendant could easily become overwhelmed.

In addition, they’re not only expected to be on top of visitor schedules, but also adept at spotting fraudulent IDs from different states and countries. It’s easy to see how an individual could be granted access to the wrong area or allowed to enter somewhere that should be restricted to vetted employees and contractors.

Automating A Warm, Secure Welcome

On the other hand, an automated visitor system virtually eliminates user error, establishing a more secure building for visitors and staff alike. Sophisticated self-service systems allow visitors to register themselves at multiple kiosks rather than waiting for a security attendant. Modern self-service kiosks often feature a document reader that simply scans a credential verifying their visit and, depending on the level of security needed, scans their identification.

Removing the task of manually entering information into a security system speeds up the registration process, leaving reception and security staff free to better cater to the needs of visitors and staff alike.

Understanding Access Credentials

What we refer to as access control credentials are typically a document, card or data token issued to an individual by a third party that grants them access to the premises while on site. These credentials can include a visitor badge, a printed or digital QR code (2D barcode), a radio frequency identification card, or government-issued IDs such as passports or driving licences.

Following a visitor’s registration, either online or at the point of entry, one or more of these credentials or tokens can be used by an individual while navigating the building. This includes a final check-out before leaving the site — a crucial last step for fire safety and contact tracing purposes.
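To make the check-in/check-out flow concrete, here is a minimal Python sketch of a visitor ledger keyed by credential token. The class and field names are hypothetical, not from any particular visitor management product; the point is that anyone checked in but not checked out is still on site for fire-safety roll calls.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class VisitorPass:
    """One credential token issued at registration (hypothetical model)."""
    token: str                          # e.g. QR payload or card serial
    name: str
    checked_in: Optional[datetime] = None
    checked_out: Optional[datetime] = None

class VisitorLedger:
    """Tracks who is still on site, e.g. for fire-safety roll calls."""
    def __init__(self) -> None:
        self._passes: dict[str, VisitorPass] = {}

    def check_in(self, token: str, name: str) -> None:
        self._passes[token] = VisitorPass(token, name, checked_in=datetime.now())

    def check_out(self, token: str) -> None:
        self._passes[token].checked_out = datetime.now()

    def on_site(self) -> list[str]:
        # Anyone checked in but not yet out must be accounted for.
        return [p.name for p in self._passes.values() if p.checked_out is None]
```

In practice the final check-out event would come from a reader at the exit, but the bookkeeping is the same.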

[HID table] There are many credential options, each with different advantages.

For more secure applications in data centres, schools or government buildings, government-issued IDs can also be used to enrol a visitor before providing them with access to secure areas on-premises.

Understanding Access Control Readers

Once a visitor has registered and has been issued a credential, that data token or ID document needs to be read by a device at a point of entry. These readers are usually positioned on a reception desk or fitted to an unattended self-service kiosk alongside other peripherals for printing badges or issuing access cards.

Readers are often connected via USB to a host PC with the visitor management or access control software running locally. Embedded computing and Internet of Things (IoT) technology allow highly capable devices to communicate directly with a server or cloud service via a local area network for easy deployment and integration. IoT-ready devices are widely used today in airports and on public transport networks to improve passenger flow and experience.

The best visitor management systems can accept whichever credential an organisation is currently using. For example, a building with multiple tenants may have multiple security systems. In one company, the host attaches a QR code to the meeting invite; when the visitor arrives, they scan that code at the kiosk. Meanwhile, another may simply ask the visitor to arrive, register on the enrolment kiosk and scan their driver’s licence or national ID card.

Security requirements can also be dependent on the type of visitor. After all, a person attending an interview poses a much lower threat to an organisation than a contractor, so the credentials they are required to provide at entry should reflect this. In this case, an organisation may wish to issue the prospective employee a temporary barcode to be read from their phone or a printout when they arrive on site. However, as the contractor will have access to secure areas, it would be prudent to enrol them using their government-issued ID. Ideally, both the temporary barcode and government ID can be read and verified on the same device.
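The risk-based policy described above can be sketched as a simple mapping from visitor type to the credential that must be presented at entry. This is a hypothetical illustration; the visitor types and credential labels are assumptions, not any vendor's schema.

```python
# Hypothetical mapping of visitor type to required credential,
# reflecting risk: interviewees need less vetting than contractors.
REQUIRED_CREDENTIAL = {
    "interviewee": "temporary_barcode",
    "contractor": "government_id",
    "employee": "rfid_card",
}

def admit(visitor_type: str, presented: set[str]) -> bool:
    """Admit only if the credential required for this visitor type was presented."""
    required = REQUIRED_CREDENTIAL.get(visitor_type)
    return required is not None and required in presented
```

A multi-modal reader makes this policy practical because both the temporary barcode and the government ID can be verified at the same touchpoint.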

Accurately & Efficiently Capture Data

The use of multi-modal devices such as the ATOM identity document reader allows an organisation to automatically capture personal data, barcodes and high-resolution images of the presented ID.

Using sophisticated optical character recognition technology, personal data such as the holder’s name, document number and address can be automatically read from the ID and output to the visitor management system. This data can then be used to enrol a new visitor or cross-check against a list of pre-registered visitors or employees.
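One concrete piece of the cross-checking described above is validating the machine-readable zone (MRZ) of a passport. The sketch below implements the ICAO Doc 9303 check-digit calculation for the document-number field of a TD3 (passport) MRZ second line; it is a minimal Python illustration, not the internals of any particular reader.

```python
def mrz_check_digit(field: str) -> int:
    """ICAO 9303 check digit: weights 7,3,1 repeating; '<' = 0, A-Z = 10..35."""
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            val = int(ch)
        elif ch == "<":
            val = 0
        else:
            val = ord(ch) - ord("A") + 10
        total += val * weights[i % 3]
    return total % 10

def document_number_ok(mrz_line2: str) -> bool:
    """Validate the document-number field (chars 0-8, check digit at 9)
    of a TD3 passport MRZ second line."""
    number, check = mrz_line2[:9], int(mrz_line2[9])
    return mrz_check_digit(number) == check
```

The same weighted checksum is applied to the date-of-birth and expiry fields, which is how OCR output can be sanity-checked before it is handed to the visitor management system.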

Images of an ID captured using multiple wavelengths of light (white, infrared and ultraviolet) to expose the printed visible and invisible security features can be used to complete automatic authentication. A wide range of authentication techniques is used to determine whether the document is genuine or not, including the detection of optically variable inks, UV pattern matching, and cross-checking of personal data in the machine-readable zone, visual inspection zone and biometric chip.

For further assurance in self-service or partially attended applications, a facial image extracted from the document data page or biometric chip can be used to complete a 1-to-1 face match with a live image of the document holder to ensure the visitor is who they claim to be.

This ability to read different national IDs, barcodes, RFID cards and even EU Digital COVID Certificates on a single device allows a visitor management system to accommodate a much wider range of visitors.

Future of Visitor Management Starts Now

The world is becoming ever more connected through global trade and cultural exchange. As a result, organisations and workplaces are transforming from national to international entities at an unprecedented rate. The systems used to manage the multitude of individuals attending a site need to be as flexible and dynamic as the organisations deploying them. Multi-modal devices provide a simple, single touchpoint for all types of visitors across an entire global organisation for a safer – and simpler – world.

*Ian Holmes is presales solution engineer at Access IS, part of HID Global. Ian is based in Reading in the United Kingdom.

More SEN news here.