CES 2022 Day 2 Technical Notes
Here are some of the exhibitors I had the opportunity to meet on the second day of this year’s show.
At Eureka Park, EAIGLE showcased a complete AI-based "all-in-one" kiosk with contactless visitor management, automated wellness screening, vaccine verification, people counting, capacity management, and crowd temperature screening.
Essence Security, one of the world's largest providers of alarm system solutions, won two CES Innovation Awards with its 5G-connected MyShield smoke-generating intruder deterrent and its 5G-connected Umbrella personal security alert device, which can connect directly to public safety answering points (PSAPs) and to central station monitoring providers. Be sure to check out the article from Security Business Editor-in-Chief Paul Rothman to learn more about Essence's Umbrella solution and its expansion into commercial enterprise security deployments.
If you're driving an Audi S7 Sportback, BMW M760Li xDrive, or 2021 Cadillac Escalade, you're already using InfiRay's uncooled IR sensors for driver assistance. At CES, the company unveiled the first 8 μm uncooled thermal camera sensor, which could have far-reaching potential for applications like body-worn cameras.
For the security and identification market, Isorg presented its Fingerprint-on-Display (FoD) modules for better fingerprint authentication on smartphones and improved dry-finger performance in harsh conditions. The sensor modules support FAP30 and FAP60 form factors and can capture up to four fingers simultaneously touching a smartphone screen. Four-finger sensors scan the four fingers of each hand followed by the two thumbs (the 4-4-2 sequence). Each 10-print profile produces a seamless full record, which is why these scanners are the fastest option and the FBI's preferred choice for enrollment. Four-finger scans also provide increased accuracy for identification operations.
Isorg's next step is to showcase these innovations as a trusted partner for smartphone and security solution providers involved in mobile banking, border control, first response, and electronic access control.
CES 2022 has become a living catalog of AI accelerators and systems on a chip (SoCs), and for some, like Femtosense, the introduction of an emerging category: the hyper-efficient AI processor for the embedded edge, also known as the Sparse Processing Unit (SPU).
About six years ago, when ADAS code was first deployed in luxury vehicles to help owners park, brake early, avoid pedestrians while reversing, and recognize objects approaching faster than driver reaction time, the emphasis seemed to be on building stable codebases, even if that meant millions of lines of code. Today's vehicles can require over 100 million lines of code, a complex architecture, and dense neural networks. Higher density means more processing power, which translates into cost, heat, power draw, and, for an electric vehicle, reduced range.
For a 911 operator trying to hear whether a call is an unarmed domestic argument or involves gun violence, an SPU running efficient code that can distinguish multiple people talking amid background noise can make the difference between dispatching the wrong response team and saving lives.
Femtosense AI, with its SPU, demonstrated speech recognition in the extremely noisy trade show environment, isolating intact speech from background noise in real time. Legacy noise cancellation technologies still sold in consumer electronics today are closer to sound suppression than to preservation of speech frequencies.
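To see why suppression is not the same as preservation, here is a minimal numpy sketch of legacy-style spectral gating. It is not Femtosense's algorithm; the signal frequencies, amplitudes, and threshold are invented for illustration. The gate removes the broadband noise, but it also deletes the quiet speech component along with it.

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs                          # one second of audio

# Hypothetical mix: a loud speech tone (300 Hz), a quiet speech
# component (1200 Hz), and broadband background noise.
loud = np.sin(2 * np.pi * 300 * t)
quiet = 0.05 * np.sin(2 * np.pi * 1200 * t)
noise = 0.3 * np.random.default_rng(1).standard_normal(fs)
mix = loud + quiet + noise

# Legacy-style spectral gating: zero every frequency bin whose
# magnitude falls below a fixed fraction of the spectral peak.
spec = np.fft.rfft(mix)
gate = np.abs(spec) > 0.1 * np.abs(spec).max()
cleaned = np.where(gate, spec, 0)

# The noise bins are gone, but so is the quiet speech component:
# the gate suppresses sound rather than preserving speech frequencies.
print("300 Hz speech survives: ", abs(cleaned[300]) > 0)
print("1200 Hz speech survives:", abs(cleaned[1200]) > 0)
```

Running the sketch shows the loud tone surviving and the quiet one vanishing, which is exactly the trade-off a preservation-oriented approach tries to avoid.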
In AI-based video processing, with objects moving in different directions against complex backgrounds, such as burning buildings during a riot, video evidence may not be rendered accurately. With the Femtosense AI SPU, sparsity of weights and activations can reduce power requirements by up to 100 times and memory usage by 10 times.
In addition to its ultra-efficient neural network processors, Femtosense AI provides everything needed to move from neural network model to SPU, tasks typically performed by a solution provider who may not be as familiar with processor development.
Visual Behavior works with companies like Waymo and NVIDIA, developing robotic perception for applications including automated guided vehicles (AGVs), advanced driver assistance systems (ADAS), and unmanned aerial vehicles (UAVs).
Visual Behavior uses a remarkable new paradigm focused on scene representation rather than sensors: an internal, persistent, symbolic representation of the world that is continually updated. Its core technology is an artificial visual cortex, AI-powered software for scene understanding.
Use cases include better driver safety in poor environmental conditions, better avoidance of multiple obstacles, and even object tracking among similar objects.
About the Author:
Steve Surfaro is Chairman of the Security Industry Association (SIA) Public Safety Working Group and has over 30 years of experience in the security industry. He is an expert in smart cities and buildings, cybersecurity, forensic video, data science, command center design, and first responder technologies. Follow him on Twitter, @stevessurf.