AI Video Analytics
Distributed AI With Data Privacy
Artificial Intelligence built on world-leading research
Industry Leading CCTV Video Analytics
Vision Semantics is a leading Deep Learning AI solution provider focused on computer vision and machine learning. We develop AI-powered video analytics surveillance software and innovative deep learning software that is enhancing the video surveillance and CCTV industry.
Our proprietary video analysis and dynamic scene understanding software enables dynamic search of the virtual environment to impact outcomes in the physical world.
In solving the most complex Deep Learning AI computer vision problems, we have developed a unique approach to Decentralised Machine Learning, the next-generation AI platform, based on a dynamic lifelong learning AI model that can “search and learn”.
AI today is built on Big Centralised Data: an algorithm learns from what it is trained on, and must often discard that learning in order to learn something new. Because all of the data is stored and processed in one place, on powerful central servers, a security data breach is a real concern. Centralised processing also demands high power consumption, resulting in higher running costs and the need for additional resources. The edge is the next stage in the evolution of AI technology because of the physical, cost and practical constraints of running all AI applications in the cloud. Our approach, perfected in computer vision, enables us to deliver a decentralised AI platform that is more advanced than Google's Federated Learning.
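To illustrate the decentralised idea in general terms (this is a minimal sketch of federated averaging, not our platform; the linear model, node setup and function names are invented for the example), each edge node trains on its own private data and only model weights, never raw data, are sent to be combined:

```python
# Sketch of federated averaging: nodes train locally; only weights travel.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One node's local training step: linear regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(weights_list, sizes):
    """Combine local models, weighting each node by its data volume."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights_list, sizes))

# Two nodes with private data; the coordinator only ever sees weights.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
nodes = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    nodes.append((X, X @ true_w))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in nodes]
    global_w = federated_average(local_ws, [len(y) for _, y in nodes])

print(np.round(global_w, 2))  # converges towards true_w without pooling data
```

The key property is that the raw data never leaves each node, which is what makes the approach attractive for privacy-sensitive video data.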
It’s the future platform for AI with multiple sector applications.
Artificial Intelligence Products Built on World-Class Research
Vision Semantics began as a spin-off from Queen Mary University of London's Computer Vision Department, led by founder Professor Sean Gong. The team has been conducting world-class research in computer vision and machine learning for over 20 years, and is internationally renowned for its work on unusual behaviour recognition, person re-identification, multi-camera tracking, and face analysis in video and images. This research has led to over 400 published papers, over 27,000 citations and the filing of 9 families of international patents, with 34 underlying patents, providing barriers to competitors exploiting computer vision commercially.
The main areas of application include person re-identification, action and behaviour analysis, attribute recognition, clothing analysis, image and video super-resolution, crowd analysis and counting, video semantic search and annotation, video summarisation, and privacy-by-design image analysis. We take our research and commercialise it so that it is robust, scalable and can be deployed by partners in real-world situations.
Deep learning video analytics is transforming computer vision analysis
We've developed world-class video analysis and dynamic scene understanding software that employs innovative computer vision and machine learning methods focused on deep learning. We have built a commercial-grade software platform for patented automatic video analysis of vast quantities of video and images, running at up to 100X real time.
We've also developed Person Re-Identification: the ability to find a person in time and space across vast quantities of video data from distributed cameras, even when there is no clear view of the face or the lighting is poor. We are unique in being able to find people within low-resolution, poor-quality video footage.
AI Powered Person Re-Identification
The video surveillance industry is experiencing a massive growth spurt as we increasingly use cameras to prevent crime, secure evidence, protect our property, and improve public safety at home and in public spaces. The increase in video surveillance means more data being generated by digital surveillance cameras, all of which needs to be processed and analysed. The volume of stored data can be overwhelming, making it impossible or commercially impractical for human operators to make sense of it in a timely manner.
A major challenge is Person Re-Identification (Re-ID): the process of finding matching images of a person in a gallery of images for a given probe image, without using facial features or any other person-specific biometrics such as gait (which are often obscured). Re-ID instead exploits features of the whole body: clothing, style and carried objects. It is designed to work when there are no training examples for the target person (Zero-Shot Learning).

Facial recognition, by contrast, works by pinpointing and measuring facial features in a given image, using N-Shot Learning algorithms that require exhaustive enrolment of training samples for every individual in a database. Facial recognition works best when the subject is close to the camera, well lit and facing towards it. In CCTV footage, however, the subject's face cannot be captured so easily: people in public are always on the move, and faces are often much more difficult to capture clearly on camera. Re-Identification does not use facial imagery and does not rely on private data to identify subjects, so it preserves privacy by design.
How Does Re-Identification Work?
Vision Semantics provides the ability to search for people using Re-Identification where facial recognition will not work: in unstructured and uncontrolled environments, whether indoors or outdoors, close by or far away, day or night. Our Re-Identification solution can rapidly find people at any time of day, with high accuracy over the large-scale, crowded environments that public safety agencies operate in, and with the ability to handle low-resolution footage as well.
Re-Identification applies a Zero-Shot learning approach: the machine learns to recognise a person it has never seen during training, without any enrolled examples of that individual. In other words, Zero-Shot learning aims to help machines categorise objects or people they have never seen before. This makes the solution scalable: it can be deployed in cities internationally without local training data.
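Conceptually, this kind of re-identification can be pictured as nearest-neighbour search over appearance embeddings. The sketch below is purely illustrative (the embedding vectors, gallery IDs and function names are invented; a real system would derive embeddings of whole-body crops from a deep network):

```python
# Illustrative sketch: re-ID as ranking a gallery by appearance similarity.
import numpy as np

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def re_identify(probe, gallery, top_k=3):
    """Rank gallery images by appearance similarity to the probe embedding."""
    scores = [(img_id, cosine_similarity(probe, emb))
              for img_id, emb in gallery.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

# Hypothetical gallery: embeddings of people seen on other cameras.
gallery = {
    "cam2_f0143": np.array([0.90, 0.10, 0.30]),  # red jacket, backpack
    "cam3_f0871": np.array([0.10, 0.80, 0.20]),  # blue coat
    "cam5_f0012": np.array([0.85, 0.15, 0.35]),  # red jacket, backpack
}
probe = np.array([0.88, 0.12, 0.33])  # person of interest, camera 1

for img_id, score in re_identify(probe, gallery):
    print(f"{img_id}: {score:.3f}")
```

Because matching is done by distance in an embedding space rather than by per-person enrolment, no training examples of the target person are needed, which is the zero-shot property described above.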
We uniquely apply re-identification to human-in-the-loop “search and learn” AI algorithms, bridging transfer learning, reinforcement learning, semi-supervised and unsupervised deep learning, to enable rapid and scalable re-identification in unknown target domains from an open-set world.
Our re-identification software has grown beyond research-lab experimental code to become commercially mature and tested. Our re-identification technology is underpinned by world-class algorithms that have led academic benchmarks not only in supervised settings but also in unsupervised and domain-transfer testing, on both the largest benchmark, Market-1501, and the hardest benchmark, GRID. We've gone beyond market testing into making the technology scale for real commercial deployments.
We are the driving force within this machine learning market, with a 12-18-month sustainable technology lead in re-identification over the nearest competitor. Our technology is backed by 9 international patent families with 34 underlying patents. This is combined with 10 years of commercial work with international Government and law enforcement agencies to ensure the technology is accurate, fast and scalable for real world deployments.
Capabilities and Roadmap
Today, re-identification is applied to post-event analysis, where we search for a person in time and space across video footage from a network of CCTV cameras. This capability is being enhanced by adding attribute search to re-identification, so we can now search for a person with an object (bag, umbrella, etc.) or a vehicle (car, motorbike or bicycle).
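One way to picture attribute search layered on top of re-identification is as a filter over candidate matches (a hypothetical data model, not the actual product API; the class, function and attribute names are invented for illustration):

```python
# Sketch: narrowing re-ID candidates by required attributes (e.g. "has a bag").
from dataclasses import dataclass

@dataclass
class Detection:
    camera: str
    timestamp: float
    reid_score: float   # appearance similarity to the probe person
    attributes: set     # attributes detected for this person

def attribute_search(detections, required, min_score=0.7):
    """Keep strong re-ID matches that also carry all required attributes."""
    return [d for d in detections
            if d.reid_score >= min_score and required <= d.attributes]

detections = [
    Detection("cam2", 101.5, 0.91, {"bag", "umbrella"}),
    Detection("cam3", 140.2, 0.88, {"bag"}),            # no umbrella
    Detection("cam4", 163.0, 0.65, {"bag", "umbrella"}),  # weak match
]

hits = attribute_search(detections, required={"bag", "umbrella"})
for d in hits:
    print(d.camera, d.timestamp)  # only cam2 passes both filters
```

Combining the two signals lets an operator express richer queries ("this person, carrying a bag") without any facial data.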
We released Real Time re-identification in 2020, which transforms the capability of public safety agencies to act in real time on events detected within video footage. If police and public safety officials can monitor and respond in real time, a crime or social disturbance can be recognised and prevented.
We are currently working on a behaviour video analysis module that will work within the same framework. This will enable us to re-identify behaviours that serve as trigger events for a predicted outcome. The module facilitates abnormal behaviour profiling, and can then alert operators to events such as lost passengers, passengers at risk of self-harm, and group social-distancing violations. This will enable future automation of the process flow for protecting the public from threats.