Computer Vision Trends That Will Dominate the Industry in 2023

There is a lot that the eyes of the machine can see

Computer vision is the eyes of the machine. AI models are built to recreate a living being’s ability to look at the world around it, interpret it, and understand it. Machines do this by analysing the images, videos, and objects around them. 

Recent developments like Tesla’s Optimus robot and Full Self-Driving rely heavily on computer vision for object detection and image tracking. Even 2D-to-3D models use computer vision for image analysis and interpretation. The Conference on Computer Vision and Pattern Recognition (CVPR) 2022 saw a total of 8,161 submissions, thousands of which tackled different problems in AI/ML. 

Given these advancements in computer vision, let’s look at some of the predictable trends in the field.


Autonomous vehicles

Self-driving vehicles have been a long-running goal. One of the most important requirements for an autonomous vehicle is identifying the objects around it so that it can traverse and navigate safely. This is where computer vision-based algorithms come into the picture. Companies like Tesla have been adopting techniques like auto-labelling to further their autonomous driving efforts.

The same technology can be useful for other transportation-based applications like vehicle classification, traffic flow analysis, vehicle identification, road condition monitoring, collision avoidance systems, and driver attentiveness detection.
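At the core of object detection, auto-labelling, and collision-avoidance pipelines is a matching step between predicted and labelled bounding boxes. A minimal sketch of that step, using the standard intersection-over-union (IoU) overlap metric (the box format and threshold here are illustrative, not any company's implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero area if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_labels(predictions, labels, threshold=0.5):
    """Greedily pair predicted boxes with label boxes whose IoU clears the threshold."""
    matches, used = [], set()
    for i, pred in enumerate(predictions):
        best, best_iou = None, threshold
        for j, label in enumerate(labels):
            if j in used:
                continue
            score = iou(pred, label)
            if score >= best_iou:
                best, best_iou = j, score
        if best is not None:
            used.add(best)
            matches.append((i, best))
    return matches
```

Matched pairs count as correct detections; unmatched predictions and labels become false positives and false negatives when evaluating a detector.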

Increased use of edge computing

As the demand for real-time processing of visual data increases, there will likely be a trend towards using edge computing to perform computations closer to the source of the data. Traditionally, computer vision tasks have been performed on centralised servers or cloud-based systems, which can be time-consuming and require a stable internet connection. Edge computing enables these systems to make quick and accurate decisions based on visual data without sending the data back and forth to the cloud for processing.
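A minimal sketch of this edge pattern, assuming a toy grayscale-frame format (an illustration, not any vendor's API): the device scores each frame locally and uploads only frames that cross a motion threshold, instead of streaming all raw video to the cloud.

```python
def motion_score(prev_frame, frame):
    """Mean absolute pixel difference between two equal-sized grayscale frames."""
    diffs = [abs(a - b)
             for row_a, row_b in zip(prev_frame, frame)
             for a, b in zip(row_a, row_b)]
    return sum(diffs) / len(diffs)

def edge_filter(frames, threshold=10.0):
    """Return only the frames worth uploading: those with enough motion."""
    uploads = []
    prev = frames[0]
    for frame in frames[1:]:
        if motion_score(prev, frame) > threshold:
            uploads.append(frame)
        prev = frame
    return uploads
```

In a real deployment the local check would be a lightweight detection model rather than a pixel difference, but the bandwidth-saving structure is the same.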


Robotics

One of the main areas where computer vision is expected to play a significant role in robotics is enabling robots to navigate and manipulate objects in their environment. By using algorithms to analyse images and video from cameras, robots can detect and identify objects, as well as understand their shape, size, and location. This allows robots to perform tasks such as grasping and moving objects, avoiding obstacles, and navigating through complex environments.

Robots can understand and respond to human behaviour through computer vision by analysing facial expressions, body language, and other visual cues. As a result, robots could potentially be used in applications such as customer service, education, and healthcare.
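Once vision has turned camera input into an occupancy grid of free and blocked cells, obstacle avoidance becomes a graph search. A minimal sketch, assuming a toy grid representation (1 = obstacle, 0 = free; cells are (row, col) pairs), uses breadth-first search to plan a collision-free route:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Shortest 4-connected path through free cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable
```

Real robots use richer planners and continuously re-plan as the perceived map updates, but the grid-plus-search structure is the common starting point.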

Healthcare, safety, & security

  • Medical image analysis: Computer vision can be used to analyse medical images, such as X-rays, CT scans, and MRIs, to detect abnormalities or diseases. For example, a computer vision system could be trained to recognise the presence of a tumour in an MRI scan.
  • Diagnosis and treatment planning: Computer vision can be used to assist with diagnosis and treatment planning. For example, a computer vision system could be used to analyse medical images and recommend the most appropriate treatment for a patient based on their specific condition.
  • Monitoring patient health: Computer vision can be used to monitor patient health by analysing vital signs such as heart rate, respiration rate, and blood pressure.
  • Robotic surgery: Computer vision can be used in robotic surgery to assist surgeons in performing complex procedures. For example, a computer vision system could be used to guide the movement of a surgical robot, ensuring that it stays on course and avoids damaging any surrounding tissue.
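As a toy illustration of the medical-image idea above (real systems use trained deep models on clinical scans, not a fixed brightness rule), one can flag a scan when it contains a contiguous bright region larger than a chosen size:

```python
def bright_region_sizes(image, threshold=200):
    """Sizes of 4-connected regions of pixels at or above the threshold."""
    rows, cols = len(image), len(image[0])
    seen, sizes = set(), []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and (r, c) not in seen:
                # Flood-fill one bright region with an explicit stack.
                stack, size = [(r, c)], 0
                seen.add((r, c))
                while stack:
                    cr, cc = stack.pop()
                    size += 1
                    for nr, nc in ((cr + 1, cc), (cr - 1, cc),
                                   (cr, cc + 1), (cr, cc - 1)):
                        if (0 <= nr < rows and 0 <= nc < cols
                                and image[nr][nc] >= threshold
                                and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            stack.append((nr, nc))
                sizes.append(size)
    return sizes

def is_anomalous(image, threshold=200, min_size=4):
    """Flag the scan if any bright region is at least min_size pixels."""
    return any(s >= min_size for s in bright_region_sizes(image, threshold))
```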


Retail

Shops and retail stores can be fitted with cameras that analyse items on shelves, automatically detect stock levels, and recognise which items sell the most. Apart from inventory management, AR can be used to create “virtual fitting rooms” or “virtual mirrors” that let shoppers try out items without touching them or even going to the store, much like how filters on Snapchat or Instagram superimpose items on the person in front of the camera. 

Data-centric AI

Optimising the quality of data is as important as increasing its quantity when it comes to training models and building algorithms. Image recognition models enable machines to identify and classify pictures of different objects, and labelling these images correctly is essential for extracting the right information from the data. Unsupervised and automated labelling techniques can therefore improve accuracy when labelled data is scarce.
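One data-centric workflow this suggests, sketched here with hypothetical inputs, is auditing label quality: flag every example where a trained model confidently predicts a class different from the assigned label, and queue those examples for re-labelling.

```python
def suspect_labels(labels, predictions, min_confidence=0.9):
    """Indices where the model confidently disagrees with the assigned label.

    `predictions` is a list of (predicted_class, confidence) pairs aligned
    with `labels`; flagged indices are candidates for human re-labelling.
    """
    flagged = []
    for i, (label, (pred, conf)) in enumerate(zip(labels, predictions)):
        if pred != label and conf >= min_confidence:
            flagged.append(i)
    return flagged
```

Cleaning the flagged examples often improves a model more cheaply than collecting new data, which is the central claim of the data-centric approach.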

3D reconstruction

In 2022, we witnessed text-to-image models, which eventually led to text-to-3D models. This further led to 3D reconstruction models using methods like Neural Radiance Fields (NeRF), which can turn 2D images into 3D meshes for recreating scenes and for building models in the metaverse. 3D reconstruction can also be used to create immersive virtual and augmented reality experiences, allowing users to interact with digital environments in a more realistic and natural way.
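Methods like NeRF work by inverting a camera's forward model: they optimise a 3D scene representation until its renders match the observed 2D images. A minimal sketch of that forward model, with illustrative intrinsics, is the pinhole projection of a 3D point to a pixel:

```python
def project(point, f=100.0, cx=320.0, cy=240.0):
    """Project a camera-space 3D point (x, y, z), z > 0, to a pixel (u, v).

    f is the focal length in pixels; (cx, cy) is the principal point.
    All values here are illustrative defaults, not a calibrated camera.
    """
    x, y, z = point
    return (f * x / z + cx, f * y / z + cy)
```

Reconstruction methods adjust the scene (and sometimes the camera parameters) so that projecting the scene's points reproduces the pixels seen in each input photo.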


Space tech

Apple has computer vision-based applications that can detect objects in the sky when you point your phone towards them. This is just one use case of computer vision in the space industry. By analysing imagery and data collected by satellite or aerial sensors, we can accurately map and analyse the Earth’s surface and environment. Moreover, by analysing geospatial data from satellites, we can predict natural disasters such as earthquakes and hurricanes and then work to reduce their impact. 
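One concrete example of geospatial analysis from satellite imagery (the band values below are illustrative) is NDVI, the normalised difference vegetation index, computed per pixel from the near-infrared (NIR) and red bands; values near +1 indicate dense vegetation, while values near zero or below indicate bare soil or water.

```python
def ndvi(nir, red):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red) for two equal-shaped 2D bands."""
    out = []
    for nir_row, red_row in zip(nir, red):
        row = []
        for n, r in zip(nir_row, red_row):
            # Guard against division by zero on fully dark pixels.
            row.append((n - r) / (n + r) if (n + r) else 0.0)
        out.append(row)
    return out
```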

Computer vision can also be used for space exploration by locating and identifying space objects and detecting their characteristics. Identifying these objects can also help with cleaning up space debris, for which NASA, ISRO, and other agencies and companies are planning projects.


Mohit Pandey
Mohit is a technology journalist who dives deep into the Artificial Intelligence and Machine Learning world to bring out information in simple and explainable words for the readers. He also holds a keen interest in photography, filmmaking, and the gaming industry.
