Welcome to the World of Machine Vision

It feels like machine vision is cropping up everywhere these days, but what is it?
Machine vision is a system that couples a camera with a computer and image-interpretation software to turn pictures into data. That data is in turn processed and used to decide on an action. Essentially, a machine is programmed to look for a specific thing and then perform an action when it sees what it is looking for.
For example, a camera can be set up on a factory’s packaging line to look for defects in packaging. When it sees a defect in an item, the machine can reject that item. All of this can happen much faster than if a human were doing the same thing.
The definition of machine vision can also include all types of machines that exist to create images for interpretation, which is why these systems perform roles in manufacturing, security, surveillance, and more. Machines with cameras are being used in a wide variety of tasks, such as Celestial Tea using cameras to inspect packaging on the conveyor line, or Tesla using the technology to have its cars read speed limit signs and adjust their speed accordingly. Another prime example of machine vision is in the security sector: when you go through the customs check at the airport, a camera snaps your photo and uses facial recognition to look for people who have been flagged.

Machine vision used in Google Translate
Machine vision and pattern recognition power Google Translate's on-the-fly language swap.

Machine vision is here to stay. As cameras and the supporting technology become less expensive, they will be integrated into more places, doing more tasks faster and more reliably than a human. As photographers, we will have more access to the technology to help us do our work. It will save us from the more tedious tasks associated with our work and help us process and catalog large amounts of information.
Machine vision uses visible light and a camera to take a picture. The pixels in that photo are then processed by software that searches for areas of contrast or pixels of specified values. Some examples of parameters the software looks for are:
  • Edge detection: finds object edges
  • Color comparison: searches for color within a specified RGB range
  • Pixel counting: counts the number of light or dark pixels
  • Pattern recognition: matches pixels in a specific arrangement
  • Depth measurement: uses images to model three-dimensional space
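As a toy illustration of two of these parameters, the following Python sketch runs pixel counting and a crude edge check over a tiny grayscale image represented as nested lists. The image values and thresholds are made up for the example; real systems use optimized image-processing libraries rather than plain lists.

```python
# A toy 5x5 grayscale "image": 0 = dark, 255 = bright.
# The bright 3x3 square in the middle stands in for an object.
image = [
    [0,   0,   0,   0,   0],
    [0, 255, 255, 255,   0],
    [0, 255, 255, 255,   0],
    [0, 255, 255, 255,   0],
    [0,   0,   0,   0,   0],
]

def count_bright_pixels(img, threshold=128):
    """Pixel counting: how many pixels exceed a brightness threshold."""
    return sum(1 for row in img for px in row if px > threshold)

def detect_edges(img, threshold=128):
    """Edge detection: mark pixels whose right or lower neighbour
    differs sharply in brightness (a crude gradient check)."""
    edges = set()
    for y in range(len(img) - 1):
        for x in range(len(img[0]) - 1):
            if abs(img[y][x] - img[y][x + 1]) > threshold:
                edges.add((x, y))
            if abs(img[y][x] - img[y + 1][x]) > threshold:
                edges.add((x, y))
    return edges

print(count_bright_pixels(image))  # 9 bright pixels in the square
print(sorted(detect_edges(image)))  # coordinates along the square's outline
```

Even this crude check recovers the outline of the bright square, which is the basic idea behind using contrast to find an object's boundary.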
There are many more parameters that algorithms can hunt for in a sea of pixels. The exact process depends on the application; in fact, the software is often developed in tandem with the role it will perform.
The field of machine vision abounds with intriguing and exciting examples of applications, both current and just on the horizon. A great example is a camera that can give a visually impaired person information about an object, building, or just about anything in front of them. You can already use a smartphone’s camera to recognize a painting in a museum and have it tell you the name of the painter. You can even use your smartphone to translate foreign text on a sign.

Optical character recognition translating text
Google Translate app using OCR to translate text.

Other examples include:
Facial Recognition
  • Unlock your devices and computers
  • Surveillance
  • Tagging images in Adobe Photoshop Lightroom
Optical Character Recognition
  • Serial number reading
  • License plate readers
  • Project Gutenberg (book digitization)
  • Cars that read speed limit signs
Inspecting on a production line
  • Packaging can be inspected for the correct sealing
  • Parts can be inspected for defects
  • Measuring liquid levels in a bottling line or measuring part sizes
  • Counting to make sure that a box of 24 has the correct amount
Agriculture
  • Crop maintenance and irrigation
  • Harvesting
  • Post-harvest quality control
Scientific analysis
  • Cell analysis
  • Mapping and GIS analysis
  • Weather modeling
By now you may be worried that machines with cameras will replace you. I assure you, your job is probably safe for the foreseeable future. Human photographers take photos for humans to interpret, share, and enjoy. Machines are not quite able to understand all the nuances that can be read in a photograph yet, which is why they will be stuck in the industrial world for a while longer. 
On the other hand, there are some troubling developments in the field if you are a professional retoucher or editor. Google's artificial intelligence can now automatically combine and retouch images in some disturbing ways, and create and edit stories. Maybe that seems like a gimmick now, but so did photography itself in its early days.
However, putting our existential anxiety aside for a second, there are many ways you can use machine vision to your own advantage as a photographer.  
One promising way to use machine vision technology is auto-tagging, where the software analyzes the content of an image and lists the things that are present. For example, if you have it analyze a photo of strawberries, the software might return the tags: berries, fruit, strawberries, fresh. This automated process promises to eliminate many hours of manually tagging images with keywords. The photo-sharing site Flickr, for example, automatically tags your images on upload.
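To give a feel for how pixel statistics can be turned into keywords, here is a deliberately simplified Python sketch. Real auto-taggers such as Flickr's use trained neural networks; the colour rules and tag names below are invented purely for illustration.

```python
# Toy auto-tagger: maps the dominant colour channel of an image to a
# coarse tag. This only sketches the idea of deriving keywords from
# pixel statistics; it is nothing like a production tagger.
def auto_tags(pixels):
    """pixels: list of (r, g, b) tuples. Returns a list of tags."""
    n = len(pixels)
    avg_r = sum(p[0] for p in pixels) / n
    avg_g = sum(p[1] for p in pixels) / n
    avg_b = sum(p[2] for p in pixels) / n
    tags = []
    if avg_r > avg_g and avg_r > avg_b:
        tags.append("warm tones")    # reds dominate (e.g. strawberries)
    elif avg_g > avg_b:
        tags.append("foliage")       # greens dominate
    else:
        tags.append("sky or water")  # blues dominate
    return tags

# A mostly-red sample, standing in for the strawberry photo:
print(auto_tags([(200, 40, 30), (180, 60, 50), (210, 30, 40)]))
# ['warm tones']
```

Where this toy version matches colours against hand-written rules, a real tagger has learned the association between pixel patterns and words like "strawberries" from millions of labelled photos.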
Another machine vision technology available to photographers right now is facial recognition, built into Lightroom 6, Apple Photos, and Picasa. Facial recognition assesses your photographs and looks for faces. It then groups together similar faces that the program thinks belong to the same person. You still have to go in and put a name to the face and sort out false matches, but this technology can help you find and keep track of all the images you have taken for clients.

Facial recognition in Adobe Lightroom
Using facial recognition in Adobe Lightroom to tag people.
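The grouping step behind these tools can be pictured as clustering: each detected face is reduced to a numeric descriptor, and descriptors that lie close together are assumed to belong to the same person. The Python sketch below uses made-up two-number descriptors and a simple distance threshold; real software derives high-dimensional descriptors from trained models, and its exact clustering method is not public.

```python
# Sketch of face grouping as greedy clustering. The descriptors and
# threshold are invented for illustration.
def group_faces(descriptors, threshold=1.0):
    """Each face joins the first group whose representative
    descriptor lies within `threshold` of it; otherwise it
    starts a new group."""
    groups = []  # list of (representative, members) pairs
    for name, vec in descriptors:
        for rep, members in groups:
            dist = sum((a - b) ** 2 for a, b in zip(rep, vec)) ** 0.5
            if dist < threshold:
                members.append(name)
                break
        else:
            groups.append((vec, [name]))
    return [members for _, members in groups]

faces = [
    ("IMG_001", (0.1, 0.9)),  # person A
    ("IMG_002", (0.2, 0.8)),  # person A again, slightly different pose
    ("IMG_003", (5.0, 5.1)),  # person B
]
print(group_faces(faces))  # [['IMG_001', 'IMG_002'], ['IMG_003']]
```

The false matches you have to sort out by hand correspond to descriptors that happen to fall within the threshold even though they belong to different people.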

Your next photo assignment may involve creating images that will be processed by a machine vision system to collect data. If that’s the case, you will need to know what that system is looking for to help the process along. For example, if the software is looking for changes in contrast, you would want to know how to adjust your lighting to better reveal contrast.
Machine vision is a promising area of technology that can be very helpful in many industries and roles, including for photographers. While there is no imminent danger of being replaced by a machine with a camera, there’s ample opportunity to use the new technology to improve your workflow. The digital revolution has led to a proliferation of cameras and images, possibly more images than we can make sense of without some sort of machine vision to help us interpret them.



