Google has released details about its semantic image segmentation technology.
We already know that Google is committed to bringing the best in technology – this is especially true with the Android operating system, which already runs on the bulk of smartphones worldwide. The company also open-sources many of its machine learning technologies, meaning that developers can use these methods to power their own apps and services.
One such technology is ‘semantic image segmentation’, which is how Google’s Pixel 2 smartphone can take excellent portrait images with only one lens (most smartphones can achieve portrait-mode effects only with a dual-camera setup).
Semantic image segmentation assigns a label to every pixel of an image taken by the camera. Each pixel can then be classified as belonging to, say, a road, the sky, a person, or an animal. This also reveals which parts of the image are the subject and which are the background.
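To make the idea concrete, here is a minimal sketch of what that per-pixel labeling looks like once a model has produced its output. The class list and scores below are made up for illustration – a real model such as Google’s defines its own label set – but the final step is the same: pick the highest-scoring class at each pixel.

```python
import numpy as np

# Hypothetical class indices -- a real segmentation model defines its own label map.
LABELS = {0: "background", 1: "road", 2: "sky", 3: "person", 4: "animal"}

def label_map(scores: np.ndarray) -> np.ndarray:
    """Turn per-pixel class scores of shape (H, W, C) into an (H, W) label map
    by picking the highest-scoring class at each pixel."""
    return scores.argmax(axis=-1)

# Toy 2x2 "image" with scores for the 5 classes above.
scores = np.array([
    [[0.1, 0.2, 0.6, 0.05, 0.05], [0.1, 0.1, 0.1, 0.6, 0.1]],
    [[0.7, 0.1, 0.1, 0.05, 0.05], [0.2, 0.1, 0.1, 0.5, 0.1]],
])
labels = label_map(scores)    # a label per pixel: [[2, 3], [0, 3]]
person_mask = labels == 3     # True wherever the subject (a person) was detected
```

The resulting boolean mask is what separates subject from background in the next step.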
Put into a photography perspective, this technology can isolate the subject in the middle of an image when you’re taking a portrait shot, then apply a depth-of-field effect that gives the background its blurred look.
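That combination – keep the subject sharp, blur everything else – can be sketched in a few lines. The box blur below is a crude stand-in for a real depth-of-field effect, and the mask is assumed to come from a segmentation step like the one described above.

```python
import numpy as np

def box_blur(img: np.ndarray) -> np.ndarray:
    """A crude 3x3 mean filter standing in for a real depth-of-field blur."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def portrait_effect(img: np.ndarray, subject_mask: np.ndarray) -> np.ndarray:
    """Keep subject pixels sharp; replace background pixels with a blurred copy."""
    blurred = box_blur(img)
    return np.where(subject_mask, img, blurred)

# Toy grayscale image: one bright "subject" pixel on a dark background.
img = np.zeros((4, 4))
img[1, 1] = 9.0
result = portrait_effect(img, subject_mask=(img > 0))
```

In the result, the subject pixel keeps its original value while the surrounding background pixels are smoothed – the same principle, at a much larger scale, behind a single-lens portrait mode.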
Google hopes that by sharing this technology with everyone, others will apply it to their own apps – and even to systems that could power machines and devices in the future.