LG rolled out an Android 9.0 Pie beta to G7 ThinQ owners in its home market, but there has been little news on when customers outside of Korea would receive the update. That changes today, as the manufacturer has confirmed that the stable Pie update will begin rolling out globally this week.
The OTA should start rolling out first to users in South Korea followed by global markets. The update will bring all the new additions in Pie as well as LG’s own customizations. The G7 picked up a substantial update last month that delivered a host of fixes and the latest security patch, and the Pie rollout will undoubtedly be an added bonus.
There’s no exact timeline for when the update will reach global markets, but we’ll let you know once we hear more. In the meantime, start hitting that update button if you’re rocking the G7.
It seems like almost every phone these days offers some sort of object detection within its camera software. As mobile photography shifts towards relying more on computational data rather than sheer megapixel count and sensor size, phones from the likes of Samsung, LG, Huawei, and others are doing their best to identify the subject of your shot, whether it’s a beautiful landscape or just your hot meal.
Object detection doesn’t need to be a user-facing feature:
By understanding what you’re taking a picture of, your phone’s camera software can automatically change settings like saturation, exposure, shutter speed, and so on to best fit the specific situation. If your phone detects a lot of grass and trees in your shot, it’ll ramp up the saturation of the green channel. Likewise, it might raise the shutter speed if you’re taking a picture of your dog, reducing motion blur in case the dog gets too excited and starts jumping around.
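In pseudocode terms, this kind of scene-aware auto mode boils down to a lookup from a detected label to a preset of setting tweaks. Here's a minimal sketch of the idea; the labels, values, and function name are all made up for illustration and don't reflect any vendor's actual pipeline:

```python
# Illustrative sketch only -- not any phone maker's real camera code.
# A detected scene label is mapped to hypothetical capture-setting tweaks.

def settings_for_scene(label):
    """Return automatic setting adjustments for a detected scene label.

    All labels and values here are invented for illustration.
    """
    presets = {
        # Greenery: boost saturation so grass and trees pop.
        "landscape": {"saturation": 1.3, "shutter_speed": 1 / 125},
        # Pets move fast: shorten the shutter to freeze motion.
        "pet":       {"saturation": 1.0, "shutter_speed": 1 / 500},
        # Food shots tend to favor extra warmth and saturation.
        "food":      {"saturation": 1.2, "shutter_speed": 1 / 60},
    }
    # Fall back to neutral defaults when the scene isn't recognized.
    return presets.get(label, {"saturation": 1.0, "shutter_speed": 1 / 125})

# A "pet" detection picks a much faster shutter than the default.
print(settings_for_scene("pet")["shutter_speed"])
```

The real systems are obviously far more granular than a three-entry table, but the principle is the same: the detection result silently selects a tuning profile before you ever press the shutter.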
Point-and-shoot cameras have been doing this to some extent for years now, though in most cases it wasn’t quite as streamlined; there was often a dial for you to manually tell the camera what type of photo you were taking, and it would adjust the automatic settings accordingly. Smartphones are a bit … well, smarter than that now and can handle the legwork for you, and in almost every case the camera UI will alert you to the type of scene it’s detected with a small icon by the shutter button. But is that really necessary?
In every phone I’ve tried that features this sort of object detection, ranging from the Galaxy Note 9 to the Huawei P20 Pro and the LG G7 ThinQ, you really can’t do anything once the phone identifies the subject, beyond just taking the picture. If my phone incorrectly identifies, say, a dog as a landscape, I can’t tap the landscape icon and correct it; I can either take the photo anyway and hope for the best, or I can disable object detection entirely.
So what good is telling me what kind of object the phone thinks I’m shooting? Sure, some people might just feel better seeing that their phone is correctly identifying the scene, but if you can’t correct a mistake, is the label really helpful? Without the ability to make any adjustments, I’d rather just let the object detection happen in the background without me ever knowing what it’s changing; you can always disable it if you don’t like the results, and those after more control can shoot in manual mode anyway.
How do you feel about computational object detection? Do you like that it tells you what it’s detected, or would you rather not see it since you can’t change it anyway? Let us know in the comments!