There is no denying that machines have drastically improved our lives; if our ancestors were presented with them, they would undoubtedly believe them to be magic. Seeing machines start to compose symphonies and gradually replace humans in automated tasks makes us wonder: what next? The exponential growth of technology over the last few decades has shown that there is seemingly no limit to what is possible with computers, but if you break down the ways computers operate, they are still behind humans in numerous areas. For example, a computer can crunch millions of bits of information and look for patterns and correlations in the data, but when it comes to something more visual, such as image or video understanding, it begins to struggle. Progress in this field was so poor that, until a few years ago, a machine could not reliably distinguish a picture of a cat from a picture of a dog, yet we were able to send a robot to explore Mars.
This is all about to change, as Google has detailed a new detection system that can easily spot many objects in a scene, even if they are partly obscured from view. It works through a neural network that rapidly refines its detection parameters, enabling far deeper scanning across three related tasks: classification, classification with localization, and detection. Computers are still miles away from attributing context to the images they see, but there is no doubt that developments in this field will impact major industries, from Google Search (imagine getting more information about something simply by looking at it through Google Glass) to self-driving cars (which depend on object detection and understanding).
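To make the difference between whole-image classification and detection concrete, here is a minimal, purely illustrative sketch of the classic sliding-window idea behind many detectors: a classifier is run over many sub-windows of an image, and windows where it fires become detections with a location attached. The `classify` function here is a stand-in for a trained neural network (I use a trivial brightness score, an assumption for the demo), not anything from Google's actual system.

```python
import numpy as np

def classify(patch):
    # Stand-in for a trained classifier: score how "object-like" a patch is.
    # Here we simply use mean brightness; a real detector would run a network.
    return patch.mean()

def detect(image, window=4, stride=2, threshold=0.5):
    """Slide a window over the image and keep windows the classifier fires on.

    Returns a list of (row, col, score) tuples, i.e. a label *with* a location,
    which is what separates detection from plain classification."""
    detections = []
    h, w = image.shape
    for r in range(0, h - window + 1, stride):
        for c in range(0, w - window + 1, stride):
            score = classify(image[r:r + window, c:c + window])
            if score > threshold:
                detections.append((r, c, score))
    return detections

# A 10x10 "image": dark background with one bright 4x4 "object" at (4, 4).
img = np.zeros((10, 10))
img[4:8, 4:8] = 1.0

print(detect(img))  # only the window centred on the bright object fires
```

Real systems replace the exhaustive window scan with learned region proposals and run the classifier once per proposal, but the output contract is the same: class scores paired with locations.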
If you’re interested in future technologies such as automation, the singularity, self-driving cars and Bitcoin, I have started a little Tumblr blog called Futurology Division where I will be publishing some of my thoughts and articles on the subject. Check it out!