To see how we see – Cortexica Vision Systems releases its VisualSearch API

This really is the stuff of sci-fi movies: a computer program that can see as the human eye does. Cortexica Vision Systems claims to have built just that, with software based on the principles of the human visual cortex.

The London startup was spun out of Imperial College London in February 2009, after six and a half years of research into how humans see and two years spent building algorithms to accurately mimic human visual recognition. Today it releases its VisualSearch API, which has been in private beta for a while. The API is aimed at brands that want to “directly engage with consumers” via their mobile devices, bypassing the need for QR codes, other barcodes or more traditional text search.

Along with being able to “see and process images in the same way as a human brain would”, Cortexica says that the software “can even follow the brush strokes of an oil painting”, which certainly sounds impressive, although admittedly I don’t have an oil painting to hand. It’s also said to compensate for poor lighting conditions and “can identify an object of interest even when occupying only a small part of an image.”
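Cortexica hasn’t published how its cortex-inspired algorithms work, but the general idea behind any content-based visual search – reduce each image to a compact signature, then rank candidates by how closely their signatures match a query – can be sketched with something far simpler. The toy below uses intensity histograms as the signature; it is purely illustrative and nothing like as robust to lighting or occlusion as what Cortexica describes:

```python
# Toy content-based visual search: match a query image against a small
# index by comparing normalized intensity histograms. A deliberately
# simple stand-in, not Cortexica's proprietary approach.

def histogram(image, bins=8):
    """Normalized intensity histogram of a 2D list of 0-255 pixel values."""
    counts = [0] * bins
    total = 0
    for row in image:
        for px in row:
            counts[min(px * bins // 256, bins - 1)] += 1
            total += 1
    return [c / total for c in counts]

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def visual_search(query, index):
    """Return the name of the indexed image whose histogram best matches the query."""
    qh = histogram(query)
    best_name, _ = max(index, key=lambda entry: similarity(qh, histogram(entry[1])))
    return best_name

# Tiny synthetic "images": one dark, one bright, and a dark-ish query.
dark = [[20, 30], [25, 35]]
bright = [[220, 230], [225, 235]]
query = [[28, 22], [31, 24]]

print(visual_search(query, [("dark", dark), ("bright", bright)]))  # prints "dark"
```

A production system would replace the histogram with features invariant to lighting, scale and partial occlusion, which is where the hard research Cortexica alludes to actually lies.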

It’s also the same technology that powers Cortexica’s two existing proof-of-concept iPhone apps: WINEfindr, which enables price comparison of wine via visual search, and BrandTrak, which offers a way to measure the impact of brands across broadcast, online and print media. With today’s announcement, the company is making this technology more broadly available to others.