Travel UGC: There’s more to it than meets the eye

Travel user-generated content holds untapped marketing potential: only when properly classified does it present a golden opportunity for publishers to monetise it and make it more targetable for brands.

Holidaymakers aren’t waiting a week for the snaps from their disposable camera to develop; they are posting them in real time from their mobile or laptop straight to their social media channels.

In fact, a new infographic released this week by Stackla highlighted that, as of June 2015, more than 47 million #travel photos had been posted to Instagram alone.

The good news for brands is that 40% of millennials rely specifically on this sort of user-generated content to inform their future travel plans.

Herein lies the advertising opportunity. It is no longer sufficient to validate a picture using only the contextual data of the surrounding text; only by utilising image recognition technology can visual content be fully understood and turned into valuable data.

They say a picture paints a thousand words, so implementing an image classification solution offers the ability to translate that visual content into text. This means that publishers, platforms and advertisers can maximise its potential: the content can be traded and made more targetable for brands, giving it value and context.

As well as classifying a holiday photo or video as IAB20 Travel, neural networks built from convolutional layers have the ability to recognise the key characteristics contained within that image.

For example, if someone uploads a shot of their family standing on a beach in front of their rented holiday jeep, the resulting taxonomy wouldn’t just include ‘Travel’, but also ‘Jeep’.
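A picture-to-taxonomy step like this can be sketched with an off-the-shelf classifier. The snippet below is a minimal illustration under stated assumptions, not WeSee's pipeline: it runs a pretrained torchvision ResNet over an image and uses a small, hypothetical LABEL_TO_IAB mapping to turn predicted object labels (for example 'jeep' or 'seashore') into IAB content categories.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Hypothetical mapping from recognised object labels to IAB content
# categories -- illustrative only, not a real taxonomy file.
LABEL_TO_IAB = {
    "seashore": "IAB20 Travel",
    "jeep": "IAB2 Automotive",
}

# Standard ImageNet preprocessing for a pretrained torchvision classifier.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_image(path, model, labels, top_k=5):
    """Return the top-k object labels for an image, plus any IAB
    categories those labels map to."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]
    top = torch.topk(probs, top_k)
    tags = [labels[int(i)] for i in top.indices]
    iab = sorted({LABEL_TO_IAB[t] for t in tags if t in LABEL_TO_IAB})
    return tags, iab

# Example usage (downloads pretrained weights on first run):
# weights = models.ResNet50_Weights.IMAGENET1K_V2
# model = models.resnet50(weights=weights).eval()
# tags, iab = classify_image("family_on_beach.jpg", model,
#                            weights.meta["categories"])
# print(tags, iab)
```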

The web is becoming increasingly visual. Many ad technologies fail in such an environment because they are unable to determine the content of an image or video due to the lack of descriptive metadata.

Travel user-generated content can be a goldmine, but only if you can leverage it in the right way. There is a huge opportunity for websites, platforms and hosting companies to monetise this content and understand their consumers better by turning these photos into new data points.

 

WeSee harnesses the power of neural networks to revolutionise image recognition

Viztech industry pioneer transforms digital image and video search and tagging, providing a way to meet the UK PM’s demands over terrorist content, among other key applications.

Computer vision innovator WeSee has launched a unique and powerful AI-based technology that can process, search and categorise video, as well as still images, quickly and efficiently, handling information just like the human brain does but up to 1,000 times faster.

It has enabled WeSee to develop the world’s most advanced adult and violence filter. One of its many applications is the policing of terrorist and other dangerous and inappropriate online visual material, answering recent calls from UK Prime Minister Theresa May and helping make the web more child- and brand-friendly.

Powered by deep learning and neural networks, similar to the technology behind the iPhone X’s facial recognition system, WeSee’s Visual Intelligence Engine (VIE) pioneers a whole new industry sector, Viztech, which will ultimately transform the way we work, live, appear and interact with each other, according to the company’s CEO David Fulton.

“WeSee doesn’t just see visual content, it understands every multi-layered element within images and videos in the same way humans do, using biologically inspired artificial intelligence,” he said. “It allows organisations to automatically harness the huge opportunities and value hidden inside all images and videos.”

Unlike other systems that are based on open-source frameworks, WeSee’s smart approach to video classification combines machine and deep learning with the company’s own proprietary rule-based algorithms, alongside specialists dedicated to collating, sorting and tagging data.
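How learned scores and rule-based logic might be layered can be pictured with a small sketch. The code below is an illustrative assumption about such a hybrid, not WeSee's proprietary algorithms: hand-written rules (with made-up thresholds and category names) turn per-tag model confidence scores into allow, review or block decisions.

```python
from dataclasses import dataclass

@dataclass
class Tag:
    label: str          # e.g. "violence", "adult", "travel"
    confidence: float   # model score in the range [0, 1]

# Categories and thresholds below are assumptions for illustration only.
SENSITIVE = {"violence", "adult", "terrorism"}

def moderate(tags, block_at=0.90, review_at=0.60):
    """Apply simple rule-based post-processing to raw classifier scores,
    returning an allow / human_review / block decision per tag."""
    decisions = []
    for tag in tags:
        if tag.label in SENSITIVE and tag.confidence >= block_at:
            decisions.append((tag.label, "block"))
        elif tag.label in SENSITIVE and tag.confidence >= review_at:
            decisions.append((tag.label, "human_review"))
        else:
            decisions.append((tag.label, "allow"))
    return decisions

print(moderate([Tag("violence", 0.95), Tag("travel", 0.88), Tag("adult", 0.65)]))
# [('violence', 'block'), ('travel', 'allow'), ('adult', 'human_review')]
```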

“It has the power to answer the Prime Minister’s recent demands over terrorist online content,” said Fulton. “Plus the sky really is the limit when it comes to other applications that could transform broadcasting, insurance, branding, law enforcement and more.”

Although still at an early development stage, WeSee’s technology can already be used by broadcasters as a kind of video search engine. It could also help them categorise and tag video content on-the-fly, quickly and easily – something that would ordinarily take days to be done manually can now be done automatically in seconds. It should soon also be possible for WeSee to determine whether an individual is telling the truth or not through technology that takes facial recognition to a new level. This has obvious implications for insurance claims and criminal prosecution.
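Automatic tagging of a clip can be sketched as frame sampling plus aggregation. The example below is a hedged illustration, not WeSee's engine: it samples one frame every couple of seconds with OpenCV, passes each sample to an image classifier (such as the one sketched earlier), and collects the most common tags for the whole video.

```python
from collections import Counter
import cv2  # OpenCV, used here only for frame extraction

def tag_video(path, classify_frame, every_n_seconds=2, top_n=10):
    """Sample one frame every N seconds, classify each sample and return
    the most common tags across the whole clip."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(1, int(fps * every_n_seconds))
    tags = Counter()
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            # classify_frame is assumed to return a list of text labels
            # for a single image, e.g. the classifier sketched earlier.
            tags.update(classify_frame(frame))
        index += 1
    cap.release()
    return tags.most_common(top_n)

# Example usage with a stub classifier:
# print(tag_video("holiday_clip.mp4", lambda frame: ["travel"]))
```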

“We are only just scratching the surface of what is possible with Viztech,” added Fulton.

Originally posted online