Breakthrough model in attention prediction

Thomas Z. Ramsøy

October 19, 2020

After adding thousands of new eye-tracking images, the latest Predict model is now live. This update brings a substantial gain in attention prediction accuracy. Here, we go through the main differences, with a focus on object recognition, better discrimination, and metaphorical elements.

We have just launched the latest major update to the Predict AI model. The update brings a substantial increase in accuracy and follows our standard development process.

Most notably, we can distinguish between three types of accuracy improvements in the new model: object recognition, better discrimination, and metaphorical elements. Let's take them in turn.

Object recognition

If you follow how the brain's visual system works, there is a divide after the primary visual cortex. Here, one stream of processing runs upward, the so-called dorsal route, which is related to an object's position and how it can be used.

The other route runs along the brain's lower end, the so-called ventral route, on the underside of the temporal lobe. Processing here is related to object category and recognition: objects, faces, cars, brands, and so on.

With the growing amount of data and new advances in our AI models, we can now see that the model recognizes faces better across a variety of conditions and angles. It is interesting to see the AI models gain a function that in the human brain we call object constancy: we perceive an object as the same regardless of the angle from which we view it.

In earlier models (bottom), face detection was a limitation; with more training, the new model (middle) combines face detection, text recognition, and higher precision, closely matching the eye-tracking data (top).
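Predict's internal evaluation metric isn't detailed here, but as an illustration, one standard way to score a predicted attention map against an eye-tracking heatmap is the correlation coefficient (CC) between the two maps. A minimal sketch, assuming both maps are same-shaped 2-D arrays:

```python
import numpy as np

def heatmap_correlation(predicted, observed):
    """Pearson correlation between a predicted attention map and an
    eye-tracking heatmap -- a common saliency-evaluation metric (CC).
    Both inputs are 2-D arrays of the same shape."""
    p = predicted.ravel().astype(float)
    o = observed.ravel().astype(float)
    # Standardize each map, then average the elementwise product.
    p = (p - p.mean()) / p.std()
    o = (o - o.mean()) / o.std()
    return float(np.mean(p * o))

# Identical maps correlate perfectly (score ~1.0):
m = np.array([[0.2, 0.8], [0.1, 0.9]])
score = heatmap_correlation(m, m)
```

A score near 1.0 means the predicted map closely tracks where people actually looked; a score near 0 means no relationship.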

Better discrimination

The second type of improvement in attention prediction precision comes from reducing errors. Making a better model is not only about boosting its true positives; other parts of the model must also become more accurate, including:

  • higher true negatives -- the model says "this part of the picture is not seen," and the eye-tracking data confirms it
  • lower false positives -- fewer cases where the model predicts attention where the eye-tracking data shows none
  • lower false negatives -- fewer cases where the model predicts no attention where the eye-tracking data shows some
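These four outcomes can be counted directly by comparing a predicted attention map with a binarized eye-tracking fixation map. A minimal sketch, assuming both maps hold values in [0, 1] and using a hypothetical 0.5 cut-off for calling a pixel "attended":

```python
import numpy as np

def confusion_counts(predicted, observed, threshold=0.5):
    """Return (TP, TN, FP, FN) for a predicted attention map versus
    an eye-tracking fixation map, both 2-D arrays in [0, 1].
    `threshold` is an illustrative cut-off, not Predict's actual one."""
    pred = predicted >= threshold
    obs = observed >= threshold
    tp = np.sum(pred & obs)      # model and eye tracking both say "seen"
    tn = np.sum(~pred & ~obs)    # both say "not seen"
    fp = np.sum(pred & ~obs)     # model predicts attention, eye tracking disagrees
    fn = np.sum(~pred & obs)     # model misses an attended region
    return tp, tn, fp, fn

# Toy 2x2 "image" where the model over-selects one pixel:
pred_map = np.array([[0.9, 0.8], [0.1, 0.7]])
obs_map  = np.array([[1.0, 0.0], [0.0, 1.0]])
tp, tn, fp, fn = confusion_counts(pred_map, obs_map)
# -> tp=2, tn=1, fp=1, fn=0 (one false positive: the over-selected pixel)
```

Driving the FP and FN counts down while keeping TP high is exactly the kind of improvement described above.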

A large part of good modeling is getting the true negatives right, i.e., saying that a part of the picture receives no attention when the eye-tracking data shows none. Earlier models (bottom) had the issue of over-selecting targets. This is now substantially improved in the new model (middle).

Metaphorical elements

Many ads play on metaphorical elements that are meaningful to human viewers. However, models are not particularly good at understanding these features, and they are even more difficult to hard-code into a model.

Still, the latest Predict model has improved substantially in processing metaphorical ads. Here, precision has reached an astounding new level, matching that of non-metaphorical content.

Many ads rely on indirect meaning, subtle cues, and metaphors. People pick these up when processing the picture (top), but AI models rarely detect them (bottom). The new Predict model (middle) has learned these factors properly.

Better attention prediction

Taken together, the new Predict model update leads to a substantial increase in the precision of attention prediction. These gains have exceeded even our expectations of what machine learning models can do. They clearly demonstrate that combining high-quality eye-tracking data with world-class machine learning models is the only way forward!

Check out the updated Predict today. Book a demo with us to see the tool in action.
