BARCELONA, ESP – One of the ‘innovations’ of the latest generations of smartphones has been interactive avatars that devolve into emoji. By using a bit of photography and some real-time face mapping, users on certain devices can create mini-avatars of themselves or emulate other non-humanoid avatars. Where the companies differ is in the implementation of these features, how interactive they can be, and how they are presented. As part of the Samsung Galaxy S9 launch event here at Mobile World Congress, I tried the ‘AR Emoji’ feature. It went something like this.
After taking a single selfie, the system took about 20 seconds to map the image to a model. It then asked whether the model was male or female, after which I could fine-tune things like skin color, hair color, hair style, and accessories such as glasses. It certainly uses better algorithms than the ‘create your own character’ tools found in some sports video games, and the software was even accurate enough to pick out the red pimple that had developed on the right side of my nose. It also correctly rendered my hair as swept from right to left, although it made my hair grey.
All of this seemed reasonable – as an interactive character, it did follow my facial features such as cheekbones, eyebrows, and chin. Where it fell down, however, was with the eyes. It did not react to my eyelids at all, and judging by other people’s AR Emoji, the eyes appear to be a fixed feature on every implementation. The result is a cold, dead-eyed stare that drains the emotion from even the most joyous of people. The avatar does not react to a protruding tongue, either.
This is one of the problems when dealing with life-like augmented-reality avatars: even the slightest thing that is off will seem obviously so. The human mind has evolved to recognize human faces, and the defects within them, to a sometimes unnerving degree. This is why drawing faces can be difficult – one slight mishap and the result looks very wrong. Apple skirted around this issue by keeping its ‘Animoji’ limited to animals, rather than dealing with human-like avatars. But as I said before, Samsung’s implementation is definitely better than any video-game ‘create a character from your picture’ tool that I have ever seen.
The goal of AR Emoji is similar to that of other systems like it: use the avatar to create custom emoji and messages that can be shared with other users. In the iOS ecosystem this is fairly easy, but on Android it is more difficult, as it requires all third-party apps to use the same API calls. Samsung has a custom implementation for AR Emoji, meaning that applications have to partner with Samsung in order to use it. Nonetheless, the Galaxy S9 does support ARCore. Samsung also provides a set of 15 emoji stickers based on the avatar that can be used with popular chat applications such as WhatsApp, WeChat, and Line.
What surprises me about the feature is that, from a single photo, it made a reasonable attempt at an avatar in around 20 seconds. I have seen other journalists report less-than-desirable results in how their avatars turned out. This is in contrast to the 3D modeling feature Sony showed last year, which required a panned scan from ear to ear to build a model. I think that in this case, Samsung should offer the option of a more detailed scan using multiple pictures, to help capture color and dimensions more accurately.