Personal Project

Tech & Tools

Machine Learning, AR, Modeling, Xcode, Twitter API

Emoji Bot (ver.1 & ver.2)


Two versions of Emoji Bot, one based on AR and one on Twitter. Both play with machine learning and APIs, and take first steps toward combining AR and ML to create a symbolic world.

Ver.1 AR + Machine Learning

The first version of Emoji Bot is an exploration of combining AR and ML. It addresses the questions below:

  • We constantly sum things up with symbols and icons, so how do we feel when symbols from the virtual world appear in our real lives in place of the original objects?

  • Since symbols and graphics are supposed to be easy to understand, do more icons mean easier understanding?

  • Emoji is considered a language that most people use in private conversations, carrying a strong sense of privacy. Different people understand emoji differently and use them differently. Does this concept still make sense in the public, physical world? Does it still make sense inside an ML model's prediction process?


ARKit: Using an ARWorldTracking session to explore the surroundings through the phone camera



Core ML: Feeding captured frames to the Inception v3 model to detect the main object



Emoji module: Adding the related emoji tag on screen via the emoji-translation module
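The three steps above can be sketched as a single pipeline: frame in, label out, emoji on screen. The real app is Swift with ARKit and Core ML; this is only a conceptual Python sketch where the classifier is a stand-in for Inception v3 and the emoji table is a tiny illustrative sample, not the real translation module.

```python
# Conceptual sketch of the ver.1 pipeline. The classify() stub stands in
# for the Core ML / Inception v3 prediction step, and the emoji table is
# a small hypothetical sample.

EMOJI_TABLE = {
    "coffee mug": "☕️",
    "laptop": "💻",
    "dog": "🐶",
}

def classify(frame):
    """Stand-in for the Inception v3 prediction step.

    The real version feeds the captured camera frame to Core ML and
    returns the top label; here the "frame" is simply the label itself.
    """
    return frame

def emoji_tag(frame):
    """Map the detected object's label to an emoji tag for the AR overlay."""
    label = classify(frame)
    return EMOJI_TABLE.get(label, "❓")  # unknown objects get a placeholder

print(emoji_tag("coffee mug"))  # ☕️
```

In the app, the tag returned here is what gets anchored in the AR scene next to the detected object.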

Ver.2 Twitter Bot

The second version of Emoji Bot reads people's tweets, translates them into emoji, and tweets back.

  • Reads tweets from people who @-mention it.

  • Translates each tweet into emoji language and tweets it back.

  • Replies to some tweets with emoji.
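The translation step in the middle of that loop can be sketched on its own. The keyword table below is a tiny hypothetical sample, and the surrounding Twitter API calls (fetching mentions, posting the reply) are omitted; this only shows how a tweet's words might collapse into an emoji sentence.

```python
# Sketch of the ver.2 translation step: turn a tweet's words into an
# emoji sentence. Keywords with no emoji mapping are simply dropped.
import re

KEYWORD_TO_EMOJI = {
    "coffee": "☕️",
    "love": "❤️",
    "cat": "🐱",
    "happy": "😊",
}

def tweet_to_emoji(text):
    """Replace every known keyword with its emoji; drop unknown words."""
    words = re.findall(r"[a-z']+", text.lower())
    return "".join(KEYWORD_TO_EMOJI[w] for w in words if w in KEYWORD_TO_EMOJI)

print(tweet_to_emoji("I love my cat and my coffee"))  # ❤️🐱☕️
```

Dropping unmapped words is exactly what makes the emoji sentence lossy, which is the point of the questions below.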

Some of the questions that arose during the conceptualization phase were:

  • Since we are so sensitive about our social-media information and conversation content, how do people feel when all of their content is translated into emoji symbols?

  • Can people still recover the meaning of the original tweet from the emoji sentence?


From keyword to emoji?


Which emoji means "coffee"?

☕️ maps to "beverage", "caffeine", "latte", and "coffee".
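This many-to-one mapping is easy to see in code: several keywords all collapse into ☕️, so the reverse direction is ambiguous. The keyword lists here are illustrative, not a real emoji database.

```python
# Several keywords map to the same emoji, so translating back from the
# emoji can only recover a candidate set, never the original word.
EMOJI_KEYWORDS = {
    "☕️": ["beverage", "caffeine", "latte", "coffee"],
    "🐱": ["cat", "kitten", "pet"],
}

# Forward index: keyword -> emoji.
KEYWORD_TO_EMOJI = {
    kw: emoji for emoji, kws in EMOJI_KEYWORDS.items() for kw in kws
}

# Four different keywords collapse into one symbol...
assert {KEYWORD_TO_EMOJI[k] for k in ("beverage", "caffeine", "latte", "coffee")} == {"☕️"}

# ...so from ☕️ alone you only get the candidate set back.
print(EMOJI_KEYWORDS["☕️"])  # ['beverage', 'caffeine', 'latte', 'coffee']
```

This lossiness is the core of the question above: whether readers can reconstruct the original tweet from such a compressed symbol stream.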