New Machine Learning Applications by Tech Giants
Demystifying the machine learning models behind new AI applications from Google, Facebook, Amazon, Twitter, and Pinterest.
We can regard 2020 as a milestone: the year AI stopped being a buzzword and started powering real-world applications in organizations all over the world. One may wonder why. Is it because all of these organizations can now grasp the complex ideas behind machine learning and AI? To some extent, yes. But it is also because it has become almost impossible to compete in the market without AI-based products. Therefore, it is not only tech giants such as Google, Facebook, and Amazon that are shipping AI-based products; smaller companies and startups are following suit. In this article, we will look at a few new AI-based products developed by different companies.
Shop the Look: Building a Large-Scale Visual Shopping System at Pinterest
Image-based search queries have grown popular in recent years and form the basis of many applications, since visual search is often a more natural way to look for products than text-based search. Shop the Look at Pinterest uses image-based search to let users find and buy the products that appear in a given image. Shop the Look comprises two modules: scene decomposition (detection) and candidate retrieval (embedding). The scene decomposition module runs an object detector to identify the major objects in an image, outputting a bounding box and a category label for each object. Each detected object is displayed as a white dot in the UI.
The candidate retrieval module lets customers tap the white dot on an object to express intent to buy the selected product. The module crops out the selected product and fetches its visual nearest neighbors, which are filtered by the predicted object category. To improve the visual embeddings for shopping, Pinterest uses a multi-task unified embedding.
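The retrieval step described above can be sketched in a few lines. This is a minimal illustration, not Pinterest's production system: it assumes the crop has already been embedded (the function names and the use of cosine similarity are my own assumptions), and it shows the key idea that candidates are first filtered by the detector's predicted category before nearest-neighbor ranking.

```python
import numpy as np

def retrieve_candidates(crop_embedding, catalog_embeddings, catalog_categories,
                        query_category, k=5):
    """Return indices of the k most visually similar catalog products
    that share the predicted category of the tapped object."""
    # Keep only products whose category matches the detector's prediction.
    mask = np.array([c == query_category for c in catalog_categories])
    candidates = np.where(mask)[0]
    if candidates.size == 0:
        return []
    # Cosine similarity between the crop embedding and each candidate.
    cand = catalog_embeddings[candidates]
    sims = cand @ crop_embedding / (
        np.linalg.norm(cand, axis=1) * np.linalg.norm(crop_embedding) + 1e-9)
    order = np.argsort(-sims)[:k]
    return candidates[order].tolist()
```

In production, the exact scan above would be replaced by an approximate nearest-neighbor index, but the category filter plays the same role.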
BusTr: Predicting Bus Travel Times from Real-time Traffic
All over the world, public transit is the most common mode of transportation after walking. However, most bus transit systems have no real-time tracking, so the bus arrival time, which is significantly influenced by traffic and other factors, remains uncertain.
The BusTr application approximates bus arrival times by predicting bus travel times under real-time traffic conditions. It uses regression-based models to make predictions and is used by Google Maps to provide bus arrival times where no in-vehicle tracking is available.
The BusTr model predicts two things: 1) the time consumed to traverse each road segment, and 2) the time consumed to service a stop. The predicted total time for a trip is

t̂_trip = Σ_{q ∈ Q} t̂_q

where Q is the set of road segments and stops (quanta) in a trip, and t̂_q is the model's estimate of t_q, the true time spent on quantum q.
The model comprises one unit for each quantum q; together, the units cover a trip interval. Each unit has a single fully connected hidden layer with ReLU activations, and a single set of hidden-layer weights is shared across all stop units and road-segment units. During training, fine-grained spatial and route features are removed to improve the model's generality. The BusTr model is trained in two passes: in the first pass, L1 regularization is applied to the average embedding in each layer; in the second pass, the model is trained without regularization. These two passes not only push the spatial embedding weights up to coarser cells but also reduce the model size. The model is trained with the Adam optimizer and an MSE loss function.
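The additive structure above (one shared network scoring every quantum, with the trip estimate being the sum) can be sketched as follows. This is a toy illustration under my own assumptions, not the BusTr architecture itself: real units consume learned spatial and traffic embeddings, and training (Adam, MSE, two-pass L1 regularization) is omitted.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def predict_trip_time(quantum_features, W, b, w_out, b_out):
    """Predict total trip time as the sum of per-quantum estimates.
    Every road segment / stop ('quantum') is scored by the SAME
    single-hidden-layer network, i.e. the weights are shared."""
    total = 0.0
    for x in quantum_features:       # one feature vector per quantum q in Q
        h = relu(W @ x + b)          # shared fully connected hidden layer
        t_hat_q = w_out @ h + b_out  # estimated time for this quantum
        total += t_hat_q             # trip estimate = sum over q of t_hat_q
    return total
```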
TIES: Temporal Interaction Embeddings for Enhancing Social Media Integrity at Facebook
With the growth of Facebook's user base, the number of rogue users and disparaging content has also increased. Entities on Facebook interact with each other in numerous ways, and information about these interactions is useful for curbing malicious activity. In the past, Facebook leveraged static information, such as the number of likes, the number of friend requests, and malicious IP connections, to detect abusive activity. This type of feature engineering is neither very efficient nor scalable.
Temporal interaction embeddings (TIES) provide a means to capture the dynamic interactions among entities. The TIES model is based on two types of embeddings: graph-based and temporal. These embeddings are trained continuously in order to analyze the behavior of entities. TIES first uses the large-scale Facebook graph to capture static behavior; the graph-based embeddings then initialize the temporal model, which captures the dynamic behavior of entities. At each time step, a (source, target, action) triplet is transformed into a feature vector comprising trainable action embeddings, pre-trained source and target embeddings, and miscellaneous other features. These feature vectors are then used to train a deep sequential model capable of capturing the dynamic behavior of entities.
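The feature-vector construction described above can be sketched roughly as follows. This is a simplified illustration (the function names and concatenation order are my own assumptions, and the downstream sequence model is not shown): each (source, target, action) event becomes one row that a sequence model such as an LSTM would then consume.

```python
import numpy as np

def interaction_feature(source_emb, target_emb, action_embs, action, extra):
    """Build the feature vector for one (source, target, action) event:
    pre-trained source/target embeddings, a trainable action embedding
    looked up by action id, plus miscellaneous extra features."""
    return np.concatenate([source_emb, target_emb, action_embs[action], extra])

def interaction_sequence(events, action_embs):
    """Stack one feature vector per timestamped event; the resulting
    matrix is the input a deep sequential model would be trained on."""
    return np.stack([interaction_feature(s, t, action_embs, a, x)
                     for (s, t, a, x) in events])
```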
SimClusters: Community-Based Representations for Heterogeneous Recommendations at Twitter
Recommendation systems lie at the heart of many applications. Twitter designed and operated a different customized recommendation system for each problem, such as relevant and interesting tweets, users to follow, breaking news stories, and topics. However, these recommendation systems do not scale well and cannot perform general-purpose recommendations.
Twitter developed SimClusters to mitigate the aforementioned problems. SimClusters operates in two stages. The first stage builds a representational layer based on overlapping communities; its input is a bipartite graph of user-influencer connections, where influencers are a subset of all users. In the second stage, these communities, along with heterogeneous content, are captured as sparse, interpretable vectors for performing a wide variety of recommendation tasks. For dynamically changing entities, such as tweets, these vectors are updated in real time; vectors for more stable entities are computed in batches. The sparse vectors make it easy to compute nearest neighbors and to maintain reverse indices.
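A small sketch can show why sparse community vectors make nearest-neighbor lookup and reverse indexing cheap. This is a toy illustration under my own assumptions (dict-based sparse vectors, a plain dot-product score), not Twitter's implementation: only entities sharing at least one community with the query ever need to be scored.

```python
from collections import defaultdict

def build_reverse_index(entity_vectors):
    """Map each community id to the set of entities with nonzero
    weight in it, enabling candidate lookup by community overlap."""
    index = defaultdict(set)
    for entity, vec in entity_vectors.items():
        for community in vec:
            index[community].add(entity)
    return index

def nearest_neighbors(query_vec, entity_vectors, index, k=3):
    """Score only entities that share a community with the query
    (sparse dot product), then return the top-k by score."""
    candidates = set()
    for community in query_vec:
        candidates |= index.get(community, set())
    scores = {e: sum(w * entity_vectors[e].get(c, 0.0)
                     for c, w in query_vec.items())
              for e in candidates}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```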
AUTOKNOW: Self-Driving Knowledge Collection for Products of Thousands of Types
Knowledge graphs are a useful resource for search and query answering. Thus, a knowledge graph capturing information about all the products on retail websites could be extremely useful. However, building such a knowledge graph faces many hindrances, such as sparsity and noise in product data, domain complexity, and the sheer rate at which the number of online products is growing.
Amazon presents AUTOKNOW, which is automatic (uses machine learning), scalable, annotation-free, and integrative, and can effectively address these challenges. AUTOKNOW takes user logs as input; these logs may contain noisy and missing features. AUTOKNOW fixes incorrect values and imputes missing ones. Additionally, it extends the taxonomy and finds synonyms. The resulting knowledge graph represents products effectively, leading to better search results and query answers. AUTOKNOW is self-guided and updates itself in accordance with customer search queries.
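To make the imputation step concrete, here is a deliberately simple stand-in: filling a missing attribute with the most common value seen for products of the same type. AUTOKNOW actually uses learned models for this; the function below is only a toy illustration of the cleaning-and-imputation idea, with names of my own choosing.

```python
from collections import Counter

def impute_missing(products, attribute):
    """Fill a missing attribute with the most common value observed
    among products of the same type (a naive stand-in for AUTOKNOW's
    learned imputation models)."""
    by_type = {}
    for p in products:
        if p.get(attribute) is not None:
            by_type.setdefault(p["type"], []).append(p[attribute])
    for p in products:
        if p.get(attribute) is None and p["type"] in by_type:
            p[attribute] = Counter(by_type[p["type"]]).most_common(1)[0][0]
    return products
```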