
Artificial Intelligence and Machine Learning Empower YouTube, the #1 Video Sharing Platform

Machine learning, or artificial intelligence as the technology industry often likes to call it, involves training algorithms on data so that they become capable of spotting patterns and taking actions by themselves, without human intervention. This is the secret of YouTube's popularity.

According to statistics, over 1.9 billion users, roughly half of all internet users, log into YouTube every single month, watching more than a billion hours of video daily. Organizations are integrating video creation and video sharing into their marketing strategies. Creative artists and designers upload more than 500 hours of video to the platform every single minute of the day. To date, YouTube supports 80 different languages, which also adds to its popularity. Cisco predicts that by 2022, video will account for 82 percent of all internet traffic.


For more technology insights, follow me @Asamanyakm


Considering the massive number of users, high volume of activities and richness of content, it makes sense for YouTube to take advantage of artificial intelligence (AI) and machine learning (ML) to add efficiency to its operations. Here are a few ways YouTube, owned by Google, uses these exponentially growing technologies today. 


Automatic removal of objectionable content 


Protecting users from malicious and objectionable content is YouTube's top priority, according to Cecile Frot-Coutaz, head of EMEA. To that end, the company has invested not only in human subject matter experts (SMEs) but also in machine learning technology to support the endeavor. AI has contributed significantly to YouTube's ability to rapidly identify objectionable content. Before artificial intelligence was implemented, only 8% of videos containing violent extremism (banned on YouTube) were flagged and removed before reaching ten views. After machine learning was deployed, more than half of the videos removed had fewer than ten views.


In the first quarter of this year, 8.3 million videos were deleted from YouTube, and 76% were automatically identified and flagged by artificial intelligence classifiers. More than 70% of these were identified before there were any views by users. Machine learning involves training algorithms on data so that they become able to figure out patterns and take actions by themselves, without human intervention. In this case, YouTube uses the technology to automatically identify objectionable content.
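The flagging pipeline described above can be illustrated with a deliberately toy sketch: a bag-of-words Naive Bayes classifier trained on a handful of human-labeled examples. Everything here, the data, the function names, and the features, is hypothetical; YouTube's production classifiers operate on video, audio, and metadata at an entirely different scale.

```python
from collections import Counter

# Hypothetical toy data: video descriptions labeled by human reviewers
# (1 = objectionable, 0 = safe).
TRAIN = [
    ("graphic violence extremist propaganda", 1),
    ("violent extremist recruitment footage", 1),
    ("cute cat compilation funny pets", 0),
    ("cooking tutorial pasta recipe", 0),
]

def train(examples):
    """Count word frequencies per class (flagged vs. safe)."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def score(text, counts, totals):
    """Return a 0..1 score that the text is objectionable (add-one smoothing)."""
    vocab = set(counts[0]) | set(counts[1])
    probs = {}
    for label in (0, 1):
        p = 1.0
        for word in text.split():
            p *= (counts[label][word] + 1) / (totals[label] + len(vocab))
        probs[label] = p
    return probs[1] / (probs[0] + probs[1])

counts, totals = train(TRAIN)
flagged = score("extremist violence clip", counts, totals)   # high score
safe = score("funny cat recipe", counts, totals)             # low score
```

In a real system, videos scoring above a threshold would be routed to the human SMEs mentioned above for review rather than removed outright.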


While the algorithms are not infallible, they scrutinize content much faster than humans could monitor the platform unaided. In some cases, the algorithm has mistakenly pulled down newsworthy videos, classifying them as violent extremism. With the massive volume of videos on YouTube, algorithms sometimes make the wrong choice. When it is brought to the attention of the relevant teams that a video or channel has been taken down mistakenly, YouTube acts quickly to reinstate it. This is just one of the reasons Google employs full-time human SMEs to work with AI on violative content. Google is hiring specialists with expertise in violent extremism, counterterrorism, and human rights, and is also expanding its regional expert teams.


One of the primary drivers of YouTube's diligence in removing objectionable content is pressure from brands, agencies, and governments, and the repercussions when advertisements appear alongside offensive videos. In 2017, when ads started appearing next to YouTube videos supporting racism and terrorism, many renowned brands began pulling their advertising dollars. In response, YouTube deployed advanced ML and even partnered with third-party companies to provide transparent insights to advertising partners.


Google has implemented an artificial intelligence system known as the "trashy video classifier" that constantly examines vast numbers of YouTube videos on its own and blocks videos that appear problematic from the home page of the website and the home screen of the app. It scans YouTube's homepage and the "watch next" panels of recommended videos, and it takes into account feedback from viewers who report a misleading title, malicious material, or other objectionable content.


According to cyber analysts, the company is capable of handling objectionable content, but it has only taken those concerns seriously when its revenue was at stake or when external pressure compelled it to act. The trashy video classifier was partly motivated by financial concerns but has been a success for the company. Google recently told advertisers that watch time on YouTube's homepage and app has soared over the last three years. YouTube doesn't share its financial details, but according to RBC Capital Markets the company earned more than $20 billion last year. Advertising and marketing partners believe the company can deal effectively with brand safety concerns.


Special effects on photos and videos 


Google's new segmentation technology allows creators to replace and modify the background, effortlessly increasing a video's production value without any specialized equipment. Google uses machine learning in its video segmentation technology to identify background imagery that can be replaced with something more attractive or humorous.


The technology shows how adaptable AI and ML are at solving complex computational problems. A neural network doesn't have to know endless rules, for example, that faces have two eyes located above the nose, unless the face is viewed in profile, in which case only one eye may be visible, unless the subject is wearing sunglasses. Instead, it is trained on enough photos, labeled by humans who know what a face is, that the network eventually learns the patterns itself. This has proven a remarkably useful technique for everything from screening out email spam to predicting the word you are typing on your phone to swapping out backgrounds on videos, and much more.
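The compositing step that follows segmentation is simple to sketch. In the toy example below, the per-pixel mask is hard-coded; in a real pipeline that mask is exactly what the trained segmentation network produces, and the frames would be full-resolution color images rather than tiny grayscale grids.

```python
# Hypothetical 4x4 grayscale frame: a bright "subject" on a dark backdrop.
frame = [
    [10, 10, 10, 10],
    [10, 200, 200, 10],
    [10, 200, 200, 10],
    [10, 10, 10, 10],
]
# Segmentation mask: 1 marks foreground (the creator), 0 marks background.
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
# The replacement backdrop the creator chose.
new_background = [[99] * 4 for _ in range(4)]

def replace_background(frame, mask, background):
    """Keep foreground pixels where mask == 1, swap in the new backdrop elsewhere."""
    return [
        [f if m else b for f, m, b in zip(frow, mrow, brow)]
        for frow, mrow, brow in zip(frame, mask, background)
    ]

out = replace_background(frame, mask, new_background)
```

Run per frame, this is all a background swap needs once the network has supplied a good mask, which is why the hard part is the segmentation model, not the compositing.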


Power of “Up Next” feature 


Have you ever used YouTube's Up Next feature? If so, you have benefited from the platform's artificial intelligence. Because YouTube's dataset is constantly updated as users upload hours of video every minute, the AI powering its recommendation engine had to differ from the recommendation engines of platforms like Netflix or Spotify: it must handle real-time recommendations while new data is continually added by users. Google's solution is a two-part system. The first part is candidate generation, where the algorithm evaluates the user's YouTube history. The second part is a ranking system that assigns a score to each video.
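The two-part structure can be sketched in miniature: a candidate-generation stage that narrows a huge catalog using the user's history, then a ranking stage that scores the survivors. The catalog, topic matching, and per-video scores below are all hypothetical stand-ins; YouTube's actual stages are large learned models.

```python
# Hypothetical catalog entries: (video_id, topic, model's predicted score).
CATALOG = [
    ("v1", "cooking", 4.0),
    ("v2", "cooking", 9.5),
    ("v3", "gaming", 7.0),
    ("v4", "music", 3.0),
    ("v5", "gaming", 2.5),
]

def generate_candidates(history_topics, catalog):
    """Stage 1: cut the full catalog down to videos related to the user's history."""
    return [video for video in catalog if video[1] in history_topics]

def rank(candidates):
    """Stage 2: order the candidates by their assigned score, best first."""
    return sorted(candidates, key=lambda video: video[2], reverse=True)

history = {"cooking", "gaming"}  # topics from this user's watch history
up_next = rank(generate_candidates(history, CATALOG))
```

Splitting the work this way lets the cheap first stage discard most of the catalog so the expensive scoring model only runs on a short list, which is what makes real-time recommendation over a constantly growing corpus feasible.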


Guillaume Chaslot, a former Google employee and founder of AlgoTransparency, an initiative urging greater transparency, explained that the metric YouTube's algorithm uses to judge a successful recommendation is watch time. This is good for the platform and the advertisers, but not so beneficial for the users, he noted. It can amplify videos with outlandish content: the more time users spend watching them, the more they get recommended.


Training on depth prediction for augmented reality


YouTube videos provide a rich training ground for artificial intelligence algorithms because of the sheer volume of available data. Google AI researchers used more than two thousand mannequin challenge videos posted on the platform to create an AI model able to determine depth of field in videos. In the mannequin challenge, a group of people stands still, as if frozen, while one person moves through the scene shooting the video. Ultimately, this depth-prediction capability is helping propel the development of augmented reality experiences.


With the continuing crisis of mass shootings plaguing America, President Trump has called on social media companies to build tools that can detect mass shooters before they strike. With the assistance of artificial intelligence, YouTube, Facebook, and Twitter already work to delete violent, terrorism-related content; what's new in the President's request is that they work with the Department of Justice and law enforcement agencies. Many questions remain about how such a partnership would work, whether social media platforms could detect actual terrorists before they act, and the potential impact on the civil rights of innocent citizens. Whether YouTube and other social media companies can use AI to stop terrorism without encroaching on the rights of civilians is yet to be seen.


