LinkedIn is adding machine learning technology from parent company Microsoft to help improve feed quality by detecting and removing more inappropriate content.
Detailed on the LinkedIn Engineering blog, the new process will boost LinkedIn's content detection capacity by incorporating Microsoft's 'Content Moderator' system, which can catch potentially offensive material as it's posted.
As explained by LinkedIn:
“Content Moderator’s machine-assisted scanning covers text, images, and videos. LinkedIn’s [existing detection process] utilizes LinkedIn’s in-house knowledge base and capabilities to classify images, text, and videos along similar categories. Despite their similarities in purpose, the two tools have unique components that, when combined, are extremely beneficial to us. First, Content Moderator’s classifiers are trained on content previously unseen on the LinkedIn feed, which allows us to increase the volume of inappropriate content we can successfully classify. In other words, by combining LinkedIn and Content Moderator classifiers, we hope to improve both recall (i.e., the total amount of poor quality content caught) and precision (i.e., keep the number of false positives low).”
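The recall/precision trade-off described in the quote can be illustrated with a minimal sketch. This is not LinkedIn's actual system — the post IDs and classifier flags below are entirely invented, and `precision_recall` is a hypothetical helper — but it shows why taking the union of two classifiers trained on different content tends to raise recall (more bad posts caught) while precision depends on how many false positives each contributes:

```python
# A toy illustration (not LinkedIn's implementation) of combining two
# content classifiers and measuring precision and recall. All data is invented.

def precision_recall(flagged: set, truly_bad: set) -> tuple[float, float]:
    """Precision: share of flagged items that are truly bad.
    Recall: share of truly bad items that were flagged."""
    true_positives = flagged & truly_bad
    precision = len(true_positives) / len(flagged) if flagged else 1.0
    recall = len(true_positives) / len(truly_bad) if truly_bad else 1.0
    return precision, recall

# Hypothetical feed: posts 1-4 are actually inappropriate.
truly_bad = {1, 2, 3, 4}
in_house_flags = {1, 2, 9}    # in-house classifier misses 3 and 4, wrongly flags 9
moderator_flags = {3, 4}      # second classifier, trained on different content

# In-house alone: recall is only 0.50 (2 of 4 bad posts caught).
p_alone, r_alone = precision_recall(in_house_flags, truly_bad)

# Union of both classifiers: recall rises to 1.00.
combined = in_house_flags | moderator_flags
p_both, r_both = precision_recall(combined, truly_bad)
print(f"combined: precision={p_both:.2f}, recall={r_both:.2f}")
# → combined: precision=0.80, recall=1.00
```

The false positive (post 9) keeps combined precision below 1.0, which is why the quote frames precision as something to "keep low" in false-positive terms rather than something the union automatically improves.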
It’s another way for LinkedIn to utilize Microsoft’s more advanced tools, particularly in regards to machine learning, to help improve its platform. And while this update is specifically focused on detecting and removing inappropriate material from feeds, it may also point to future advances for LinkedIn’s algorithms, helping to provide more context, relevance, and importantly, timeliness to the updates included on your home screen.
I don’t know about you, but I still regularly see LinkedIn posts in my feed which are well beyond relevance because they were posted so long ago.
The amount of activity on such posts may be what keeps them around, but I also see others that have no reason to keep resurfacing, including reminders for events that continue to appear weeks after the actual date.
That’s likely difficult for LinkedIn’s systems to detect, but surely, with the advanced capabilities of Microsoft’s AI tools, they’d be able to better contextualize the wording and post time of such updates, and better understand their relationship to time.
And this is just one aspect - LinkedIn's feed algorithms still seem to have a way to go before the main feed becomes a truly relevant, optimally helpful tool. The mobile app, where LinkedIn has put more emphasis (it has more mobile users than desktop), does a better job, but further alignment with Microsoft's more advanced tools will no doubt help.
It remains to be seen whether that's the direction LinkedIn's headed with such updates, but the gradual integration of Microsoft's tech into these areas holds a lot of promise. Detecting potentially offensive content is likely just the first step in that process.