Leveraging Google’s Artificial Intelligence to Build a Smart Drupal 8 CMS
The Use Case
At the outset, we were interested in exploring how AI might be applied to automatically generate meaningful ALT text describing images for blind and low-vision users. For websites with a long tail of legacy content, we envisioned updating their archives to make them more accessible. However, while image analysis can identify the content of an image with increasing reliability, our investigation revealed that the technology (at the time of this writing) is still not smart enough to understand context and generate meaningful sentences.
Pantheon co-founder Josh Koenig and Kalamuna CEO Andrew Mallis discussed many experimental applications and were keen to find a real-world business problem for the partnership to solve. They decided to build a custom Drupal 8 module that employs machine learning to block or flag image uploads that may contain inappropriate content, a feature Patch.com, one of Pantheon’s clients, was deeply interested in.
Patch.com uses citizen journalism to source hyperlocal news: its 6 million registered users upload vast numbers of images, and up to 1,000 stories are published each day across 900 hyperlocal “patches.” The site risks liability and embarrassment if those image uploads contain inappropriate or offensive content. Because it is economically unsustainable to hire enough human editors to review every image upload, artificial intelligence is a far more effective way to scale their limited human resources. While automated curation helps limit liability, it also ensures that content stays current, a key attribute of the news.
As the Drupal 8 research and development partner in this project, Kalamuna was excited to integrate Google Cloud’s AI solutions into Drupal because it was an amazing opportunity not only to demonstrate Drupal’s extensibility as a platform, but also the powerful business applications that could emerge from bringing machine learning and artificial intelligence to web content management systems.
We started by building a proof of concept, leveraging the Drupal 8 Umami installation profile. Umami provides a Drupal 8 instance designed as a food magazine website. While generally useful for demonstrating Drupal’s many features, it was specifically useful in this context because it provided the necessary elements of a real website editing experience so that we could focus solely on the Google Cloud AI integration.
We developed a module that integrates with the Google Cloud Vision API to analyze uploaded images. The Google Cloud Vision API puts the power of machine learning into a REST API that developers can use to programmatically identify the content of an image.
Per the Google Cloud Vision website:
[The Google Cloud Vision API] quickly classifies images into thousands of categories (such as, “sailboat”), detects individual objects and faces within images, and reads printed words contained within images. You can build metadata on your image catalog, moderate offensive content, or enable new marketing scenarios through image sentiment analysis.
The use case for Patch.com was identifying offensive visual content, so our module passes image uploads to the Cloud Vision API, which returns an analysis of the “safety” of the image content. The API rates an image against several categories, including adult, violence, and racy, among others.
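The module itself is written in PHP for Drupal, but the request shape is the same from any language. A minimal sketch in Python (the `build_safe_search_request` helper is our illustration, not the module’s actual code): the image bytes are base64-encoded and sent to the Vision API’s `images:annotate` endpoint with a `SAFE_SEARCH_DETECTION` feature request.

```python
import base64
import json

# Public Cloud Vision endpoint; a real deployment authenticates with an
# API key (?key=...) or OAuth credentials.
VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_safe_search_request(image_bytes: bytes) -> str:
    """Build the JSON body for a SAFE_SEARCH_DETECTION request.

    The Vision API expects the raw image base64-encoded under
    requests[].image.content, plus a feature list naming the detection type.
    """
    payload = {
        "requests": [
            {
                "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
                "features": [{"type": "SAFE_SEARCH_DETECTION"}],
            }
        ]
    }
    return json.dumps(payload)

# The response carries a safeSearchAnnotation rating each category on an
# ordinal likelihood scale (VERY_UNLIKELY .. VERY_LIKELY), e.g.:
# {"responses": [{"safeSearchAnnotation": {
#     "adult": "VERY_UNLIKELY", "violence": "UNLIKELY",
#     "racy": "POSSIBLE", "spoof": "UNLIKELY", "medical": "VERY_UNLIKELY"}}]}
```

POSTing that body to the endpoint returns the safe-search annotation the module evaluates.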
In cases where the Cloud Vision API rates an image as likely “violent” or “adult” content, that is, where the AI evaluation places a high likelihood that the image is potentially offensive, Drupal rejects the upload. It delivers a message to the user explaining that the content was rejected as potentially offensive, and instructs them to contact a site administrator if they believe the image was rejected in error.
In cases where the Cloud Vision API rates an image as likely “racy,” we instructed Drupal to place the image in a workflow requiring review and approval by a website editor. This means that any content Google’s AI flags as “not obviously offensive, but potentially offensive” is not immediately rejected, but must go through a review process. Within Drupal 8, we were able to build out the backend interfaces needed to support the workflow for reviewing and approving (or rejecting) “racy” images.
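The gating logic described above reduces to a small mapping from the annotation’s likelihood ratings to one of three outcomes. A minimal sketch in Python (the thresholds and the `moderate` name are our illustration of the decision, not the module’s actual code):

```python
# Cloud Vision expresses each safe-search category as one of these
# ordinal likelihoods, from least to most likely.
LIKELIHOODS = ["UNKNOWN", "VERY_UNLIKELY", "UNLIKELY",
               "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def at_least(rating: str, threshold: str) -> bool:
    """True when a likelihood rating meets or exceeds the threshold."""
    return LIKELIHOODS.index(rating) >= LIKELIHOODS.index(threshold)

def moderate(annotation: dict) -> str:
    """Map a safeSearchAnnotation to an editorial decision.

    - adult/violence rated LIKELY or above -> reject outright
    - racy rated LIKELY or above           -> queue for human review
    - everything else                      -> approve
    (Missing categories default to UNKNOWN, which never triggers a gate.)
    """
    if at_least(annotation.get("adult", "UNKNOWN"), "LIKELY") or \
       at_least(annotation.get("violence", "UNKNOWN"), "LIKELY"):
        return "reject"
    if at_least(annotation.get("racy", "UNKNOWN"), "LIKELY"):
        return "review"
    return "approve"
```

The “review” outcome is what feeds the Drupal editorial queue; only “reject” short-circuits the upload entirely.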
A Fly in the Ointment that Was Actually Soup
In the course of testing the Google Cloud Vision API, we learned that the machine learning at the heart of this technology still has a few shortcomings. Most amusingly, we found that the API will regularly flag tan leather car seats as “racy.” We don’t assume the artificial intelligence is making a clever pun in this case. We also noted that painted or colorized nudes could evade detection. Ultimately, this meant some level of human oversight would remain necessary, though it is likely only a matter of time before the technology resolves these issues.
Once we had an operational proof of concept for the Google Cloud Vision integration, we needed to build workflows around it so that it would work in a real-life publishing environment like Patch’s. When an image was uploaded, the system would approve, reject, or flag it for moderation based on Google Cloud Vision’s evaluation. A website like Patch.com would need an editorial workflow in which editors review the images Google flags and then approve or reject them, so we built one that allows editors to log in, review, and approve or reject flagged posts.
Ultimately, Patch.com determined that this level of editorial review would not be more efficient than their existing model, especially since the Google Cloud Vision API was not 100% reliable in the images it approved: Patch.com would still need to manually review “approved” images anyway. The technology still has some maturing to do before certain commercial applications can rely on it.
The opportunity to work on this project was an awesome experience for our team. Working with Google’s AI technology demonstrated to us that there are innumerable applications for integrating machine learning into Drupal to make a smarter CMS.
How many business processes may be streamlined with the introduction of artificial intelligence to sift through large datasets, identify outliers and flag them for human review?
How may speech-to-text and translation alter the way we interact with websites along with the underlying content management systems?
These are really exciting questions to consider. We look forward to being part of the community that starts to answer these questions for our clients.