Notifications started as a simple way to alert LinkedIn members when something important to them happened on our network. However, as the range of activities and actions available to members increased, so did the use cases and sheer volume of notifications. It became clear that we needed a new technical architecture for notifications that would let us scale from serving just a handful of notification types—owned and maintained by a single team—to an end-to-end platform serving potentially hundreds of notification types—owned and maintained by dozens of partner teams.

In this post, I share details about how we at LinkedIn tackled the challenge of transforming notification development from a one-off process to a standardized, platform-centric approach. With this transformation, we had to rethink how we spec and design a new notification, how we architect the frontend, and how we model notification data in the backend.

Notification creation challenges

Initially, all notification work was custom. A partner team requesting a new notification had to create a product spec and design for the notification from scratch. The engineering effort involved six systems: a notification producer, the Notification Service, the front-end API, and three clients—Android, iOS, and web. Additionally, an engineer was required to design a new model schema for the notification type and to pass the new schema through our model review committee.

All in, launching a new notification type required about 12 weeks of engineering effort across four engineering specialties: apps, Android, iOS, and web. Meeting the skill-set requirement was occasionally more challenging for teams than the total level of effort, because not all teams at LinkedIn have mobile specialists. Due to the effort and skill sets required, there were several instances where a notification that would have provided value to our members was delayed or abandoned outright.

The second bottleneck was client-side development. At the time, our front-end API primarily provided data-based models and required the clients to separately translate the API data model into a view model for use in view-layer code.

The new approach we jointly developed is called Render Models, and it works by sending view models directly from the front-end API, but with a slight twist: when data needed to remain consistent, like a member name or profile image, Render Models included that data model and referenced it from the view model. We collaborated with the Mobile Infrastructure team to pilot Render Models for notifications, which allowed us to completely remove client-side development as a requirement for launching a new notification, and to build utilities for working efficiently with Render Models when creating new notifications.
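To make the idea concrete, here is a minimal Java sketch of what a Render Model might look like. The class and field names are hypothetical, not LinkedIn's actual schema; the point is that the view model carries fully rendered presentation fields, while data that must stay consistent, such as a member's name and profile image, lives in a separate data model that the view model references.

```java
import java.util.List;

// Hypothetical data model: holds the fields that must stay consistent
// across surfaces and is referenced from the view model by URN.
final class MemberDataModel {
    final String memberUrn;        // e.g., "urn:li:member:123"
    final String fullName;
    final String profileImageUrl;

    MemberDataModel(String memberUrn, String fullName, String profileImageUrl) {
        this.memberUrn = memberUrn;
        this.fullName = fullName;
        this.profileImageUrl = profileImageUrl;
    }
}

// Hypothetical view model: the front-end API returns it fully formatted,
// so clients can render it without type-specific translation logic.
final class NotificationRenderModel {
    final String headline;                 // pre-rendered display text
    final String actionTarget;             // deep link to open on tap
    final String actorUrn;                 // reference into dataModels
    final List<MemberDataModel> dataModels;

    NotificationRenderModel(String headline, String actionTarget,
                            String actorUrn, List<MemberDataModel> dataModels) {
        this.headline = headline;
        this.actionTarget = actionTarget;
        this.actorUrn = actorUrn;
        this.dataModels = dataModels;
    }
}
```

Because the presentation logic lives server-side, adding a new notification type changes only how the front-end API builds the headline and action target; the clients render whatever Render Model they receive.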

For the third bottleneck, we used existing notification schemas as a baseline to model a new generic notification schema that holds all of the information needed to create, process, and format a notification. This was largely possible due to LinkedIn’s distributed architecture. We did not need the generic schema to store all possible notification data; instead, downstream systems store most of this data, referenced by Uniform Resource Names (URNs). At format time, we use these URNs to decorate the full notification data.
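As an illustration, a generic creation request might look something like the following Java sketch; the field names are assumptions for illustration rather than the actual schema. Note that the request carries only URN references to content, which downstream systems decorate into full data at format time.

```java
import java.time.Instant;
import java.util.List;
import java.util.Map;

// Hypothetical generic notification creation request. It stores references
// (URNs) rather than full entity data; the referenced entities are decorated
// into complete notification data when the notification is formatted.
final class NotificationCreateRequest {
    final String notificationType;        // e.g., "JOB_POSTING_MATCH"
    final String recipientUrn;            // e.g., "urn:li:member:456"
    final List<String> contentUrns;       // entities to decorate at format time
    final Instant createdAt;
    final Map<String, String> attributes; // small, type-specific extras

    NotificationCreateRequest(String notificationType, String recipientUrn,
                              List<String> contentUrns, Instant createdAt,
                              Map<String, String> attributes) {
        this.notificationType = notificationType;
        this.recipientUrn = recipientUrn;
        this.contentUrns = contentUrns;
        this.createdAt = createdAt;
        this.attributes = attributes;
    }
}
```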

Notification producers all produce to the same Kafka topic with this new generic schema to request the creation of a new notification. We gave the Notification Service the ability to store, aggregate, and retrieve a notification generically from these creation requests. We were hoping to completely remove the Notification Service as a touchpoint to onboard a new notification; however, in the end, the system we developed requires partner teams to add a single line of code that associates a notification type with its method of aggregation. We built three methods of aggregation to make this aspect of the process easier.
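The sketch below shows the two pieces described above: a producer writing a generic creation request to the shared Kafka topic, and the single registration line that associates a notification type with an aggregation method. The topic name, type names, aggregation method names, and registry are assumptions for illustration, and the request is serialized as JSON for brevity where a binary schema would be more typical.

```java
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NotificationProducerExample {

    enum AggregationMethod { NONE, BY_ACTOR, BY_CONTENT } // assumed method names

    // Stand-in for the Notification Service's registry of type -> aggregation.
    static final Map<String, AggregationMethod> AGGREGATION_REGISTRY =
            new ConcurrentHashMap<>();

    public static void main(String[] args) {
        // The "single line of code" a partner team adds per notification type.
        AGGREGATION_REGISTRY.put("JOB_POSTING_MATCH", AggregationMethod.BY_ACTOR);

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // Every producer writes the same generic schema to one shared topic.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String request = "{\"notificationType\":\"JOB_POSTING_MATCH\","
                    + "\"recipientUrn\":\"urn:li:member:456\","
                    + "\"contentUrns\":[\"urn:li:jobPosting:789\"]}";
            producer.send(new ProducerRecord<>(
                    "notification-create-requests", "urn:li:member:456", request));
        }
    }
}
```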

Platform enhancements

The ambition of Project One inspired other teams to help out. The Relationships Team enhanced its notification producer by creating a platform called Beehive that can quickly produce a new notification type for any team from any offline data. Similarly, the Feed Data Platform Team and Relevance Team jointly created Concourse, a new platform that enables the quick production of new notifications from online data. With Beehive and Concourse, partner teams no longer need to create and maintain their own notification producers and can instead simply use one of the two platforms, depending on whether their data is offline or online.

We needed to ensure our members would not be overwhelmed by the new notifications our platform would enable, so we introduced the Air Traffic Controller (ATC) service as a permanent gatekeeper and filter for all new notifications. ATC is designed to manage the frequency and channel selection of external communication to our members. Our new solution treats in-app notifications as another communication channel and enables us to send only the most relevant notifications to our members.
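A heavily simplified sketch of the gatekeeping idea follows; the thresholds, channel names, and scoring inputs are hypothetical stand-ins for ATC's actual relevance and frequency logic, shown only to illustrate the shape of a per-notification channel-selection decision.

```java
import java.util.EnumSet;
import java.util.Set;

public class GatekeeperSketch {

    enum Channel { IN_APP, PUSH, EMAIL }

    // Decide which channels (if any) a candidate notification may use,
    // based on an assumed relevance score and the member's recent volume.
    static Set<Channel> selectChannels(double relevanceScore,
                                       int notificationsSentToday) {
        // Drop low-relevance notifications outright.
        if (relevanceScore < 0.3) {
            return EnumSet.noneOf(Channel.class);
        }
        // Cap daily volume: beyond the cap, deliver in-app only.
        if (notificationsSentToday >= 10) {
            return EnumSet.of(Channel.IN_APP);
        }
        // High-relevance notifications may also use push.
        return relevanceScore > 0.8
                ? EnumSet.of(Channel.IN_APP, Channel.PUSH)
                : EnumSet.of(Channel.IN_APP);
    }
}
```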