ServiceNow Deployment Pipeline - Part 5: Object-Oriented Programming - Is It Worth It?
This article explores whether adopting object-oriented principles in ServiceNow development is worth the effort. In the context of my previous articles around my Deployment Pipeline Application (Part 1 / 2 / 3 / 4), I now take a step back to reflect on code architecture and design principles. What are the real benefits of applying OOP while developing a large ServiceNow application? Or does it introduce unnecessary complexity?
Use Case & Challenge
Error handling plays a significant role in my Deployment Pipeline application, with error messages prominently displayed throughout the user interface. The list view for deployments serves as the first touchpoint, where red indicators clearly highlight missing or misconfigured settings:
A red banner with comprehensive error information is displayed in the detailed view until all problems have been resolved:
As we will see later, these error messages can also originate from deeper layers and must be propagated to the central deployment record to ensure transparency and traceability. Initially, I implemented a central method called "isValidDeployment()", but it quickly became overly complex and difficult to maintain due to the sheer number of potential error conditions that needed to be handled. This led to the need for a new design approach - one that applies the important software engineering principle of Separation of Concerns. An object-oriented solution proved to be particularly well-suited for this purpose.
Software Design
Object-oriented programming (OOP) is a foundational paradigm in software development, organizing code around objects that encapsulate both data and behavior. Languages like Java or C# are deeply rooted in OOP principles and offer native support for key concepts such as inheritance, encapsulation, and polymorphism. While ServiceNow runs on a Java-based runtime and exposes object-like constructs, such as script includes and built-in classes like "GlideRecord", its development model leans more toward functional or procedural approaches. As a result, the platform only touches the surface of true object-oriented design and does not naturally promote full-fledged OOP practices in the way traditional programming languages do.
1st Iteration: Identify Business Objects
Before embarking on any configuration or development activities in ServiceNow (and basically in every software development project - independent of the underlying technology), it is essential to identify and model the core business objects that will be processed within the software system. A business object is a representation of a real-world entity or concept that encapsulates business logic and data relevant to a specific domain or process.
This preliminary analysis serves as the foundation for successful software architecture and prevents costly redesigns later in the development lifecycle. By creating simple diagrams that illustrate the relationships between the identified objects, development teams gain a shared understanding of the problem domain and can make informed decisions. This modeling process reveals hidden dependencies, clarifies business rules, and ensures that the software accurately reflects the real-world processes it aims to support. Furthermore, visual representations of domain objects facilitate communication between technical teams and business stakeholders, reducing the risk of misunderstandings and ensuring that the final implementation aligns with actual business requirements.
For my Deployment Pipeline Application, a first denormalized version of such a business object diagram looks as follows:
The diagram is inspired by UML notation and incorporates the arrow types "Association," "Aggregation," and "Composition." Adding labels to the arrows clarifies their business meaning even more. By using these relationship types deliberately, initial assumptions about handling and underlying data structures are already implied.
ℹ️ Readers are encouraged to familiarize themselves with the fundamentals of UML, as this article does not provide an introduction to the subject.
2nd Iteration: Normalize Business Objects
The normalization of business objects involves transforming a domain-centric representation into a more technical model, where business-oriented relationship types are resolved and replaced with technically feasible references. During this process, the direction of relationships may be reversed - for example, in the case of an incident and its related incident tasks:
From a business perspective, one would typically state that an incident has multiple incident tasks.
Technically, however, each child Incident Task holds a reference to its parent Incident - effectively inverting the direction of the relationship.
For a simple directed arrow ("association") this is straightforward. However, what about the arrows indicating "aggregation" and "composition"? While these also have to be mapped to standard references, they require an additional layer of tables to reflect the many-to-many relationship between the associated records. In ServiceNow, this is achieved using M2M (many-to-many) tables, which are excluded from the customer table count (so-called exempted tables). Well-known examples of such M2M tables in ServiceNow include "sys_user_grmember" and "sys_user_has_role".
The resulting diagram is now significantly larger, with each box clearly corresponding to a specific ServiceNow table. The green boxes continue to represent the business objects managed within the Deployment Pipeline application:
3rd Iteration: Transfer to JavaScript Classes
Let’s return to the original challenge, as illustrated in the previous diagram: Imagine a large deployment package, containing several dozen artifacts of various types, has been finalized and is ready to be deployed to the target instance. At the last moment, a developer opens one of the update sets included in the deployment but forgets to close it. As a result, the deployment becomes invalid and must produce an appropriate error message indicating this issue:
Now, one could technically validate all business objects in a single, centralized method. This is the path that 99% of developers would take, and it’s exactly how I initially approached it in my deployment pipeline application. However, this quickly turns into a bloated method with several hundred lines of code, making it unmaintainable and difficult to extend. Moreover, such an approach centralizes the specific rules and logic of many diverse business objects in one critical place - an antipattern in object-oriented design.
If we shift perspective and consider the deployment object itself, wouldn’t it be sufficient for it to simply ask its referenced business objects, such as the target instance or the reference to the update set, whether they are valid or not? And if not, it would be enough to receive a corresponding error message from the invalid object. The root cause of the problem is irrelevant to the deployment object, as it is not its responsibility:
This is precisely where object-oriented design excels: in delegating responsibility to the objects that own the relevant logic.
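To make this idea tangible, here is a minimal conceptual sketch of such delegation in JavaScript. The class name anticipates the diagrams that follow, and the helper methods "getTargetInstance()" and "getArtifactReferences()" are hypothetical:

```javascript
// Conceptual sketch: the deployment delegates validation to the objects it references.
// getTargetInstance() and getArtifactReferences() are hypothetical helper methods.
DeploymentImpl.prototype.isValid = function() {
    var arrReferencedObjects = [this.getTargetInstance()].concat(this.getArtifactReferences());

    for (var i = 0; i < arrReferencedObjects.length; i++) {
        if (!arrReferencedObjects[i].isValid()) {
            // The deployment only records the error message of the invalid object;
            // the root cause remains that object's responsibility.
            this._strValidationResult = arrReferencedObjects[i].getValidationResult();
            return false;
        }
    }

    return true;
};
```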
In object-oriented programming, objects are instantiated at runtime using the new operator based on their underlying class definitions. For this reason, the subsequent modeling is done using a UML class diagram, and an initial draft might look as follows:
ℹ️ The last diagram is missing a class for the Approval Group. For the sake of convenience and time, I decided to place all business logic related to managing the Approval Group within the "HttpConnectionImpl" class.
4th Iteration: Refactor & Optimize Classes
One of the core goals of object-oriented development is to avoid code duplication and to place business logic within the classes that are actually responsible for it. Among the five classes representing the deployment artifacts, there are numerous properties and methods that apply equally to all of them, making them ideal candidates for centralization at a shared position in the class hierarchy.
In a compiled high-level language like Java, this would typically be achieved by introducing an abstract base class from which all other classes inherit ("ArtifactReferenceImpl" in the diagram below). This concept of elevating shared business logic to a common superclass also needs to be applied to the two classes "CustomApplicationReferenceImpl" and "StoreApplicationReferenceImpl", as the overlap between them is particularly significant:
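Jumping briefly ahead to the implementation side: in ServiceNow script includes, such a hierarchy can be expressed with "Object.extendsObject()". The following sketch only illustrates the mechanism; the method bodies are assumptions:

```javascript
// Shared abstract base class for all deployment artifacts
var ArtifactReferenceImpl = Class.create();
ArtifactReferenceImpl.prototype = {
    initialize: function(grRecord) {
        this._grRecord = grRecord;
    },
    // Generic validation logic shared by all artifact types
    getValidationResult: function() {
        return '';
    },
    type: 'ArtifactReferenceImpl'
};

// Specialized class inheriting the shared properties and methods
var UpdateSetReferenceImpl = Class.create();
UpdateSetReferenceImpl.prototype = Object.extendsObject(ArtifactReferenceImpl, {
    type: 'UpdateSetReferenceImpl'
});
```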
5th Iteration: Handling Data Access and Database Operations
ServiceNow is a data-centric platform, and while developers are shielded from direct database interactions through abstractions like the "GlideRecord" API, they still operate in close proximity to the data layer.
Upon analyzing typical implementation patterns in ServiceNow, it becomes evident that the majority of the code is focused on executing CRUD operations. Domain-specific business logic often remains secondary and is tightly coupled with persistence logic, thus leading to redundancy as well as reduced reusability and testability.
In a fully object-oriented programming environment, such as Java or C#, a clear separation of concerns would be applied: business logic would reside in domain-specific classes, while data access responsibilities would be encapsulated in dedicated data access layers. For example, for a User object, you might have:
UserBO: the business object representing the user and containing all the business logic.
UserDAO: the data access object responsible for fetching and persisting User data.
While ServiceNow doesn’t use classic DAOs as seen in other languages, similar patterns can be implemented using script includes that encapsulate GlideRecord access for specific table data. These act like DAOs by isolating data access from business logic.
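A minimal sketch of such a DAO-style script include - here a hypothetical "UserDAO" wrapping access to the "sys_user" table - could look like this:

```javascript
// Hypothetical DAO-style script include encapsulating data access to sys_user
var UserDAO = Class.create();
UserDAO.prototype = {
    initialize: function() {},

    // Fetch a single user record by sys_id; returns the GlideRecord or null
    getById: function(strSysId) {
        var grUser = new GlideRecord('sys_user');
        if (grUser.get(strSysId)) {
            return grUser;
        }
        return null;
    },

    // Persist a changed email address for the given user
    updateEmail: function(strSysId, strEmail) {
        var grUser = this.getById(strSysId);
        if (grUser) {
            grUser.setValue('email', strEmail);
            grUser.update();
        }
    },

    type: 'UserDAO'
};
```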
For this reason, I introduced a dedicated class named "GlideRecordImpl", which serves as the base class for the entire object hierarchy and acts conceptually as a facade to the internally referenced GlideRecord object. GlideRecordImpl is a base class for custom classes that can serve as either pure data access objects or business objects with additional functionality. By centralizing all pure CRUD operations within this single class, data persistence logic is cleanly separated from domain logic. This approach eliminates redundancy, improves maintainability, and ensures that low-level data access is not scattered across hundreds or even thousands of lines of unrelated code:
🎁Get the GlideRecordImpl class for free! 🎁
To help you get started with your own efforts to switch to an object-oriented design, I have made the code for the GlideRecordImpl class available for free on GitHub. Please keep in mind that this class is just the foundation and may require further expansion or adaptation.
Transfer & Implementation in ServiceNow
Now that we’ve explored the underlying theory and architectural considerations, it's time to take a closer look at how these concepts are applied in practice. In the following section, I will provide concrete implementation insights from my "Deployment Pipeline" application that demonstrate how object-oriented design patterns can be effectively realized within the constraints of the ServiceNow platform.
The error message shown in the second screenshot at the beginning of this article, indicating an issue with a deployment record, can be implemented in a clean and concise way if the design follows object-oriented principles:
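What this can look like at the call site is shown in the following sketch. The factory and method names anticipate the patterns described later in this article and are assumptions about the concrete implementation:

```javascript
// e.g. in a display business rule on the deployment table
(function executeRule(current, previous /*null when async*/) {

    var objDeployment = DeploymentPipelineFactory.get(current);

    if (!objDeployment.isValid()) {
        // Surface the aggregated error message of the invalid business object
        gs.addErrorMessage(objDeployment.getValidationResult());
    }

})(current, previous);
```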
Instead of dealing with the internal implementation details of the "isValid()" method, the implementer can rely on reusable methods - built by developers - that equip the ServiceNow application with the required functionality and flexibility.
These few lines of code represent only the tip of a much larger iceberg that has largely remained beneath the surface until now. While I cannot share the entire implementation, as noted at the beginning of this article series, I will provide selected deep dives and examples. The goal is to shed light on key implementation patterns that bring object-oriented design to life within the ServiceNow platform and make the underlying architecture more accessible and understandable.
Class Inheritance
The following lines of code represent the core structure of the "DeploymentImpl" class, which inherits from the "GlideRecordImpl" class. By doing so, it gains full CRUD capabilities and reflects the DAO (Data Access Object) aspect of the design.
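A sketch of that structure could look as follows; the constructor argument and the use of "Object.extendsObject()" are assumptions about the concrete implementation:

```javascript
var DeploymentImpl = Class.create();
DeploymentImpl.prototype = Object.extendsObject(GlideRecordImpl, {

    // Constructor substitute: hand the current record over to the parent class
    initialize: function(grRecord) {
        GlideRecordImpl.prototype.initialize.call(this, grRecord);
    },

    type: 'DeploymentImpl'
});
```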
A key element is the "initialize()" method, which serves as a substitute for an OOP constructor in ServiceNow's JavaScript-based environment. Within this method, the initialization of the parent class is explicitly invoked using "GlideRecordImpl.prototype.initialize.call", with "this" passed as a reference to the current object instance.
By applying this pattern, complex class hierarchies can be created in which parent classes pass down their properties and methods to their child classes.
Method Inheritance
Object-oriented design is not only about structuring data and responsibilities across classes. It also enables extensibility through method inheritance and overriding. While JavaScript, as used within the ServiceNow platform, does not support classical inheritance in the way languages like Java or C# do, it still allows for robust inheritance patterns through prototype-based delegation.
Child classes can override methods defined in parent classes to implement specialized behavior while still preserving shared functionality. This technique becomes particularly important in situations where the base class offers general-purpose logic, but a subclass needs to apply additional validation, filtering, or behavior that is context-specific.
The already introduced "isValid()" and "getValidationResult()" methods serve as practical examples. These methods are defined in shared base classes such as "ArtifactReferenceImpl" or "ApplicationReferenceImpl" and can be selectively overridden in child classes such as "UpdateSetReferenceImpl" or "StoreAppReferenceImpl", depending on the validation logic required for each type of deployment artifact.
The following code example illustrates how the getValidationResult() method can be overridden in a subclass while still reusing the foundational logic provided by the parent class.
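A sketch of such an override is shown below; the concrete check and the internal "_grRecord" property are assumptions:

```javascript
var StoreAppReferenceImpl = Class.create();
StoreAppReferenceImpl.prototype = Object.extendsObject(ApplicationReferenceImpl, {

    getValidationResult: function() {
        // Execute the shared validation logic of the parent class first
        var strResult = ApplicationReferenceImpl.prototype.getValidationResult.call(this);

        // Then apply an artifact-specific rule (assumed example check)
        if (!strResult && !this._grRecord.getValue('version')) {
            strResult = 'No version has been specified for the store application.';
        }

        return strResult;
    },

    type: 'StoreAppReferenceImpl'
});
```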
In this example, the child class begins by calling the "getValidationResult()" method on its parent using "ApplicationReferenceImpl.prototype.getValidationResult.call(this)". This ensures that all shared validation logic is executed first. Afterward, the child class performs its own validations and adds an error message if required, enriching the base behavior with artifact-specific rules. This pattern provides a clean and maintainable way to extend functionality without duplicating code or violating the principle of separation of concerns.
However, there are also important considerations to keep in mind. As the number of inheritance levels increases, debugging can become more complex, especially when method overrides span several classes. In ServiceNow, where tools for tracing prototype chains are limited, this can lead to a loss of transparency. Furthermore, repeated method chaining can introduce minor performance overhead, which may become relevant in large-scale data operations or complex process flows.
It is therefore advisable to keep class hierarchies relatively shallow, clearly document overridden methods, and use consistent naming conventions. This ensures that the codebase remains understandable, testable, and easier to maintain over time.
Lazy Loading
In large-scale applications, especially those involving multiple interrelated records, it is common to encounter performance issues caused by redundant evaluations of the same logic. This applies in particular to validation processes that involve complex lookups, conditional checks across multiple tables, or heavy iterations over related records. While applying object-oriented design and method inheritance helps distribute logic effectively, it does not in itself prevent repeated execution of costly operations.
To address this, the concept of "lazy loading" becomes highly relevant. It refers to the practice of deferring the execution of an expensive computation or database query until the result is actually needed and caching the result so that it is only computed once during the lifecycle of an object instance.
In the context of validation logic, this means that a method like "getValidationResult()" should perform its full computation only on the first call. On subsequent calls, it should return the previously computed result without repeating the entire logic.
The following code demonstrates a simple approach to implementing lazy loading and internally caching the result:
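A sketch of this caching approach, reusing the parent class name from the previous example:

```javascript
getValidationResult: function() {
    // Lazy loading: only compute the result if it has not been computed before
    if (typeof this._strValidationResult === 'undefined') {
        this._strValidationResult = ApplicationReferenceImpl.prototype.getValidationResult.call(this);
    }

    // Subsequent calls return the cached value without re-running the validation
    return this._strValidationResult;
}
```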
In the example above, a simple existence check for the private instance variable "this._strValidationResult" has been introduced. If this variable is undefined, it indicates that the method has not been executed before. By storing the result of the parent class’s validation in this variable, the method ensures that subsequent calls can return the cached value directly, without invoking the parent method again. This pattern ensures both correctness and performance. The method remains idempotent from the caller’s perspective, while internally guarding against wasteful computation.
By consistently applying lazy loading in all critical validation methods, the overall responsiveness of the application improves significantly. However, lazy loading is not a universal solution for every scenario, and thoughtful developers should take the following considerations into account.
When applying lazy loading in ServiceNow, it is important to ensure that cached values are scoped to the current object instance and do not unintentionally persist across different GlideRecord contexts. Lazy loading should only be used for deterministic logic, meaning functions that produce consistent results based on a given input state. It is generally not suitable for time-sensitive or non-deterministic operations, where repeated execution might yield different results or depend on real-time system conditions. Additionally, developers should exercise caution when combining lazy loading with asynchronous processes such as REST calls or background jobs, as improper caching in these contexts may lead to race conditions or stale data.
Object Factory & Caching
As ServiceNow applications grow in complexity, maintaining consistency and efficiency in object instantiation becomes increasingly important. In particular, when working with rich object models that encapsulate both behavior and data, it is often necessary to refer to the same logical object multiple times within a single transaction or script execution context.
Without a centralized mechanism for managing instances, there is a risk of creating multiple objects for the same underlying ServiceNow record. This not only leads to unnecessary memory usage and performance overhead but can also result in inconsistent state or duplicated effort, especially if those instances perform costly validations or computations.
To address this, the concept of an Object Factory can be introduced. An object factory is a dedicated script include responsible for instantiating and returning objects based on ServiceNow records, while internally managing a cache to ensure that each record is only represented by a single object instance per execution context.
The following pattern illustrates a simple but effective object factory for managing deployment-related objects:
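The following sketch uses the record's sys_id as cache key; the internal property names are assumptions:

```javascript
var DeploymentPipelineFactory = Class.create();

// Cache of object instances, keyed by record sys_id (lives per execution context)
DeploymentPipelineFactory._objCache = {};

// Single static entry point: returns the object instance for a given GlideRecord
DeploymentPipelineFactory.get = function(grRecord) {
    var strKey = grRecord.getUniqueValue();

    if (!DeploymentPipelineFactory._objCache[strKey]) {
        // Not yet instantiated: create the object and store it in the cache
        DeploymentPipelineFactory._objCache[strKey] = new DeploymentImpl(grRecord);
    }

    return DeploymentPipelineFactory._objCache[strKey];
};

DeploymentPipelineFactory.prototype = {
    initialize: function() {},
    type: 'DeploymentPipelineFactory'
};
```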
In this example, the factory exposes a single static method "DeploymentPipelineFactory.get()", which returns a "DeploymentImpl" object for the specified GlideRecord instance in "grRecord". If the object has already been created in the current execution context, it is returned directly from the cache. Otherwise, the corresponding GlideRecord is passed into a new object instance, which is then stored and reused.
Using an object factory in this way offers multiple advantages:
It streamlines the implementer's work, as there is no longer a need to reference or manage the exact class name. Instead, a fully initialized object can simply be requested and returned by the factory.
It improves performance by avoiding redundant object instantiations.
It enforces a one-to-one relationship between ServiceNow records and object instances during a transaction, reducing the risk of inconsistent state.
It simplifies downstream logic by ensuring that any validations, lazy-loading, or flags set within the object are preserved throughout its lifetime within the script.
Finally, it aligns with clean architecture principles by decoupling object creation from business logic, making the overall design more modular, reusable, and testable.
Bringing It All Together
If you’ve made it this far, thank you for staying with me through the more theoretical parts of this article. I appreciate your perseverance. The reward for this effort becomes evident when we put the presented concepts into practice. Referring back to the very first screenshot that introduced the initial challenge, let’s now look at the concrete implementation of a field style that displays the red indicator in the "Number" field whenever a deployment record is considered invalid.
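A sketch of such a field style record is shown below. The table name is a placeholder, and it assumes that the Value field of a field style accepts a "javascript:" expression that is evaluated per row:

```javascript
// Field Style (sys_ui_style) - sketch with placeholder values
//
//   Table:      Deployment [x_deployment]      (placeholder table name)
//   Field name: number
//   Style:      color: red; font-weight: bold;
//
// Value - the style is applied whenever the expression evaluates to true:
javascript: !DeploymentPipelineFactory.get(current).isValid()
```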
OOP in ServiceNow: Game Changer or Overkill?
So, Is It Worth It?
Absolutely, but with intention and balance!
Adopting object-oriented principles in ServiceNow is not a decision to be made lightly. The platform doesn’t enforce OOP paradigms, nor does it provide native tooling to support class hierarchies, constructors, or encapsulation in the traditional sense. Still, as demonstrated in this article, thoughtful application of OOP design yields tremendous benefits: separation of concerns, improved maintainability, clearer responsibilities, and a scalable structure that can evolve with business requirements.
From my own experience building the Deployment Pipeline application, introducing object models was not just a theoretical exercise - it solved real architectural pain points. Refactoring massive procedural functions into distributed validation logic across dedicated classes brought clarity and flexibility. It enabled reusable patterns while remaining within the boundaries of the ServiceNow scripting model.
More importantly, this approach aligns with what we increasingly see across the ecosystem: mature ServiceNow implementations adopting layered designs, leveraging factories, validators, services, and even domain-driven patterns. ServiceNow's scripting layer might not require OOP to function, but large-scale enterprise solutions demand architectural discipline. OOP helps bridge that gap.
That said, it's not a silver bullet. Overengineering is a real risk. Shallow hierarchies, clear naming, and pragmatic scoping are essential to avoid creating a maze of indirection. Junior developers unfamiliar with the paradigm may initially struggle. Debugging becomes more abstract. And because the platform itself doesn’t "speak" OOP, you’re creating your own conventions - requiring a strong internal development culture.
So, asked again: is it worth it?
👍Yes - if you're solving complex problems, working in larger teams, or building applications with longevity in mind. The effort pays off in maintainability, testability, and clarity. You start writing applications, not just scripts. For ServiceNow professionals striving to mature their development practices, embracing object-oriented thinking isn't just worth it. It's often the next logical step.
👎No - if you're only building simple utilities or one-off automations. These include quick scripts, field-level validations, data fixes, or small integrations that solve isolated problems without much business logic or long-term relevance. In such cases, the overhead of setting up classes, inheritance, and object factories outweighs the benefits. Simpler procedural code is often faster to write, easier to understand, and entirely sufficient for the task at hand.
Looking ahead
The ServiceNow ecosystem continues to evolve, and with it, so should our development practices. If you're experimenting with object-oriented design, domain-driven patterns, or modular architecture on the platform, share your experiences! I’m always interested in exchanging ideas and learning from others in the community.