
    Optimization Matters: A Case Study That Proves It

    Imagine a bustling city where the traffic lights are poorly timed and the roads are filled with unnecessary detours. The result? Frustrated commuters, delayed deliveries, and a city that grinds to a halt. Similarly, in the world of ServiceNow, inefficient scripts and poorly optimized database queries can lead to slow performance, increased loading times, and a subpar user experience.

    In today's fast-paced digital landscape, organizations rely heavily on IT solutions to deliver seamless and efficient services. ServiceNow, a leader in ITSM solutions, is at the heart of many enterprises' IT operations. However, as the complexity and volume of data increase, the performance and responsiveness of the platform can become a critical concern. This is where the optimization of scripts and database queries becomes paramount.

    Optimizing these elements not only ensures a smoother, faster, and more responsive system but also enhances the overall user satisfaction and operational efficiency. In this article, we'll explore the significance of script and query optimization in ServiceNow, delve into best practices, and provide actionable insights to help you keep your platform running like a well-oiled machine. Whether you are a seasoned developer or a newcomer to the ServiceNow ecosystem, understanding the importance of optimization is crucial for maintaining a robust and efficient IT service environment.

    The Challenge

    As a Solution Architect and Advisor, my role involves designing functionalities that meet business requirements while remaining efficient, secure, and aligned with best practices. I also guide both technical and business teams through the multitude of possibilities and solutions available for their problems.

    A significant part of my job is helping these teams manage technical debt and legacy functionalities accumulated over the years, often as a result of working with multiple vendors. Sometimes, the sheer volume of poor-quality features can become a burden, impacting platform performance and usability.

    In this article, I will describe several problems I have encountered when dealing with legacy integrations, and the solutions I applied to fix them. Although I focus on integrations, the same problems can appear in many other areas, not necessarily involving other systems.

    The scenario is as follows:
    There are 50,000 employees, each of them using a tablet to update their status (since they work in the field, their status can change several times per day) and to request services. Employee statuses are aggregated by a third-party system (which is also the source of employee data), but the services are managed in ServiceNow. Statuses need to be imported into ServiceNow, and ideally we would like near real-time updates from the field. The integration established with the third-party system has several issues and is putting a strain on our instance; we need to fix that.

    Problem 1: Bad integration architecture

    The integration is initiated from ServiceNow every hour, meaning that ServiceNow polls the third-party system on a regular basis to receive fresh statuses. This is not ideal, as status updates can be reflected in ServiceNow with a delay of 50-60 minutes.

    The solution to this problem is to initiate the communication from the third-party system, so that each update is transferred immediately. This results in more up-to-date data and less traffic between the two systems.
    In simple terms, we should switch from a "pull" to a "push" method.
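    To make this concrete, here is a minimal sketch of what the receiving end of a "push" could look like in ServiceNow, implemented as a Scripted REST API resource. The payload shape and the u_employee_status table and field names are my own assumptions, not the actual integration:

    ```javascript
    // Scripted REST API resource script (e.g. POST /api/x_acme/employee_status)
    // A "push" receiver: the third-party system calls this endpoint the moment
    // a status changes, instead of ServiceNow polling every hour.
    (function process(request, response) {
        var body = request.body.data; // e.g. { employee_id: '12345', status: 'on_site' }

        // Hypothetical staging table keyed by employee id
        var gr = new GlideRecord('u_employee_status');
        gr.addQuery('u_employee_id', body.employee_id);
        gr.query();
        if (gr.next()) {
            gr.setValue('u_status', body.status);
            gr.update();
        } else {
            gr.initialize();
            gr.setValue('u_employee_id', body.employee_id);
            gr.setValue('u_status', body.status);
            gr.insert();
        }

        response.setStatus(200);
    })(request, response);
    ```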

    [Illustration 1]

    Unfortunately, technical limitations of the third-party system made this solution impossible.

    Problem 2: Importing data that is not needed

    Since ServiceNow does not know which statuses have changed since the last import, it asks for all 50,000 of them every time. Moreover, the third-party system sends all the attributes of each employee, rather than just the status. The integration was configured in a way that generates a lot of traffic, and processing all the received records consumes a lot of ServiceNow resources. This problem requires a couple of solutions.

    1. Improved query: ServiceNow should ask only for the statuses that have changed since the last import. This should result in receiving a couple of thousand statuses rather than 50,000 (see the sketch after this list).
    2. Dividing the integration into two separate imports:
      1. Requesting recently changed statuses on a regular basis.
      2. Running a full import of the less dynamic data (locations, departments, email addresses, profile pictures, etc.) once every night.
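    For illustration only, this is what solution 1 could have looked like, assuming the third-party API accepted a "changed since" filter. The endpoint, parameter names, and system property below are all hypothetical:

    ```javascript
    // Ask only for statuses changed since the last successful import,
    // instead of all 50,000 records every time.
    var lastRun = gs.getProperty('x_acme.employee_import.last_run'); // hypothetical property

    var request = new sn_ws.RESTMessageV2();
    request.setEndpoint('https://thirdparty.example.com/api/employees'); // hypothetical endpoint
    request.setHttpMethod('get');
    request.setQueryParameter('fields', 'employee_id,status'); // only the fields we need
    request.setQueryParameter('changed_since', lastRun);       // only the delta

    var response = request.execute();
    gs.info('Delta import HTTP status: ' + response.getStatusCode());
    ```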

    These solutions were also not possible due to technical limitations of the data source.
    Instead, I implemented workarounds to these problems:

    1. All 50,000 records are still returned by the third-party system, but instead of being stored and processed in ServiceNow, roughly 90% of them are discarded because their status is the same as the one already in our database (a sketch follows this list).
    2. During business hours, only statuses are processed; static data like locations and departments is processed only at night, when the ServiceNow instance is less busy.
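    The first workaround fits naturally into the transform map's onBefore script, where setting the ignore flag discards a staging row before it touches the target table. A minimal sketch, assuming a u_status field on both the staging row and the matched user record:

    ```javascript
    // Transform map onBefore script: discard rows whose status has not changed.
    // 'source' is the staging (import set) row, 'target' is the matched user record.
    (function runTransformScript(source, map, log, target) {
        if (!target.isNewRecord() &&
            source.getValue('u_status') == target.getValue('u_status')) {
            ignore = true; // skip this row entirely: no update, no history entry
        }
    })(source, map, log, target);
    ```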

    Problem 3: Processing data record by record

    Another flaw of the existing integration is the way incoming updates are processed. Each record received by ServiceNow is stored in the import table and transformed to the target user table separately. This is not optimal, as a new transformation is triggered for each and every record and, what is more, each one is stored in the ServiceNow history.

    The correct approach in this scenario is to import all records first and only then trigger the transformation. Instead of 50,000 transformation records, only one is created. ServiceNow also benefits from processing records in batches, which makes the import faster.
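    A sketch of that pattern: insert every incoming row into the import set's staging table first, then run the transform once for the whole set. The staging table name and transform map sys_id are placeholders, and the GlideImportSetTransformer usage is an assumption worth verifying on your release:

    ```javascript
    // 1. Create one import set and load all rows into the staging table.
    var importSet = new GlideRecord('sys_import_set');
    importSet.initialize();
    importSet.setValue('table_name', 'u_employee_status_import'); // hypothetical staging table
    importSet.setValue('state', 'loading');
    var importSetId = importSet.insert();

    payload.forEach(function (rec) { // 'payload' = parsed records from the third party
        var row = new GlideRecord('u_employee_status_import');
        row.initialize();
        row.setValue('sys_import_set', importSetId);
        row.setValue('u_employee_id', rec.employee_id);
        row.setValue('u_status', rec.status);
        row.insert();
    });

    // 2. Mark the set as loaded and transform it in one pass.
    importSet.setValue('state', 'loaded');
    importSet.update();

    var transformer = new GlideImportSetTransformer();
    transformer.setMapID('<transform_map_sys_id>'); // placeholder sys_id
    transformer.transform(importSet);
    ```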

    Problem 4: Querying the database while knowing the answer

    All employees have their own managers, work for one of many departments, and have their office located somewhere. There are 20 departments, 5,000 managers, and 500 locations in total. The problem is that for every single employee, a database query is made to each of the three tables, which results in 150,000 queries per import.

    Fetching all managers, locations, and departments before processing the imported records and storing them in a cache results in only 3 queries per import.
    Yes, you read that right: 3 queries instead of 150,000. The improvement is significant.
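    A minimal sketch of that caching pattern; the key fields are assumptions, and in the real integration the sys_user query would be filtered down to the ~5,000 managers:

    ```javascript
    // Build each lookup table once, before the 50,000 rows are processed:
    // 3 queries per import instead of 3 queries per employee.
    function buildCache(table, keyField) {
        var cache = {};
        var gr = new GlideRecord(table);
        gr.query(); // a single query fetching the whole table
        while (gr.next()) {
            cache[gr.getValue(keyField)] = gr.getUniqueValue(); // key -> sys_id
        }
        return cache;
    }

    var departments = buildCache('cmn_department', 'name');      // ~20 rows
    var locations   = buildCache('cmn_location', 'name');        // ~500 rows
    var managers    = buildCache('sys_user', 'employee_number'); // managers only in practice

    // While processing each imported row, resolve references from memory:
    // var deptSysId = departments[row.u_department_name];
    ```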

    Problem 5: Performing heavy queries

    Sometimes fetching data from the database is unavoidable, but the way it is queried matters. It is tricky to describe this point without getting too technical, so let me give you an example that should explain it perfectly.
    If you are in a library, is it easier to:

    find all books with titles starting with the letter "P"

    or

    find all books with titles containing the letter "A" or containing the letter "C", but not containing the combination "AC", and at the same time…

    I think the answer is simple. Sometimes database queries cannot be avoided, but they can be simplified, especially if they were written in a hurry without proper design and optimization.
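    In GlideRecord terms, the library analogy roughly maps to the choice of query operator: STARTSWITH can be served by an index on the column, while CONTAINS forces the database to inspect every row. The u_book table is of course hypothetical:

    ```javascript
    // Fast: a prefix match that an index on 'title' can answer directly.
    var fast = new GlideRecord('u_book');
    fast.addQuery('title', 'STARTSWITH', 'P');
    fast.query();

    // Slow: CONTAINS conditions translate to LIKE '%...%',
    // so the database has to scan every single title.
    var slow = new GlideRecord('u_book');
    slow.addQuery('title', 'CONTAINS', 'A').addOrCondition('title', 'CONTAINS', 'C');
    slow.addQuery('title', 'DOES NOT CONTAIN', 'AC');
    slow.query();
    ```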

    Another improvement is indexing the columns used in queries (like the title in the example above). Since we know we will be using titles to find books, we can write them down together with each book's location. If anyone asks, we can consult the list; there is no need to check each and every book separately.

    And lastly, if you need to check the author of each book, getting the entire book seems like overkill; it would be perfect if the librarian could simply give us the answer instead. The same applies to database queries: there is no need to fetch all records from the database just to know how many there are, no need to fetch all departments to work out their average headcount, and no need to fetch all managers to get the full list of their email addresses.
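    In ServiceNow, this is what GlideAggregate is for: the database returns the answer, not the records. A minimal sketch (the head_count field on cmn_department is an assumption):

    ```javascript
    // Count users without fetching a single user record.
    var count = new GlideAggregate('sys_user');
    count.addAggregate('COUNT');
    count.query();
    if (count.next()) {
        gs.info('Total users: ' + count.getAggregate('COUNT'));
    }

    // Let the database compute the average headcount per department.
    var avg = new GlideAggregate('cmn_department');
    avg.addAggregate('AVG', 'head_count'); // assumed field name
    avg.query();
    if (avg.next()) {
        gs.info('Average headcount: ' + avg.getAggregate('AVG', 'head_count'));
    }
    ```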

    Modifying database queries can improve the execution time of scripts and imports, and it can also reduce the strain on the database. We should always remember that the number of database connections ServiceNow can make at a time is limited.

    Problem 6: Reporting issues over and over again

    Imagine a situation where an imported employee record reports to a manager who does not exist in the database. This is a problem, and it should be reported to someone who can remediate it. The existing integration is equipped with such functionality, but the problem is that it is not smart about it. If you remember, there are 50,000 employees and only 5,000 managers, which means every manager has roughly 9 direct reports. If a manager is missing from the database, he is missing for all 9 of his direct reports, so the integration creates 9 incidents instead of 1. 9 every hour. Every day.
    Can you imagine what happens if a department is missing, when up to 4,000 users can work in one department?

    The solution to this problem is to report ONE incident about the missing manager or department, list all affected employees in that incident, and skip creating the exact same incident on every import unless the previous one has been closed.
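    A sketch of that guard; the short description convention and the way affected employees are appended are my own assumptions:

    ```javascript
    // Report a missing manager once, and only re-report after the fix is closed.
    function reportMissingManager(managerId, employeeName) {
        var shortDesc = 'Employee import: missing manager ' + managerId; // assumed convention

        var inc = new GlideRecord('incident');
        inc.addActiveQuery(); // only incidents that are still open
        inc.addQuery('short_description', shortDesc);
        inc.setLimit(1);
        inc.query();

        if (inc.next()) {
            // Already reported and not yet resolved: just record the extra employee.
            inc.work_notes = 'Also affects: ' + employeeName;
            inc.update();
            return;
        }

        // No open incident for this manager yet: create the one and only.
        inc.initialize();
        inc.setValue('short_description', shortDesc);
        inc.setValue('description', 'Affected employees: ' + employeeName);
        inc.insert();
    }
    ```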

    Problem 7: Not following best practices

    Apart from the 6 problems listed above, I spotted a lot of smaller issues that are too technical to list here. All of them were related to not following ServiceNow best practices. Sometimes developers forget about them (I do as well), sometimes we think they make no difference, and sometimes it is not clear why we should use them. The problem is that they were introduced for a reason, and all of them are important. A lot of small mistakes can add up to something significant, and that was the case with this integration as well.

    The Result

    After identifying the problems listed above and solving them (at least those that could be solved within the given timeframe and budget), the integration started to perform much better and no longer caused instance performance problems. Thanks to these improvements, we managed to schedule the import more often, which resulted in more up-to-date statuses. How big is the improvement?

    [Illustration 2]

    Conclusion

    Optimization matters. Performance issues do not have to be solved by allocating more resources, adding more application nodes, and utilizing more computing power. Sometimes it is enough to take a step back and do a cleanup. Sometimes our resources are busy performing actions that are not needed at all. Revisiting old functionalities, optimizing scripts and database queries, and applying best practices can significantly reduce the strain on the ServiceNow platform. Often, these optimizations can be made without sacrificing anything.

    Some of the problems presented in this article were extreme, but this was the actual state of things in a production environment somewhere. If you are experiencing performance issues, feel that your integrations are a bit too heavy, or simply need assistance keeping your ServiceNow instance healthy, feel free to reach out.