Economically Increasing Processing Power with NetApp Shared Storage
At Thomson Reuters, our mission is to meet the information needs of businesses and professionals across a wide range of fields, so information technology is critical to everything we do. The key is to be able to identify, package, and serve up data and information for the bankers, lawyers, scientists, and investment managers whose livelihoods are increasingly based on not just the quality of their insights, but also the speed, relevance, and adaptability of those insights.
And that incessant desire for speed, insight and optimized decision-making—that appetite for intelligent information—has required us to find new ways to create these increasingly intelligent information factories without having to spend more money on IT each year.
Our History with NetApp
In recent years, Thomson Reuters has been busy transforming its infrastructure, creating a flexible private and public cloud foundation on which new products and technologies can be conceived and implemented. With 16 petabytes of information stored worldwide, the company’s requirement to sift through more information in less time and deliver better results was a tall order.
Storage has been foundational to the change. A big part of the upgrade involved changing the storage in the back end so we could change the whole dynamic of search. The NetApp shared storage infrastructure made it economically feasible to increase the processing power applied to each search. As a result, we were able to save roughly $65 million by eliminating the need to build another two-megawatt data center.
All of this helped make WestlawNext possible, which fundamentally transformed the customer experience with its large leap forward into human-centric computing. At the heart is a powerful search engine that is intuitive and user-friendly. Lawyers and legal researchers can now pose search queries in more natural human language and get the information they need more quickly, with results that are more accurate and comprehensive.
The right technology foundation enabled Thomson Reuters to widen its competitive advantage by propelling legal research to a new level. The company can now use this cloud platform to support other businesses across the organization, including Tax and Accounting, IP & Science, Financial & Risk, and more. Our technology foundation is the engine that powers all applications. NetApp technology could do what we needed it to do, cost-effectively.
Our Current Infrastructure
The key to success for all Thomson Reuters products is to be able to perform searches on massive amounts of data very quickly and with complete accuracy.
The key elements of our infrastructure include:
• Standard building blocks
• A cloud-like search architecture
• Virtualized Web front end
• Replication for disaster recovery
We have approximately 100,000 servers in our data centers, most with 2- or 4-CPU configurations and backed by NetApp storage. Our network infrastructure is almost entirely 10-Gigabit Ethernet. We use these building blocks in both the front-end and back-end configurations.
Our patented Novus architecture is the core of all search operations. The Novus architecture provides a single platform for supporting online services from each of the four Thomson Reuters market groups, including WestlawNext, Checkpoint (our tax and accounting research system), and some brokerage research products from our Financial and Risk business. In all, 30+ applications use the Novus architecture.
Any server in the Novus environment can be reallocated in real time to take on a different function. When we architected this, we wanted to make sure that if a peak event happened, we could reallocate resources very quickly so that, for instance, what was a database server five minutes ago could now be a search server. When we do code deploys to Novus, all of the code is deployed to every server for every function. If WestlawNext is getting hit hard, we can allocate more resources specifically to it or to Checkpoint or any other application that needs the resources. Servers don't have to reboot; they simply load the appropriate content into memory from NetApp storage and are ready for their new role.
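The reallocation pattern described above can be sketched in a few lines. This is a simplified illustration under the assumption that every server carries code for every role and only its in-memory content differs; the class and method names here are hypothetical, not the actual Novus interfaces.

```python
# Sketch of role reallocation: any server can take on any function by
# loading that role's content from shared storage, with no reboot.
# All names are illustrative; Novus internals are proprietary.

class Server:
    def __init__(self, name):
        self.name = name
        self.role = "idle"
        self.content = None

    def assign_role(self, role, shared_storage):
        """Switch roles by loading the new role's content into memory."""
        self.role = role
        self.content = shared_storage[role]  # e.g. indexes for a search role

# Shared NetApp-style storage: every role's content is reachable by any server.
shared_storage = {
    "search": "search indexes",
    "database": "database content",
}

s = Server("server-42")
s.assign_role("database", shared_storage)
s.assign_role("search", shared_storage)  # repurposed in place, no reboot
```

The design choice this models is that compute is stateless: because all content lives on shared storage, changing a server's function is just a memory load rather than a reprovisioning step.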
This dynamic capability also allows us to build redundancy into the environment, plus it ensures the accuracy of results. We always have extra, idle servers available. If within just a few milliseconds after sending a request we don't get a result back from a server, we dynamically allocate a spare server to take over the failed one's function. It will then load the appropriate content from the NetApp storage system into memory and service the request.
The end result is that a server can fail and the user will still get an accurate result with nothing omitted and only a few seconds' delay. The user doesn't have to reissue the request and the recovery happens automatically without administrator intervention.
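That failover flow can be sketched as follows. The timeout value, function names, and data structures here are assumptions for illustration only; the point is the pattern: a missed response deadline triggers a spare that loads the same content from shared storage and retries, so the caller never has to reissue the request.

```python
# Illustrative failover pattern: if a server misses its response deadline,
# a spare loads the failed server's content from shared storage and
# services the request. Names and values are assumptions, not Novus APIs.

def issue_request(server, request):
    """Return a result, or None if the server failed to respond in time."""
    if server.get("failed"):
        return None  # no response within the few-millisecond window
    return {"server": server["name"], "result": f"answer to {request}"}

def search_with_failover(primary, spares, request, storage):
    result = issue_request(primary, request)
    if result is None:
        # Allocate an idle spare, load the failed server's content, retry.
        spare = spares.pop()
        spare["content"] = storage[primary["role"]]
        result = issue_request(spare, request)
    return result

storage = {"search": "search indexes"}
primary = {"name": "srv-1", "role": "search", "failed": True}
spares = [{"name": "srv-9", "role": "idle"}]

r = search_with_failover(primary, spares, "contract law", storage)
```

Because recovery happens inside the request path, the user sees only a short delay rather than an error, matching the behavior described above.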
Under normal operation, we always have two data centers running with very similar infrastructure and identical data. If a disaster takes down one data center, the other can scale up operations to accommodate the additional search load.
NetApp storage supports the Novus architecture (indexes and Oracle database content stores) as well as the front-end VMware environment. All the indexes that get pulled into our Linux servers plus all the content stored in Oracle databases are kept on NetApp NAS storage accessed via NFS. Novus works because we have thousands of servers sharing access to our storage systems at one time with the ability to dynamically change which servers access which storage on the fly.
We are adding new content all the time, and WestlawNext alone has over five billion records on NetApp storage. We’ve been able to grow our content set without having to make significant capital investment by using NetApp compression and deduplication technology, helping us add to the bottom line of our company.
But adding content means re-indexing and pushing both the new content and associated indexes out while keeping everything in sync. NetApp SnapRestore technology allows us to restore access to our databases as quickly as possible should a failure occur, while we continue servicing our customers from the secondary data center. The speed of recovery ensures we always have redundancy and high levels of availability for our customers.
We use NetApp deduplication in our VMware environment to eliminate the duplication that comes with having a large number of nearly identical VMs. One division alone has over 9,000 VMware VMs running on NetApp storage, and we've achieved over 160TB of space savings on primary storage through the use of deduplication.
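The savings come from the fact that nearly identical VM images share most of their blocks, so each unique block only needs to be stored once. Here is a toy block-level illustration of the idea; it is not NetApp's on-disk implementation, and the block size and names are assumptions.

```python
# Toy block-level deduplication: nearly identical VM images share most
# blocks, so storing each unique block once reclaims most of the space.
# Illustration only; not NetApp's actual on-disk format.
import hashlib

BLOCK_SIZE = 4096

def dedupe(images):
    """Store each unique block once; describe each image as a hash list."""
    store = {}    # block hash -> block data (each unique block kept once)
    recipes = {}  # image name -> ordered list of block hashes
    for name, data in images.items():
        hashes = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            h = hashlib.sha256(block).hexdigest()
            store.setdefault(h, block)
            hashes.append(h)
        recipes[name] = hashes
    return store, recipes

# Two "VM images" that differ only in their last block.
base = b"A" * BLOCK_SIZE * 10
images = {
    "vm1": base + b"X" * BLOCK_SIZE,
    "vm2": base + b"Y" * BLOCK_SIZE,
}

store, recipes = dedupe(images)
logical = sum(len(d) for d in images.values())   # bytes as written by the VMs
physical = sum(len(b) for b in store.values())   # bytes actually stored
```

In this toy case the two 44 KB images reduce to three unique 4 KB blocks; a fleet of 9,000 mostly identical VMs multiplies the same effect, which is how savings on the order of 160 TB become possible.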
To manage our environment, we use the full complement of NetApp OnCommand management products including Operations Manager, Provisioning Manager, Performance Manager, and OnCommand Insight. This gives us a single set of tools that work across all our NetApp storage to simplify management, speed up provisioning, and identify performance issues. OnCommand Insight gives us a consolidated view of our entire heterogeneous storage environment in terms of capacity, connectivity, configurations, and performance. It also provides alerts on component failures so that we can resolve issues before redundant components experience a second failure.
For professionals around the world, getting the right information is not enough. It has to drive deep, actionable and relevant insights to support the client and it has to be fast. This intelligent information is the core of the Thomson Reuters value proposition and we’re proud to have NetApp in our corner.