When dealing with performance, there is a new control under the AX query system called Query::insert_recordset. This allows multiple records to be inserted into a table with a single SQL call, in the same way the insert_recordset command works.
While this command is great for performance, there is a problem with the standard cross-company control when using the Query::insert_recordset command. The cross-company control only builds the extended select statement to include the company control (dataAreaId) when the QueryRun class is used. Since Query::insert_recordset does not use the QueryRun class, the cross-company control does not work and will not filter the data based on any company rules you apply; instead, it will extract data for all companies.
The only workaround I have found is to add dataAreaId into the query as a range, selecting the companies you want to use. This includes the company control in the SQL statement, ensuring only the companies you are interested in are processed.
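As a minimal sketch of that workaround, assuming a query over CustTable and two illustrative company ids ('ceu' and 'cee' are placeholders, not from the original post), the explicit range on DataAreaId might look like this:

```
// Sketch only: restrict a cross-company query to specific companies
// by adding an explicit range on DataAreaId.
Query                   query = new Query();
QueryBuildDataSource    qbds;
QueryBuildRange         qbr;

query.allowCrossCompany(true);
qbds = query.addDataSource(tableNum(CustTable));

// Add the company control manually so it is built into the
// generated SQL even when the QueryRun class is not used
qbr = qbds.addRange(fieldNum(CustTable, DataAreaId));
qbr.value(queryValue('ceu') + ',' + queryValue('cee'));
```

Because the range is part of the query itself, it survives into the SQL statement that Query::insert_recordset generates, rather than relying on QueryRun to append the company filter.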
There are a few things to think of when writing code: design requirements and performance. If we include performance in the thought process while developing, we can often improve our overall code performance at the time of writing.
When you need to import large numbers of item records into AX2012, you will find the time it takes to load them fairly slow. In my example I have 200,000 records to load. First I need to create the product masters, then I need to release the products to the different companies. I found the creation of the product masters was taking about 3 hours to complete, with the release to each company taking around 3 hours as well, and I had two companies to release to.
The use of RecordInsertList for increasing the performance of record inserts has been well documented, but I was curious how RecordInsertList would scale and what impact it would have on performance.
To understand the impact on performance and scalability, I created a table with one field. I then wrote a job that would insert X records into that table, first using the standard insert method, then using the RecordInsertList method. Each pass would record the number of milliseconds taken to complete. I did this for 100 records, then 1,000, 5,000, 10,000, 20,000, 40,000, 100,000, 1,000,000 and 10,000,000.
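A sketch of such a timing job might look like the following; the table name PerfTestTable and its integer field TestValue are assumptions for illustration, not the names used in the original test:

```
// Sketch only: time row-by-row insert() against RecordInsertList
// for the same number of records.
static void perfTestInserts(Args _args)
{
    PerfTestTable    perfTest;
    RecordInsertList ril;
    int              i;
    int              startTime;
    int              recordCount = 10000; // vary: 100, 1000, ... 10000000

    // Pass 1: one SQL round trip per record
    startTime = WinAPI::getTickCount();
    ttsBegin;
    for (i = 1; i <= recordCount; i++)
    {
        perfTest.TestValue = i;
        perfTest.insert();
    }
    ttsCommit;
    info(strFmt("insert(): %1 ms", WinAPI::getTickCount() - startTime));

    // Pass 2: records buffered and sent to SQL in bulk
    startTime = WinAPI::getTickCount();
    ttsBegin;
    ril = new RecordInsertList(tableNum(PerfTestTable));
    for (i = 1; i <= recordCount; i++)
    {
        perfTest.TestValue = i;
        ril.add(perfTest);
    }
    ril.insertDatabase();
    ttsCommit;
    info(strFmt("RecordInsertList: %1 ms", WinAPI::getTickCount() - startTime));
}
```

Wrapping each pass in its own transaction keeps the comparison fair, since commit behaviour affects both approaches equally.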
The use of SANs in today's network environments is almost commonplace, but are the performance requirements of the systems using those SANs considered in the design of the SAN setup?
A typical problem I find is that people do not consider how the SAN should be set up to get the best performance. For DAX we are primarily talking about the considerations for SQL Server, regardless of whether SQL is virtualized or physical.
Using the Database Engine Tuning Advisor to analyse a SQL query can give you an insight into where to look for performance problems. Recently I had a problem opening the advisor from a query window, receiving the error: “Failed to connect to an IPC Port: The system cannot find the file specified”.
The problem was caused by a previous use of the tool that had not been shut down cleanly. The simplest way to resolve it is to open the Windows Task Manager and end the DTAShell.exe process.
The design of your database, along with your indexes, will determine much of the performance limitation of your system. The indexes are how the system locates information, and, if they are well designed, they can produce considerable performance improvements. This blog will look at the index field order and the system fields, and the effects these can have on your system.
During a recent client upgrade from AX4 to AX2012, I had a performance problem with the upgrade wizard. On the Prepare application for preprocessing checklist, at the Prepare financial dimension framework step, the dimension numbers at the bottom of the screen took 20+ minutes to appear when pressing the upgrade validate button.
The client had 300+ companies, each with many financial dimensions, which resulted in 180,000+ records being built in the migration tables. When reviewing the code that builds this information (class LedgerFrameworkSetupHelper, method setupDimensionMergeTable), the records were being read and inserted one at a time rather than using the bulk insert controls. This causes a high load on SQL Server and is also very inefficient.
Changing the code to use RecordInsertList, so records could be bulk inserted, reduced the load time by 94% to just over 1 minute. This meant SQL Server was being used much more efficiently, and the time spent waiting for data to load was significantly reduced.
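The shape of the change is the standard RecordInsertList pattern. The sketch below is illustrative only: dimensionMerge, sourceTable and the initFromSource helper are hypothetical names, not the actual code in LedgerFrameworkSetupHelper:

```
// Before: one SQL round trip per record
while select sourceTable
{
    dimensionMerge.clear();
    dimensionMerge.initFromSource(sourceTable); // illustrative helper
    dimensionMerge.insert();
}

// After: buffer the records and send them to SQL in bulk
RecordInsertList ril = new RecordInsertList(tableNum(DimensionMergeTable));

while select sourceTable
{
    dimensionMerge.clear();
    dimensionMerge.initFromSource(sourceTable); // illustrative helper
    ril.add(dimensionMerge);
}
ril.insertDatabase();
```

The loop logic stays the same; only the insert calls are replaced by add() with a single insertDatabase() at the end, which is what cuts the number of SQL round trips.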
So if you are planning an upgrade from AX4 to AX2012 and have lots of companies, consider modifying the code to use the bulk insert commands.