A very common customer scenario is one where, all of a sudden, you start getting these 31552 events on the RMS every 10 minutes. This drives a monitor state and generates an alert when the monitor goes red.
However – most of the time my experience is that this alert gets “missed” among all the other alerts that OpsMgr raises throughout the day. Eventually, customers will notice the state of the RMS is critical, or their availability reports take forever or start timing out, or they notice that CPU on the data warehouse server is pegged or very high. It may be several days before they are even aware of the condition.
The 31552 event is similar to below:
Date and Time: 8/26/2010 11:10:10 AM
Log Name: Operations Manager
Source: Health Service Modules
Event Number: 31552
Level: 1
Logging Computer: OMRMS.opsmgr.net
User: N/A
Description:
Failed to store data in the Data Warehouse. Exception 'SqlException': Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. One or more workflows were affected by this. Workflow name: Microsoft.SystemCenter.DataWarehouse.StandardDataSetMaintenance Instance name: State data set Instance ID: {50F43FBB-3F59-10DA-AD1F-77E61C831E36} Management group: PROD1
The alert is:
Data Warehouse object health state data dedicated maintenance process failed to perform maintenance operation
Data Warehouse object health state data dedicated maintenance process failed to perform maintenance operation. Failed to store data in the Data Warehouse.
Exception 'SqlException': Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. One or more workflows were affected by this.
Workflow name: Microsoft.SystemCenter.DataWarehouse.StandardDataSetMaintenance
Instance name: State data set
Instance ID: {50F43FBB-3F59-10DA-AD1F-77E61C831E36}
Management group: PROD1
Now – there can be MANY causes of getting this 31552 event and monitor state. There is NO SINGLE diagnosis or solution. Generally – we recommend you call into MS support when impacted by this so your specific issue can be evaluated.
The most common issues causing the 31552 events are:
- A sudden flood (or excessive sustained amounts) of data to the warehouse that is causing aggregations to fail moving forward.
- The Exchange 2010 MP is imported into an environment with lots of statechanges happening.
- Excessively large ManagedEntityProperty tables, causing maintenance to fail because the table cannot be processed in the time allotted.
- Too many tables joined in a view or query (>256 tables) when using SQL 2005 as the DB Engine
- SQL performance issues (typically disk I/O)
- When using SQL Standard edition, you might see these randomly at night during maintenance, since online indexing is not supported in SQL Standard edition.
- Messed up SQL permissions
- Too much data in the warehouse staging tables, which was not processed due to an issue and is now too much to be processed at one time (a quick row-count check is sketched just after this list).
- Random 31552’s caused by DBA maintenance, backup operations, etc.
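For a couple of the conditions above (an oversized ManagedEntityProperty table, or data piling up in the staging tables), a simple read-only row-count check against the warehouse can give you a quick feel for whether you are affected. This is only a sketch, and the staging table names below are the ones present in my OperationsManagerDW, so verify them against your own warehouse before running it:
USE [OperationsManagerDW]
-- How large is the ManagedEntityProperty table?
SELECT COUNT(*) AS ManagedEntityPropertyRows FROM ManagedEntityProperty WITH (NOLOCK)
-- How much data is sitting in the staging tables waiting to be processed?
SELECT COUNT(*) AS StateStageRows FROM State.StateStage WITH (NOLOCK)
SELECT COUNT(*) AS EventStageRows FROM Event.EventStage WITH (NOLOCK)
SELECT COUNT(*) AS PerfStageRows FROM Perf.PerformanceStage WITH (NOLOCK)
SELECT COUNT(*) AS AlertStageRows FROM Alert.AlertStage WITH (NOLOCK)
Under normal conditions the staging tables hold only a few minutes’ worth of data, so row counts in the millions there are usually a red flag.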
If you think you are impacted with this, and have an excessively large ManagedEntityProperty table – the best bet is to open a support case. This requires careful diagnosis and involves manually deleting data from the database which is only supported when directed by a Microsoft Support Professional.
The “too many tables” issue is EASY to diagnose, because the text of the 31552 event will state exactly that. It is easily fixed by reducing data warehouse retention of the affected dataset type.
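If you want to see what your current retention settings are before changing anything, the warehouse stores them per dataset and aggregation type. The following is just a sketch: I am assuming the StandardDataset and StandardDatasetAggregation tables and the MaxDataAgeDays column as they appear in my warehouse, and that AggregationTypeId values 0/20/30 map to raw/hourly/daily.
USE [OperationsManagerDW]
-- Current retention (in days) for each dataset and aggregation type
SELECT ds.SchemaName,
       sda.AggregationTypeId,
       sda.MaxDataAgeDays
FROM StandardDataset ds WITH (NOLOCK)
JOIN StandardDatasetAggregation sda WITH (NOLOCK)
  ON ds.DatasetId = sda.DatasetId
ORDER BY ds.SchemaName, sda.AggregationTypeId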
Now – the MOST common scenario I seem to run into actually just happened to me in my lab environment, which prompted this article. I see this happen in customer environments all too often.
I had a monitor which was based on Windows Events. There was a “bad” event and a “good” event. However, something broke in the application and caused BOTH events to be written to the application log multiple times a second. We could argue this is a bad monitor, or a defective logging module for the application… but regardless, the condition is that a monitor of ANY type starts flapping, changing from good to bad to good WAY too many times.
What resulted – was 21,000 state changes for my monitor, within a 15 MINUTE period.
At the same time, all the aggregate rollup and dependency monitors were also having to process these statechanges, which are also recorded as statechange events in the database. So you can see that a SINGLE bad monitor can wreak havoc on the entire system, affecting many more monitors in the health state rollup.
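To get a sense of how much statechange data your own management group is writing per day, a quick count against the OperationsManager (operational) database will show it. This is a sketch, using the StateChangeEvent table as it exists in my environment:
USE [OperationsManager]
-- Number of state changes written per day, most recent days first
SELECT CONVERT(VARCHAR(10), TimeGenerated, 102) AS DayGenerated,
       COUNT(*) AS StateChanges
FROM StateChangeEvent WITH (NOLOCK)
GROUP BY CONVERT(VARCHAR(10), TimeGenerated, 102)
ORDER BY DayGenerated DESC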
While the Operations Database handles these inserts quite well, the Data Warehouse does not. Each statechange event is written to both databases. The standard dataset maintenance job is kicked off every 60 seconds on the warehouse. It is called by a rule (Standard Data Warehouse Data Set maintenance rule) which targets the “Standard Data Set” class and executes a specialized write action to start maintenance on the warehouse.
What is failing here is that the maintenance operation (which also handles the hourly and daily dataset aggregations for reports) is not completing in the default time allotted. Essentially, there are SO many statechanges in a given hour that the maintenance operation cannot complete, times out, and rolls back the transaction. This becomes a never-ending loop, which is why it never seems to “catch up”: a single large transaction that cannot complete keeps the work from ever being committed to the database. Under normal circumstances, 10 minutes is plenty of time to complete these aggregations, but under a flood condition there are too many statechanges to calculate the time in state for each monitor and instance before the timeout hits.
So – the solution here is fairly simple:
- First – solve the initial problem that caused the flood. Ensure you don’t have too many statechanges constantly coming in that are contributing to this. I discuss how to detect this condition and rectify it HERE; a query to spot the noisiest monitors is also sketched just after this list.
- Second – we need to disable the standard built-in maintenance that is failing, and run it manually so it can complete successfully.
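For the first step, something like the query below against the OperationsManager database will surface the monitors generating the most statechanges. Treat it as a sketch; the StateChangeEvent, State, and MonitorView objects referenced here are as they exist in my environment:
USE [OperationsManager]
-- Top 20 monitors by number of state changes over the last 7 days
SELECT TOP 20
       COUNT(sce.StateId) AS NumStateChanges,
       m.DisplayName AS MonitorDisplayName,
       m.Name AS MonitorName
FROM StateChangeEvent sce WITH (NOLOCK)
JOIN State s WITH (NOLOCK) ON sce.StateId = s.StateId
JOIN MonitorView m WITH (NOLOCK) ON s.MonitorId = m.Id
WHERE sce.TimeGenerated > DATEADD(DAY, -7, GETUTCDATE())
  AND m.IsUnitMonitor = 1
GROUP BY m.DisplayName, m.Name
ORDER BY NumStateChanges DESC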
For the second step above – here is the process:
1. Using the Instance name in the 31552 event, find the dataset that is causing the timeout (see the Instance name line in the event below; a quick query to confirm the matching dataset in the warehouse is sketched after the event).
Date and Time: 8/26/2010 11:10:10 AM
Log Name: Operations Manager
Source: Health Service Modules
Event Number: 31552
Level: 1
Logging Computer: OMRMS.opsmgr.net
User: N/A
Description:
Failed to store data in the Data Warehouse. Exception 'SqlException': Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. One or more workflows were affected by this. Workflow name: Microsoft.SystemCenter.DataWarehouse.StandardDataSetMaintenance
Instance name: State data set
Instance ID: {50F43FBB-3F59-10DA-AD1F-77E61C831E36}
Management group: PROD1
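If you want to confirm which dataset in the warehouse the instance name maps to (for example, “State data set” maps to the 'State' schema used in the query in step 5), a quick read-only look at the StandardDataset table will show the available SchemaName values:
USE [OperationsManagerDW]
-- List the datasets in the warehouse; note the SchemaName that corresponds
-- to the "Instance name" in your 31552 event
SELECT DatasetId, SchemaName
FROM StandardDataset WITH (NOLOCK)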
2. Create an override to disable the maintenance procedure for this data set:
- In the OpsMgr console go to Authoring-> Rules-> Change Scope to “Standard Data Set”
- Right click the rule “Standard Data Warehouse Data Set maintenance rule” > Overrides > Override the rule > For a specific object of class: Standard Data Set
- Select the data set that you found from the event in step 1.
- Check the box next to Enabled and change the override value to “False”, and then apply the changes.
- This will disable dataset maintenance from running automatically for the given dataset type.
3. Restart the “System Center Management” service on the RMS. This is done to kill any maintenance already running, and ensure the override is applied immediately.
4. Wait 10 minutes and then connect to the SQL server that hosts the OperationsManagerDW database and open SQL Management Studio.
5. Run the query below, replacing the SchemaName value ('State' in this example) with the name of the dataset from step 1.
**Note: This query could take several hours to complete. This is dependent on how much data has been flooded to the warehouse, and how far behind it is in processing. Do not stop the query prior to completion.
USE [OperationsManagerDW]
-- Look up the DatasetId for the dataset named in the 31552 event
-- (SchemaName 'State' corresponds to the "State data set" instance name)
DECLARE @DataSet uniqueidentifier
SET @DataSet = (SELECT DatasetId FROM StandardDataset WHERE SchemaName = 'State')
-- Run standard dataset maintenance (including the aggregations) manually for that dataset
EXEC StandardDatasetMaintenance @DataSet
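If your 31552 events name a different dataset (Event, Perf, Alert, etc.), the same query applies; just substitute the SchemaName value you confirmed in step 1 in place of 'State'.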
6. Once the query finishes, delete the override configured in step 2.
7. Monitor the event log for any further timeout events.
In my case – my maintenance task ran for 25 minutes then completed. In most customer environments – this can take several hours to complete, depending on how powerful their SQL servers are and how big the backlog is. If the maintenance task returns immediately and does not appear to run, ensure your override is set correctly, and try again after 10 minutes. Maintenance will not run if the warehouse thinks it is already running.
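If you want to gauge how big that backlog is (and watch it shrink between runs), the warehouse tracks which aggregation intervals are still outstanding. This is a sketch based on the schema in my warehouse, where the DirtyInd flag in StandardDatasetAggregationHistory marks intervals that have not yet been fully aggregated and AggregationTypeId 20/30 are the hourly/daily aggregations:
USE [OperationsManagerDW]
-- Outstanding aggregation intervals for the State dataset
SELECT sdah.AggregationTypeId,
       COUNT(*) AS OutstandingIntervals,
       MIN(sdah.AggregationDateTime) AS OldestOutstanding
FROM StandardDatasetAggregationHistory sdah WITH (NOLOCK)
JOIN StandardDataset ds WITH (NOLOCK) ON sdah.DatasetId = ds.DatasetId
WHERE ds.SchemaName = 'State'
  AND sdah.DirtyInd = 1
GROUP BY sdah.AggregationTypeId
When this returns no rows for the hourly and daily aggregation types, the warehouse has caught up.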
***Note: Now – this seemed to clear up my issue, as immediately the 31552’s were gone. However – at 2am they came back, every 10 minutes again, and my warehouse CPU was spiked again. My assumption here is that it had gotten through the hourly aggregations flood and was now trying to get through the daily aggregations work, and hit the same issue. So when I discovered this was sick again, I used the same procedure above, and this time the job took the same 25 minutes. I have seen this same behavior with a customer, where it took several days to “plow through” the flood of data to finally get to a state where the maintenance would always complete in the 10 minute time period.
This is a good, simple process to try to resolve the issue yourself, without having to log a call with Microsoft first. There is no risk in attempting this process yourself to see if it can resolve your issue.
If you are still seeing timeout events, there are other issues involved. I’d recommend opening a call with Microsoft at that point.
Again – this is just ONE TYPE of (very common) 31552 issue. There are many others, and careful diagnosis is needed. Never assume someone else's fix will resolve your specific problem, and NEVER edit an OpsMgr database directly unless under the direct support of a Microsoft support engineer.
(***Special thanks to Chris Wallen, a Sr. Support Escalation Engineer in Microsoft Support for assisting with the data for this article)