
Saturday, February 25, 2012

AMO: Looping to delete partitions

Hi

I am trying - in AMO code - to loop over the partitions in my different measure groups and drop them. But apparently I am not allowed to do it with the following code, since I am modifying the collection I am looping over (because I drop the partitions that exist in the collection).

....

For Each oMeasureGroup In oCube.MeasureGroups
    For Each oPartition In oMeasureGroup.Partitions
        oPartition.Drop()
    Next
Next

...

Can anyone help me with a solution? Thanks.

Hi,

You need to iterate over the Partitions collection with an integer index, counting down from the end so that dropping a partition does not shift the positions of the partitions you have not visited yet.

PartitionCollection partitions = ...;

// Walk the collection backwards so Drop() does not invalidate the remaining indexes.
for (int i = partitions.Count - 1; i >= 0; --i)
    partitions[i].Drop();
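
Applied to the measure groups from the question, a minimal sketch could look like this (assuming cube is an already-loaded Microsoft.AnalysisServices.Cube, standing in for the question's oCube):

using Microsoft.AnalysisServices;

static void DropAllPartitions(Cube cube)
{
    foreach (MeasureGroup measureGroup in cube.MeasureGroups)
    {
        // Drop() removes each partition from measureGroup.Partitions,
        // so count down to keep the remaining indexes valid.
        for (int i = measureGroup.Partitions.Count - 1; i >= 0; --i)
        {
            measureGroup.Partitions[i].Drop();
        }
    }
}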

Adrian Dumitrascu

Thursday, February 9, 2012

Alternative to binary_checksum?

I am using binary_checksum to verify data integrity between two servers, but the algorithm apparently is not strong enough and returns false positives. Are there any alternative algorithms that can be used instead of, or combined with, binary_checksum? I am running on SQL Server 2000.

Rgds

Bertrand

Binary_checksum doesn't guarantee uniqueness. If you find matches, you will have to check the individual columns to eliminate/find the duplicates. There are no other options in SQL Server 2000. In SQL Server 2005, you can use the new hashbytes function to compute an MD5 hash, which comes much closer to providing a unique value for your input. So you should probably compute the MD5 hash on the row outside the database and store it in the table. You can then perform quick comparisons to find the duplicates. And you should still check the individual columns.
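
As a rough illustration of the "hash the row outside the database" idea, a C# sketch along these lines (the HashRow helper and its separator convention are made up for illustration) computes an MD5 value per row from client code:

using System;
using System.Security.Cryptography;
using System.Text;

static string HashRow(object[] rowValues)
{
    StringBuilder sb = new StringBuilder();
    foreach (object value in rowValues)
    {
        // Append a separator so ("ab", "c") and ("a", "bc") hash differently.
        sb.Append(value == null ? "<NULL>" : value.ToString());
        sb.Append('|');
    }

    using (MD5 md5 = MD5.Create())
    {
        byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString()));
        return BitConverter.ToString(hash).Replace("-", "");
    }
}

Storing that value alongside the row on both servers turns the comparison into a quick equality test, though as noted above a final column-by-column check is still the only real guarantee.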

Thank you, you have a point there.

I would still like to experiment with a new user-defined function for calculating alternative checksums. I have different checksum algorithms available, but I need to find out how to create a user-defined function that behaves like the BINARY_CHECKSUM function, e.g. it should accept * as an argument and compute over the entire record contents. Do you have any suggestions on how I could proceed with this? Can it be done in T-SQL?

Rgds

Bertrand

|||A "checksum" will never guarantee a unique value. The best way to track changes it to use a change field and a trigger to update the changed field. Either a datetime or a Y/N or something. Then reset the Y/N or do a > last update date on the refresh.
