Showing posts with label create. Show all posts

Tuesday, March 27, 2012

Analysis Services 2000 + Reporting Services 2005 + Parameters

Hi All,

I'm trying to create reports in RS2005 using AS2000 as my data source. I understand that if I use RS2005 on AS2000, I won't be able to enjoy the OLAP-based parameters I would get with AS2005. Does anyone know an easy way to use parameters in RS2005 while still using AS2000?

Regards,

Joseph

I don't think I've tried RS 2005 against AS 2000, but my best bet is to create an OLE DB connection against your cube and then learn how to write MDX.

You need to create MDX queries for the parameters that you want to get from your OLAP cube as well as the main query.

All parameterized queries need to be written as an expression: prefix the query with =" and end it with ", and keep it on one line. (Copy it into Notepad, turn off word wrap and make sure it stays on one single line.)
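To illustrate, a parameterized MDX main query in the RS 2005 report designer might look like the following single-line expression. The cube, dimension, measure and parameter names here are invented for the example; adapt them to your own cube:

```sql
="SELECT {[Measures].[Sales Amount]} ON COLUMNS, [Product].[Category].Members ON ROWS FROM [Sales] WHERE ([Time].[Year].[" & Parameters!Year.Value & "])"
```

The report parameter value is concatenated into the MDX string, which is why the whole thing has to be one expression on one line.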

I suggest you read through some articles by Bill Pearson over at Databasejournal.com

Mastering OLAP Reports: Parameters for Analysis Services Reporting ...

http://www.databasejournal.com/features/mssql/article.php/3644661

Mastering OLAP Reporting: Cascading Prompts

http://www.databasejournal.com/features/mssql/article.php/3386441

At least one of them should get you started.

Kaisa

analysis services 2000 (data mining)

I created a mining model:

CREATE MINING MODEL [a'S]
( [Cusid] LONG KEY ,
[Orderdetails] TABLE PREDICT ([Productid] LONG KEY )
) USING Microsoft_Decision_Trees

I can show a customer some other products when he chooses a product.

E.g.: he chooses A, I show him B, C, D ....

But for that, I must know his cusid.

Now I want to show B, C, D when someone chooses A (I don't know his cusid). How do I do that?

(I'm using SQL Server and Analysis Services 2000.)

You don't need to supply the customer id for prediction. The customer ID is only used to identify records and has no impact on the patterns in the model.

|||

I can't get it to work.

I use this SELECT query:

SELECT FLATTENED
[t1].[cusid],TopCount( Predict(Angel.[Orderdetails], INCLUDE_STATISTICS, EXCLUSIVE), $ADJUSTEDPROBABILITY, 3)
FROM
Angel
PREDICTION JOIN
SHAPE
{
OPENROWSET
(
'SQLOLEDB.1',
'Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=SIT;Data Source=(local)',
'SELECT "cusid" AS "Cusid" FROM "customers" ORDER BY "cusid"'
)

}
APPEND
(
{
OPENROWSET
(
'SQLOLEDB.1',
'Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=SIT;Data Source=(local)',
'SELECT "cusid" AS "Cusid_1", "productid" AS "Productid" FROM "vwAllAgeSex" ORDER BY "cusid"'
)
}
RELATE [Cusid] TO [Cusid_1]
)
AS [Orderdetails]
AS [T1]
ON
Angel.[Cusid] = [T1].[Cusid] AND
Angel.[Orderdetails].[Productid] = [T1].[Orderdetails].[Productid]

the result:

cusid  productid  $support  $adjustedprobability
1      12         119       0.92
1      320        40        0.34
1      300        23        0.31
3      12         54        0.76
3      19         54        0.76
.......

So when I select a product with productid = 12, how do I show the other related products?

(For example: when I choose a t-shirt, I want to show related products: shoes, jeans, trousers....)
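One way to sketch this without a cusid is a singleton PREDICTION JOIN: instead of joining the model to the customers table, you feed it just the product the visitor picked and ask it to predict the rest of the nested table. Something like the following (the hard-coded product id 12 is only an example, and I have not run this against your model, so treat it as a sketch):

```sql
SELECT FLATTENED
    TopCount(Predict([Angel].[Orderdetails], INCLUDE_STATISTICS, EXCLUSIVE),
             $AdjustedProbability, 3)
FROM [Angel]
PREDICTION JOIN
    -- singleton input: one anonymous case whose nested table
    -- contains only the product the visitor chose
    (SELECT (SELECT 12 AS [Productid]) AS [Orderdetails]) AS t
ON [Angel].[Orderdetails].[Productid] = t.[Orderdetails].[Productid]
```

The EXCLUSIVE flag keeps product 12 itself out of the predicted list, so you get back the B, C, D recommendations only.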


Sunday, March 25, 2012

Analysis Service: filter fact table ?

I have a big table with several types of transactions: PO (purchase orders), SO (sales orders), INV (invoices) ...
I want to create cubes with only one type of transaction (one cube for PO, ...).
Where and how can I filter the rows I want to use in my cube?
Thanks

Create a view, and for the fact table in AS point at the view.
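A minimal sketch of the view approach, assuming the transaction type lives in a column called TranType on a table called FactTransactions (both names are guesses; adjust them to your schema):

```sql
CREATE VIEW dbo.vwFactPO
AS
SELECT *
FROM dbo.FactTransactions
WHERE TranType = 'PO'
```

Then build the PO cube against vwFactPO instead of the base table, and repeat with a vwFactSO, vwFactINV, and so on for the other cubes.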

analysis service package question

I'm working on a new analysis package using VS 2005, I am able to create the package and cube, etc, but when I go to the Calculation tab for the cube I'm getting the following error message "Unexpected Error occurred: 'Error in application' "

now I uninstalled and reinstalled VS .NET 05 twice, and even uninstalled and reinstalled SQL Server on my box and I'm still getting the error message.

What is causing this error and how can I fix it?

Have you got Office 2007 installed? See:

http://geekswithblogs.net/darrengosbell/archive/2006/11/17/97367.aspx

HTH,

Chris

|||

No I don't, I have Office 2003 installed.

|||

Hmm, possibly something else has caused the same problem. I'd still go through the steps described to see if it stops the error if I were you...

Chris

|||

I'm getting ready to go through them now actually.

I'll post how it turns out.

Analysis Service on 2005

I'm trying to create a cube in Analysis Services, but I do not see any
relations in the diagram between my selected tables. The tables use indexes
rather than primary keys. Do you have any suggestions about how I can specify
the relations from Analysis Services?
Thanks

Do you mean the DSV (data source view) schema?
The system can detect relationships based on column names; there is an
option that lets you change the way Visual Studio detects the
relationships between the tables.
But this works BEFORE you add tables to the DSV.
After that you have to create the links in the DSV manually. (It's a simple
drag & drop.)

Analysis Server 2005 to create new data sources for new database

I am working with SQL Server Analysis Services 2005 Beta 2 (May CTP). I was trying to add a new database. I followed the same steps as in Analysis Server 2000, where I create the new database and select the Data Sources tree node to choose the new data source. However, after creating the database in Analysis Server 2005, I can see the Data Sources tree node, but when I right-click and look for a New Data Source option, it is not there. I can only see Refresh.
So I am just wondering: is it my installation or configuration problem, or is there another way to add a new data source to a brand-new database?
Any help will be appreciated. Thanks in advance.

Niaz, did you get an answer to this? In SQL 2000, the administration and development functions were combined in one UI, Analysis Manager. In SQL 2005 the development functions have been put in Business Intelligence Development Studio (aka Visual Studio) and the administrative functions in SQL Server Management Studio.

You probably figured this out long ago...

Bob

Thursday, March 22, 2012

Analysis Server - Incompatible repository

Hi,
We have been using MS Analysis Server with Axapta (a mid-range ERP from MS)
for quite some time. So far we have been able to create/run cubes without any
problem.
But today all of a sudden we are getting the following error message:
..............................................
Error
Transferring cube(s)\<CUBE NAME>\Connecting to the OLAP Server: Method
'Connect' in COM object of class '{B492C386-0195-11D2-89BA-00C04FB9898D}'
returned error code 0x80040034 (<unknown>), which means: Incompatible
repository.
..............................................
The thing is, nothing has changed on the Axapta end. So my question is:
is there anything on AS that could cause such an error message? Can someone
let me know, please?
TIA,
Harish Mohanbabu
MBS Axapta - MVP
http://www.harishm.com/
Hello,
To isolate the issue, you may check whether it occurs on different
clients, or you could use the MDX Sample Application on the server to test.
If it is a client-side issue, please refer to the following article to
troubleshoot it:
288890 PRB: Incompatible Repository Error Message Occurs After Installation
of
http://support.microsoft.com/?id=288890
If the issue persists, you may try to unregister the DSO DLLs in the
following order:
msmddo80.dll
msmddo.dll
msmdlock.dll
msmdnet.dll
msmdint.dll
Then register the DLLs in the reverse order, starting with
msmdint.dll.
If we confirm it is a server-side issue, it is more likely database
corruption; you may try to archive the current database and then restore a
known-good one on the AS server to test.
Best Regards,
Peter Yang
MCSE2000/2003, MCSA, MCDBA
Microsoft Online Partner Support
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
=====================================================
Business-Critical Phone Support (BCPS) provides you with technical phone
support at no charge during critical LAN outages or "business down"
situations. This benefit is available 24 hours a day, 7 days a week to all
Microsoft technology partners in the United States and Canada.
This and other support options are available here:
BCPS:
https://partner.microsoft.com/US/tec...rview/40010469
Others: https://partner.microsoft.com/US/tec...pportoverview/
If you are outside the United States, please visit our International
Support page:
http://support.microsoft.com/default...national.aspx.
=====================================================
This posting is provided "AS IS" with no warranties, and confers no rights.


analysis manager sql server2000

dear sir/madam,

I have to create a cube in Analysis Manager in SQL Server 2000, as I want to see the data according to my organisation levels.

Organisation levels:

1. Business unit

2. Team manager

3. Sales manager

I also created three users, one per level, and assigned each user the administrator category in the Windows XP user accounts.

After creating the users, I designed roles to restrict the levels of the dimension I had created. But after logging in as one of those users, I can still see all levels of data.

So please suggest a solution so that each user sees only the data for the dimension levels they are restricted to.

thanks

kesavan

Try this URL and read William Pearson's tutorials for answers; some version of them is also in your Analysis Services. Hope this helps.

http://www.databasejournal.com/features/mssql/article.php/10894_1429671_2

Kind regards,
Gift Peddie

Analysis Manager Remote Access

Hi,

I have just begun using Analysis Manager.

I want to know if I can create, process and manage cubes on a remote server.

I have the connection to the database on a remote server, and I want to use Analysis Manager (installed on the server) from my PC (which also has Analysis Manager).

I have just built a cube in my local Analysis Manager against the server database, but the cube does not appear in the Analysis Manager on the remote server.

Please help me, ;)

Thanks

moustachu

Analysis Services is in its second revision from Microsoft. It has a little while to go before it matures.

You can remotely administer an Analysis Services machine, but you need to have all of the datasources available on the remote machine with access to the data. Apparently the MMC application needs to get all metadata/dimension data for itself, rather than rely on the Analysis Services service to serve it up (ok, all together 5 times fast).

Good luck with it. I am hoping that AS makes many changes in the next revision.|||I thank you for your answer,

All the data sources are on the remote machine. But I do not know how I can remotely administer an Analysis Services machine?

moustachu

Originally posted by MCrowley
Analysis Services is in its second revision from Microsoft. It has a little while to go before it matures.

You can remotely administer an Analysis Services machine, but you need to have all of the datasources available on the remote machine with access to the data. Apparently the MMC application needs to get all metadata/dimension data for itself, rather than rely on the Analysis Services service to serve it up (ok, all together 5 times fast).

Good luck with it. I am hoping that AS makes many changes in the next revision.

Tuesday, March 20, 2012

An SQL command that doesn't work in a page

Hello,

I have a sequence of SQL commands in order to recursively update a table that has parents/children.

After I create a temporary table, I need to run an SQL command that for some reason is not working. There are no errors, and the command is actually executed, but I believe the rowcount is 0 from the beginning.

Here is the command:

Dim InsertConnection As Data.SqlClient.SqlConnection = New System.Data.SqlClient.SqlConnection("Server=myServer;User ID=myUser;pwd=myPSW;Database=myDatabase")

Dim SqlInsertCommand As Data.SqlClient.SqlCommand = New Data.SqlClient.SqlCommand( _
    "while @@rowcount > 0 " _
    & "begin INSERT INTO submenu" & uid & " (pageid,parentid) " _
    & "SELECT y.pageid, y.parentid FROM submenu" & uid _
    & " i INNER JOIN page y ON y.ParentId = i.pageID " _
    & "LEFT OUTER JOIN subMenu" & uid & " i1 ON i1.pageId = y.pageId " _
    & "WHERE (i1.pageID Is NULL) " _
    & "end", InsertConnection)

InsertConnection.Open()
SqlInsertCommand.ExecuteNonQuery()
InsertConnection.Close()
SqlInsertCommand = Nothing


If I insert any other SQL command there, it is executed normally.

The command I have executes fine using SQL Server manager.

Is there any way that a command is executed in the SQL manager but not in a page...??

Any ideas would be great...

Thank you

Hello my friend,

I would not use @@rowcount outside of Enterprise Manager. Could you describe your database structure and what you are trying to insert? No need to send VB code, just the SQL or some comments on the steps, and I can send you the correct SQL that will work from wherever it is used.

Kind regards

Scotty

|||

You use @@ROWCOUNT in the first line of your query, but it returns the number of rows affected by the last statement in the current SQL session. Your batch is just starting, so it always returns 0, and your loop is never executed.

You should populate your temporary table in the same batch for this to work correctly. The best solution is to create a SQL stored procedure which will do all your work in one shot, if you can.

Thanks

JPazgier
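A sketch of what JPazgier is suggesting, with the seed INSERT and the loop in the same batch so @@ROWCOUNT is non-zero when the WHILE is first evaluated. The procedure and some column details are guesses based on the thread, not the poster's actual code:

```sql
CREATE PROCEDURE dbo.usp_BuildSubMenu
    @RootPageId INT
AS
BEGIN
    CREATE TABLE #submenu (pageid INT, parentid INT)

    -- Seed with the root's direct children; this sets @@ROWCOUNT > 0
    INSERT INTO #submenu (pageid, parentid)
    SELECT pageid, parentid FROM page WHERE parentid = @RootPageId

    -- Each pass adds the next level down, until an INSERT affects 0 rows
    WHILE @@ROWCOUNT > 0
        INSERT INTO #submenu (pageid, parentid)
        SELECT y.pageid, y.parentid
        FROM #submenu i
        INNER JOIN page y ON y.ParentId = i.pageid
        LEFT OUTER JOIN #submenu i1 ON i1.pageid = y.pageid
        WHERE i1.pageid IS NULL

    SELECT pageid, parentid FROM #submenu
END
```

Because everything runs in one batch inside the procedure, @@ROWCOUNT reflects the seed INSERT on the first loop test, which is exactly what the ADO.NET version was missing.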

|||

Hi,

The software is a site builder. The particular table holds the page structure of each site.

The table is this one:

----
pageid    int identity
siteid    int
pagename  nvarchar(200)
parentid  int
----

I need to update/delete the whole tree when the user wants to update or delete a top element. The number of levels is not limited.

I managed to do it using a variable. The "problem" is that I set it to 1000 iterations, so if someone has more than 1000 pages under the parent it will fail. And it's not right in the first place.

The thing is, this worked fine when I was on another server that used MS SQL 2000. I didn't find any differences searching the web from 2000 to 2005.

jpazgier, why does it work then when I execute it using SQL manager? Isn't this weird? I mean, if the rowcount is 0 from the beginning in the application, shouldn't it be 0 in the SQL manager too?

Thank you

|||

Hello my friend,

I realize now what you are trying to do and I have the answer for you and this will work no matter how many levels you have (no 1000 limit). Run the following SQL, but change tblTree to the name of your table (I did not know what you have called it): -

CREATE FUNCTION dbo.fnGetPages
(
    @PageID AS INT
)
RETURNS @ChildPageIDs TABLE (PageID INT)
AS
BEGIN
    INSERT INTO @ChildPageIDs (PageID)
    SELECT PageID FROM tblTree WHERE ParentID = @PageID

    DECLARE @TempChildPageIDs TABLE (PageID INT)
    INSERT INTO @TempChildPageIDs (PageID)
    SELECT PageID FROM @ChildPageIDs ORDER BY PageID

    DECLARE @ChildPageID AS INT
    SET @ChildPageID = (SELECT TOP 1 PageID FROM @TempChildPageIDs)

    WHILE (@ChildPageID IS NOT NULL)
    BEGIN
        INSERT INTO @ChildPageIDs (PageID)
        SELECT PageID FROM dbo.fnGetPages(@ChildPageID)
        DELETE FROM @TempChildPageIDs WHERE PageID = @ChildPageID

        SET @ChildPageID = (SELECT TOP 1 PageID FROM @TempChildPageIDs)
    END
    RETURN
END

Now to get all child IDs of page 1 (either direct children of 1, and also children of ones that are children of 1, and so on) I run the following: -

select PageID from dbo.fnGetPages(1)

To delete the page and all of its children I run the following 2 commands: -

DELETE FROM tblTree WHERE PageID IN (SELECT PageID FROM dbo.fnGetPages(1))

DELETE FROM tblTree WHERE PageID = 1

Kind regards

Scotty

|||

Thanks Scotty,

The code to create the function is only run once right?

Then I just select update or do whatever I need to do using the function right?

Thanks

|||

Yes that is correct. You only run the function SQL once. You only need to run this again if you decide to use this functionality within a new database.

You just need to run the commands that use the function and you should be fine.

Kind regards

Scotty

Monday, March 19, 2012

An interesting issue with adding columns.

I've written this procedure that is a meta data manager, if you will. It looks at a vocabulary table and will create or alter tables to be in line with the meta data. It works great and has for a couple of years now. However, I just rewrote it in an effort to make it more compact, efficient, etc. Now, our system uses at least three databases. One contains shared data; the next holds only system data for the app, like individual user settings (atslogin is the name); then there is an actual database for client data, or accounts as we call them. Clients could have one or 100 of those. Please, no comments about moving everything into one DB, as that is not always possible due to the hierarchy of multiple business models.

Now, on to the issue. This procedure used to exist in the local database (one or many); now I keep it in atslogin, so in effect it looks "down" on all other databases and does its thing. It adds columns, expands them, creates tables, views, keys, blah blah blah.

Here is the issue. When it creates a new column the ordinal position is out of whack. I've got a table where I keep dropping the last column, run my procedure to add it and find that the ordinal position has increased by one each time. Add it and the value is 48 for example. Drop it and add it again and it is now 49 and all the while there is a gap between say 47 and 49. This is being written to syscolumns and the view INFORMATION_SCHEMA.[columns].

This is a big deal because if I find columns are out of order I won't attempt to alter the table. Trouble is, the columns are in the proper order; I just can't buy what the system is saying. Anyone ever seen this? Even if I add a column as the db owner I'm seeing this. This is SQL 2000. Below is the exact version:

------------------------------------------------------------------------------------------
Microsoft SQL Server 2000 - 8.00.760 (Intel X86)
Dec 17 2002 14:22:05
Copyright (c) 1988-2003 Microsoft Corporation
Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 2)

Ooohhh dba-one, a padawan problem this is. Meaningless the order of the data in the database is, as Jedi Kaiser will readily remind you he will.

|||

Personally, some things are just not meant to be automated, in my opinion.

You'd have to post some code snippets, though, on what you are doing.

I use ERWin to do what you are doing, but I usually unload, drop and create the objects, then load.

|||

Yoda say this ordinal problem not be a 0 to 1 issue. The voc file reads 0 for the first position while the system stuffing begins at 1. You didn't really think a Jedi master like me would be so stupid as to overlook that, did you?

|||

exec usp_madeof 'wcsub'

select colid,* from syscolumns where object_name(id)='wcsub' order by colid

Both will return the following in the results:

DOHFILING varchar 1 48
FORMTYPE varchar 2 53

(from sysobjects)
48 DOHFILING
53 FORMTYPE

Now, usp_madeof is a procedure I wrote. It returns results in a fashion like Oracle's "desc" function. All that procedure does is read the INFORMATION_SCHEMA.[columns] view and order by ordinal position. Now, every time I drop FORMTYPE and re-add the column, 53 will become 54 and so on. DOHFILING will remain 48. This is nuts.

|||

Personally, some things are just not meant to be automated in my opinion.

I very much agree. When I find columns out of order I refuse to automate the process. However, the columns are not out of order technically; I just can't believe what syscolumns or other entries read. Even running ALTER TABLE ... ADD COLUMN ... in the local DB as the owner causes this, so I can rule my procedure out of the matter.

|||

I tore out a couple of hairs this week trying to get a data dictionary collection script to work. Eventually, I just dumped the idea of checking columns by column ID and resorted to sorting the lower-cased names of the columns and going by that. I figured if the column name exists, then I am OK. If it is not there at all, then I have to delete the column from the data dictionary. Or, if I find a column that is out of order, it must have been added, so into the data dictionary it goes. If I went by column ID I could be dropping good columns and re-adding them later on, when I find them again at the "end" of the table. Would that work for you?

EDIT: Bottom line is you have to work around the quirks in your RDBMS. And a non-contiguous set of column IDs counts as a quirk to me.

|||

Here, read this.

http://www.mindsdoor.net/SQLAdmin/AlterTableProblems.html

Once I read that, it only reinforced dropping and re-creating the tables.

|||

This has never happened before, and it is not an acceptable flaw of any RDBMS. Besides, my application expects things to be in a certain order, and if they aren't, it will puke. I didn't design it that way, but it is the way of things. If the table is created wrong, then fine, manual intervention. If the system tables are wrong, then how can I trust anything else they may tell me?

|||

This application is run all over the place at my clients' sites. If it were an in-house app I wouldn't mind dropping and recreating tables, but that is horribly impractical in this case. Plus, I've had a few programmers here attempt to be clever and do things like that, only to see it result in data loss. I can't attempt to automate the recreation of tables and moving data around. Too much risk for my taste.

|||

"Personally, some things are just not meant to be automated in my opinion." So true. Experience is the difference between knowing what CAN be done and what SHOULD be done. "Fail-safe systems fail by failing to fail safely."

|||

"This application is run all over the place at my clients' sites."

I don't understand, I would have a script that would run all the sql and apply the changes.

What kind of development lifecycle do you use?

And what's the front end written in, and how do you deploy that?

|||

The app is a VB/.NET app for insurance claims. It is a very complicated application as well. Scripts are fine in many cases, but calls to my procedure are built into it. When we make changes to table structures, we simply send the meta data table. The app will then call my procedure to inspect, and alter if needed, any involved tables. No single client has a standard version of our app, because we will customize anything they want, but the databases are largely the same with the exception of "user tables".

If we send an update that depends on, say, a new table or two or many, we send the meta data file. The app will then test for existence, and if the tables are not there my procedure creates the tables, indexes, keys, and so on. Why send a meta data file and then have to do everything manually?

It seems everyone thinks I'm some kind of dummy for doing this! Our app has been doing things like this for nearly 20 years and the last 10 has been with Oracle or SQL Server depending on which DB the client wants to run it against. I just don't understand how this issue has only appeared now and not before.|||I can't be the only one who has run in to this can I?|||No single client has a standard version of our app because we will customize anything they want...and that is the root of the problem. Trying to be all things to all customers. I saw this as a problem back when I was a consumer of software, and I see it as a problem now that I am a producer of software. Client database get customized out the wazoo until they become administrative nightmares requiring the type of code you have had to implement (albeit for 20 years).
Software vendors should be experts in their industry and code their products according to best practices. Client modification of the database schema should not be allowed.
"You can customize it any way you want" is ultimately a flawed philosophy, and its reductio-ad-absurdum conclusion is an empty box with instructions to install the database server of the client's choice.|||You misunderstand. You may not tamper with the database but we will customize our programs. That is what I meant by that. Our Workers' Comp claim table is called WCCLM1 for example. Every client has this and if they get an update, they may need additional columns or columns expanded. if they add something to it, the app wont see it, etc

Oh and we don't ship a shrink wrapped product and tell them to plug it in to whatever they want. We support either Oracle or SQL Server only. We do a data conversion in most cases as well. We are hardly as stupid or sloppy as some may have understood.

Basically, client modifications to the database are not allowed and there is not a single thing that is arbitrary about the database design. Geez, all I need to do is add some columns or expand them without having to write a script every time. This isn't rocket science or reinvention of the wheel.|||Oh and one more thing. "You can customize it anyway you want" is true as long as you pay us to modify it. We don't ship code. We don't let clients ever have that, much less modify it. This isn't some open source trash. This is a 100k plus application.|||So you still run into the headache of maintaining code for as many versions as you have clients, but you have found a way to charge for the administrative hassle.
I'm not dissing your business plan. Whatever you and your clients agree to and makes you money is fair game. But I don't see how you can be surprised that rolling out updates in such a situation is a pain in the keister. It is difficult by design.|||Well, technically we only have two versions, 5.4 of our old system and 6.424 of our newer .NET version. Both use the same database, both use the same programs at any client site. What may be different is tables for a customized screen for a client claim program, perhaps a custom AP, HR, etc. interface. Our code and/or business model is not the mess some of you think it is. I guess I'm just not explaining it well.

Still, forget about all that. Even if it was a wreck, what does that have to do with what I consider a substantial flaw in this RDBMS? Not a damn thing. The bottom line is that I should be able to add a column (oh, and this happens even if I add the column manually and not via a stored procedure, in case anyone missed that) and have that recorded in the proper ordinal position in the system tables and views.

The feedback I'm getting here is that I'm stupid for expecting to actually utilize the ALTER TABLE command. Someone explain why it's there if it can't be relied upon? You guys can talk all day long about how we distribute our app but no one can seem to offer anything of value with regard to the issue at hand.|||No ... the feedback you are getting is that order has no meaning in a relational database. Each column relates information about the primary key. No matter what order I create and populate columns in the table, if I want them to come out in a specific sequence, I must specify that order in the select statement!|||I think the problem is that when you add a column by altering a table it increments the column number. If you must ensure that the column numbers do not have gaps, then you must use DROP/CREATE.|||No ... the feedback you are getting is that order has no meaning in a relational database. Each column relates information about the primary key. No matter what order I create and populate columns in the table, if I want them to come out in a specific sequence, I must specify that order in the select statement!

Funny how I don't have this issue in Oracle. You don't even know what the hell you are talking about. Since when did I say the order they are in inside the database matters to me? I never did. The application depends on things being a certain way. I didn't design that shit. Programmers did, and now I have to deal with it. Still, how dare I expect things to be correct in SYSTEM TABLES! I'm not an idiot so I wish you people would stop assuming I am.

Thanks for nothing. All of you.|||I think the problem is that when you add a column by altering a table it increments the column number. If you must ensure that the column numbers do not have gaps, then you must use DROP/CREATE.

I already got that but I don't have to do this with Oracle. You have to understand something. A procedure I write in SQL Server, I've got to write a like version for Oracle. There is no way around that. WTF would I want to do things by hand on one RDBMS and then have the luxury of automation on another?|||sweet Jesus the booty pleaser.

I bang this drum everywhere I go. You make money by coding a version of your product once and selling it a million times, not by selling your product and coding it a million times. My current employer still does not understand this concept yet either.|||I already got that but I don't have to do this with Oracle. You have to understand something. A procedure I write in SQL Server, I've got to write a like version for Oracle. There is no way around that. WTF would I want to do things by hand on one RDBMS and then have the luxury of automation on another?
I realize that it doesn't help you much that your code relies on things that it shouldn't, and that is causing your problems. I think that we've all been there with code that was written long ago by folks that made assumptions that don't hold true with newer tools.

If your code relies on specific orders of attributes, and no gaps in the values in system tables (when your code shouldn't even know what those number are or that those gaps exist), then your code is faulty. You can work around the fault in your code by using the drop and create that Blindman suggested.

I'm sorry that you are stuck maintaining this code, but all we are trying to do is help... Getting hostile with us won't make your job any easier, and if you irritate the people that help you, that will probably make your job harder.

-PatP|||See, no one is helping me. That is why I'm getting a bit hot. You wrote that my code is faulty. No, it isn't. I'm under certain constraints of an application. One of them is my procedure being called by the application. Now suppose I do this drop/create deal. What if the table has ten million rows? That will take some time to run. Meanwhile, the user thinks that the app is hung up and does something stupid that maybe even implicit transactions may not be able to help with?

Again, I have to work around what programmers have done, not them working around me. Just so everyone knows, I could not care less what order columns are in the table. The programmers, for whatever reason, do. If a table is created or altered outside of the order of the data dictionary table, bad things happen. That IS NOT MY DESIGN.

I could create some rolling number sequence in SQL Server to get past this. My point is, why should I have to? I can't irritate people who help me because I've got no help. ya know? Everyone is talking like this is just normal. It just isn't.

Go ahead and lock this thread. It's going nowhere. Fast.|||Oh, and before it is locked (if it is): understand that my gripe isn't with this ordinal position as much as it makes me wonder what else I can't believe from the system. What else is bad? Is that so nuts of me?|||The system tables are correct, based on the rules for SQL Server. Please explain what you can't believe. I don't get what the problem is.

If your code isn't faulty, then it works "as is". If your code is faulty, then Blindman's suggestion will help you work around that fault. Again, I don't get what the problem is.

Please help me to understand.

-PatP|||Why would you say the tables are correct? There is a gap in the ordinal value? What rules specifically are you speaking of? If I've got a table with ten columns, I add a column, it is eleven in ordinal position. If I drop it for the sake of doing it and re-add it, it is still eleven in reality, not twelve as SQL Server will record it. Then on to higher numbers if I kept dropping it and re-adding it. Now of course that isn't going to happen (continuous dropping/re-adding) but I'm just trying to get the point across.

Now in Oracle, I can do this all day long and user_tab_columns will show the correct column_id (1,2,3,4,5,6,7,8,9,10,11) no matter how many times I drop and re-add a column. As I've stated before, I could easily create some rolling number while looking at syscolumns or something and just override what the ordinal value is in the table but I just don't like that. I'd rather do "fieldlocation+1<>colid" (note field location in my meta file begins with zero as opposed to 1 in syscolumns) to find something out of order, or a column that needs to be added. If I find something out of order, my procedure returns a message saying I won't do anything. In this case, manual intervention is required and that is how it should be. It is the application that will puke if columns are out of order, not my code. I really have to detect this though.

So, to sum it up, I'd love for colid to be in an order without gaps regardless of a column being dropped and then a new one added. It isn't right that SQL Server does anything else. And I'll say it again, what else is screwy if that is happening? Just like Blindman's example about adding length to a column. My procedure does that, too. Now I can't expand a column correctly because maybe what is in syscolumns is actually wrong?

I'm not saying I can't get past this particular issue, I just wonder what else is wrong and I resent the fact I should have to jump through hoops to get a true sequential order from syscolumns.|||Being a user forum, we can really only deal with "what is", rather than "what should be". None of us can change the way syscolumns records column IDs. Perhaps you should take this to Microsoft, and see what they say?|||A procedure I write in SQL Server, I've got to write a like version for Oracle. There is no way around that.
Yes. Are you just now catching on to the fact that Oracle and SQL Server are two different database engines? There are a lot of things that are easy to do in SQL Server that are difficult to do in Oracle as well. Frankly, the idea of a single set of code that will run on both Oracle and SQL Server is a myth propagated by software sales people that are either bad programmers, ignorant programmers, or dishonest programmers.
You are complaining that the system tables in SQL Server are not correct.
YES THEY ARE! THEY WORK VERY WELL FOR WHAT THEY WERE INTENDED TO DO. SQL SERVER DOES NOT CARE WHETHER THERE ARE GAPS IN THE VALUES.
The system tables are designed to be used by...wait for it..wait for it...the SYSTEM! Your application has no business using them in the way it does, but if you insist upon it then you must abide by the rules of SQL Server, just as you have to abide by the rules of Oracle when using Oracle's system tables.
DUH.|||I could poop in one hand and wish in the other to see which fills up faster! Those people at MS would be of no use. I'm sure they would offer a workaround of "don't do that". I've found all kinds of crazy things with SQL Server. Like looking at sysindexes in a particular database as another user in a different database. I get totally erratic results. That is just an example.

I've stopped looking for an answer from anyone because I know what I have to do. What got my back up is what I perceived as people thinking I was nuts for expecting things in certain tables to be in what I consider a sensible state. That is all.|||Yes. Are you just now catching on to the fact that Oracle and SQL Server are two different database engines? There are a lot of things that are easy to do in SQL Server that are difficult to do in Oracle as well. Frankly, the idea of a single set of code that will run on both Oracle and SQL Server is a myth propagated by software sales people that are either bad programmers, ignorant programmers, or dishonest programmers.
You are complaining that the system tables in SQL Server are not correct.
YES THEY ARE! THEY WORK VERY WELL FOR WHAT THEY WERE INTENDED TO DO. SQL SERVER DOES NOT CARE WHETHER THERE ARE GAPS IN THE VALUES.
The system tables are designed to be used by...wait for it..wait for it...the SYSTEM! Your application has no business using them in the way it does, but if you insist upon it then you must abide by the rules of SQL Server, just as you have to abide by the rules of Oracle when using Oracle's system tables.
DUH.

Dude, get over yourself. I know that they are two different RDBMSs. I never said one set of code should run for both. I wrote that functionality that exists for one back end needs to exist for the other. That means two different procedures. However, I'd like to keep things simple. Thanks for the heads up though. And if the system tables are not intended to be used by anything other than the system, how should I determine a table's true structure? You can save your smart-ass responses. They've been as little help as the rest of your post.|||I've stopped looking for an answer from anyone because I know what I have to do. What got my back up is what I perceived as people thinking I was nuts for expecting things in certain tables to be in what I consider a sensible state. That is all.

Don't think so. You got upset because you define Oracle as sensible. I disagree. Cursors suck. Since Oracle doesn't fit the SQL Server paradigm, Oracle must not be sensible, and a piece of c**p when it comes to the efficient use of limited system resources.

We agree to disagree!|||I never said one was perfect. Getting data from Oracle is like getting blood from a rock at times. Sometimes things are easy over "here" and hard "there" I never said one was better than the other. I mentioned Oracle as something to compare the situation with. Regardless, I've got to deal with things in both Oracle and SQL. It is just how things are.|||dba_one, we can all sympathize with your predicament, and i'm sure many of us can understand your frustration

however, at no point are you allowed to say things like "Dude, get over yourself" or "You can save your smart ass responses"

that's just not allowed|||Heck, that is tame but regardless, I didn't come here to fight. I didn't come here to have anyone just talk down to me, either. Regardless, I was looking for insight on a particular issue but there isn't much that can be done about this particular thing so I've got to deal with it. So be it. I'm not mad, I'm just not the nice guy all the time. No harm intended and no harm done.|||OK.

Basically, the answer to your post boils down to:
A) The internals of SQL Server were not designed with your application in mind.
B) We can't change the internals of SQL Server.
C) No, not many people on this forum have run into the problem you are experiencing, because we have not designed applications such as yours.
D) We sympathize with your predicament.

Now let's all go back to playing nice.|||What the hell...fow did I miss all of this?|||Did you ever show us the code that is making these changes btw?

That might help out a lot|||What the hell...fow did I miss all of this?
fow indeed!

:)
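
For later readers: the colid behavior argued over in this thread is easy to reproduce, and a gap-free "dense" ordinal can be derived without dropping the table. A minimal T-SQL sketch (table and column names are invented for illustration; the final query is one possible workaround, not the poster's actual code):

```sql
-- Reproduce the gap: create a table, drop a column, add a new one.
CREATE TABLE dbo.DemoTable (col1 INT, col2 INT, col3 INT);

ALTER TABLE dbo.DemoTable DROP COLUMN col2;
ALTER TABLE dbo.DemoTable ADD col2b INT;  -- assigned colid 4, not the vacated 2

-- syscolumns now reports colid 1, 3, 4 for this table.
SELECT name, colid
FROM syscolumns
WHERE id = OBJECT_ID('dbo.DemoTable')
ORDER BY colid;

-- Workaround: derive a gap-free ordinal by counting columns with a
-- lower-or-equal colid, instead of trusting colid itself (SQL 2000-safe).
SELECT s.name,
       s.colid,
       (SELECT COUNT(*)
        FROM syscolumns s2
        WHERE s2.id = s.id AND s2.colid <= s.colid) AS dense_ordinal
FROM syscolumns s
WHERE s.id = OBJECT_ID('dbo.DemoTable')
ORDER BY s.colid;

DROP TABLE dbo.DemoTable;
```

The derived ordinal stays sequential (1, 2, 3) no matter how many drop/add cycles the table has been through, which is the behavior the thread wanted from colid itself.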

An In-List Parameter

Is it possible (And if it is How do I do it) to create a report
parameter which allows a user to select a list of values and then
passes the entire list into the query?
IE. Parameter query returns a list of State abbreviations and the
user can then choose several which then get passed into the query as:
AND STATE IN ('GA', 'CA', 'NY')
TIA for any assistance anyone can provide me. I'm new to reporting
services and the help files haven't been all that much help.

See related post:
http://msdn.microsoft.com/newsgroups/default.aspx?dg=microsoft.public.sqlserver.reportingsvcs&mid=aca0f4eb-9766-4437-a5f6-874f7344a827&sloc=en-us.
--
Ravi Mumulla (Microsoft)
SQL Server Reporting Services
This posting is provided "AS IS" with no warranties, and confers no rights.
"Curt" <cmgaviao@.yahoo.com> wrote in message
news:582e9f38.0408190714.544de11c@.posting.google.com...
> Is it possible (And if it is How do I do it) to create a report
> parameter which allows a user to select a list of values and then
> passes the entire list into the query?
> IE. Parameter query returns a list of State abbreviations and the
> user can then choose several which then get passed into the query as:
> AND STATE IN ('GA', 'CA', 'NY')
> TIA for any assistance anyone can provide me. I'm new to reporting
> services and the help files haven't been all that much help.
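
As a footnote for later readers: true multi-value parameters only arrived in RS 2005; on RS 2000 a common workaround was a single comma-separated string parameter matched in the WHERE clause. A hedged sketch (@StateList and the table name are hypothetical):

```sql
-- @StateList is a single string parameter such as 'GA,CA,NY'.
DECLARE @StateList VARCHAR(200);
SET @StateList = 'GA,CA,NY';

-- Delimiter-padding avoids dynamic SQL and prevents partial matches
-- (',GA,' will not match inside ',GAX,').
SELECT *
FROM dbo.Customers              -- hypothetical table with a STATE column
WHERE CHARINDEX(',' + STATE + ',', ',' + @StateList + ',') > 0;

-- In RS 2005 a true multi-value parameter lets the query be simply:
--   WHERE STATE IN (@State)
-- and Reporting Services expands the selected values automatically.
```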

Sunday, March 11, 2012

An error was encountered in the transport layer

I am fairly new to SSAS 2005. I have just created one cube. I was able to create an Excel pivot table that connected to this cube and had all the functionality that I expected. I then published the pivot table as a web page. When I view the web page on the server where SQL 2005 is loaded, the web page comes up with all of the expected functionality. But when I try to view the web page from another computer on my network, I get the error

The query could not be processed:

An error was encountered in the transport layer

The peer prematurely closed the connection.

Any suggestions on how to proceed.

Even though this is a web page, the page actually contains an ActiveX pivot table control which executes on the client machine. The client machines will need to have the OLEDB provider for SSAS 2005 installed in order to connect to SSAS 2005. You can download the "Microsoft SQL Server 2005 Analysis Services 9.0 OLE DB Provider " from the feature pack download page (about half way down) at http://www.microsoft.com/downloads/details.aspx?FamilyID=df0ba5aa-b4bd-4705-aa0a-b477ba72a9cb&DisplayLang=en

If this still does not help, there is a great article on troubleshooting connectivity issues with SSAS at http://sqljunkies.com/WebLog/edwardm/archive/2006/05/26/21447.aspx

|||I've loaded SSAS OLE DB 9.0. I've allowed anonymous users and I still get the error. Any other suggestions?|||Try using the domain name in front of your user name (domain\username)|||

You could try tracing the SSAS server while you try to connect to see if there are any error events that appear.

The following thread from the newsgroup might also provide some ideas:

http://groups.google.com/group/microsoft.public.sqlserver.olap/browse_frm/thread/fda9edcc5cabaaa6/ae553b59c97dd4b5?lnk=st&q=&rnum=7#ae553b59c97dd4b5

|||

Hi,

I'm having a similar problem except it's less consistent (*sigh*). The situation: I have SSAS 2005 running on a non-domain machine (it's a test setup) along with Project Server 2007. I then have a few client machines I'm trying to connect with. The hitch is that one of them is able to connect with any method I try (.udl file, Excel PivotTable, or the Data Analysis Web components in the Project Server) - in SQL Server Profiler, I see the correct NTUserName appear (I have a couple of users created on the server). However, when I try to connect with another machine using Data Analysis components with IE7, I get the error "An error was encountered in the transport layer." and "The peer prematurely closed the connection."; the profiler shows the NTUserName is "Anonymous", yet I'm using the same user credentials. To make things interesting, on the problem machine, I can successfully connect using the .udl file method and Excel (the NTUserName is correct). Just for completeness, I have no problems when trying to connect locally from the server.

I'm reasonably sure the settings for Internet Explorer are correct and that I have the correct components installed (like OLE DB Provider for Analysis Services 9.0) otherwise no method would work. I'm just not sure why the user credentials are making it okay from one machine and not the other (and from different methods, to boot). Just for funsies, I enabled Anonymous access to SSAS but that had no difference. For reference, the machine that works is Vista and the machine that does not is XP SP2. Both machines are fully patched.

I have tried the suggestions in both of these following Web pages (and a variety of other postings) to no avail:

http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/cisql2005ascs.mspx

and

http://sqljunkies.com/WebLog/edwardm/

Other things:

The firewall on the server is off
"Access data sources across domains" is enabled (IE option)
"Automatic logon with current user name and password" is enabled in all browsers I try (IE option)

Thursday, March 8, 2012

An error has occurred during report processing. (rsProcessingAborted)

An error has occurred during report processing. (rsProcessingAborted) Cannot create a connection to data source 'database03'. (rsErrorOpeningConnection) Cannot open database requested in login 'database'. Login fails. Login failed for user 'sam'.

Hi Raj,

What is your question? The error is fairly clear. The query that returns data in your report cannot access the data source defined in your report for User 'sam' and 'database03'.
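
A common cause of this exact chain of messages is a server login that has no user mapped in the target database. A hedged T-SQL sketch of the usual fix on SQL Server 2005 (the database name is a placeholder; match it to the database named in the data source's connection string):

```sql
-- Map the failing login to a database user and grant read access
-- for the report query (names are placeholders).
USE YourReportDatabase;
CREATE USER sam FOR LOGIN sam;
EXEC sp_addrolemember 'db_datareader', 'sam';
```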

An error has occurred during report processing.

  • Hi all,

  • When I try to access the Report Manager through the URL I get this error:

  • An error has occurred during report processing.

  • Cannot create a connection to data source 'rs2005sbsDW'.

  • The ConnectionString property has not been initialized.

  • Could someone help me out.

  • Could someone out there help me out?

    Ronald

Saturday, February 25, 2012

    An "EXISTS" Problem

    Hey, guys,
    Below are my DDL,
    CREATE TABLE [dbo].[test1] (
    [id] [int] IDENTITY (1, 1) NOT NULL ,
    [A] [varchar] (50) COLLATE Chinese_Taiwan_Stroke_CI_AS NULL ,
    [B] [varchar] (50) COLLATE Chinese_Taiwan_Stroke_CI_AS NULL
    )
    CREATE TABLE [dbo].[test2] (
    [A] [varchar] (50) COLLATE Chinese_Taiwan_Stroke_CI_AS NULL ,
    [B] [varchar] (50) COLLATE Chinese_Taiwan_Stroke_CI_AS NULL
    )
    test1 table
    id A B
    --
    1 c 1
    2 c 2
    3 c 3
    4 c 4
    6 b
    9 d
    10 e
    test2 table
    A B
    --
    c 1
    b 1
    b 2
    d 1
    result table
    id A B
    --
    2 c 2
    3 c 3
    4 c 4
    6 b
    9 d
    Here is my sql to get the result table
    SELECT P.id,P.A,P.B
    FROM test1 P left outer join test2 R on P.A=R.A
    WHERE (NOT EXISTS
    (SELECT *
    FROM test2 Q
    WHERE P.A = Q.A AND P.A + P.B = Q.A + Q.B))
    and R.A is not null
    group by P.id,P.A,P.B
    Can this SQL command be neater?
    thanks a lot.
    Allen
    Here are a couple of other methods, although 'cleaner' is a bit subjective.
    Personally, I prefer the NOT EXISTS technique over LEFT JOIN.
    INSERT INTO test1 VALUES(1,'c',1)
    INSERT INTO test1 VALUES(2,'c',2)
    INSERT INTO test1 VALUES(3,'c',3)
    INSERT INTO test1 VALUES(4,'c',4)
    INSERT INTO test1 VALUES(6,'b', NULL)
    INSERT INTO test1 VALUES(9,'d', NULL)
    INSERT INTO test1 VALUES(10,'e', NULL)
    GO
    INSERT INTO test2 VALUES('c', 1)
    INSERT INTO test2 VALUES('b', 1)
    INSERT INTO test2 VALUES('b', 2)
    INSERT INTO test2 VALUES('d', 1)
    GO
    SELECT P.id, P.A, P.B
    FROM test1 P
    JOIN test2 R ON P.A = R.A
    WHERE NOT EXISTS
    (
    SELECT *
    FROM test2 Q
    WHERE
    P.A = Q.A AND P.B = Q.B
    )
    GROUP BY P.id, P.A, P.B
    GO
    SELECT P.id, P.A, P.B
    FROM test1 P
    JOIN test2 R ON P.A = R.A
    LEFT JOIN test2 Q ON P.A = Q.A AND P.B = Q.B
    WHERE Q.A IS NULL
    GROUP BY P.id, P.A, P.B
    GO
    --
    Hope this helps.
    Dan Guzman
    SQL Server MVP
    "Allen" <cpchen@.cht.com.tw> wrote in message
    news:u19bQ77tDHA.2408@.tk2msftngp13.phx.gbl...
    > Hey, guys,
    > Below are my DDL,
    > CREATE TABLE [dbo].[test1] (
    > [id] [int] IDENTITY (1, 1) NOT NULL ,
    > [A] [varchar] (50) COLLATE Chinese_Taiwan_Stroke_CI_AS NULL ,
    > [B] [varchar] (50) COLLATE Chinese_Taiwan_Stroke_CI_AS NULL
    > )
    > CREATE TABLE [dbo].[test2] (
    > [A] [varchar] (50) COLLATE Chinese_Taiwan_Stroke_CI_AS NULL ,
    > [B] [varchar] (50) COLLATE Chinese_Taiwan_Stroke_CI_AS NULL
    > )
    > test1 table
    > id A B
    > --
    > 1 c 1
    > 2 c 2
    > 3 c 3
    > 4 c 4
    > 6 b
    > 9 d
    > 10 e
    > test2 table
    > A B
    > --
    > c 1
    > b 1
    > b 2
    > d 1
    >
    > result table
    > id A B
    > --
    > 2 c 2
    > 3 c 3
    > 4 c 4
    > 6 b
    > 9 d
    > Here is my sql to get the result table
    > SELECT P.id,P.A,P.B
    > FROM test1 P left outer join test2 R on P.A=R.A
    > WHERE (NOT EXISTS
    > (SELECT *
    > FROM test2 Q
    > WHERE P.A = Q.A AND P.A + P.B = Q.A + Q.B))
    > and R.A is not null
    > group by P.id,P.A,P.B
    >
    > Can this SQL command be neater?
    >
    > thanks a lot.
    > Allen
    >

    AMO: Problem when dropping a role from Analysis Services 2005 by using .NET programming and AMO

    Hi All,

    I am working on Analysis Services 2005 and managing it with AMO in .NET 2.0.

    Here I am able to create a role, and drop it if the role has no permissions, but if the role has any permissions I am not able to drop it from the database. Could anyone please tell me how to remove the permissions on that role by using AMO and .NET?

    Thanks in Advance,

    vishwesh

    Have you tried searching the DatabasePermissions collection of the database for Permissions referring to the Role in question, and removing each of these from the database?

    Programming AMO Security Objects

    ...

    A role cannot be dropped if there is any Permission object associated with the role. To drop a role, all Permission objects in the Database objects must be searched, and the Role removed from permissions, before the Role can be dropped from the Database.

    |||

    Hi,

    I have gone through the document, but can you send sample code showing how to remove the database permissions?

    |||

    I don't have such a working sample at the moment - maybe someone else on the Forum does?
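
Since no sample was posted, here is a minimal C# sketch of the approach the quoted documentation describes: strip the role out of every permission collection, then drop it. It is untested, the server/database/role names are hypothetical, and other permission collections (mining structures, for example) may need the same loop:

```csharp
using Microsoft.AnalysisServices;   // AMO

class DropRoleSketch
{
    static void Main()
    {
        Server server = new Server();
        server.Connect("Data Source=localhost");       // hypothetical server
        Database db = server.Databases["MyOlapDb"];    // hypothetical database
        Role role = db.Roles["MyRole"];                // hypothetical role

        // Remove database-level permissions that reference the role.
        for (int i = db.DatabasePermissions.Count - 1; i >= 0; i--)
            if (db.DatabasePermissions[i].RoleID == role.ID)
                db.DatabasePermissions.RemoveAt(i);

        // Remove cube- and dimension-level permissions as well.
        foreach (Cube cube in db.Cubes)
            for (int i = cube.CubePermissions.Count - 1; i >= 0; i--)
                if (cube.CubePermissions[i].RoleID == role.ID)
                    cube.CubePermissions.RemoveAt(i);

        foreach (Dimension dim in db.Dimensions)
            for (int i = dim.DimensionPermissions.Count - 1; i >= 0; i--)
                if (dim.DimensionPermissions[i].RoleID == role.ID)
                    dim.DimensionPermissions.RemoveAt(i);

        // With no Permission objects left referring to it, the role can go.
        db.Roles.Remove(role);
        db.Update(UpdateOptions.ExpandFull);
        server.Disconnect();
    }
}
```

The backward loops avoid skipping entries while removing from the collections; Update with ExpandFull pushes the whole changed database definition back to the server in one deploy.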