Saturday, October 16, 2010

Connecting SQL Reporting Services to a SharePoint List

SQL 2005 Reporting Services Drill down

Reporting Services 2008

Friday, October 15, 2010

Importing/Exporting Settings with Visual Studio 2005

UPDATES


If you need to receive every update from this website,
please send your email ID to:

sqlserverbook@gmail.com

SQL Server 2005 Service Broker



An Introduction to SQL Server Service Broker

Introduction

With Service Broker, a feature in Microsoft SQL Server 2005, internal or external processes can send and receive guaranteed, asynchronous messages by using extensions to Transact-SQL Data Manipulation Language (DML). Messages can be sent to a queue in the same database as the sender, to another database in the same SQL Server instance, or to another SQL Server instance either on the same server or on a remote server.
To better understand Service Broker, familiarity with the key concepts of queues, dialogs, conversation groups, and activation is helpful. These are discussed briefly in this section.

Queues

Service Broker uses queues to provide loose coupling between the message sender and the message receiver. The sender can use the SEND command to put a message in a queue and then continue on with the application, relying on Service Broker to ensure that the message reaches its destination.
Queues permit a lot of scheduling flexibility. For example, the sender can send out multiple messages for multiple receivers to process in parallel. The receivers might not process the messages until long after they were sent, but because incoming messages are queued, the receivers can process them at their own rate and the sender doesn't have to wait for the receivers to finish before continuing.
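
To make this concrete, here is a minimal sketch of the Transact-SQL involved. All of the names (everything under //example/, the queue names, and the XML payload) are illustrative only:

create message type [//example/OrderMsg] validation = well_formed_xml;
create contract [//example/OrderContract]
    ([//example/OrderMsg] sent by initiator);
create queue OrderSenderQueue;
create queue OrderReceiverQueue;
create service [//example/OrderSender] on queue OrderSenderQueue;
create service [//example/OrderReceiver]
    on queue OrderReceiverQueue ([//example/OrderContract]);

declare @h uniqueidentifier;
begin dialog conversation @h
    from service [//example/OrderSender]
    to service '//example/OrderReceiver'
    on contract [//example/OrderContract];
-- the sender continues immediately; delivery is Service Broker's job
send on conversation @h
    message type [//example/OrderMsg]
    (N'<Order Id="1"/>');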

Dialogs

Service Broker implements dialogs, which are bidirectional streams of messages between two endpoints. All messages in a dialog are ordered, and dialog messages are always delivered in the order they are sent. The order is maintained across transactions, across input threads, across output threads, and across crashes and restarts. Some message systems ensure message order for the messages sent or received in a single transaction but not across multiple transactions, making Service Broker dialogs unique in this regard.
Each message includes a conversation handle that uniquely identifies the dialog that is associated with it. For example, an order entry application might have dialogs open simultaneously with the shipping application, the inventory application, and the billing application. Because messages from each application have a unique conversation handle, it's easy to tell which application sent each message.
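
On the receiving side, the conversation handle comes back as a column of the RECEIVE result set. A minimal sketch, reusing the hypothetical OrderReceiverQueue from above:

declare @h uniqueidentifier, @type sysname, @body varbinary(max);
waitfor (
    receive top (1)
        @h    = conversation_handle,  -- identifies the dialog
        @type = message_type_name,
        @body = message_body
    from OrderReceiverQueue
), timeout 5000;  -- stop waiting after 5 seconds if nothing arrives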

Conversation Groups

Service Broker provides a way of grouping all the dialogs that are used for a particular task. This method uses conversation groups. In our previous order entry example, all the dialogs associated with processing a particular order would be grouped into a single conversation group. The conversation group is implemented as a conversation group identifier, which is included with all messages in all dialogs contained in the conversation group. When a message is received from any of the dialogs in a conversation group, the conversation group is locked with a lock that is held by the receiving transaction. For the duration of the transaction, only the thread that holds the lock can receive messages from any of the dialogs in the conversation group. This makes our order entry application much easier to write because even though we use many threads for scalability, any particular order is only processed on one thread at a time. This means we don't have to make our application resilient to problems that are caused by the simultaneous processing of a single order on multiple threads.
One of the most common uses for the conversation group identifier is to label the state that is associated with a particular process. If a process involves many messages over time, it probably doesn't make sense to keep an instance of the application running through the whole process. For example, the order entry application will scale better if, between messages, any global state that is associated with processing an order is stored in the database and retrieved when the next message associated with that order is received. The conversation group identifier can be used as the primary key in the state tables to enable quick retrieval of the state associated with each message.
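
As a sketch of that pattern, a hypothetical OrderState table keyed on the conversation group identifier might be used like this:

create table OrderState (
    conversation_group_id uniqueidentifier primary key,
    order_status varchar(20) not null
);

declare @cg uniqueidentifier;
begin transaction;
-- lock the next conversation group that has messages available; other
-- threads can't receive from its dialogs until this transaction ends
waitfor (
    get conversation group @cg from OrderReceiverQueue
), timeout 5000;

-- retrieve the saved state for this group, keyed on the group identifier
select order_status
from OrderState
where conversation_group_id = @cg;

-- ... receive & process the group's messages here ...
commit;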

Activation

You use the activation feature of Service Broker to specify a stored procedure that will handle messages destined for a particular service. When messages arrive for a service, Service Broker checks whether there is a stored procedure running that can process the messages. If there isn't a running message-processing stored procedure, Service Broker starts one. The stored procedure then processes messages until the queue is empty, after which it terminates. Moreover, if Service Broker determines that messages are arriving faster than the stored procedure can process them, it starts additional instances of the stored procedure until enough are running to keep up with the incoming messages (or until the configured maximum number is reached). This ensures that the right number of resources for processing incoming messages are always available.
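
Activation is configured on the queue itself. A sketch, assuming a hypothetical stored procedure dbo.ProcessOrders that loops on RECEIVE until the queue is empty:

alter queue OrderReceiverQueue
with activation (
    status = on,
    procedure_name = dbo.ProcessOrders,  -- hypothetical reader procedure
    max_queue_readers = 5,  -- the configured maximum number of instances
    execute as self
);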

Why Use Asynchronous, Queued Messaging?

Queues enable the flexible scheduling of work, which can translate to big improvements in both performance and scalability. To see how, go back to the order entry example. Some parts of an order must be processed before the order can be considered complete, such as the order header, available to promise, and order lines. But other parts realistically don't have to be processed before the order is committed; for example, billing, shipping, inventory, and so on. If the "delayable" piece of the order can be processed in a guaranteed but asynchronous manner, the core part of the order can be processed faster.
Asynchronous messaging can also provide opportunities for increased parallelism. For example, if you need to check the customer's credit and check availability for ordered items, starting both processes simultaneously can improve overall response time.
Queuing can also enable systems to distribute processing more evenly, reducing the peak capacity required by a server. For example, a typical incoming order rate might look something like this:
[Figure 1: typical incoming order rate over a day]
There is a peak at the beginning of the day and another one at the end. If each order is entered into the shipping system as it is created, the shipping system's load would look like this:
[Figure 2: shipping system load when each order is entered as it is created]
The afternoon peak is bigger because that's when the shipping paperwork is done for outgoing shipments. If the shipping system is connected to the order entry system with a queue, the peaks can be leveled by shifting some of the work to the slack incoming order times:
[Figure 3: shipping system load leveled by queuing]

Why Use Transactional Messaging?

Service Broker supports transactional messaging, which means that messages are sent and received as transactions. If a transaction fails, the message sends and receives are rolled back, and don't take effect until the transaction has processed the messages successfully and committed.
Transactional messaging makes programming a messaging application much more straightforward because you can freely scatter sends and receives wherever they make sense and nothing happens until the transaction commits. If the transaction gets rolled back, none of the sends take place, and the messages that were received go back in the queue so they will be received and processed again.
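
In outline, a receiving transaction looks something like this (a sketch, again using the hypothetical OrderReceiverQueue):

begin transaction;

declare @h uniqueidentifier, @body varbinary(max);
receive top (1)
    @h = conversation_handle,
    @body = message_body
from OrderReceiverQueue;

-- ... process @body & update application tables here ...

-- commit makes the receive (and any sends) permanent together;
-- a rollback would put the message back on the queue to be retried
commit;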

How Service Broker Solves the Hard Problems in Messaging

Messaging applications have been around for a long time, and there are compelling reasons to build them. So why aren't there more of them? The answer is that messaging applications are hard to get right. SQL Server Service Broker, however, solves some of the most difficult messaging application issues: message ordering, coordination, multithreading, and receiver management.

Message Ordering

In traditional reliable messaging applications, it's easy to get messages delivered out of order. For example, application A sends messages 1, 2, and 3. Application B receives and acknowledges 1 and 3, but experiences an error with 2, so application A resends it. However, now 2 is received after 3. Traditionally, programmers dealt with this problem by writing the application so that order didn't matter, or by temporarily caching 3 until 2 arrived so the messages could be processed in order. In contrast, Service Broker handles this transparently, so all messages in a dialog are received in the order sent, with no gaps in the message sequence.
A related problem is duplicate delivery. In the previous example, if application B received message 2, but the acknowledgement message back to application A was lost, application A would resend 2 and application B would now receive 2 twice. Again, Service Broker ensures that messages are never delivered twice, even if the power is lost in the middle of a transaction.

Coordination

Messages are generally stand-alone entities, which can make it difficult to determine which conversation a message came from. For example, you may send thousands of messages to an inventory service requesting inventory updates. The inventory service may respond to some of these messages almost immediately and take a very long time to respond to others, making it difficult to decide which response message corresponds to which inventory request.
With Service Broker, by contrast, both the dialog handle and the conversation group identifier are included with every message, making it easy to determine which order and which request each response belongs to. (Some messaging systems have a correlation ID you can set to make this determination, but with dialogs, this is not necessary.)

Multithreading

One of the most difficult issues in a messaging application is making a multithreaded reader work correctly. When multiple threads are receiving from the same queue, it's always possible for messages to get processed out of order, even if they are received in order. For example, if a message containing an order header is received by thread A and a message containing an order line is later received by thread B, it's possible that the order line transaction will attempt to commit first and fail a referential constraint because the order doesn't exist yet. Although the order line transaction will simply roll back and be retried until the order header exists, this is still a waste of resources.
Service Broker solves multithreading issues by putting a lock on the conversation group when a message is read, so that no other thread can receive associated messages until the transaction commits. Service Broker makes multithreaded readers work simply and reliably.

Receiver Management

In many reliable messaging systems, the application that receives messages from a queue must be started before messages are received. In most cases, the user must decide how many application instances or threads should be running in each queue. If this is a fixed number and the message arrival rate varies, there are either too many or too few queue readers running most of the time.
Service Broker solves receiver management issues by activating queue readers as required when messages arrive. Moreover, if a reader crashes or the system is rebooted, readers are automatically started to read the messages in the queue. Service Broker does many of the same kinds of application management tasks that are typically handled by a middle-tier transaction-processing monitor.

Why Build Messaging into the Database?

Why is Service Broker part of the database engine? Wouldn't it work as well if it were an external application that used the database to store its messages? There are several reasons why the database is the right place for Service Broker to be.

Conversation Group Locking

Service Broker makes multiple queue readers possible by locking the conversation group; however, locking the conversation group with normal database commands is almost impossible to accomplish efficiently. Service Broker accordingly uses a new kind of database lock, and only Service Broker commands understand this lock type.

Remote Transactional Receive Handling

Some messaging systems restrict transactional messaging to receiving applications that are running on the same server as the queue. Service Broker, by contrast, supports remote transactional receives from any server that can connect to the database.

Common Administration

One of the issues with transactional messaging systems is that if the messages are stored in a different place than the data, it's possible for the message store and the database to get out of synch when one or the other is restored from backup. With a single database for both messages and application data in Service Broker, this is very hard to get wrong. When your data, messages, and application logic are all in the database, there is only one thing to back up, one place to set up security, and one thing to administer.

Direct Sends to Receive Queue

Because Service Broker is integrated into the database engine, a message that is addressed to another queue in any database in the same SQL Server instance can be put directly into the receive queue, bypassing the send queue and greatly improving performance.

Common Language Support

The messaging part of an application and the data part of an application use the same language and tools in a Service Broker application. This leverages the developer's familiarity with Microsoft ActiveX Data Objects (ADO) and other database programming techniques for message-based programming. With the CLR (common language runtime) stored procedures available in SQL Server 2005, stored procedures that implement a Service Broker service can be written in a variety of languages and still take advantage of the common administration benefits of Service Broker.

Single-Connection Execution Capability

Service Broker commands can be executed over the same database connection as the rest of the Transact-SQL used by the application. Using a single connection means that a messaging transaction doesn't need to be a distributed transaction, as it would have to be if the messaging system were external to the database.

Enhanced Security

Because Service Broker messages are handled internally by the database, the access permissions of the message sender can be easily checked against database objects. If the message system were an external process, a separate database connection would be required for each user who is sending messages. Having the same identity for the database and the messaging system makes security easier to set up correctly.

Transmission Queue Visibility Across Databases

Because Service Broker runs in the context of the database instance, it can maintain an aggregate view of all messages that are ready to be transmitted from all databases in the instance. This capability enables each database to maintain its own transmission queues for backup and administration while still maintaining fairness in resource usage across all the databases in the instance, something that would be very difficult, if not impossible, for an external messaging manager to do.
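
Each database's pending outbound messages can be inspected through the sys.transmission_queue catalog view; for example:

select to_service_name, enqueue_time, transmission_status
from sys.transmission_queue;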

Conclusion

The unique features of Service Broker and its deep database integration make it an ideal platform for building a new class of loosely coupled services for database applications. Service Broker not only brings asynchronous, queued messaging to database applications but significantly expands the state of the art for reliable messaging.

Thursday, October 14, 2010


Add node to existing SQL Server 2008 failover cluster

Remove active node from existing SQL Server 2008 failover cluster

Remove passive node from existing SQL Server 2008 failover cluster

Integrated single node installation SQL Server 2008 failover cluster

Installing SQL Server 2008

Wednesday, October 13, 2010

LiteSpeed SQL Server Backup and Restore

Creating Analysis Service Project

SSAS: Defining and Deploying a Cube

Which query is scanning that index?


SQL 2005's sys.dm_db_index_usage_stats DMV allows DBAs to analyse how often indexes are being used (or not used), including individual counts for seeks, scans & lookups on each index in your database. I was thrilled when I first saw this DMV, as there was previously no way to determine whether an index wasn't being used & could be safely removed from a table without the risk of queries suddenly full scanning the table & bringing a system's performance to its knees.
Another useful metric reported by sys.dm_db_index_usage_stats is its user_scans column. This reports how many times each index has been fully scanned without a filter predicate (as opposed to range scanned). Full scans are usually bad from a performance perspective, so it is useful to know whether you have any indexes that are being fully scanned.
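
For example, a quick look at seek & scan counts per index in the current database looks something like this (a sketch - object_name() resolves in whichever database you run it in):

select object_name(s.object_id) as table_name,
    i.name as index_name,
    s.user_seeks, s.user_scans, s.user_lookups
from sys.dm_db_index_usage_stats as s
join sys.indexes as i
    on i.object_id = s.object_id and i.index_id = s.index_id
where s.database_id = db_id()
order by s.user_scans desc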
Even more useful would be to know which QUERIES are scanning those indexes so you could do something about tuning them. Unfortunately, sys.dm_db_index_usage_stats doesn't quite go this far, but I recently decided to try & develop a script to work this out. After scratching around the various performance DMVs & not getting very far, I decided to post a question to the SQL MVP discussion forum.
Thankfully, an ex-SQL MVP who now works for Microsoft on the SQL dev team - Umachandar Jayachandran - came up with this innovative approach:

-- Finds cached plans that scan a specific index. Replace the
-- [DBNameHere], [TableNameHere] & [IndexNameHere] placeholders before running.
with xmlnamespaces('http://schemas.microsoft.com/sqlserver/2004/07/showplan' as s)
select top 50 st.text, qp.query_plan, qs.ec as exec_count, qs.tlr as total_reads
from (
  -- aggregate stats per plan; the reads filter limits how many plans get XML-parsed
  select s.sql_handle, s.plan_handle, max(s.execution_count) as ec, max(s.total_logical_reads) as tlr
  from sys.dm_exec_query_stats as s
  where s.max_logical_reads > 100
  group by s.sql_handle, s.plan_handle) as qs
  cross apply sys.dm_exec_query_plan(qs.plan_handle) as qp
  cross apply sys.dm_exec_sql_text(qs.sql_handle) as st
  -- keep only plans whose XML contains an Index Scan on the named index
  cross apply (select distinct relop.value('@Index','nvarchar(130)') as IndexRef
  from qp.query_plan.nodes(
     N'//s:Batch/s:Statements/s:StmtSimple/s:QueryPlan[1]//
     s:RelOp[@PhysicalOp = ("Index Scan")]/*[local-name() = ("IndexScan")]/
     s:Object[@Database = ("[DBNameHere]")
     and @Table = ("[TableNameHere]")
     and @Index = ("[IndexNameHere]")]'
     ) as ro(relop)
  ) as r

Notice that this query leverages three performance DMVs:
* dm_exec_query_stats - provides the base query performance stats
* dm_exec_query_plan - cross applied to the aggregated view of dm_exec_query_stats to pick up the XML query plan for each query
* dm_exec_sql_text - also cross applied to pick up each SQL statement
The innovative part of this query is the way it uses XQuery syntax to interrogate the XML typed query_plan column from dm_exec_query_plan, traversing the XML Showplan structure of the query_plan column's nodes collection. It starts at the Batch element, navigating through Statements/StmtSimple/QueryPlan before searching through RelOp elements for one with PhysicalOp attribute = "Index Scan" and an IndexScan element with a child Object element containing the required DBname, TableName & IndexName. This sounds like quite a mouthful, but if you compare the above sentence with some XML output from a query plan which scans an index you should be able to follow the XML traversal fairly easily.
Behind the scenes, this query is actually trawling the procedure cache, performing the above parse operation on every plan in the cache. On systems with large caches (tens of GB) this might take minutes, but it shouldn't result in significant performance degradation, as the scan is non-blocking & although it works a single CPU hard, it shouldn't consume multiple CPUs (it hasn't in my testing so far, at least). The filter on max_logical_reads attempts to limit the number of plans parsed. Whilst I have set the filter to 100 reads here, you might experiment with this based on the size of the index scans you're evaluating. There's no point parsing query plans whose maximum logical reads count isn't at least the same size as the index scan. Setting this filter to a size close to the number of pages your scanned index is using for storage will usually eliminate the majority of smaller plans within the cache from being XML parsed.
Keep in mind that this query only works with execution plans currently cached in the procedure cache. This is a consequence of using these DMVs, which are limited to accessing the contents of the current cache. Because the procedure cache is fairly transient, results may vary if the system is under memory pressure, if you restart SQL Server regularly, or if you manually clear the procedure cache (e.g. by using dbcc freeproccache).
Another limitation of this query is that it only shows stored procedures, without showing the specific statements within those procedures that are scanning the index. This shouldn't be too hard to work out by looking at the stored proc code, but other DMVs can help zero in on the actual statements. I'll try & improve this aspect of the code when time permits.

Measure database file i/o with fn_virtualfilestats (SQL2000 AND SQL2005)

I'm posting another admin script today which can be used on either SQL Server 2000 or SQL Server 2005 to measure i/o behaviour against specific database files, using the fn_virtualfilestats() system TSQL function.

This script is useful when you need to identify the source of disk bottlenecks on a database server.

For example, you might have identified via a Perfmon counter such as [\\YOURSERVERNAME\LogicalDisk(D:)\Avg. Disk Read Queue Length] that a disk bottleneck exists on a server, but multiple database files exist on the D:\ volume and you're not sure which specific file is causing the i/o activity that is the source of the bottleneck.

The output from this script provides a break-down of specific i/o data on a per-database-file basis, allowing you to see how many bytes have been read from or written to each file (in total, since the last restart of SQL Server) and also the total stall time for each file (ms).

More columns were added to the output of fn_virtualfilestats() in SQL Server 2005, providing even more detailed data than the SQL Server 2000 version. Most significantly, stall time is only provided as a total in the SQL 2000 version but is broken down into read stalls & write stalls in the SQL 2005 version.

When this script runs, it determines which version of SQL Server it is running against & provides the appropriate output for the discovered version. So, when you run this against SQL 2000 you can expect the new SQL 2005 output columns to remain blank.

I've often found fn_virtualfilestats() very useful in demonstrating how much i/o the tempdb can generate compared to user databases. In fact, this is what prompted me to post this script today - I used it yesterday to show a customer that their tempdb was generating 5 x the write i/o activity compared to their main production database. This led to some further useful discussion about file placement.
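
For a quick ad-hoc snapshot on SQL Server 2005 you don't even need the full script below, because passing NULL for both arguments returns every file in every database:

select db_name(DbId) as database_name, FileId,
    NumberReads, BytesRead, IoStallReadMS,
    NumberWrites, BytesWritten, IoStallWriteMS
from ::fn_virtualfilestats(null, null)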


/**********
* fn_virtualfilestats() script
* Greg Linwood
* greg@SolidQualityLearning.com
* Feb 25th, 2006
*
* Collects fn_virtualfilestats() for all databases
* on either SQL 2000 or SQL 2005 instances.
**********/
set nocount on
go
if object_id('tempdb..#filestats') is not null
drop table #filestats
go
declare @ver int
-- first digit of the version string: 8 = SQL 2000, 9 = SQL 2005
set @ver = cast(serverproperty('ProductVersion') as char(1))
create table #filestats (
DbId int not null,
DBName varchar (255) not null,
fileid int not null,
FileName varchar (255) not null,
SampleTime datetime not null,
TS bigint null,
NumberReads bigint null,
BytesRead bigint null,
IoStallReadMS bigint null,
NumberWrites bigint null,
BytesWritten bigint null,
IOStallWriteMS bigint null,
IOStallMS bigint null,
BytesOnDisk bigint null,
constraint pk primary key (DbId, fileid, SampleTime))
declare @database_name varchar(255)
declare @database_id int
declare @file_id int
declare @file_name varchar (1000)
declare @sql nvarchar(4000)
declare cr cursor for
select sd.name, sd.dbid, saf.fileid, saf.name
from sysaltfiles saf
join sysdatabases sd on saf.dbid = sd.dbid
order by sd.name, saf.fileid
open cr
fetch next from cr
into @database_name, @database_id, @file_id, @file_name
while @@fetch_status = 0
begin
if @ver = 8
begin
select @sql = N'
insert into #filestats (DBName, FileName, DbId, FileId, SampleTime,
TS, NumberReads, BytesRead, NumberWrites, BytesWritten, IOStallMS)
select @dn, @fn, DbId, FileId, getdate(), [Timestamp], NumberReads,
BytesRead, NumberWrites, BytesWritten, IOStallMS
from ::fn_virtualfilestats(@database_id, @file_id) '
execute sp_executesql @sql
, N'@dn varchar (255), @fn varchar (255), @database_id int, @file_id int'
, @dn = @database_name
, @fn = @file_name
, @database_id = @database_id
, @file_id = @file_id
end


if @ver = 9
begin
select @sql = N'
insert into #filestats (DBName, FileName, DbId, FileId, SampleTime,
TS, NumberReads, BytesRead, IoStallReadMS, NumberWrites, BytesWritten,
IOStallWriteMS, IOStallMS, BytesOnDisk)
select @dn, @fn, DbId, FileId, getdate(), [Timestamp], NumberReads,
BytesRead, IoStallReadMS, NumberWrites, BytesWritten,
IOStallWriteMS, IOStallMS, BytesOnDisk
from ::fn_virtualfilestats(@database_id, @file_id) '
execute sp_executesql @sql
, N'@dn varchar (255), @fn varchar (255), @database_id int, @file_id int'
, @dn = @database_name
, @fn = @file_name
, @database_id = @database_id
, @file_id = @file_id
end
fetch next from cr
into @database_name, @database_id, @file_id, @file_name
end
close cr
deallocate cr
go
select * from #filestats
go
if object_id('tempdb..#filestats') is not null
drop table #filestats
go

How to: Create a Sparkline in a Table

Sparklines, Bar Charts, and Indicators in Report Builder 3.0

Tuesday, October 12, 2010

SQL Server 2008 Reporting Services

SSIS - WorkFlow Vs. DataFlow

SSIS - Variables In Detail

Basic SSIS Lookup Transformation

Event from SQLServerCentral

The 2010 SQLServerCentral Opening Night Party

By Press Release, 2010/10/26 (first published: 2010/05/13)


The 2010 PASS Summit is coming up soon, and once again SQLServerCentral is hosting an opening night party, catered by West Coast Entertainment, which is bringing in a bunch of casino games. We'll have professional dealers hosting blackjack, poker, roulette, and craps on Monday night after the PASS reception. The party will be in the Convention Center, and here are the details:
  • Monday, November 8, 2010
  • 8pm-10pm
  • Room 615-617

We've hosted a casino party for the last 5 years and they've all been a lot of fun. We usually have 10 or so casino games set up for you to take part in. We give you a voucher for a bunch of chips, and so you get the chance to have some Vegas style fun without any risk. We don't even require you to be a good gambler to win a prize.
We give away the vast majority of our prizes on a random basis. We'll call out the next Deuce, Ace, or even one-eyed Jack dealt as the winner. So just by participating, you have a good chance to win something. We do save a few prizes for the hardcore gamblers, giving the top three money winners a chance at something interesting, but most of our 50+ prizes will be given out to random attendees.
We'll also be looking to give away a few pieces of SQLServerCentral swag to everyone that attends. In the past we've given a copy of The Best of SQLServerCentral as well as a polo shirt to everyone. I'm not sure what we'll do this year, but it will be something nice.
So be sure to register for the PASS Summit, and use our code, SSC2010.
You get chips to gamble with, we give away a ton of free prizes, and a couple of the hardcore gamblers usually win a bigger prize. I might change that this year since it seems I have the same 3 or 4 guys winning all the chips every year.
Our prizes last year included:
  • Digital Picture Frame
  • Iron Man DVD
  • Nintendo DS
  • Old Man's War in paper book format
  • USB Flash drives
  • A small digital video camera
  • Red Thunder in paper book format
  • Matrix DVDs
  • and more
I'd love suggestions for prizes, and I might see if I can swing an iPad. However, I need lots of referrals. The more referrals, the more prizes :)

Tickets

There are two ways to get a ticket to our party. If you are attending the PASS Summit, you can use the "SSC2010" registration code when you sign up. This will get you $200 off the registration price and ensure you have a ticket to the party in your badge when you arrive.
If you are a speaker, volunteer, chapter leader, or someone else that already has a code to use, we are selling tickets for the party. For $30, you can attend and get a chance to win a prize. Our aim is to get a prize for every 3 people that sign up, so your chances of walking away with a DVD or something else are pretty good.
To get tickets, you can
  • paypal $30 to sjones@sqlservercentral.com
  • bring $30 cash to the door
Either way, be sure to get your ticket today.
Register for PASS and use SSC2010 as the code.
Note that if you're already registered, we can't add the code for you. It's a referral code. However if you register now and forget, send me and PASS an email right away.

Details

Date: Monday, November 8, 2010
Time: 8pm - 10pm
Location: Washington State Convention Center, room 615-617. This is the top floor, above registration.



MDX with Analysis Services - Overview

Multi-Dimensional eXpressions (MDX) is a language used for querying and extending capabilities of Analysis Services cubes. MDX is typically used for two purposes:


  1. Creating reporting queries. Such constructs contain SELECT, FROM and WHERE clauses (among other elements) and are referred to as MDX queries or MDX statements.
     
  2. Defining cube structures such as calculated members, named sets, actions, key performance indicators and so forth. Such constructs are referred to as MDX expressions. MDX expressions do not contain SELECT, FROM or WHERE clauses; they are used to define calculations using cube dimension members and measures.


If you examine a basic MDX statement it might seem that it contains parts similar to SQL statements: the SELECT, FROM and WHERE clauses. However, that is where the similarity ends. MDX is very different from SQL because it is written for traversing dimension hierarchies and defining cube cells. MDX doesn't support variables, parameters, cursors and other common SQL structures. On the other hand, MDX is more powerful than SQL when it comes to referencing hierarchy members. Even if you have programmed SQL queries for a decade, getting used to MDX might take a while.



Fortunately there are 3rd party tools on the market that create MDX queries for you. Such tools can be used for browsing the cube data and could also help you learn MDX. Unfortunately, MDX queries generated by the 3rd party tools are typically generic and therefore not as optimal as they could be. Furthermore, the tool vendors often extend the MDX functionality by writing custom functions, so queries generated by such tools might not always work for your application. 



When writing MDX queries you need to make a clear distinction between the data and metadata. Cube data is what you present to your users, for example you would show the sales amount for Ford Taurus 2005 and Toyota Camry 2004. Metadata is how the data is structured in your dimensions; for example, the product dimension could have levels of brand, make, model and year. It is important to realize if you are trying to refer to data or to metadata within MDX because different functions are used to address each need. For example, if you wanted to return all brands within your product dimension you would use the "members" function, as in [product].[brand].members. On the other hand, if you wanted to see all makes within a certain brand you would use the "children" function by providing the name of the parent, as in [product].[brand].[Toyota].children. 
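
Putting these pieces into a complete statement, a basic MDX query against a hypothetical [Sales] cube (the cube and measure names are assumptions; the [product].[brand] hierarchy is the one described above) might look like this:

select
    { [Measures].[Sales Amount] } on columns,
    { [product].[brand].members } on rows
from [Sales]

This returns the sales amount for every brand in the product dimension, one row per brand.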



This section will introduce you to basic MDX concepts and then show you how to make your expressions more powerful by adding various functions.

DTS to SSIS Migration