MCTS Exam 70-448 Selected Q & A



Page 1: MCTS exam 70-448 selected Q & A

You are developing a SQL Server 2008 SQL Server Integration Services (SSIS) package. The package, PackageA, contains two Execute Package tasks that are used to run two other packages. PackageA also contains a Script task. You want to create a variable that is accessible to the Script task and Execute Package tasks as well as to the two child packages. What should you do?

Explanation: You should create a variable, set the scope at the package level, and edit the configuration of the child packages to ensure that the variable is accessible to the Script task as well as to the child packages. When a variable is created at the package level, the variable is accessible to all the components in the package. To make the variable accessible to child packages as well, you should select the Parent Package Variable configuration type for the child packages, and then specify the name of the variable in the parent package. A package configuration contains a property/value pair and is created after the package has been developed. Following are the types of package configuration:

- XML configuration: An XML file that contains one or more package configurations.

- Environment variable: The configuration points to an environment variable that contains the package configuration.

- Registry entry: A new or existing registry key contains the package configuration.

- Parent Package variable: A variable in the parent package contains a configuration value used by the child packages.

- SQL Server table: A SQL Server table contains one or more package configurations.
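For reference, an XML configuration file is just a small XML document of property/value pairs. A minimal sketch is shown below; the package name, variable name, and path are hypothetical:

```xml
<?xml version="1.0"?>
<DTSConfiguration>
  <DTSConfigurationHeading>
    <DTSConfigurationFileInfo GeneratedBy="DOMAIN\developer"
                              GeneratedFromPackageName="PackageA" />
  </DTSConfigurationHeading>
  <!-- One property/value pair; the Path targets a package property -->
  <Configuration ConfiguredType="Property"
                 Path="\Package.Variables[User::SourcePath].Properties[Value]"
                 ValueType="String">
    <ConfiguredValue>C:\Data\source.xls</ConfiguredValue>
  </Configuration>
</DTSConfiguration>
```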

You should not create a variable with the Script Task scope and edit either the properties or the configuration of the child packages, because the variable would not be accessible to any other task in the package. If the variable is created with the Script task scope, the variable will only be accessible to the Script task, and therefore will not be accessible to the child packages. You should not create a variable with the package scope and then edit the properties of the child packages because changing the properties of the child package will not make the variable in the parent package accessible to the child packages. You need to change the configuration of the child packages to make the variable in the parent package accessible to the child packages.

You created a SQL Server 2008 SQL Server Integration Services (SSIS) package in a development environment. The package uses the Merge Transformation task to merge the data contained in two Microsoft Excel workbooks and store the results in a SQL Server table. You are now moving the package from the development environment to the test environment. The package will later be moved to the production environment. You have set the configuration type as an XML file, which stores the path of the Excel workbooks. The path of the configuration file is different in each of the environments. You want to ensure that the package runs properly in the test and production environments. What should you do?

Item: 1 (Ref:Cert-70-448.1.4.4)

- Create a variable with the Script Task scope and edit the properties of the child packages.

- Create a variable with the Script Task scope and edit the configuration of the child packages.

- Create a variable with the parent package scope and edit the configuration of the child packages.

- Create a variable with the parent package scope and edit the properties of the child packages.

Answer:

Create a variable with the parent package scope and edit the configuration of the child packages.

Item: 2 (Ref:Cert-70-448.1.4.5)

Page 1 of 57

Copyright © 2011 Transcender LLC, a Kaplan Professional Company. All ...


Explanation: You should use an environment variable to store the path of the XML file to ensure that the package is able to run properly in the test and production environments. A package configuration pairs a property with a value and is applied to a package during deployment using the Package Configuration Organizer dialog box. Storing the path of the XML configuration file in an environment variable ensures that the package can find the configuration even when the path of the configuration file differs between environments.

You should not set the configuration type to a file variable because a file variable is not a valid configuration type. Valid configuration types are an XML configuration file, an environment variable, a registry key, a parent package variable, and a SQL Server table.

You should not set the configuration type to a registry key because a registry key can store only a single configuration. In the given scenario, the paths of the two Excel workbooks need to be provided to the package during deployment. Therefore, you should use a configuration type that can store multiple values. Similarly, you should not use a parent package variable because it can only store a single value.
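The indirect-configuration idea can be sketched outside SSIS: the only thing stored per environment is an environment variable holding the configuration file's path. A minimal Python sketch, with an invented variable name and made-up paths:

```python
import os

def resolve_config_path(env_var="SSIS_CONFIG_FILE",
                        default=r"C:\SSIS\default.dtsConfig"):
    # Each environment (dev/test/prod) sets the variable to its own path;
    # the package asks the environment instead of using a hard-coded location.
    return os.environ.get(env_var, default)

# Simulate the test environment pointing at its own configuration file:
os.environ["SSIS_CONFIG_FILE"] = r"D:\TestEnv\package.dtsConfig"
print(resolve_config_path())
```

Moving the package to production then requires no package edit at all, only a different value for the environment variable on the production server.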

You are developing a SQL Server 2008 SQL Server Integration Services (SSIS) package. You have included a Script component and an Execute SQL task in the package. The Script component calculates the number of records in a SQL Server 2008 table. You want to use the value calculated by the Script component in the Execute SQL task. What should you do?

Explanation: You should create a user variable and set the scope as the package to ensure that the value calculated by the Script component can be used by the Execute SQL task. Integration Services uses two namespaces, User and System. Variables in the User namespace are custom variables created by the user; variables cannot be created in the System namespace. User-created variables can have literal values or be set to an expression.

- Set the configuration type to a file variable to store the path of the XML file.

- Set the configuration type to a registry key.

- Set the configuration type to a parent package variable.

- Use an environment variable to store the path of the XML file.

Answer:

Use an environment variable to store the path of the XML file.

Item: 3 (Ref:Cert-70-448.1.4.6)

- Create a user variable and set the scope as the Script task.

- Create a user variable and set the scope as the package.

- Create a system variable and set the scope as the Script task.

- Create a system variable and set the scope as the package.

Answer:

Create a user variable and set the scope as the package.


When you create a variable in the User namespace, you also need to specify its scope as a package, a container, or a task. When a variable is created with the package scope, it is available to all the tasks and containers in the package. Variables with a container scope are accessible to all the tasks in the container, but not to any task outside the container. When a variable is created with the task scope, it is available only to that task and not to any other task in the package.

In the given scenario, you should create a user variable and specify the scope as the package to ensure that the variable can be accessed by both the Script component and the Execute SQL task.

You should not create a user variable and set the scope as the Script task because when a variable is created with the scope as a task, the variable is accessible to only that task and not to any other task in the package. Therefore, the variable will not be accessible to the Execute SQL task.

The options stating that you should create a system variable are incorrect because you cannot create variables in the System namespace. You can only create variables in the User namespace.
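The scoping rules above can be modeled as a chain of dictionaries that a lookup walks from the innermost scope outward. This is only a conceptual sketch in Python, not the SSIS object model, and the variable names are invented:

```python
# Scopes from outermost (package) to innermost (task).
SCOPES = {
    "package":   {"User::RowCount": 0},     # visible to every task and container
    "container": {"User::LoopIndex": 1},    # visible only inside the container
    "task":      {"User::Temp": "tmp"},     # visible only to that one task
}

def lookup(name, scope_chain):
    """Resolve a variable by walking the chain innermost -> outermost."""
    for scope in scope_chain:
        if name in SCOPES[scope]:
            return SCOPES[scope][name]
    raise KeyError(f"{name} is not visible from this scope")

# A task inside the container sees container- and package-scoped variables:
print(lookup("User::RowCount", ["task", "container", "package"]))   # 0
# A task outside the container resolves against the package scope only,
# so a task-scoped variable like User::Temp is out of reach there.
```

This is why a package-scoped User variable is the one choice visible to both the Script component and the Execute SQL task.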

You are developing a SQL Server 2008 SQL Server Integration Services (SSIS) package. You have a database maintenance plan which you want to execute from within this package, using the package's Execute Package task. Which type of connection manager should you use?

Explanation: You should use an OLE DB connection manager to ensure that you are able to execute the database maintenance plan. The Execute Package task uses the OLE DB connection manager to execute packages that are stored in the SQL Server MSDB database, and a File Connection manager to execute the packages that are stored in the file system. The database maintenance plan is stored in the SQL Server MSDB database, and therefore you should use an OLE DB connection manager to execute the database maintenance plan.

You should not use an ADO .NET connection manager because an Execute Package task only uses OLE DB connection managers and File connection managers to execute packages.

You should not use a Flat File connection manager or an ADO connection manager because the database maintenance plan is stored in the SQL Server MSDB database. Therefore, it can only be executed using an OLE DB connection manager. A Flat File connection manager is used to execute packages that are stored in the file system. An ADO connection manager allows a package to connect to ActiveX Data Objects (ADO) objects.

Item: 4 (Ref:Cert-70-448.1.5.1)

- ADO .NET connection manager

- Flat File connection manager

- OLE DB connection manager

- ADO connection manager

Answer:

OLE DB connection manager

You have developed a SQL Server 2008 SQL Server Integration Services (SSIS) package. The package runs a child package using the Execute Package task. When you execute the package, the parent package fails when the child package fails. You want to ensure that the parent package will keep executing even when the child package fails. What should you do?

Item: 5 (Ref:Cert-70-448.1.5.2)

Explanation: You should set the ExecuteOutOfProcess property of the Execute Package task to True to ensure that the parent package keeps executing even when the child package fails. By default, the ExecuteOutOfProcess property of the Execute Package task is set to False, ensuring that the child package runs in the process started by the parent package. Therefore, if a child package fails, the parent package also fails. You can change this default behavior by setting the ExecuteOutOfProcess property of the Execute Package task to True. This ensures that the child package starts its own process when executed, and has no effect on the processes of the parent package.

You should not set the DisableEventHandlers property of the Execute Package task, the child package, or the parent package to True because the DisableEventHandlers property specifies whether the event handlers are enabled or disabled. Setting the DisableEventHandlers property to True will not ensure that the parent package keeps executing even when the child package fails.

You have developed a SQL Server 2008 SQL Server Integration Services (SSIS) package. You want to create a deployment utility for the project. You also want to include a Readme file along with the project. What should you do?

- Set the DisableEventHandlers property of the Execute Package task to True.

- Set the DisableEventHandlers property of the child package to True.

- Set the DisableEventHandlers property of the parent package to True.

- Set the ExecuteOutOfProcess property of the Execute Package task to True.

Answer:

Set the ExecuteOutOfProcess property of the Execute Package task to True.

Item: 6 (Ref:Cert-70-448.1.6.1)

- Copy the Readme file to the Miscellaneous folder of the project.

- Copy the Readme file to the folder in which the project is saved.

- Add a Flat File connection to the project and open the Readme file.

- Add a new package to the project and add a Flat File connection to the package.

Answer:

Copy the Readme file to the Miscellaneous folder of the project.

Explanation: You should copy the Readme file to the project's Miscellaneous folder for it to be included with the project. To deploy a package, you first configure the build process to create a deployment utility for the SSIS project. The deployment utility folder contains all files required for package deployment as well as a Miscellaneous folder. You can specify three deployment utility properties that define how the packages in the project will be deployed, as follows:

- AllowConfigurationChange: This property allows or prevents changes to the package configurations during deployment.

- CreateDeploymentUtility: This property allows or prevents the creation of a deployment utility during the build process.

- DeploymentOutputPath: This property specifies the path where the deployment utility should be saved.

To build a deployment utility, the CreateDeploymentUtility property must be set to True. Once the build process and properties are configured for the deployment utility, you should build the project to create the deployment utility. Once the deployment utility is created, the packages and their configurations are automatically included. Copies of the project package, the package dependencies, and a manifest file (named <project name>.SSISDeploymentManifest.xml) are placed in the project's bin\Deployment folder.
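As a rough illustration, the generated manifest is an XML listing of the deployable files. The element and file names below are a sketch from memory, so verify them against a real build output:

```xml
<DTSDeploymentManifest GeneratedBy="DOMAIN\developer"
                       GeneratedFromProjectName="MySSISProject">
  <Package>PackageA.dtsx</Package>
  <ConfigurationFile>PackageA.dtsConfig</ConfigurationFile>
  <!-- Files from the project's Miscellaneous folder are listed here -->
  <MiscellaneousFile>Readme.txt</MiscellaneousFile>
</DTSDeploymentManifest>
```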

You should not copy the Readme file to the folder in which the project is saved because this will not include the Readme file in the project once the deployment utility is created. To include additional files in the project when the deployment utility is created, you should add the files to the Miscellaneous folder of the project.

You should not add a Flat File connection to the project because a Flat File connection only allows a task in a package to access a file; it does not include the file in the project. To include additional files in the project when the deployment utility is created, you should add the files to the Miscellaneous folder of the project.

You have created a SQL Server 2008 SQL Server Integration Services (SSIS) package. The package contains an Execute SQL task. You are enabling logging for the package and have selected the Provider type as SSIS log provider for Text files. You want to log all the events during the interaction between the package and the external data source. What should you do?

Explanation: You should select the Diagnostic event of the package on the Details tab of the Configure SSIS Logs dialog box to log all the events during the interaction between the package and the external data source. You can use the Integration Services logging feature to log entries for run-time events. Logging features are configured in the Configure SSIS Logs dialog box. You select the log providers that will write the log entries on the Providers and Logs tab of the dialog box. The provider can be a text file, SQL Server Profiler, SQL Server, the Windows Event Log, or an XML file. The Details tab of the Configure SSIS Logs dialog box is used to select the events that should be logged. The Diagnostic event should be selected to write log entries that provide diagnostic information. When the Diagnostic event is selected, a log entry is written before and after each call to an external service provider. These log entries can be used to troubleshoot problems caused by the interaction between the package and the external data source.

Item: 7 (Ref:Cert-70-448.1.6.2)

nmlkj Select the Diagnostic event of the package on the Details tab of the Configure SSIS Logs dialog box.

nmlkj Select the OnTaskFailed event of the package on the Details tab of the Configure SSIS Logs dialog box.

nmlkji Change the Provider type to SSIS log provider for SQL Server.

nmlkj Change the Provider type to SSIS log provider for SQL Server Profiler.

Answer:

Select the Diagnostic event of the package on the Details tab of the Configure SSIS Logs dialog box.


You should not select the OnTaskFailed event because the OnTaskFailed event is used to write a log entry for each failure of a task. The OnTaskFailed event cannot be used to log all events during the interaction between the package and the external data source.

You should not change the Provider type to SSIS log provider for SQL Server because this will write the log entries for the selected events to the sysssislog table in a SQL Server database. This will not ensure that all the interactions between the package and the external data source are logged.

You should not change the Provider type to SSIS log provider for SQL Server Profiler because this will write traces to a file with the .trc extension. You can view these traces using SQL Server Profiler. This also will not ensure that all the interactions between the package and the external data source are logged.

You have created a SQL Server 2008 SQL Server Integration Services (SSIS) package. You want to enable logging for the package. You have selected the SSIS log provider for Text files option on the Providers and Logs tab of the Configure SSIS Logs: Package dialog box. (Click the Exhibit(s) button to view the dialog box.) You now want to ensure that the logs display the name of the container for which the logged event is generated. What should you do?

Item: 8 (Ref:Cert-70-448.1.6.3)

- Select the SourceName option on the Details tab of the Configure SSIS Logs dialog box.

- Select the ExecutionID option on the Details tab of the Configure SSIS Logs dialog box.

- Change the Provider type to SSIS log provider for SQL Server.

- Change the Provider type to SSIS log provider for Windows Event Log.


Explanation: You should select the SourceName option on the Details tab of the Configure SSIS Logs dialog box to ensure that the logs display the name of the container for which the logged event is generated. You can use the Details tab of the Configure SSIS Logs dialog box to select the events for which you want to have the log entries in the log file and the information to be logged for each event. The options available for the logging information are as follows:

- Computer: Writes the name of the computer on which the event occurred.

- Operator: Writes the name of the user who executed the package.

- SourceName: Writes the name of the package, container, or task that caused the event.

- SourceID: Writes the global unique identifier (GUID) for the package, container, or task that caused the event.

- ExecutionID: Writes the GUID of the package instance in which the event occurred.

- MessageText: Writes the message linked with the event being logged.

- DataBytes: Reserved for future use.
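To make the fields concrete, the text-file log provider writes comma-separated records with roughly these columns. The rows below are invented examples, not real output:

```
event,computer,operator,source,sourceid,executionid,starttime,endtime,datacode,databytes,message
PackageStart,SERVER01,DOMAIN\etl_user,MyPackage,{1A2B...},{9F8E...},1/1/2011 8:00:00 AM,1/1/2011 8:00:00 AM,0,0x,Beginning of package execution.
OnTaskFailed,SERVER01,DOMAIN\etl_user,Execute SQL Task,{3C4D...},{9F8E...},1/1/2011 8:00:05 AM,1/1/2011 8:00:05 AM,0,0x,
```

Here the source column (SourceName) carries the container or task name the question asks about.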

You should not select the ExecutionID option on the Details tab of the Configure SSIS Logs dialog box because when the ExecutionID option is selected, it logs the GUID of the package that caused the event. To log the name of the container for which the event is generated you should select the SourceName option on the Details tab of the Configure SSIS Logs dialog box.

You should not change the Provider type to either SSIS log provider for SQL Server or SSIS log provider for Windows Event Log because changing the Provider type will not include the name of the container in the log file. The information being logged for an event does not depend on the provider type, but on the options selected on the Details tab of the Configure SSIS Logs dialog box.

Answer:

Select the SourceName option on the Details tab of the Configure SSIS Logs dialog box.


You have developed a SQL Server 2008 SQL Server Integration Services (SSIS) package. The package contains a Sequence container that includes an Execute SQL task and a Script task. You have set a few options on the Configure SSIS Logs dialog box. (Click the Exhibit(s) button to view the configurations.) You now want to set different logging options for the Script task and the Sequence container. What should you do?

Item: 9 (Ref:Cert-70-448.1.6.4)

- Clear the Sequence Container check box to enable the events on the Details tab.

- Clear the Script Task check box to enable the events on the Details tab.

- Select the Package check box to enable the events on the Details tab.

- Click the Script Task check box twice to enable the events on the Details tab.

Answer:

Click the Script Task check box twice to enable the events on the Details tab.


Explanation: You should click the Script Task check box twice to enable the events on the Details tab. This action will allow you to select the check boxes to set different logging events on the Details tab for the Script task and the Sequence container.

You can enable or disable logging for the package and the containers in the package using the Containers pane of the Configure SSIS Logs dialog box. The package and the containers in the package are displayed in the Containers pane, each with a corresponding check box. When a check box is cleared, the events for that container are not logged. When a check box is dimmed, the container uses the logging options of its parent. When a check box is selected, the container uses its own logging options. In the given scenario, the check box next to the Script task is dimmed, so the task will use the logging options set for the Sequence container. To ensure that the Script task uses different logging options than those set for the Sequence container, you should click the Script Task check box twice to select it. After you have selected the check box, you can select the events and event information to be logged. The options available for the logging information are as follows:

- Computer: Writes the name of the computer on which the event occurred.

- Operator: Writes the name of the user who executed the package.

- SourceName: Writes the name of the package, container, or task that caused the event.

- SourceID: Writes the global unique identifier (GUID) for the package, container, or task that caused the event.

- ExecutionID: Writes the GUID of the package instance in which the event occurred.

- MessageText: Writes the message linked with the event being logged.

- DataBytes: Reserved for future use.


You should not clear the Sequence Container check box to enable the events on the Details tab because this action will disable the logging for the Sequence container, and will not set different logging options for the Script task and the Sequence container.

You should not clear the Script Task check box to enable the events on the Details tab because this will disable the logging for the Script Task, and will not set different logging options for the Script task and the Sequence container.

You should not select the Package check box to enable the events on the Details tab. This action will enable logging for the Package and will ensure that the selected events for the package are logged, but it will not set different logging options for the Script task and the Sequence container.

You are developing a SQL Server 2008 SQL Server Integration Services (SSIS) package. The package contains an OLE DB source used to retrieve data from a SQL Server 2008 table named Student. The table contains the names, ages, and addresses of all students at a college. The package also contains a Flat File destination, which is used to write data to a flat file. You want to export the records of all students who are older than 20 to the flat file. Which Data Flow element should you add to the package for validation?

Item: 10 (Ref:Cert-70-448.1.7.2)

- The Script task

- The Script component

- The Audit transformation


Explanation: You should add a Script component to the package for validating the records. A Script component can be used to run a custom script code to perform the following functions:

� Apply multiple transformations to the data.

� Validate column data.

� Access rules in the existing .NET assembly.

� Use custom formulas.

You could also add a Conditional Split transformation to the package because a Conditional Split transformation is used to route the input data to different outputs based on the values contained in the source data.

You should not add a Script task to the package because a Script task cannot be used to validate the data in each row of a data set. The Script task performs functions at the enumerated-object level, not at the row level of a data set. The Script task can access data from data sources that cannot be accessed using the built-in connection types, count the number of rows in a file, or check whether a file is empty.

You should not add an Audit transformation to the package because an Audit transformation is used to include environment data in the data flow of a package. The Audit transformation cannot be used to validate the data in a file.

You should not add a Cache transformation to the package because a Cache transformation writes data from a connected data source in the data flow to a Cache connection manager. The Cache transformation cannot be used to skip records when the input data does not meet a condition.
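The row-level check the Script component would perform (pass only students older than 20 to the flat-file output) can be sketched in Python. A real Script component implements this in .NET code inside the data flow, and the sample data here is invented:

```python
# Each dict stands in for one input row from the Student table.
students = [
    {"name": "Ann Lee",  "age": 22, "address": "12 Oak St"},
    {"name": "Bob Ray",  "age": 19, "address": "3 Elm Ave"},
    {"name": "Cho Park", "age": 25, "address": "8 Pine Rd"},
]

def validate_rows(rows, min_age=20):
    # Keep a row only when the student is older than min_age,
    # mirroring the per-row validation the Script component performs.
    return [row for row in rows if row["age"] > min_age]

for row in validate_rows(students):
    print(row["name"])   # Ann Lee, Cho Park
```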

You are the administrator for the SQL Server 2008 servers in your company. You want to run a newly created package on several servers, and configure the package with different properties for the specific server that the package is running on. Each package deployment may run with multiple configurations on different servers. What configuration type should you include with the package?

- The Cache transformation

Answer:

The Script component

Item: 11 (Ref:Cert-70-448.2.5.10)

- Environment variable

- Parent package variable

- XML configuration file

- web.config file

Answer:

XML configuration file

Explanation: You should use an XML configuration file to meet the scenario requirements. An XML file can include multiple configurations, so you can use the XML configuration file to deploy the package with multiple configurations on different servers. You could also use a SQL Server table to deploy the package with multiple configurations on different servers. A SQL Server table can store multiple configurations in a table of a database.

You should not use a Parent package variable. This configuration is used to update properties in child packages. The configuration type cannot include multiple configurations.

You should not use an Environment variable. This configuration type stores configuration information in environmental variables. The configuration type cannot include multiple configurations.

You should not use the web.config file to include configuration information for a package. The web.config file is used to configure IIS web sites. The web.config file does not store configuration information for a package.

You are developing a SQL Server 2008 SQL Server Integration Services (SSIS) package. This package will need to be deployed on several servers. On each server, the deployed package will have a different minimum disk space requirement before the package can run. How can you configure the package to run with different minimum disk space requirements for different servers?

Explanation: You should configure the different disk space requirements in an XML file. An XML file can be used as a package configuration type. Package configurations update the values of properties in a package at run time: when the package runs, it gets the new property values from the configuration. You can use the following configuration types:

- XML configuration file

- Environment variable

- Registry entry

- Parent package variable

- SQL Server table

An XML configuration file and SQL Server table can contain multiple configurations. You can use the XML configuration file to deploy the package with different minimum disk space requirements on different servers.
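For example, at run time each server can point the same package at its own configuration file through the dtexec utility's /ConfigFile option (the paths below are hypothetical):

```
dtexec /File "C:\Packages\LoadWarehouse.dtsx" /ConfigFile "C:\Config\ServerA.dtsConfig"
```

Only the .dtsConfig file differs between servers; the package itself is deployed unchanged.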

You should not use the web.config file to include configuration information for a package. The web.config file is used to configure IIS web sites. The web.config file does not store configuration information for a package.

You should not use the /De option of the dtexec utility to reference the registry entry for the disk space requirements in either HKEY_CURRENT_USER or HKEY_LOCAL_MACHINE. The /De option of the dtexec utility sets the decryption password for a password-encrypted package. You cannot use the /De option to

Item: 12 (Ref:Cert-70-448.2.5.11)

Configure the different disk space requirements in the web.config file.

Configure the different disk space requirements in an XML file.

Use the /De option of the dtexec utility to reference the registry entry for the disk space requirements in HKEY_CURRENT_USER.

Use the /De option of the dtexec utility to reference the registry entry for the disk space requirements in HKEY_LOCAL_MACHINE.

Answer:

Configure the different disk space requirements in an XML file.


change the configuration of a package. However, you can use a registry entry as a package configuration type. You could use a registry entry to specify the minimum disk space required for the package to load. If you wanted to use a registry entry instead of an XML file to store the configuration for the minimum disk space requirements, you could either use an existing key or create a new key in HKEY_CURRENT_USER.
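Such a registry configuration could be created with a .reg file like the sketch below. The key name is hypothetical, but an SSIS registry configuration reads its setting from a value that must be named Value:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\SSISConfigurations\MinDiskSpaceMB]
"Value"=dword:00000400
```

Here dword:00000400 is 1024 decimal, i.e. a hypothetical 1024 MB minimum.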

You have developed a SQL Server 2008 SQL Server Integration Services (SSIS) package in a development environment. The package consists of a few variables that are used to store the connection parameters for the OLE DB and ADO connections in the package. You now want to deploy the package in the production environment. You want to pass the values of these variables during the deployment. What should you do?

Explanation: You should create an XML file and set it as a configuration file to pass the values of the variables during the deployment. A package configuration contains a property/value pair and is created after the package has been developed. It provides values to the properties of a package. There are five types of package configuration:

- XML configuration: An XML file contains one or more package configurations.

- Environment variable: The configuration points to an environment variable that contains the package configuration.

- Registry entry: A new or existing registry key contains the package configuration.

- Parent package variable: A variable in the parent package contains a configuration value used by the child packages.

- SQL Server table: A SQL Server table contains one or more package configurations.

You should create an XML configuration because more than one value must be provided as configuration settings to the package during execution. In the given scenario, you need to pass the values of variables that store the connection parameters for the OLE DB and ADO connections in the package during deployment. Therefore, you should provide these values as configuration settings to the package.

You should not create an XML file and store it in the bin\Deployment folder because this will not ensure that the values for the variables are passed during deployment of the package. The bin\Deployment folder is the default location of the deployment utility when you create a deployment utility for the project.

You should not create an environment variable to store the configuration settings because the environment variable can be used to store only one configuration setting. In the given scenario, you need to provide the variables containing the connection parameters for both the OLE DB and ADO connections. Therefore, you should select a configuration type that can store more than one value. Similarly, you should not create a registry entry to store the configuration settings because a registry entry can be used to store only one configuration setting.

Item: 13 (Ref:Cert-70-448.2.5.12)

Create an XML file and store it in the bin\Deployment folder

Create an XML file and set it as the configuration file

Create an environment variable to store the configuration settings

Create a registry entry to store the configuration settings

Answer:

Create an XML file and set it as the configuration file


You have developed a SQL Server 2008 SQL Server Integration Services (SSIS) package in a development environment. The package uses an OLE DB connection to connect to a SQL Server 2008 database. You have now moved the package to a test environment. When you execute the package, you receive an error stating that the user is not authorized to access the SQL Server in the test environment. During troubleshooting, you determine that the username and password used to connect to the SQL server in the test environment are different from those used in the development environment. What should you do to fix the problem?

Explanation: You should create an XML configuration file to ensure that the package is able to run properly in the test environment. You should store the user credentials in the XML configuration file and use this file to provide the correct user credentials to the package. An XML configuration file allows you to provide more than one configuration setting to the package during execution. The values in the XML configuration file can be different when the package is deployed on different servers.

You should not create a Parent Package variable because a Parent Package variable is used to make a variable in the parent package accessible to the parent's child package. A Parent Package variable cannot be used to store a username and password.

You should not create an environment variable because an environment variable cannot be used to store more than one configuration setting. In the given scenario, you need to ensure that the username and password for both environments are provided as the configuration settings to the package, which can be accomplished by using an XML configuration file.

You should not create a registry key entry because a registry key cannot be used to store more than one configuration setting. If you wanted to store multiple configurations, you would require multiple registry keys.

You are the administrator of your company's SQL Server Integration Services (SSIS). There are several SQL Server 2008 servers in your environment. You want to store packages from multiple SSIS installations in a central location. You want to ensure that the packages can be backed up through normal SQL Server maintenance tasks, and that packages can be deployed on different SSIS servers in case a single SSIS server has a hardware failure. What should you configure in the package installation wizard?

Item: 14 (Ref:Cert-70-448.2.5.13)

Use a Parent Package variable.

Use an environment variable.

Use an XML configuration file.

Use a registry key entry.

Answer:

Use an XML configuration file.

Item: 15 (Ref:Cert-70-448.2.5.3)

Choose File system deployment and specify a shared folder on the SQL Server.


Explanation: You should choose SQL Server deployment in the package installation wizard, and specify the location as a SQL Server that does not contain the SSIS server. In this scenario, you must have the packages backed up through normal SQL Server maintenance tasks. When you choose SQL Server deployment in the package installation wizard, the packages are saved in the MSDB database on a SQL Server. You can use one SQL Server as the central storage location for packages from multiple SSIS installations. The SSIS instance and the MSDB database do not have to be on the same computer; the MSDB database is simply a storage location. SQL Server 2008 stores the packages in the sysssispackages table. The SSIS package dependencies are stored in a folder on the SQL Server. Although normal maintenance tasks on a SQL Server will back up system databases such as the MSDB database and Master database, you must manually configure a backup task to back up the folder that contains the SSIS package dependencies.

You should not specify the location for SQL Server deployment as the SQL Server that contains the SSIS server. You are required to ensure that packages can be deployed on different SSIS servers in case a single SSIS server has a hardware failure. If you choose a SQL Server that contains an SSIS server, you will not be able to deploy the packages stored in the MSDB database if there is a hardware failure on that server. You should choose a SQL Server that does not contain an SSIS instance.

You should not choose File system deployment and specify a local or shared folder on the SQL Server. In this scenario, you want to ensure that the packages can be backed up through normal SQL Server maintenance tasks. Normal maintenance tasks on a SQL Server include backing up the system databases such as the Master database and MSDB database; they do not include backing up the file system. You can choose File system deployment in the Package Installation Wizard if the package uses a file copy install through .msi files.
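A quick way to confirm which packages the MSDB database holds is to query the package table directly. In SQL Server 2008 this table is named sysssispackages (verify the name against your build):

```sql
-- List the SSIS packages stored in the MSDB database,
-- which a normal MSDB backup will capture.
SELECT name, createdate
FROM msdb.dbo.sysssispackages;
```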

Choose File system deployment and specify a local folder on the SQL Server.

Choose SQL Server deployment and specify the location as the SQL Server that contains the SSIS server.

Choose SQL Server deployment and specify the location as the SQL Server that does not contain the SSIS server.

Answer:

Choose SQL Server deployment and specify the location as the SQL Server that does not contain the SSIS server.


You are the administrator for the SQL Server 2008 Servers in your company. You created packages and tested them in a lab on a server called SSIS1. You have configured the following setting, in the Package Installation Wizard, for each package. (Click the Exhibit(s) button to view the setting.) You have run the build process for the packages on a server named SSIS1. You now want to install the packages on a server named SSIS2. The packages on SSIS2 will use a different SQL Server database for configuration and log entries than the ones used during the package development on SSIS1. What must you configure?

Item: 16 (Ref:Cert-70-448.2.5.5)

Create the appropriate SQL Server tables in the database on SSIS2 for the configurations and log files.

Edit the manifest file to reflect the new location for the configuration and log files.

In the Package Installation Wizard, specify the new database for the configuration and log files.

In Configuration Properties, specify the new database for the configuration and log files under Debugging.

Answer:

Create the appropriate SQL Server tables in the database on SSIS2 for the configurations and log files.


Explanation: You should create the appropriate SQL Server tables in the database on SSIS2 for configuration and log files. When preparing to install packages on a target computer, you should ensure that the following requirements are met:

- Create SQL Server tables for configurations and log entries. If the target server uses a different SQL Server database than the one used during package development on the source server, you must re-create the configuration table and log entries in that database.

- Create registry keys. The registry keys that configurations use must exist, and they must include a string or DWORD value named Value.

- Create environment variables. The same environment variables that configurations use on the source server must exist on the target server.

- Create a share on the target computer and map a drive to the share. This share will be used to copy the deployment bundle.

You should not edit the manifest file to reflect the new location for the configuration and log files. You still have to create the appropriate SQL Server tables in the database on the target server for configurations and log entries.

You cannot specify the new database for configurations and log files in the Package Installation Wizard. You can only specify the SQL Server that will hold the MSDB database to store package files.

You cannot specify the new database for configurations and log files in Configuration Properties under Debugging. Under Debugging, you can only specify Data Flow Optimization settings, Debug Option settings, Start Action options, and Start options.

You are the administrator for the SQL Server 2008 servers in your company. You created packages and tested them on a server named SSIS2. You want to prevent packages created by untrusted users from loading and running. Only the packages you create should be able to execute. What should you configure?

Item: 17 (Ref:Cert-70-448.2.5.6)

Choose Encrypt all data with password as the Package Protection level.


Explanation: You should use a digital signature to prevent packages created by untrusted users from loading and running. The CheckSignatureOnLoad package property determines whether to check for the presence of a certificate, and the CertificateObject property specifies the certificate that was used to sign the package. These properties can be used in combination to ensure that only packages you sign can be executed.

A registry value can be used to manage untrusted signatures of signed packages. The BlockedSignatureStates DWORD value of the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\SSIS registry key can have the following values:

- 0 - No restrictions are used.

- 1 - Does not block unsigned packages, but blocks invalid signatures.

- 2 - Blocks invalid, untrusted, and self-generated signatures. This setting does not block unsigned packages.

- 3 - Blocks invalid, untrusted, and self-generated signatures, and also unsigned packages.
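For instance, a .reg file that applies the strictest setting (value 3, which also blocks unsigned packages) would look like this sketch:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\SSIS]
"BlockedSignatureStates"=dword:00000003
```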

To sign a package, you must obtain a code-signing certificate issued by a trusted certificate authority (CA) and then do the following:

1. In Business Intelligence Development Studio, double-click the package.

2. On the SSIS menu, click Digital Signing.

3. In the Select Certificate dialog box, select the appropriate code-signing certificate.

You should not choose Encrypt all data with password as the Package Protection level. This protection level encrypts the entire package based on a password that is provided when the package is created. Any user who knows the password can open, alter, and load the package. In this scenario, you want to prevent altered packages from loading and running, so you should digitally sign the package.

You should not choose Encrypt sensitive data with password as the Package Protection level. This protection level saves and encrypts sensitive data with the package. If the package is opened without a password, the sensitive data will be blank and the user must provide values for the sensitive data. The user will not be able to execute the package without providing a password. If the user knows the correct password, he/she can still open, alter, and load the package. In this scenario, you want to prevent altered packages from loading and running, so you should digitally sign the package.

You should not store the package in a folder with limited permissions. A package that is stored in a folder that is limited by NTFS permissions may prevent someone who does not have access to the package from loading the package, but not from altering the package as required by the scenario.

Choose Encrypt sensitive data with password as the Package Protection level.

Use a Digital Signature to sign the packages.

Store the packages in a folder with limited permissions.

Answer:

Use a Digital Signature to sign the packages.


You are the administrator for the SQL Server 2008 servers in your company. You created several packages and tested them on servers named SSIS1 and SSIS2. You want to manage signed and unsigned packages, so you ensured that the BlockedSignatureStates DWORD value of the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\SSIS registry key has been added to SSIS1 and SSIS2. You now want to ensure the following:

- Packages with invalid and untrusted signatures should be blocked on SSIS1 and SSIS2.

- Packages with self-generated signatures should be blocked on SSIS1 and SSIS2.

- Unsigned packages should not be blocked on SSIS1 and SSIS2.

What value must you set for BlockedSignatureStates?

Explanation: You must set the value of BlockedSignatureStates to 2. The BlockedSignatureStates value determines whether a package should be blocked due to an untrusted signature, self-generated signature, invalid signature, or missing signature. The BlockedSignatureStates DWORD value of the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\SSIS registry key can have the following values:

Item: 18 (Ref:Cert-70-448.2.5.7)

0

1

2

3

Answer:

2


- 0 - No restrictions are used.

- 1 - Does not block unsigned packages, but blocks invalid signatures.

- 2 - Blocks invalid, untrusted, and self-generated signatures. This setting does not block unsigned packages.

- 3 - Blocks invalid, untrusted, and self-generated signatures, and unsigned packages.

You should not set the value to 0. This value places no restrictions on the package.

You should not set the value to 1. Although this value does not block unsigned packages, it also does not block untrusted or self-generated signatures.

You should not set the value to 3. This value will block unsigned packages.

You are the administrator for the Verigon Corporation. You created several packages and tested them on a server named SSIS1. One of your coworkers deleted the ImportPackageSales package by issuing the following command: dtutil /SQL ImportPackageSales /DELETE

What should you do to restore the ImportPackageSales package?

Explanation: You should restore the MSDB database. Your coworker issued the dtutil /SQL ImportPackageSales /DELETE command to delete the package. The /SQL keyword in the dtutil utility indicates that the package was stored in the MSDB database. You should restore the MSDB database to retrieve the package.

You should not restore the Master database to restore the package. The Master database is used to store system configuration information for the SQL Server and SQL Server Integration Services (SSIS) packages. You should have the Master database backed up regularly with a SQL Server maintenance task.

You should not restore the SSIS Package store to restore the ImportPackageSales package. The dtutil command that deleted the package used the /SQL keyword, not the /DTS keyword. The /DTS keyword would indicate that the package was stored in the SSIS Package store. Because the /SQL keyword was used, you can determine that the package was stored in the MSDB database.

You should not restore the folder where the ImportPackageSales package was stored because the dtutil command that deleted the package did not use the /FILE keyword. The /FILE keyword would indicate that the package was stored in the file system. However, the package was stored in the MSDB database, not in the file system.
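The storage-location keywords discussed above can be sketched as dtutil command lines; the package names and paths are hypothetical:

```
REM Package stored in the MSDB database (recovered by restoring MSDB)
dtutil /SQL ImportPackageSales /DELETE

REM Package stored in the SSIS Package Store
dtutil /DTS ImportPackageSales.dtsx /DELETE

REM Package stored in the file system
dtutil /FILE C:\Packages\ImportPackageSales.dtsx /DELETE
```

The keyword after dtutil (/SQL, /DTS, or /FILE) is what tells you where the package lived, and therefore what must be restored.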

The company for which you are a database administrator is planning to launch a new coffee product in the market. You have created a data mining model to predict the relevant consumers for the new product, based on

Item: 19 (Ref:Cert-70-448.2.5.9)

Restore the Master database

Restore the MSDB database

Restore the SSIS Package store

Restore the folder where the ImportPackageSales package was stored

Answer:

Restore the MSDB database

Item: 20 (Ref:Cert-70-448.3.6.1)


customers who bought your other products. You want to create a Data Mining Extensions (DMX) query to list the top 1,000 probable customers discovered by the mining model. You want to ensure that the prediction is created on the input data based on the patterns that exist in the data mining model. Which type of prediction should you use?

Explanation: You should use the Prediction Join to ensure that the prediction is created on the input data based on the patterns that exist in the data mining model. You can use the following types of predictions in a DMX query:

- Prediction Join: Creates predictions based on the patterns defined in the mining model.

- Natural Prediction Join: Creates predictions by matching the names of the mining model columns and the input columns.

- Empty Prediction Join: Creates a prediction without providing any input data, based only on the mining model.

- Singleton query: Creates predictions using a query. The data is fed into the query to generate the prediction.

You should not use a Singleton Query because a Singleton Query is used to create a prediction by providing data to a query. The data is provided to the query by typing one or more cases in the query. The Singleton query cannot be used to create predictions based on the patterns that exist in the data mining model.

You should not use an Empty Prediction Join because an Empty Prediction Join is used to create a prediction based only on the mining model. You cannot provide input data to an Empty Prediction Join, and therefore it cannot be used to create predictions on the input data based on the patterns that exist in the data mining model.

You should not use a Natural Prediction Join because a Natural Prediction Join is used to create a prediction by matching the names of the mining model columns and the input columns. A Natural Prediction Join does not create predictions on the input data based on the patterns that exist in the data mining model.
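A Prediction Join of the kind the scenario asks for might look like the following DMX sketch; the model name, data source, input table, and column names are all hypothetical:

```sql
-- Return the 1,000 input customers the model scores as most probable buyers.
SELECT TOP 1000
  t.CustomerID,
  PredictProbability([Buyer]) AS BuyProbability
FROM [CoffeeBuyerModel]
PREDICTION JOIN
  OPENQUERY([SalesDataSource],
            'SELECT CustomerID, Age, Location FROM dbo.Prospects') AS t
ON [CoffeeBuyerModel].[Age] = t.[Age]
AND [CoffeeBuyerModel].[Location] = t.[Location]
ORDER BY PredictProbability([Buyer]) DESC
```

The ON clause maps the input columns to the model columns explicitly; a Natural Prediction Join would instead match them by name.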

You are using SQL Server 2008 Analysis Services. You need to optimize queries on dimensions within a cube. You want to ensure that the query is resolved without accessing the partition's source data. You also want to place the storage location on another computer running SQL Server 2008 Analysis Services. Which type of partition should you create?

Singleton Query

Empty Prediction Join

Natural Prediction Join

Prediction Join

Answer:

Prediction Join

Item: 21 (Ref:Cert-70-448.3.6.2)

OLAP

Multidimensional OLAP (MOLAP)

Relational OLAP (ROLAP)

Hybrid OLAP (HOLAP)


Explanation: You should create a MOLAP storage mode partition to resolve queries without accessing the partition's source data and to place the storage location on another computer. MOLAP maximizes query performance because a copy of the source data is stored in a multidimensional structure. Therefore, queries can be resolved without accessing the partition's source data.

You cannot use OLAP as the storage mode to create dimension partitions because OLAP is not a valid storage mode.

You should not create a ROLAP storage mode partition because the ROLAP storage mode stores the aggregations in the partition in the relational database specified in the partition's data source. The aggregations are stored as indexed views in the relational database. Therefore, the query response time is slower with ROLAP as compared to MOLAP or HOLAP. However, the ROLAP storage mode can save storage space when working with large datasets that are not queried often.

You should not create a HOLAP storage mode partition because the HOLAP storage mode does not store a copy of the source data. If the user must drill down to an atomic cube cell for which there is no aggregation data, the query must retrieve data from the relational database, so HOLAP will not be as fast as MOLAP, where the source data is stored in the MOLAP structure.

You are the database administrator of a company. You have created a cube named DataHouse. You want to improve the cube's query performance to retrieve up-to-date information without re-processing the cube. What should you do?

Explanation: You should use automatic MOLAP to improve query performance so that you can retrieve up-to-date information from the DataHouse cube without re-processing the cube. Automatic MOLAP stores aggregation data and measure data in a multidimensional format, which improves query performance. The automatic MOLAP storage mode listens for notifications from the relational database about changes that occur in the database. The relational database raises change notification events that are sent to Analysis Services. When Analysis Services receives the notification, the cube is automatically refreshed with the latest information.

You should not use scheduled MOLAP because scheduled MOLAP refreshes the data automatically every 24 hours. If you use scheduled MOLAP, the data in the DataHouse cube may not reflect the current data in the relational database.

Answer:

Multidimensional OLAP (MOLAP)

Item: 22 (Ref:Cert-70-448.3.6.3)

Use scheduled multidimensional OLAP (MOLAP).

Use real time relational OLAP (ROLAP).

Use real time hybrid OLAP (HOLAP).

Use automatic MOLAP.

Answer:

Use automatic MOLAP.


You should not use real time ROLAP because in ROLAP storage mode, both measure data and aggregation data are stored in a relational format, which decreases query performance.

You should not use real time HOLAP because in HOLAP storage mode, measure data is stored in a relational format, while aggregation data is stored in a multidimensional format. Because the measure data is stored in a relational format, the query response time is slower than for MOLAP.

You are a database administrator of a company that has a chain of multiple retail stores in different locations. You have created a table named Cust, which stores the records of all of the customers of your company. You have also created a table named Sales, which contains the records of the products sold. The company has decided to create a data model to analyze the purchasing behavior of its customers based on their location. You want to create a mining model to analyze the purchasing behavior. Keep in mind that a customer in the Cust table can have multiple purchase records in the Sales table. Which mining model structure should you create to analyze the purchasing behavior?

Item: 23 (Ref:Cert-70-448.3.6.4)

CREATE MINING MODEL SaleData
(
CustID TEXT KEY,
Location TEXT DISCRETE,
Items TABLE
(
ItemName TEXT KEY,
Quantity LONG CONTINUOUS
)
) USING Microsoft_Decision_Trees

CREATE MINING MODEL SaleData
(
CustID TEXT KEY,
Location TEXT DISCRETE,
Items TABLE PREDICT
(
ItemName TEXT KEY,
Quantity LONG CONTINUOUS PREDICT
)
) USING Microsoft_Decision_Trees

CREATE MINING MODEL SaleData
(
CustID TEXT KEY,
Location TEXT DISCRETE,
ItemName TEXT KEY,
Quantity LONG CONTINUOUS PREDICT
) USING Microsoft_Decision_Trees

CREATE MINING MODEL SaleData
(
CustID TEXT KEY,
Location TEXT DISCRETE,
Items TABLE PREDICT
(


Explanation: You should create the following mining model structure to analyze the purchasing behavior of customers based on their locations:

CREATE MINING MODEL SaleData

(

CustID TEXT KEY,

Location TEXT DISCRETE,

Items TABLE PREDICT

(

ItemName TEXT KEY,

Quantity LONG CONTINUOUS PREDICT

)

) USING Microsoft_Decision_Trees

This mining model structure will create a mining model named SaleData. The SaleData mining model contains a case table based on the CustID and Location columns, along with a nested table named Items, to predict the Quantity column. The nested table allows a single case to be described using multiple rows. In this scenario, the case table describes the customer details, whereas the nested table describes the purchase details for that customer, so one customer can purchase multiple products.

You should not create the following mining model structure:

CREATE MINING MODEL SaleData

(

CustID TEXT KEY,

Location TEXT DISCRETE,

Items TABLE

(

ItemName TEXT KEY,

Quantity LONG CONTINUOUS

)

) USING Microsoft_Decision_Trees

This is because the mining model does not contain any predictable column. A mining model must have a predictable column to analyze the results of the mining structure.

You should not create the following mining model:

CREATE MINING MODEL SaleData

(

ItemName TEXT KEY, ) ) USING Microsoft_Decision_Trees

Answer:

CREATE MINING MODEL SaleData
(
CustID TEXT KEY,
Location TEXT DISCRETE,
Items TABLE PREDICT
(
ItemName TEXT KEY,
Quantity LONG CONTINUOUS PREDICT
)
) USING Microsoft_Decision_Trees


CustID TEXT KEY,

Location TEXT DISCRETE,

ItemName TEXT KEY,

Quantity LONG CONTINUOUS PREDICT

) USING Microsoft_Decision_Trees

This mining model contains a case table that describes the customer details, but does not contain a nested table describing the purchase details for the customer. If you use only a case table to describe the data, each case must fit in a single row; according to the scenario, one customer can purchase multiple products, so the case table alone cannot describe a customer's data in one row. You should create a nested table to describe the purchase details.

You should not create the following mining model:

CREATE MINING MODEL SaleData

(

CustID TEXT KEY,

Location TEXT DISCRETE,

Items TABLE PREDICT

(

ItemName TEXT KEY,

)

) USING Microsoft_Decision_Trees

This is because the mining model does not contain a predictable column. The mining model must have a predictable column to analyze the results of the mining structure. In this scenario, the Quantity column should be specified as predictable to analyze the purchasing behavior of customers.

You are the database administrator of your company. You have created a Sales table to store the sale records for your company's products. The Sales table contains fields to store the sale date and the sale amount. The Marketing department wants to predict the weekly sales for the next year. You want to create a time series model to predict the weekly sales for the next year. Which Data Mining Extensions (DMX) query should you use to create a time series model for the Marketing Department?

Item: 24 (Ref:Cert-70-448.3.6.5)

• CREATE MINING MODEL SalePredict ( TranID TEXT KEY, SaleDate DATE KEY TIME, SaleAmount LONG CONTINUOUS PREDICT ) USING MICROSOFT_TIME_SERIES

• CREATE MINING MODEL SalePredict ( TranID TEXT KEY, SaleDate TEXT DISCRETE, SaleAmount LONG CONTINUOUS PREDICT ) USING MICROSOFT_TIME_SERIES

• CREATE MINING MODEL SalePredict ( TranID TEXT KEY, SaleDate TEXT DISCRETE, SaleAmount LONG CONTINUOUS PREDICT ) USING MICROSOFT_TIME_SERIES (PERIODICITY_HINT='{7}')

• CREATE MINING MODEL SalePredict ( TranID TEXT KEY, SaleDate DATE KEY TIME, SaleAmount LONG CONTINUOUS PREDICT ) USING MICROSOFT_TIME_SERIES (PERIODICITY_HINT='{7}')


Explanation: You should create the following DMX query to predict the weekly sales for the next year:

CREATE MINING MODEL SalePredict
(
TranID TEXT KEY,
SaleDate DATE KEY TIME,
SaleAmount LONG CONTINUOUS PREDICT
) USING MICROSOFT_TIME_SERIES (PERIODICITY_HINT='{7}')

The SalePredict mining model defines the SaleDate field with the type KEY TIME. In a time series mining model, the column used for the time scale should have the KEY TIME type specified. The KEY TIME type is used when data values occur in order and on a time scale. The PERIODICITY_HINT parameter tells the algorithm about seasonality in the data. In this scenario, the PERIODICITY_HINT parameter value should be specified as 7 to predict the weekly sales.

You should not use the following DMX query:

CREATE MINING MODEL SalePredict
(
TranID TEXT KEY,
SaleDate DATE KEY TIME,
SaleAmount LONG CONTINUOUS PREDICT
) USING MICROSOFT_TIME_SERIES

This is because the DMX query for a time series model should use PERIODICITY_HINT to specify the time period to be used for the prediction. The query specified above does not include the PERIODICITY_HINT parameter.

All the other options are incorrect because, according to the scenario, the SaleDate column should be specified as type KEY TIME in the DMX query. The KEY TIME type is used when values occur in order and on a time scale.
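As a hedged sketch (not part of the original item), once the SalePredict time series model above is trained, the PredictTimeSeries function can return the forecast; requesting 52 steps corresponds to roughly one year of weekly SaleAmount values:

```sql
-- Hypothetical DMX sketch: forecast the next 52 weekly values of
-- SaleAmount from the trained SalePredict model.
SELECT
  PredictTimeSeries([SaleAmount], 52)
FROM
  [SalePredict]
```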

Answer:

CREATE MINING MODEL SalePredict ( TranID TEXT KEY, SaleDate DATE KEY TIME, SaleAmount LONG CONTINUOUS PREDICT ) USING MICROSOFT_TIME_SERIES (PERIODICITY_HINT='{7}')

You are in the process of deploying an Analysis Services Database in the production environment. You are using the Analysis Services Deployment Wizard to perform this deployment. The project will be named Deploy_Prod1. During the deployment process, you want to retain the existing roles and role members on the destination server. Which input file should you modify to accomplish the objective?

Item: 25 (Ref:Cert-70-448.3.7.1)

• <Deploy_Prod1>.asdatabase

• <Deploy_Prod1>.deploymenttargets

• <Deploy_Prod1>.configsettings

• <Deploy_Prod1>.deploymentoptions


Explanation: You should modify the <Deploy_Prod1>.deploymentoptions input file to accomplish the objective. The <Deploy_Prod1>.deploymentoptions file is an input file used to create the deployment script while deploying an Analysis Services Database in the production environment. Using this file, you can specify the deployment options for roles and partitions, such as specifying that the existing roles and partitions on the destination server should not be affected by the deployment. This file can be opened in any text editor and modified manually.

You should not modify the <Deploy_Prod1>.asdatabase file. This file is another input file used to create the deployment script, but you do not specify the deployment options for roles and partitions in this file. You use this file to store information regarding SSAS objects in the project.

You should not modify the <Deploy_Prod1>.deploymenttargets file. This file is another input file used to create the deployment script, but you do not specify the deployment options for roles and partitions in this file. You use this file to specify the name of the target SSAS database and instance for deployment.

You should not modify the <Deploy_Prod1>.configsettings file. This file is another input file used to create the deployment script, but you do not specify the deployment options for roles and partitions in this file. You use this file to specify configuration settings such as storage locations of SSAS objects as well as data source connection information.
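As a sketch of the option being discussed, the .deploymentoptions file is plain XML that can be edited by hand; the element names and values below are assumptions based on the Deployment Wizard's typical output, not quoted from the item:

```xml
<!-- Hypothetical sketch of a <Deploy_Prod1>.deploymentoptions file.
     RetainRoles keeps the existing roles and role members on the
     destination server during deployment. -->
<DeploymentOptions xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <TransactionalDeployment>false</TransactionalDeployment>
  <PartitionDeployment>DeployPartitions</PartitionDeployment>
  <RoleDeployment>RetainRoles</RoleDeployment>
</DeploymentOptions>
```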

You are a database administrator for your company. The SQL Server 2008 Analysis Services server in your company has a number of large measure groups in a cube. Users complain that processing of these measure groups has become very slow. Upon analysis, you determine that the cube has grown so large that performance is slowed. You need to improve the cube processing performance. What should you do?

Explanation: You should create remote partitions of the measure groups in the cube. In this scenario, the size of the cube has increased, causing processing to take more time. To address this problem, you should partition the measure groups in the cube. This will distribute the data in the cube across partitions that can execute queries in parallel, thereby improving performance. Moreover, the partitions should be remote partitions, as they allow you to distribute data across multiple servers. Remote partitions allow you to process partitions on a different server, significantly improving processing performance. A remote partition is defined on the production server but processed on a remote server, reducing the processing time.

Answer:

<Deploy_Prod1>.deploymentoptions

Item: 26 (Ref:Cert-70-448.3.7.2)

• Create write-enabled partitions of the measure groups in the cube.

• Create local partitions of the measure groups in the cube.

• Create remote partitions of the measure groups in the cube.

• Enable writeback data to a partition.

Answer:

Create remote partitions of the measure groups in the cube.


You should not create write-enabled partitions of the measure groups in the cube. You should use write-enabled partitions to create partitions of changed data. Write-enabled partitions create a separate table of changed data, which is referred to as a writeback table. This table is stored on a separate partition without affecting existing data in a measure group. Creating a write-enabled partition will not improve the performance in this scenario because here, the slow performance is caused by the volume of data.

You should not create local partitions of the measure groups in the cube. Creating local partitions could improve performance of the cube in this scenario, but it would not allow you to distribute data on multiple servers. Since the high volume of data is the issue in this scenario, remote partitions should be used to distribute the data and thereby improve performance.

You should not create a local partition and enable writeback data to a partition. Writeback data creates a separate table that does not overwrite existing partitions of measure groups in a cube. In this scenario, the slow performance is caused by the high volume of data.

You are the database administrator of a company. You have created a multidimensional cube named DataHouse. The cube contains three tables named Customer, Sales, and Product. The cube also contains dimensions named Customer, Sale and Time, and a measure group named Customer. The Customer table contains a calculated member in it. You want to update the calculated member in the Customer table. What should you do?

Explanation: You should process the cube to update the calculated member in the Customer table. A cube consists of dimensions and measures. The measures are stored in partitions. When a cube is processed, the values are retrieved from the fact table and each member of the cube is populated with the measure value. When a cube is processed, it processes all the unprocessed dimensions in the cube along with one or more partitions. In addition, when a cube is processed, all the calculated members are also updated.

You should not process the Customer dimension because when a dimension is processed, the calculated members are not updated. When a dimension is processed, the tabular version of the dimension members is turned into usable hierarchies. In addition, the aggregations in a cube might be dropped when a dimension is processed.

You should not process all the partitions because processing the partition will not update the calculated members. The calculated members are a part of the cube and will be updated when a cube is processed. When a partition is processed, the unprocessed dimensions in the partition are also processed.

You should not process the partition that contains the Customer measure group because processing the partition will not update the calculated members. The calculated members are a part of the cube and will be updated when a cube is processed. When a partition is processed, the unprocessed dimensions in the partition are also processed.

Item: 27 (Ref:Cert-70-448.3.7.3)

• Process the Customer dimension.

• Process the cube.

• Process all the partitions.

• Process the partition that contains the Customer measure group.

Answer:

Process the cube.


You are the database administrator of a company. You have designed a cube named DataHouse. The cube contains a measure group named Employee. You have changed the data in one of the tables, which will have an impact on the measure group Employee. These changes will not affect any other measure group or partition. You want to update only the Employee measure group, and no other measure group. What should you do?

Explanation: You should process the cube using the Process Incremental option to ensure that the updates are only performed on the measure group Employee and not on any other measure group. You can use the following processing options to control the processing done for each object in Microsoft SQL Server Analysis Services:

• Process Incremental: Adds the newly available fact data. You can specify the measure group or partition to be processed.

• Process Full: Processes the object along with all objects contained within it. If the object has already been processed, all the data is dropped and the object is processed again.

• Process Default: Determines the current process state of an object and then performs the required processing to make the object fully processed.

• Process Update: Reads the data again and updates the dimension attributes.

• Process Index: Rebuilds or creates indexes and aggregations for all the partitions that have been previously processed.

• Process Data: Processes the data only. When this option is used, the aggregations and indexes are not built.

• Unprocess: Drops the data in the object and any other objects contained in the specified object.

• Process Structure: Processes the dimensions of the unprocessed cube if required, and then creates the cube definition.

• Process Clear Structure: Removes the training data from a mining structure.

You should not process the cube using the Process Default option because the Process Default option cannot be used to process a specific measure group only. The Process Default option will determine the current process state of a cube and then will perform the required processing to make the cube fully processed.

You should not process the cube using the Process Full option because the Process Full option cannot be used to process a specific measure group. When the Process Full option is used, all the objects in the cube are also processed.

You should not process the cube using the Process Data option because the Process Data option cannot be used to process a specific measure group. When the Process Data option is used, only the data in the cube is processed and the aggregations and indexes in the cube are not built.

Item: 28 (Ref:Cert-70-448.3.7.4)

• Process the cube using the Process Default option.

• Process the cube using the Process Full option.

• Process the cube using the Process Data option.

• Process the cube using the Process Incremental option.

Answer:

Process the cube using the Process Incremental option.

You are the database administrator of a company. You have created a multidimensional cube named DataHouse. You want to use the Process command on the cube, using the ProcessUpdate value of the Type property. Which component of the cube can be processed using this value?

Item: 29 (Ref:Cert-70-448.3.7.5)

Explanation: A dimension can be processed using the ProcessUpdate value of the Type property because this value only supports processing of a dimension.

You can use the Process command to process the various components in the Analysis Services. The Type property of the Process command can be set to any of the following processing options to control the processing done for each object in Microsoft SQL Server Analysis Services:

• ProcessFull: Used to process the object along with all the objects contained in it. If the object has already been processed, all the data is dropped and the object is processed again. This option can be used to process a cube, database, dimension, measure group, mining model, mining structure, or partition.

• ProcessDefault: Used to determine the current process state of an object and then perform the required processing to make the object fully processed. This option can be used to process a cube, database, dimension, measure group, mining model, mining structure, or partition.

• ProcessUpdate: Used to read the data again and to update the dimension attributes. This option can be used to process a dimension.

• ProcessIndexes: Used to rebuild or create indexes and aggregations for all the partitions that have been previously processed. This option can be used to process a dimension, cube, measure group, or partition.

• ProcessData: Used to process the data only. When this option is used, the aggregations and indexes are not built. This option can be used to process a dimension, cube, measure group, or partition.

• ProcessClear: Used to drop the data in the object and any other objects contained in the specified object. This option can be used to process a cube, database, dimension, measure group, mining model, mining structure, or partition.

• ProcessStructure: Used to process the dimensions of the unprocessed cube if required and then create the cube definition. This option can be used to process a cube or a mining structure.

• ProcessClearStructureOnly: Used to remove the training data from a mining structure. This option can only be used to process a mining structure.

• ProcessScriptCache: This option can be used to process a cube.

All the other options are incorrect because the ProcessUpdate value of the Type property does not support the processing of partitions, measures, or cubes.
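As a sketch of how the Process command is issued, an XMLA request with Type set to ProcessUpdate targets a dimension; the DatabaseID and DimensionID values below are assumptions taken from the scenario (the DataHouse database and a Customer dimension), not quoted from the item:

```xml
<!-- Hypothetical XMLA sketch: process a dimension with ProcessUpdate.
     Object identifies the target; Type selects the processing option. -->
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>DataHouse</DatabaseID>
    <DimensionID>Customer</DimensionID>
  </Object>
  <Type>ProcessUpdate</Type>
</Process>
```

Such a command can be run from an XMLA query window in SQL Server Management Studio.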

You are the administrator for the SQL Server 2008 servers in your company. You have run several Multidimensional Expressions (MDX) statements against your SQL Server Analysis Services (SSAS). You notice that the performance of your SSAS has suffered. You want to identify the MDX statements that run slowly. You need to troubleshoot and debug problems in Analysis Services without interfering with users who require continuous access to SSAS. What should you do? (Choose two. Each correct answer is part of the solution.)

• Dimension

• Partition

• Measure

• Cube

Answer:

Dimension

Item: 30 (Ref:Cert-70-448.4.4.1)

• Capture events in a trace file on a test system and replay them on the test system.

• Capture events in a trace file on the production system and replay them on the production system.

• Capture events in a trace file on the production system and replay them on the test system.

• Capture events in a trace file on a test system and replay them on the production system.

• View the event classes in the Errors and Warnings event category.

• View the event classes in the Queries Events event category.

Answer:

Capture events in a trace file on the production system and replay them on the test system.

View the event classes in the Queries Events event category.

Explanation: You should capture events in a trace file on the production system and replay them on the test system. This approach allows for testing and debugging while letting users continue to access the production system without interference. It requires a test server that is nearly identical to the live server.

After capturing the events, you should view events in the Queries Events event category. This category has the following classes:

• Query Begin: Collects all query begin events.

• Query End: Collects all query end events.

• Query Subcube: Collects all query subcube events. A query subcube event occurs every time a query accesses a subcube inside the multidimensional space.

You can use the StartTime and EndTime data columns from each class to identify long-running queries that may be causing slow performance.

You should not capture events in a trace file on a test system and replay them on the test system. Capturing events on a test system will not emulate what is happening on the production system.

You should not replay captured events on the production system. Replaying events on a production system will hurt performance and hinder user access to the production system.

You should not view the event classes in the Errors and Warnings event category. This category has one event class, named Error, which records all new error events. This category will not identify long-running queries.

You are the database administrator for your company. You are responsible for managing the Analysis Server instance, named ASSQL1. You use SQL Profiler to analyze the performance of the server. Users accessing the ASSQL1 server are complaining of poor query performance. You plan to capture the SQL queries so that you can debug them in the test environment. Which event classes must you capture on the production server to run the queries in the test environment? (Choose all that apply.)

Item: 31 (Ref:Cert-70-448.4.4.10)

• Audit Login event class

• Audit Logout event class

• Query Begin event class

• Query End event class

• Command Begin event class

• Audit Object Permission Event class


Explanation: You must capture the following event classes on the production server to run the queries in the test environment:

• Audit Login event class

• Query Begin event class

• Query End event class

All of these event classes should be captured with all data columns.

The Audit Login event class captures information regarding the user who logged in and the settings specific to the session. This information will be required while establishing a connection on the test server. The Query Begin event class captures information such as the complete text of the query, the type of the query, the parameters, and the time the query was executed. The Query End event class captures the end status of the query. These details are required to replay the query in a test environment.

You do not need to capture the Audit Logout event class. This class captures disconnect events for user sessions. This information is not required while replaying a query in the test environment. You require this information when you are auditing user connections and want to have details of when a user issued the disconnect command.

You do not need to capture the Command Begin class. This class is used to capture details specific to a particular command, not a SQL query.

You do not need to capture the Audit Object Permission Event class. This class captures the events when access permissions for an object are changed. You do not require this information while replaying a query in the test environment.

You manage a Microsoft SQL Server Analysis Services (SSAS) database for your company. The network administrator reports that there might be unauthorized users who are accessing Analysis Services. You decide to monitor users connecting to Analysis Services by using SQL Server Profiler. Which event class should you monitor?


Answer:

Audit Login event class

Query Begin event class

Query End event class

Item: 32 (Ref:Cert-70-448.4.4.11)

• Audit Object Permission Event

• Audit Server Starts and Stops

• Audit Login

• Audit Logout

Answer:

Audit Login


Explanation: You should monitor the Audit Login event class of the Security Audit Event category because this event class keeps a record of all new connections that are made to the SQL Server instance on which SSAS is running. SSAS performance can be monitored using either SQL Server Profiler or Windows Performance Monitor (Perfmon.exe). To monitor using SQL Server Profiler, you first need to create traces in SQL Server Profiler and later save the trace information in either a SQL Server table or a trace file (.trc).

Windows Performance Monitor provides performance counters for applications, which can be used to implement monitoring. For SSAS, SQL Server Profiler can be used to identify issues such as slow-running MDX queries and generated errors, and to review and capture SSAS events. For SQL Server instances, SQL Server Profiler provides event class categories such as Security Audit Event, Database Event, Server Event, TSQL Event, and Stored Procedures Events. The Security Audit Event category is commonly used to monitor events such as Audit Add DB User Event, Audit Add Login to Server Role Event, Audit Add Member to DB Role, and Audit Add Role Event.

SQL Server Profiler provides various event categories that would allow you to create traces that capture events related to issues that you want to monitor. Some of the event categories are:

• Queries Events Event Category: Used to monitor performance of MDX queries using classes such as Query Begin, Query End, and Query Subcube.

• Errors and Warnings Event Category: Used to monitor all errors and warnings that are raised during tracing.

• Command Events Event Category: Used to monitor commands that are executed during tracing using the event classes Command Begin and Command End.

• Security Audit Event Category: Used to monitor security-related events during tracing using the event classes Audit Login, Audit Logout, Audit Backup/Restore Event, Audit Server Starts and Stops, and Audit Object Permission Event.

• Session Events Event Category: Used to monitor events that happen during a session.

You should not monitor the Audit Server Starts and Stops event class because this event class is used to monitor starts, stops, and pauses of SSAS. Using this event class, you will not be able to monitor users who are logging on to the SQL Server instance.

You should not monitor the event class Audit Object Permission Event because this event class is used to monitor changes that happen to permissions given to users on different objects. Using this event class, you will not be able to monitor users who are logging on to the SQL Server instance.

You should not monitor the event class Audit Logout Event because this event class is used to monitor users who have stopped an already established connection with the SQL Server instance. Using this event class, you will not be able to monitor users who are logging on to the SQL Server instance.

You are the database administrator for a company. You manage Microsoft SQL Server Analysis Services (SSAS) database for your company. Users report that MDX statements executed on SSAS are running slowly. You decide to implement a trace using SQL Server Profiler to monitor MDX statements being executed on the SSAS. Which event category should you use?

Item: 33 (Ref:Cert-70-448.4.4.12)

• Security Audit Event

• Errors and Warnings Event

• Queries Events Event

• Command Events Event

Answer:

Queries Events Event


Explanation: You should use the Queries Events Event category to monitor MDX statements that are executed on the SSAS. The Queries Events Event category provides event classes such as Query Begin, Query End, and Query Subcube to monitor MDX statements. The Query Begin event class records information about queries that have started during SQL Server Profiler tracing. The Query End class records information about queries that have ended during SQL Server Profiler tracing. The Query Subcube class records information about queries that use a distinct subcube during execution. Performance of SSAS can be monitored using either SQL Server Profiler or Windows Performance Monitor (Perfmon.exe). To monitor using SQL Server Profiler, you first need to create traces in SQL Server Profiler and later save the trace information in either a SQL Server table or a trace file (.trc).

Windows Performance Monitor is an application that provides performance counters which can be used for monitoring. For SSAS, SQL Server Profiler can be used to identify issues such as slow-running MDX queries and generated errors, and to review and capture SSAS events. For SQL Server instances, SQL Server Profiler provides event class categories such as Security Audit Event, Database Event, Server Event, TSQL Event, and Stored Procedures Events. The Security Audit Event category is commonly used to monitor events such as Audit Add DB User Event, Audit Add Login to Server Role Event, Audit Add Member to DB Role, and Audit Add Role Event.

SQL Server Profiler provides various event categories you can use to create traces that capture events related to issues that you want to monitor. Some of the event categories are:

• Queries Events Event Category: Used to monitor performance of MDX queries using classes such as Query Begin, Query End, and Query Subcube.

• Errors and Warnings Event Category: Used to monitor all errors and warnings that are raised during tracing.

• Command Events Event Category: Used to monitor commands that are executed during tracing using the event classes Command Begin and Command End.

• Security Audit Event Category: Used to monitor security-related events during tracing using the event classes Audit Login, Audit Logout, Audit Backup/Restore Event, Audit Server Starts and Stops, and Audit Object Permission Event.

• Session Events Event Category: Used to monitor events that happen during a session.

You should not use the Security Audit Event category because this event category is used to monitor events related to logins, SQL Server starts/stops/pauses, object permission changes, and backup/restore events. However, in the given scenario you are required to monitor slow-running MDX statements.

You should not use the Errors and Warnings Event category because this event category is used to monitor errors and warning generated during tracing. However, in the given scenario you are required to monitor slow-running MDX statements.

You should not use the Command Events Event category because this event category is used to monitor commands that are executed during tracing. However, in the given scenario you are required to monitor slow-running MDX statements.

You manage Microsoft SQL Server Analysis Services (SSAS) for your company. As per company policy, you are required to maintain a weekly log of SSAS server level activities and exceptions using a custom trace definition file. You decide to use the Analysis Server Properties dialog box to set properties of your SSAS. Which SSAS server properties should you use?

Item: 34 (Ref:Cert-70-448.4.4.13)

• FlightRecorder properties

• QueryLog properties

• ErrorLog properties

• Exception properties

Answer:

FlightRecorder properties

Explanation: You should use the FlightRecorder properties to maintain a weekly log of server-level activities and exceptions using a custom trace definition file. The Analysis Server Properties dialog box is used to enable logging at various levels in an SSAS instance. To enable logging at the server level and the query level, the Analysis Server Properties dialog box displays two log properties, FlightRecorder and QueryLog. FlightRecorder logging in SSAS allows you to log information about exceptions generated during processing and server-level exceptions. To enable FlightRecorder logging, you should set the Log\FlightRecorder\Enabled property to true.

There are several related properties for FlightRecorder that are displayed in the Analysis Server Properties dialog box only if the Show Advanced (All) Properties check box is selected. One of the properties under Log\FlightRecorder is FlightRecorder\TraceDefinitionFile, which specifies the custom trace definition file that configures the type of information to be logged. Other properties under Log\FlightRecorder are FileSizeMB, LogDurationSec, SnapshotDefinitionFile, and SnapshotFrequencySec.

You should not use the QueryLog properties because these properties are used to log information about queries that are executed on the SSAS database. The QueryLog properties cannot use a custom trace definition file. The QueryLog properties are used when you want to log information about SSAS queries in a SQL Server table for later analysis.

You should not use the ErrorLog properties because these properties are used to log only errors that happen during processing. The ErrorLog properties cannot use a custom trace definition file.

You should not use the Exception properties because these properties are used to log only server-level exceptions. The Exception properties cannot use a custom trace definition file, and Microsoft recommends against changing Exception properties without proper guidance.

You manage Microsoft SQL Server Analysis Services (SSAS) for your company. To enable logging information about queries that are executed on your company's SSAS database, you enable logging using the Analysis Server Properties dialog box. As per company policy, you need to ensure that only 1 out of every 20 queries executed on your SSAS database is logged. What should you do?

Item: 35 (Ref:Cert-70-448.4.4.14)

• Set the QueryLog\QueryLogFileSize property to 20.

• Set the QueryLog\QueryLogSampling property to 1.

• Set the QueryLog\QueryLogFileSize property to 1.

• Set the QueryLog\QueryLogSampling property to 20.

Answer:

Set the QueryLog\QueryLogSampling property to 20.

Explanation: You should set the QueryLog\QueryLogSampling property to 20. The Analysis Server Properties dialog box is used to enable logging at various levels in an SSAS instance. To enable logging at the server level and the query level, the dialog box displays the log properties FlightRecorder and QueryLog. The QueryLog properties log information about queries that are executed against the SSAS database. Several related QueryLog properties are displayed in the Analysis Server Properties dialog box only if the Show Advanced (All) Properties check box is selected. One of these is QueryLogSampling, which sets the ratio of sampled queries to unsampled queries. The default value for the QueryLogSampling property is 10, which ensures that 1 query out of every 10 is sampled. Therefore, you should set the QueryLogSampling property to 20 to ensure that only 1 out of every 20 queries executed against your SSAS database is logged. The following are other properties related to the QueryLog properties:

• QueryLog\QueryLogFileName: Specifies the name of the log file if you are not using a database table to store log information.

• QueryLog\QueryLogFileSize: Specifies the size of the log file. However, Microsoft recommends not changing the value of this property without expert guidance.

• QueryLog\QueryLogConnectionString: Specifies the connection string to the database in which log information will be stored.

• QueryLog\QueryLogTableName: Specifies the name of the database table in which log information will be stored.

• QueryLog\CreateQueryLogTable: Specifies whether to create a table to store log information.

You should not configure either QueryLog\QueryLogFileSize option because this property specifies the size of the log file and will not affect the query sampling rate. Moreover, Microsoft recommends not changing the value of this property without expert guidance.

You should not set the QueryLog\QueryLogSampling property to 1 because this would log every query. In the scenario, you do not need to log every query.

You manage Microsoft SQL Server Analysis Services (SSAS) for your company. You have created a measure group named Sales that contains multiple partitions. You need to ensure that aggregations created in the different partitions can be used later when the partitions are merged. What should you do? (Choose two. Each correct option presents a complete solution.)

Item: 36 (Ref:Cert-70-448.4.4.5)

☐ Create an aggregation design using the DesignAggregations XMLA element.

☐ Use the Aggregation Design Wizard to create an aggregation design.

☐ Merge the partitions, and then use the Aggregation Design Wizard to create an aggregation design.

☐ Use the AggregationPrefix property.

☐ Use the AggregateFunction property.

Answer:

Create an aggregation design using the DesignAggregations XMLA element.

Use the Aggregation Design Wizard to create an aggregation design.


Explanation: You should use either the DesignAggregations XMLA element or the Aggregation Design Wizard to create an aggregation design. When you need to ensure that different measure group partitions use the same aggregation structure, you should define an aggregation design. Using the same aggregation structure allows you to use aggregations even after the partitions are merged, or to use aggregations across partitions. Aggregations improve performance by pre-calculating and storing summarized data that may be retrieved later by user queries.

You should not merge the partitions and then use the Aggregation Design Wizard to create an aggregation design because such an aggregation design would be available only within the merged partition. The scenario requires that aggregations created in different partitions can be used later when the partitions are merged.

You should not use the AggregationPrefix property because it only specifies a prefix for the names of aggregation tables. Using the AggregationPrefix property will not ensure that aggregations created in the different partitions can be used later when the partitions are merged.

You should not use the AggregateFunction property because it defines the type of aggregation to be performed on a measure. Setting the AggregateFunction property will not ensure that aggregations created in the different partitions can be used later when the partitions are merged.
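For reference, a DesignAggregations command can be issued through XMLA. The sketch below shows the general shape as I recall it from the Analysis Services XMLA reference; all object IDs are hypothetical placeholders for this scenario's Sales measure group, and the element set should be verified before use:

```xml
<!-- Sketch of a DesignAggregations XMLA command; IDs are hypothetical. -->
<DesignAggregations xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>CompanyDB</DatabaseID>
    <CubeID>SalesCube</CubeID>
    <MeasureGroupID>Sales</MeasureGroupID>
    <PartitionID>Sales_2008</PartitionID>
  </Object>
  <Time>PT30S</Time>              <!-- stop designing after 30 seconds -->
  <Steps>3</Steps>                <!-- or after 3 design iterations -->
  <Optimization>30</Optimization> <!-- or at 30% estimated performance gain -->
  <Materialize>true</Materialize> <!-- save the resulting design to the partition -->
</DesignAggregations>
```

The same design can then be assigned to the other Sales partitions so they share one aggregation structure, which is what allows the aggregations to survive a later merge.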

You are the database administrator of your company. You are configuring the settings for a dimension named Dim1. You want to ensure that if any new key is encountered that cannot be looked up during the processing of the dimension, the record will be prevented from being processed with the object. You open the Dimension key errors tab in the Change Settings dialog box for Dim1. Which option should you configure on the Dimension key errors tab?

Item: 37 (Ref:Cert-70-448.4.5.1)

○ Key not found

○ Key error action

○ Duplicate key

○ Null key not allowed

Answer:

Key error action

Explanation: You should configure the Key error action option. The Key error action option allows you to specify an action that should be taken when processing encounters a new key that cannot be looked up. You can specify the following two actions for the Key error action option:

• Convert to unknown: This option collects the information for the record into the unknown member.

• Discard record: This option prevents the information for the record from being processed along with the object.

You should not configure the Key not found option, the Duplicate key option, or the Null key not allowed option because these options do not allow you to achieve the goal stated in this scenario. The Key not found option specifies what action should be taken if a key is not found when an object is processed. The Duplicate key option specifies what action should be taken when a duplicate key is found when an object is processed. The Null key not allowed option specifies what action should be taken when a null key is found, but not allowed, during the processing of an object. You can configure one of the following three actions for the Key not found option, the Duplicate key option, and the Null key not allowed option:

• Ignore error: Specifying this action ignores the error.

• Report and continue: Specifying this action reports the error and continues the processing operation.

• Report and stop: Specifying this action reports the error and stops the processing operation.
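The settings above correspond to an ErrorConfiguration block in the dimension's ASSL definition. The following is a minimal sketch matching this scenario's choice (Discard record); the element and value names are as I recall them from the Analysis Services Scripting Language reference, so verify them against a scripted dimension before use:

```xml
<!-- Sketch of an ASSL ErrorConfiguration for Dim1 (verify element names). -->
<ErrorConfiguration>
  <KeyErrorAction>DiscardRecord</KeyErrorAction>       <!-- drop the record -->
  <KeyNotFound>ReportAndContinue</KeyNotFound>
  <KeyDuplicate>IgnoreError</KeyDuplicate>
  <NullKeyNotAllowed>ReportAndStop</NullKeyNotAllowed>
</ErrorConfiguration>
```

Setting KeyErrorAction to ConvertToUnknown instead would route the record to the unknown member rather than discarding it.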

You are the person who creates reports with SQL Servers 2008 Report Services (SSRS) for the verigon.com domain. All domain users access the reports. Several of the reports have the same data source properties and execution properties. (Click the Exhibit(s) button to view the configurations.) You will be moving all logins, some databases, and all reports to another SQL Server 2008 server in the same domain. What should you do to simplify maintenance of the reports and ensure a smooth transition to the new server?

Data Sources Properties

Item: 38 (Ref:Cert-70-448.5.1.1)

○ Specify credentials for the new server on each report.

○ Run the reports from a cached temporary copy.

○ Specify a shared data source on each report.

○ Render each report from a report execution snapshot.

Answer:

Specify a shared data source on each report.


Execution Properties


Explanation: You should specify a shared data source on each report. A shared data source can simplify maintenance for multiple reports. If you move reports to a different server, you only have to change the configuration of the shared data source once for all reports, assuming all reports use the same data source. Using a shared data source will ensure a smooth transition if you have to move reports from one server to another server.

You should not specify credentials for the new server on each report. The new server is in the same domain, so you should be able to access reports on the new SQL Server with the same domain account that you used on the old SQL Server. The current configuration uses Windows Integrated Security. With Windows Integrated Security, the report server passes a security token for the user accessing the report to the server hosting the external data source, so the user is not prompted for credentials. Specifying credentials for the new server on each report would require you to configure settings on each report individually.

You should not run the reports from a cached temporary copy. This action will allow you to run reports from the old server temporarily. A cached temporary copy of the report will not help after the migration of the reports to the other SQL Server 2008 server. If there is information that has changed after the migration of the reports, a cached temporary copy will not contain the new information.

You should not render each report from a report execution snapshot. This action would cause reports to run from older data. A report snapshot is a report that contains data captured at a specific point in time. Rendering a report from a snapshot will not ensure a smooth transition. You should use a report snapshot to render a report if the report is based on queries that take a long time to run, or on queries that use data from a data source that you prefer no one access during certain hours.

You are the person who creates reports with SQL Servers 2008 Report Services (SSRS) for the verigon.com domain. The report server is running in SharePoint integrated mode. All domain users access the reports. Several of the reports have the same data source properties and execution properties. Due to exponential growth in the number of users in the verigon.com domain, all users, all databases, and all reports will be moved to another SQL Server 2008 server in the same domain, SQL55. SQL55 has Microsoft Windows SharePoint Services 3.0 installed. The reports will be deployed to a SharePoint library. The SharePoint site is http://sharepoint2.verigon.com and the library is the /Documents folder. What should you configure so that users will be able to access the reports after the migration? Choose two.

Item: 39 (Ref:Cert-70-448.5.1.2)

☐ Configure each report with a Shared Data Source.

☐ Configure each report with a Report Specific Data Source.

☐ Set the target server URL to http://sharepoint2.verigon.com.

☐ Set the target server URL to http://sharepoint2.verigon.com/Documents.

☐ Set the target server URL to http://SQL55.

Answer:

Configure each report with a Shared Data Source.

Set the target server URL to http://sharepoint2.verigon.com/Documents.

Explanation: You should configure each report with a Shared Data Source. A Shared Data Source can simplify maintenance for multiple reports: you only have to change the configuration of the shared data source once for all reports, provided all reports use the same data source.

You should not configure each report with a Report Specific Data Source. A Report Specific Data Source is embedded within the report, so this type of data source is not flexible. You should not use a Report Specific Data Source when moving or migrating reports to another location; you should use a Shared Data Source instead.

You should set the target server URL to http://sharepoint2.verigon.com/Documents. When you deploy reports to a SharePoint library, the URL must reflect the path of the library. In Microsoft Windows SharePoint Services 3.0, the library appears after the server name. In this scenario, the URL would be http://sharepoint2.verigon.com/Documents.

You should not use http://sharepoint2.verigon.com or http://SQL55 as the target server URL. These addresses do not reflect the path to the SharePoint library.

You design reports for the Globecomm Corporation using SQL Server 2008 Reporting Services (SSRS). You have created two reports for the Tax department that track the amount of sales tax paid. The first report, named LocalTax, displays sales tax paid to local municipalities, and the second report, named StateTax, displays sales tax paid to the state. Users complain of seeing blank pages when they print the LocalTax report to the PDF format. What should you do to fix the problem without decreasing the amount of information on the report?

Explanation: You should ensure that the body width plus the margins is smaller than the defined page width so that all the contents fit on a single page. You can specify several settings when rendering a report to the PDF output format:

• Page width and height.

• Margin size.

• Number of columns and column spacing.

• Number of pages to render.

• Resolution of the PDF.

You should not ensure that the margins plus the body width are no more than .25 cm wider than the defined page width. If the margins plus the body width exceed the defined page width by any amount, a blank page may be printed.

You should not delete all text boxes in the report. Deleting all text boxes will help to keep the body of the report from exceeding the size of the page, but it will omit data from the report. In the scenario, you did not want to decrease the amount of information on the report.

You should not change all matrix data regions to Autosize automatically. When you set report items such as matrix data regions and images to grow horizontally, these report items can cause the body of the report to grow. If the size of the report body exceeds the size of the page, a blank page will be produced. You should ensure that the body width plus the margins is less than the defined page width.
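The sizing rule above maps directly onto elements in the report's RDL. The fragment below is an illustrative sketch with invented measurements, assuming the RDL 2008 layout where the body width and the Page element are siblings under Report; the arithmetic to check is body Width + LeftMargin + RightMargin ≤ PageWidth:

```xml
<!-- Sketch of the relevant RDL elements; values are examples only. -->
<Report>
  <!-- ... body definition omitted ... -->
  <Width>18cm</Width>              <!-- report body width -->
  <Page>
    <PageWidth>21cm</PageWidth>    <!-- 18cm + 1cm + 1cm = 20cm, fits -->
    <PageHeight>29.7cm</PageHeight>
    <LeftMargin>1cm</LeftMargin>
    <RightMargin>1cm</RightMargin>
  </Page>
</Report>
```

If the body had been 20cm wide here, the total of 22cm would exceed PageWidth and every other printed PDF page could come out blank.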

Item: 40 (Ref:Cert-70-448.5.2.1)

nmlkj Ensure that the margin width plus the body width is no more than .25 cm wider than the defined page width.

nmlkji Ensure that the body width plus the margins is less than the defined page width.

nmlkj Delete all text boxes in the report.

nmlkj Change all matrix data regions to automatically Autosize.

Answer:

Ensure that the body width plus the margins is less than the defined page width.


You design reports for the Verigon Corporation using SQL Server 2008 Reporting Services (SSRS). You have created a report called BirthAnniversaryDay that tracks each employee's birthday and date-of-hire anniversary. When you render the report in PDF format, you see blank pages. To avoid the blank pages in the PDF format, you make sure that the body width plus the margins is less than the defined page width. When you render the report in the Excel format, the report is not broken up into pages. How can you have the report broken up into pages when rendering in the Excel format?

Explanation: You should use the PageBreakAtEnd and PageBreakAtStart properties to set page breaks. The rendering format of Excel does not support page sizes. You will have to set specific page breaks to break the report into multiple pages.

You should not use the InteractiveHeight and InteractiveWidth properties because they create soft page breaks. Soft page breaks can be used in interactive rendering formats such as HTML. In this scenario, you require hard page breaks.

You should not decrease the margin size. The body width plus the margins must be less than the defined page width to avoid blank pages in the PDF rendering format, and you can manipulate either the body width or the margins to achieve that. However, decreasing the margin size will not ensure that hard page breaks are created.

You should not change the specific number of pages to render. This change may prevent the report from printing all the required pages. Limiting the number of pages will not create hard page breaks.

Item: 41 (Ref:Cert-70-448.5.2.2)

○ Use the InteractiveHeight and InteractiveWidth properties to create a soft page break

○ Set a specific number of pages to render

○ Decrease the margin size

○ Use the PageBreakAtEnd and PageBreakAtStart properties to set page breaks

Answer:

Use the PageBreakAtEnd and PageBreakAtStart properties to set page breaks

You write reports for the Verigon Corporation using SQL Server 2008 Reporting Services. Your company has 2,000 employees, and you are creating reports that list all employees in all departments. You want the report header to display the last name of the first employee that appears on each page of your report, so that users can see where the list of names starts by glancing at the top of the page. What expression should you include in the text box of the report header?

Item: 42 (Ref:Cert-70-448.5.2.3)

○ =Globals.Lastname

○ =Last(ReportItems("LastName").Value)

○ =First(ReportItems("LastName").Value)

○ =Parameters!LastName.Value

Answer:

=First(ReportItems("LastName").Value)

Explanation: You should use the =First(ReportItems("LastName").Value) expression in the report header. This expression provides the first value of the LastName text box on the page. This expression should be used if you want to display the first listing of an employee's last name on a page, as in a directory listing.

You should not use the =Last(ReportItems("LastName").Value) expression in the report header. This expression provides the last value of the LastName text box on the page. In the scenario, you needed to display the last name of the first employee listed on the page in the report header, not the last employee listed on the page.

You should not use the =Globals.Lastname expression in the report header. This expression uses the Globals built-in collection, which represents global variables such as the Report Name and Page Number. The Globals built-in collection does not contain individual fields in the report, such as an employee's last name.

You should not use the =Parameters!LastName.Value expression in the report header. This expression refers to a parameter in a query parameter, filter expression, text box, or other area of the report by using the Parameters global collection. This expression should be used to filter a report.

You are creating reports for the Globecomm Corporation using SQL Server 2008 Reporting Services. You create a report called SalesTransactions that displays the sales for the last period. The report is rendered in HTML and viewed online. There is no business requirement to produce a printed copy. When users run the report, 500,000 rows are displayed. Users complain that the reports take a long time to render and the pages are very hard to read. What can you do to fix the problem?

Item: 43 (Ref:Cert-70-448.5.2.4)

○ Set the InteractiveHeight and InteractiveWidth to 0.

○ Set the InteractiveHeight and InteractiveWidth to 10.

○ Set the PageHeight and PageWidth to 10.

○ Set the PageHeight and PageWidth to 20.

Answer:

Set the InteractiveHeight and InteractiveWidth to 10.

Explanation: You should set the InteractiveHeight and InteractiveWidth to 10. The InteractiveHeight report property is used to define soft pages. Soft pages are used to get around the problem of long pages in an HTML report when there are no logical page breaks placed in the report. Soft pages help users see where one page ends and a new page begins, which makes the document more readable and helps prevent performance problems in Internet Explorer as well.

You should not set the InteractiveHeight and InteractiveWidth to 0 because these settings disable soft pages. You should set the InteractiveHeight and InteractiveWidth to a value above 0 that is suitable for your users.

You should not set the PageHeight and PageWidth to 10 or to 20. The PageHeight and PageWidth properties on the report controls are used to define where a physical page break occurs. In this scenario, the SalesTransactions report is displayed online and not printed, so setting the PageHeight and PageWidth would not affect how users view the report online. Physical page breaks are used when rendering reports that require a physical page, such as a PDF or Microsoft Word file.

You are using SQL Server 2008 Reporting Services to create reports for your company, an automobile supply chain. You create a report called AutomotivePartsSales that displays the sales for the last period. This report displays sales of over 500 different parts. The report is rendered in PDF format. Users complain that the page breaks are not set in the correct places when they print the report. What report controls should you change so that page breaks occur in the correct place when users print? (Choose two.)

Item: 44 (Ref:Cert-70-448.5.2.5)

☐ Change the PageHeight properties.

☐ Change the InteractiveHeight properties.

☐ Change the PageWidth properties.

☐ Change the InteractiveWidth properties.

☐ Change the PageBreakAtEnd properties.

Answer:

Change the PageHeight properties.

Change the PageWidth properties.

Explanation: You should change the PageHeight and PageWidth properties. The PageHeight and PageWidth properties on the report controls are used to define where a physical page break occurs. In this scenario, the AutomotivePartsSales report is rendered in PDF format and printed to hard copy. Physical page breaks are used when rendering reports that require a physical page, such as a PDF or Microsoft Word document. Setting the PageHeight and PageWidth would explicitly determine the correct place to put a physical page break.

You should not change the InteractiveHeight or InteractiveWidth properties. The InteractiveHeight and InteractiveWidth report properties are used to define soft pages. Soft pages are used to get around the problem of long pages in an HTML report when there are no logical page breaks placed in the report. Soft pages help users see where one page ends and a new page begins, which makes the document more readable and helps prevent performance problems in Internet Explorer as well. In this scenario, you want to change the physical page breaks, not the soft page breaks.

You should not change the PageBreakAtEnd properties. The PageBreakAtStart and PageBreakAtEnd properties are used to define logical page breaks. In this scenario, you want to change the physical page breaks, not the logical page breaks.

You create a report called Sales Orders using SQL Server 2008 Reporting Services. You want the report rows to be alternating colors. The odd rows should be a light grey color and the even rows should be white. What should you configure in your report to accomplish this? (Choose two.)

Item: 45 (Ref:Cert-70-448.5.2.6)

☐ Edit the Color property for each text box in the rows.

☐ Edit the Color property for the Tablix data region.

☐ Edit the BackgroundColor property for each text box in the rows.

☐ Add the following expression to the BackgroundColor property: =iif(RowNumber(Nothing) Mod 2, "LightGray", "White").

☐ Add the following expression to the Color property: =iif(RowNumber(Nothing) Mod 2, "LightGray", "White").

☐ Add the following expression to the BackgroundColor property: =iif(RowNumber(Nothing) Mod 3, "LightGray", "White").

☐ Add the following expression to the Color property: =iif(RowNumber(Nothing) Mod 3, "LightGray", "White").

Answer:

Edit the BackgroundColor property for each text box in the rows.

Add the following expression to the BackgroundColor property: =iif(RowNumber(Nothing) Mod 2, "LightGray", "White").

Explanation: To have the rows in the Sales Orders report display in alternating colors, you should create a conditional expression in the Expression dialog box and do the following:

• Edit the BackgroundColor property for each text box in the rows

• Add the following expression to the BackgroundColor property:

=iif(RowNumber(Nothing) Mod 2, "LightGray", "White")

The background color defaults to Transparent. To set the background color of the text, you should change the color to an expression. An expression allows you to add a conditional statement that alternates the colors.

The expression =iif(RowNumber(Nothing) Mod 2, "LightGray", "White") states that the first and subsequent odd-numbered rows will be light gray, and the second and subsequent even-numbered rows will be white. No matter how many rows you add to the report, they will continue to alternate between these colors.

You should not add the expression =iif(RowNumber(Nothing) Mod 3, "LightGray", "White") to either the BackgroundColor property or the Color property because it would change the color of every third row. To have every other row display an alternate color, you should include the Mod 2 value in the expression, not Mod 3.

You should not edit or add an expression to the Color property of each text box in the rows. The Color property setting determines the color of the text in the rows. This color defaults to black. If you wanted the font of the text to display in different colors for alternating rows, you would change the settings in the Color properties.

You should not edit the Color property of the Tablix data region. The Color property is used to display the color of the border of the table. This property will not affect the display color of the rows.

You are creating reports for the Verigon Corporation using SQL Server 2008 Reporting Services (SSRS). You want to view reports that are stored on the local file system. You want these reports to be rendered into a Web application. What should you configure?

Item: 46 (Ref:Cert-70-448.5.8.2)

○ Add a WinForms ReportViewer Control to the application.

○ Add a WebForms ReportViewer Control to the application.

○ Configure the ReportViewer Control in the app.config file.

○ Configure the ReportViewer Control in the global.asax file.

Answer:

Add a WebForms ReportViewer Control to the application.


Explanation: You should add a WebForms ReportViewer Control to the application. The application in the scenario is a Web application. You can use a ReportViewer control to render a report in an application. If the application is a Windows-based application developed in either Microsoft Visual C# or Microsoft Visual Basic, you should use a WinForms ReportViewer control. If the application is a Web application, you should use a WebForms ReportViewer control. To add the ReportViewer control to a Windows application, open the Windows application and drag the ReportViewer control from the Toolbox to the design surface of the Windows form.
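A WebForms ReportViewer in local processing mode (which renders .rdlc reports from the local file system, as this scenario requires) is declared in the page markup. The following is a minimal sketch; the report path is a hypothetical example, and the assembly version shown is the VS 2008-era ReportViewer, so match it to the version installed on your server:

```xml
<%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=9.0.0.0,
        Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
    Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %>

<!-- ProcessingMode="Local" renders an .rdlc from the local file system -->
<rsweb:ReportViewer ID="ReportViewer1" runat="server"
    ProcessingMode="Local" Width="100%" Height="600px">
  <LocalReport ReportPath="Reports\Sales.rdlc" />
</rsweb:ReportViewer>
```

Setting ProcessingMode to Remote instead would point the control at a report server rather than a local report file.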

You cannot configure the ReportViewer Control in the app.config file. The app.config file is used to configure a WinForms application, not a WebForms application. The settings in the app.config file affect the whole Windows application, not just a single WinForm. You should add a WebForms ReportViewer Control to the application.

You cannot configure the ReportViewer Control in the global.asax file. The global.asax file is used to declare and handle application and session level events and objects.

You are planning the deployment of the SQL Server 2008 Database Engine and SQL Server 2008 Reporting Services (SSRS) for the Dreamsuites Corporation. You want to create a new role that does the following:

• Publish reports from Report Designer

• Add reports to the report server

• Publish reports that use shared data sources and external files

• Create a folder as part of the publishing process

What are the minimum tasks that you should assign to this role? (Choose four.)

Explanation: You should assign the following four tasks:

• Manage reports

• Manage data sources

• Manage resources

• Manage folders

Item: 47 (Ref:Cert-70-448.6.3.3)

☐ Create linked reports

☐ Manage reports

☐ Manage data sources

☐ Manage resources

☐ Manage folders

☐ Consume reports

☐ Manage models

Answer:

Manage reports

Manage data sources

Manage resources

Manage folders


The "Manage reports" task allows you to publish reports from Report Designer and add the reports to the report server. This task allows you to create and delete reports as well as modify report properties.

You should add the "Manage data sources" and "Manage resources" tasks to publish reports that use shared data sources and external files. The Manage data sources task allows you to create and delete shared data source items as well as modify the properties of the data source. The Manage resources task allows you to create, modify and delete resources, as well as modify the properties of the resource.

You should add the "Manage folders" task to be able to create a folder as part of the publishing process. This task allows you to create, view, and delete folders as well as modify the properties of a folder.

You should not add the "Create linked reports" task. This task allows you to create linked reports and publish them to the report server folder. The scenario did not require you to add linked reports to the server. A linked report is an existing report that may have different properties or parameter values. Linked reports are created in Report Manager, not Report Designer.

You should not add the "Consume reports" task. This task only allows you to read report definitions. This task does not meet the requirements of the scenario.

You should not add the "Manage models" task. This task allows you to create, view, and delete models as well as modify the properties of a model. This task does not meet the requirements.

You are the administrator for the SQL Server 2008 Reporting Services (SSRS). You want to configure the report server to use Windows integrated security. You want to support both NTLM and Kerberos authentication protocols. You need to add the following XML structure to a configuration file:

Item: 48 (Ref:Cert-70-448.6.3.5)


<AuthenticationTypes>
   <RSWindowsNegotiate />
   <RSWindowsKerberos />
   <RSWindowsNTLM />
</AuthenticationTypes>

To which configuration file should you add this XML structure so that your report server will support Windows integrated security?

Explanation: You should add the XML structure to the RSReportServer.config file. This file stores settings used by Report Manager and the Report Server Web service. Some of the settings include the connection string to the database, the connection type, the identifier for the report instance, and the authentication level. You can use the RSReportServer.config file to specify the authentication types accepted by the report server. Authentication can be set to RSWindowsNegotiate, RSWindowsKerberos, or RSWindowsNTLM. You can specify the NTLM and Kerberos security packages used in Windows integrated security by adding the following to the RSReportServer.config file:

<AuthenticationTypes>

<RSWindowsNegotiate />

<RSWindowsKerberos />

<RSWindowsNTLM />

</AuthenticationTypes>

You should not add the XML structure to the Web.config for the Report Server Web service. This file includes only settings that are required by an ASP.NET application. The file is located in the \installation directory\Reporting Services\ReportServer folder. This file does not contain authentication settings for the report server.

You should not add the XML structure to the Web.config for Report Manager. This file includes only settings that are required by an ASP.NET application. The file is located in the \installation directory\Reporting Services\ReportManager folder. This file does not contain authentication settings for the report server.

You should not add the XML structure to the RSSrvPolicy.config file. This file is a configuration policy for the report server that includes security policies that affect report expressions and custom assemblies once a report has been deployed. This file does not contain authentication settings for the Report Server.

You are the administrator for SQL Server 2008 Reporting Services (SSRS) in the verigon.com domain. You want to configure SSRS to allow users to receive reports in their electronic mailboxes. You are using an SMTP server as a forwarder that uses e-mail accounts that are different from the domain user accounts. After implementation, users complain that reports are not received in their inboxes. What should you configure in the RSReportServer.config file to allow report delivery from the SMTP server?

Web.config for the Report Server Web service

Web.config for Report Manager

RSReportServer.config

RSSrvPolicy.config

Answer:

RSReportServer.config

Item: 49 (Ref:Cert-70-448.6.4.1)


Explanation: You should set SendEmailToUserAlias to False. You can edit the RSReportServer.config file to create a workaround for an SMTP Server that is using a different account than the domain user account. You should set the SendEmailToUserAlias to False and the DefaultHostName to the IP address of the SMTP server or forwarder.

When the SendEmailToUserAlias is set to False, any e-mail address can be specified. If the value is set to True, the e-mail address of the user creating the subscription is used. This will cause an error if the SMTP Server uses e-mail accounts that are different from the domain accounts.

Fundamentally, you can configure a report server for e-mail delivery if you have an SMTP server or gateway, an account that has permission to send e-mail from the SMTP server, and the Send As permission assigned to the Report Server service for the SMTP server.

If you need to configure a remote SMTP service for the report server, you will need to verify several settings in the RSReportServer.config file.

- The <UrlRoot> setting must be set to the report server URL address.
- The <SMTPServer> setting must be set to the IP address or FQDN of the SMTP server.
- The <SendUsing> setting must be set to 2.
- The <From> field must have the name of an account that has permissions to send e-mail from the SMTP server.
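These settings live in the e-mail delivery extension section of RSReportServer.config. A minimal sketch of the relevant elements might look like the following; the server address, account name, and exact surrounding elements are illustrative assumptions, not taken from the scenario:

```xml
<RSEmailDPConfiguration>
   <!-- IP address or FQDN of the SMTP server or forwarder -->
   <SMTPServer>smtp.verigon.com</SMTPServer>
   <!-- 2 = use a remote SMTP service -->
   <SendUsing>2</SendUsing>
   <!-- account with permission to send e-mail from the SMTP server -->
   <From>reports@verigon.com</From>
   <!-- False = any e-mail address can be specified on a subscription -->
   <SendEmailToUserAlias>False</SendEmailToUserAlias>
   <DefaultHostName>smtp.verigon.com</DefaultHostName>
</RSEmailDPConfiguration>
```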

You should not set the SendEmailToUserAlias to True. If the value is set to True, the e-mail address of the user creating the subscription is used. When the SendEmailToUserAlias is set to False, any e-mail address can be specified.

You should not set the authentication type to RSWindowsNegotiate or to RSWindowsKerberos. These values are used to configure the authentication for the report server. These values are not used to configure the SMTP settings of the report server.

You are the administrator for the SQL Server 2008 Reporting Services (SSRS) in the metroil.com domain. You want to configure SSRS to allow users to receive reports in their electronic mailboxes. You want to configure a remote SMTP service for your report server. The Report Server Windows service is given the Send As permission to the SMTP server. You edit the RSReportServer.config file and perform the following actions:

- Set the <UrlRoot> to the report server URL address.
- Set the <SMTPServer> to the IP address of the SMTP server.
- Set the <From> to an account that has permissions to send e-mail from the SMTP server.

What else must you set in the RSReportServer.config file to configure a remote SMTP service for the report server?

Set SendEmailToUserAlias to False.

Set SendEmailToUserAlias to True.

Set the authentication type to RSWindowsNegotiate.

Set the authentication type to RSWindowsKerberos.

Answer:

Set SendEmailToUserAlias to False.

Item: 50 (Ref:Cert-70-448.6.4.2)

Set <SendUsing> to 1.


Explanation: You should set <SendUsing> to 2. The value of 2 indicates that the report server is configured to use a remote SMTP service. If <SendUsing> is set to any value other than 2, the report server will not be configured to use a remote SMTP service.

The value of 1 indicates that the report server is configured to use a local SMTP service.

The values 3 and 4 are not valid.

To configure a remote SMTP service for the report server, you need to verify the following settings in the RSReportServer.config file:

- The <UrlRoot> setting must be set to the report server URL address.
- The <SMTPServer> setting must be set to the IP address or FQDN of the SMTP server.
- The <SendUsing> setting must be set to 2.
- The <From> field must have the name of an account that has permissions to send e-mail from the SMTP server.

You are the administrator for the SQL Server 2008 Reporting Services (SSRS) in the metroil.com domain. You plan to move the report server databases to another computer. You have already performed a backup of all the databases on the source computer. How should you move the report server databases to the target computer?

Explanation: You should restore the reportserver and reportservertempdb databases to the new target computer. When moving databases, there are three possible techniques: you can use the backup and restore method, use the attach and detach method, or directly copy the databases and delete the source afterward. The backup and restore method is a better choice than copying the databases because if you use the Copy Database Wizard to move the databases, the permission settings are not preserved.

Set <SendUsing> to 2.

Set <SendUsing> to 3.

Set <SendUsing> to 4.

Answer:

Set <SendUsing> to 2.

Item: 51 (Ref:Cert-70-448.6.5.1)

Use the Copy Database Wizard to copy the MSDB, reportserver, and reportservertempdb databases to the new target computer.

Use the Copy Database Wizard to copy the reportserver and reportservertempdb databases to the new target computer.

Restore the MSDB, reportserver, and reportservertempdb databases to the new target computer.

Restore the reportserver and reportservertempdb databases to the new target computer.

Answer:

Restore the reportserver and reportservertempdb databases to the new target computer.


If you use the attach and detach method, you must take the report server offline while you detach the reportserver and reportservertempdb databases. You do not have to take the report server offline to back up the reportserver and reportservertempdb databases.

You should not restore the MSDB, reportserver and reportservertempdb databases to the new target computer. You should restore the reportserver and reportservertempdb databases, but not the MSDB database. The MSDB database contains information on jobs, schedules, and backup histories of the source server. This database does not need to be restored to the target server.

You should not use the Copy Database Wizard to move the reportserver and reportservertempdb databases to the new target computer. If you use the Copy Database Wizard to move the databases, the permission settings are not preserved.
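The backup half of the backup and restore method can be sketched in T-SQL as follows. The file path is illustrative; appending the log backup to the same file makes the log the second backup set in that file, which is what a later RESTORE LOG ... FILE=2 would read:

```sql
-- Back up the report server database (first backup set in the file)
BACKUP DATABASE ReportServer
TO DISK = 'C:\ReportServerData.bak';
GO

-- Append the transaction log backup (second backup set, FILE=2 on restore)
BACKUP LOG ReportServer
TO DISK = 'C:\ReportServerData.bak';
GO
```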

You are the administrator for the SQL Server 2008 Reporting Services (SSRS) in the metroil.com domain. You have performed a backup of all the databases on your SQL servers. The SQL Server computer that contains all the reports for the company has experienced a disk failure. You plan to move the databases that contain reports to another computer. What should you do? (Choose two.)

Explanation: You should restore the reportserver and reportservertempdb databases and their logs to the new target computer. You can use the following to restore the report server database to a new instance:

RESTORE DATABASE ReportServer

FROM DISK='C:\ReportServerData.bak'

WITH NORECOVERY,

MOVE 'ReportServer' TO
'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Data\ReportServer.mdf',

MOVE 'ReportServer_log' TO
'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Data\ReportServer_Log.ldf';

GO

You should also restore the log file for the reportserver database. You can use the following to restore the log for the reportserver database:

RESTORE LOG ReportServer

FROM DISK='C:\ReportServerData.bak'

Item: 52 (Ref:Cert-70-448.6.5.2)

Restore the MSDB database to the new computer.

Restore the reportserver database to the new computer.

Restore the reportservertempdb database to the new computer.

Restore the Model database to the new computer.

Restore the Tempdb to the new computer.

Answer:

Restore the reportserver database to the new computer.

Restore the reportservertempdb database to the new computer.


WITH NORECOVERY, FILE=2,

MOVE 'ReportServer' TO
'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Data\ReportServer.mdf',

MOVE 'ReportServer_log' TO
'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Data\ReportServer_Log.ldf';

GO

You also need to restore the reportservertempdb database and its log file along with the reportserver database and log. You can use the following statements to restore the reportservertempdb database on the new server.

RESTORE DATABASE ReportServerTempdb

FROM DISK='C:\ReportServerTempDBData.bak'

WITH NORECOVERY,

MOVE 'ReportServerTempDB' TO
'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Data\ReportServerTempDB.mdf',

MOVE 'ReportServerTempDB_log' TO
'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Data\ReportServerTempDB_Log.ldf';

GO

You can use the following statements to restore the reportservertempdb database log on the new server.

RESTORE LOG ReportServerTempdb

FROM DISK='C:\ReportServerTempDBData.bak'

WITH NORECOVERY, FILE=2,

MOVE 'ReportServerTempDB' TO
'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Data\ReportServerTempDB.mdf',

MOVE 'ReportServerTempDB_log' TO
'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Data\ReportServerTempDB_Log.ldf';

GO

After you have restored the reportserver and reportservertempdb databases to the new target computer, you must use the WITH RECOVERY keyword to roll back any uncommitted transactions and roll forward any committed transactions in both databases.

RESTORE DATABASE ReportServer

WITH RECOVERY

GO

RESTORE DATABASE ReportServerTempDB

WITH RECOVERY

GO

You should not restore the MSDB database. The MSDB database contains information on jobs, schedules, and backup histories of the source server. This database does not need to be restored to the target server.

You should not restore the Model database. The Model database is used as a template database to create new databases in the server. The target server will have its own Model database. This database does not need to be restored to the target server.

You should not restore the Tempdb database to the new computer. This database is used to store temporary queries and temporary tables. This information does not need to be restored on the target server. The target server will have its own Tempdb database.


You are the administrator for the SQL Server 2008 Reporting Services (SSRS) in your company. Your company, Verigon, was acquired by another company. The supervisors of the new company, Globecomm, require you to rename the computer that hosts the report server to adhere to the new computer naming policies. After renaming the report server, you must restore a copy of the encryption key that is stored in the rsdbkey.snk file. You have copied the file to a USB drive. What must you type at the command prompt to restore the key?

Explanation: You should type rskeymgmt -a -f d:\rsdbkey.snk -pP@ssw0rd to restore the key. The rskeymgmt utility can be used to restore the key. The -a switch is used to apply the key to the new report server instance after the computer has been renamed. The -f switch is used to specify the location of the key that you are restoring. The -p switch is used to specify the password that was used to back up the key.

You must restore a backup copy of the encryption key when any of the following situations occur:

- You have to recover a report server installation because of hardware failure.
- You have the report server use a different report server database.
- You rename the report server.
- You change the report server's Windows service account name.
- You reset the password of the report server's Windows service account.
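Taken together, the backup and restore sides of the key-management workflow use the -e and -a switches. The drive letter and password below come from the scenario; treat them as examples:

```
rem Back up the symmetric key before renaming the computer
rskeymgmt -e -f d:\rsdbkey.snk -pP@ssw0rd

rem After the rename, apply the backed-up key to the report server instance
rskeymgmt -a -f d:\rsdbkey.snk -pP@ssw0rd
```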

You should not type rskeymgmt -s -f d:\rsdbkey.snk -pP@ssw0rd to restore the key. The -s switch is used to reencrypt the secure information using a new key. In this scenario you must restore the key, not reencrypt secure information with a new key.

You should not type rskeymgmt -e -f d:\rsdbkey.snk -pP@ssw0rd to restore the key. The -e switch is used to create a backup copy of the report server encryption key. In this scenario you must restore the key, not back up the key.

You should not type rskeymgmt -j -f d:\rsdbkey.snk -pP@ssw0rd to restore the key. The -j switch is used to join a remote machine to the same scale-out deployment as the local machine. In this scenario, you must restore the key, not join the computer to another deployment.

You are the administrator of the SQL Server 2008 Reporting Services (SSRS) in your company. The processor on the report server has failed. You have made backups of the report server and have a copy of the encryption key that is stored in the rsdbkey.snk file. You want to move the report server to a faster and more reliable computer that is not prone to hardware failures. What must you type to restore the encryption key on the report server after the move?

Item: 53 (Ref:Cert-70-448.6.5.3)

rskeymgmt -s -f d:\rsdbkey.snk -pP@ssw0rd

rskeymgmt -a -f d:\rsdbkey.snk -pP@ssw0rd

rskeymgmt -e -f d:\rsdbkey.snk -pP@ssw0rd

rskeymgmt -j -f d:\rsdbkey.snk -pP@ssw0rd

Answer:

rskeymgmt -a -f d:\rsdbkey.snk -pP@ssw0rd

Item: 54 (Ref:Cert-70-448.6.5.5)

Explanation: You should use rsconfigtool.exe to restore the encryption key on the report server after a hardware failure. This executable file is the Reporting Services Configuration Manager tool. This tool can be used to back up encryption keys, restore encryption keys, replace the encryption key with a newer version, and delete encrypted content such as credentials, connection strings, and encrypted values. In the scenario, you need to restore the encryption keys. You must restore a backup copy of the encryption key when any of the following occurs:

- You have to recover a report server installation because of hardware failure.
- You have the report server use a different report server database.
- You rename the report server.
- You change the report server's Windows service account name.
- You reset the password of the report server's Windows service account.

You can use the rskeymgmt utility to restore the backup encryption keys. To restore the encryption key, you must use the -a switch to apply the key to the new report server instance. The -f switch is used to specify the location of the key that you are restoring. The -p switch is used to specify the password that was used to back up the key.

You cannot use rskeymgmt -s -f d:\rsdbkey.snk -pP@ssw0rd to restore the backup encryption keys. The -s switch is used to reencrypt the secure information using a new key. In this scenario you must restore the key, not reencrypt secure information with a new key.

You cannot use rskeymgmt -e -f d:\rsdbkey.snk -pP@ssw0rd to restore the encryption key. The -e switch is used to create a backup copy of the report server encryption key. In this scenario, you must restore the key, not back up the key.

You cannot use rs -i d:\rsdbkey.snk -pP@ssw0rd to restore the key. The rs.exe utility is used to execute a script file's contents against a specific report server. You cannot use this utility to restore the encryption key.

rskeymgmt -s -f d:\rsdbkey.snk -pP@ssw0rd

rskeymgmt -e -f d:\rsdbkey.snk -pP@ssw0rd

rsconfigtool.exe

rs -i d:\rsdbkey.snk -pP@ssw0rd

Answer:

rsconfigtool.exe


You are the administrator for the SQL Server 2008 Reporting Services (SSRS) in your company. The report server failed 42 days after deployment. You investigate the problem and discover that the password for the account that you associated with the SQL Server Reporting Services service has since expired because of the password policies set on the domain. After changing the password on the account and configuring the password to never expire, you restart the report server. The report server fails to start. What must you do next?

Item: 55 (Ref:Cert-70-448.6.5.6)

Use rs.exe to restore the encryption key.

Use rs.exe to reinitialize the report server.

Use rsconfigtool.exe to restore the encryption key.

Use rsconfigtool.exe to reinitialize the report server.

Answer:

Use rsconfigtool.exe to restore the encryption key.


Explanation: You should use rsconfigtool.exe to restore the encryption key on the report server after resetting the password of the report server's Windows service account. This executable file is the Reporting Services Configuration Manager tool. This tool can be used to back up encryption keys, restore encryption keys, replace the encryption key with a newer version, and delete encrypted content such as credentials, connection strings, and encrypted values.

In the scenario, you need to restore the encryption keys. You must restore a backup copy of the encryption key when any of the following occurs:

- You have to recover a report server installation because of hardware failure.
- You have the report server use a different report server database.
- You rename the report server.
- You change the report server's Windows service account name.
- You reset the password of the report server's Windows service account.

You should not reinitialize the report server. In this scenario, the password of the report server's Windows service account has been reset. You must restore the encryption keys before you can reinitialize the report server. You cannot use the rsconfigtool.exe utility or the rs.exe utility to reinitialize the server. The rs.exe utility is used to execute a script file's contents against a specific report server. You cannot use this utility to restore the encryption key.
