Posts Tagged Visual Studio
Using PowerShell to Generate TFS Changed File List for Build Artifact Delivery
Posted by Gary A. Stafford in PowerShell Scripting, Software Development, Team Foundation Server (TFS) Development on August 10, 2012
Delivering Artifacts for Deployment
In many enterprise-software development environments, delivering release-ready code to an Operations or Release team for deployment, as opposed to deploying the code directly, is common practice. A developer ‘kicks off’ a build of a project using a build automation system like Hudson, Jenkins, CruiseControl, TeamCity, or Bamboo. The result is a set of build artifacts that are delivered and deployed as part of the release cycle. Build artifacts are logical collections of deployable code and other files, which form the application. Artifacts are often segregated by type, such as database, web code, services, configuration files, and so forth. Each type of artifact may require a different deployment method.
There are two approaches to delivering artifacts for deployment. Some organizations deliver all the artifacts from each build for deployment. Alternatively, others follow a partial delivery and release model, delivering only the artifacts that contain changes since the last delivery. The entire application is not re-deployed, only what changed. This is considered by many to be a quicker and safer method of software release.
The challenge of partial delivery is knowing precisely what changed since the last delivery. Almost all source control systems keep a history of changes (‘changesets’). Based on the time of the last build, a developer can check the history and decide which artifacts to deliver based on the changes. If you have daily releases, changes between deliveries are likely few. However, if your development cycle spans a few weeks, or you have multiple developers working on the same project, there will likely be many changesets to examine. Figuring out which artifacts to deliver is tedious and error-prone. Missing one small change out of hundreds can jeopardize a whole release. Having to perform this laborious task every few weeks myself, I was eager to automate the process!
Microsoft Team Foundation PowerShell Snap-In
The solution is, of course, PowerShell and the Microsoft Team Foundation PowerShell Snap-In. Using these two tools, I was able to write a very simple script that does the work for me. If you are unfamiliar with the Team Foundation Server (TFS) snap-in, review my earlier post, Automating Task Creation in Team Foundation Server with PowerShell. That post discusses the snap-in and explains how to install it on your Windows computer.
The PowerShell script begins with a series of variables. The first two are based on your specific TFS environment. Variables include:
- Team Project Collection path;
- Source location within the collection to search for changes;
- Date and time range to search for changes;
- Location of text file that will contain a list of changed files;
- Option to open the text file when the script is complete.
Given the Team Project Collection path, source location, and date range, the script returns a sorted list of all files that changed. Making sure the list is distinct is important. A file may change many times over the course of a development cycle. You only want to know whether the file changed; how many times it changed, or when, is irrelevant. The file list is saved to a text file, a manifest, for review. The values of the script’s variables are also included in the manifest.
Excluding Certain Changes
Testing the initial script, I found it returned too much information. There were three main reasons:
- Unrelated Changes – Not every file that changes within the selected location is directly associated with the project being deployed. There may be multiple, related projects in that location’s subdirectories (child nodes).
- Secondary Project Files – Not every file that changes is deployed. For example, build definition files, database publishing profiles, and manual test documents are important parts of any project, but are not directly part of the applications being deployed. These are often files used by the build system or required by TFS.
- Certain Change Types – Changes in TFS include several types (Microsoft.TeamFoundation.VersionControl.Client.ChangeType) that you may not want to include on the list. For example, you may not care about deleted or renamed files. See the post script for how to get a list of all ChangeTypes using PowerShell.
To solve the problem of too much information, we can filter the results of the Get-TfsItemHistory command within its pipeline, using the Where-Object and Select-Object commands. Using the Where-Object command’s -notlike comparison operator, which accepts wildcards, we exclude certain ChangeTypes, we exclude files by name and size, and we exclude groups of files based on file path. You will obviously need to change the example’s exclusions to meet your own project’s needs.
Below is the PowerShell script, along with some sample contents of the file change manifest text file, based on an earlier post’s SSDT database Solution:
###############################################################
#
# Search for all unique file changes in TFS
# for a given date/time range and collection location.
# Write results to a manifest file.
#
# Author: Gary A. Stafford
# Created: 2012-04-18
# Revised: 2012-08-11
#
###############################################################

# Clear Output Pane
clear

# Enforce coding rules
Set-StrictMode -version 2.0

# Loads Windows PowerShell snap-in if not already loaded
if ( (Get-PSSnapin -Name Microsoft.TeamFoundation.PowerShell -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PSSnapin Microsoft.TeamFoundation.PowerShell
}

# Variables - CHECK EACH TIME
[string] $tfsCollectionPath = "http://tfs2010/tfsCollection"
[string] $locationToSearch = "$/Development/AdventureWorks/"
[string] $outputFile = "c:\ChangesToTFS.txt"
[string] $dateRange = "D2012-07-08 00:00:00Z~"
[bool] $openOutputFile = $true # Accepts $false or $true

# For a date/time range: 'D2012-08-06 00:00:00Z~D2012-08-09 23:59:59Z'
# For everything including and after a date/time: 'D2012-07-21 00:00:00Z~'

[Microsoft.TeamFoundation.Client.TfsTeamProjectCollection] $tfs = get-tfsserver $tfsCollectionPath

# Add informational header to file manifest
[string] $outputHeader = "TFS Collection: " + $tfsCollectionPath + "`r`n" +
    "Source Location: " + $locationToSearch + "`r`n" +
    "Date Range: " + $dateRange + "`r`n" +
    "Created: " + (Get-Date).ToString() + "`r`n" +
    "======================================================================"

$outputHeader | Out-File $outputFile

Get-TfsItemHistory $locationToSearch -Server $tfs -Version $dateRange `
    -Recurse -IncludeItems |
    Select-Object -Expand "Changes" |
    Where-Object { $_.ChangeType -notlike '*Delete*'} |
    Where-Object { $_.ChangeType -notlike '*Rename*'} |
    Select-Object -Expand "Item" |
    Where-Object { $_.ContentLength -gt 0} |
    Where-Object { $_.ServerItem -notlike '*/sql/*' } |
    Where-Object { $_.ServerItem -notlike '*/documentation/*' } |
    Where-Object { $_.ServerItem -notlike '*/buildtargets/*' } |
    Where-Object { $_.ServerItem -notlike 'build.xml'} |
    Where-Object { $_.ServerItem -notlike '*.proj'} |
    Where-Object { $_.ServerItem -notlike '*.publish.xml'} |
    Select -Unique ServerItem |
    Sort ServerItem |
    Format-Table -Property * -AutoSize |
    Out-String -Width 4096 |
    Out-File $outputFile -append

Write-Host `n`r**** Script complete and file written ****

If ($openOutputFile) { Invoke-Item $outputFile }
Contents of the file change manifest text file, based on my previous post’s SSDT database Visual Studio Solution:
TFS Collection: http://tfs2010/tfsCollection
Source Location: $/Development/AdventureWorks2008/
Date Range: D2012-08-02 00:00:00Z~
Created: 8/10/2012 10:28:46 AM
======================================================================

ServerItem
----------
$/Development/AdventureWorks2008/AdventureWorks2008.sln
$/Development/AdventureWorks2008/Development/Development.sln
$/Development/AdventureWorks2008/Development/Development.sqlproj
$/Development/AdventureWorks2008/Development/Schema Objects/Server Level Objects/Security/Logins/aw_dev.login.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/AdventureWorksSSDT.sqlproj
$/Development/AdventureWorks2008/AdventureWorksSSDT/dbo/Stored Procedures/uspGetBillOfMaterials.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/dbo/Stored Procedures/uspLogError.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/HumanResources/Tables/EmployeePayHistory.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/Purchasing/Tables/ShipMethod.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/Purchasing/Views/vVendorWithContacts.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/Security/aw_dev.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/Security/jenkins.sql
Conclusion
This script saves considerable time, especially for longer release cycles, and eliminates potential errors from missing changes. To take this script a step further, I would like to have it determine which artifacts to deliver based on the files that changed, not leaving it up to the developer to figure out. As a further step, I would also have it generate an artifact manifest that would be passed to the build. The build would use the manifest to deliver those artifacts to the release team. That would make it a true end-to-end solution. Challenge accepted…
Post Script, PowerShell Enumeration
Suppose you couldn’t find a resource on the web that listed all the ChangeType values. How would you use PowerShell to get a list of all the enumerated ChangeType values (Microsoft.TeamFoundation.VersionControl.Client.ChangeType)? It only takes one line of code, once the TFS snap-in and assembly are loaded.
# Loads Windows PowerShell snap-in if not already loaded
if ( (Get-PSSnapin -Name Microsoft.TeamFoundation.PowerShell -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PSSnapin Microsoft.TeamFoundation.PowerShell
}

[Void][Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.VersionControl.Client")

[Enum]::GetNames( [Microsoft.TeamFoundation.VersionControl.Client.ChangeType] )
Convert VS 2010 Database Project to SSDT and Automate Publishing with Jenkins – Part 3/3
Posted by Gary A. Stafford in .NET Development, Software Development, SQL Server Development, Team Foundation Server (TFS) Development on August 8, 2012
Objectives of 3-Part Series:
Part I: Setting up the Example Database and Visual Studio Projects
- Set up and configure a new instance of SQL Server 2008 R2
- Set up and configure a copy of Microsoft’s Adventure Works database
- Create and configure both a Visual Studio 2010 server project and Visual Studio 2010 database project
- Test the project’s ability to deploy changes to the database
Part II: Converting the Visual Studio 2010 Database and Server Projects to SSDT
- Convert the Adventure Works Visual Studio 2010 database and server projects to SSDT projects
- Create a second Solution configuration and SSDT publish profile for an additional database environment
- Test the converted database project’s ability to publish changes to multiple database environments
Part III: Automate the Building and Publishing of the SSDT Database Project Using Jenkins
- Automate the build and delivery of a sql change script artifact, for any database environment, to a designated release location using a parameterized build.
- Automate the build and publishing of the SSDT project’s changes directly to any database environment using a parameterized build.
Part III: Automate the Building and Publishing of the SSDT Database Project Using Jenkins
In this last post, we will use Jenkins to automate the publishing of changes from the Adventure Works SSDT database project to the Adventure Works database. Jenkins, formerly Hudson, is an industry-standard, Java-based, open-source continuous integration server.
Jenkins
If you are unfamiliar with Jenkins, I recommend an earlier post, Automated Deployment to GlassFish Using Jenkins and Ant. That post goes into detail on Jenkins and its associated plug-in architecture. Jenkins’ website provides excellent resources for installing and configuring Jenkins on Windows. For this post, I’ll assume that you have Jenkins installed and running as a Windows Service.
The latest available version of Jenkins, at the time of this post is 1.476. To follow along with the post, you will need to install and configure the following (4) plug-ins:
- Apache Ant Plug-in
- Jenkins MSBuild Plug-in
- Jenkins Artifact Deployer Plug-in
- Jenkins Email Extension Plug-in
User Authentication
In the first two posts, we connected to the Adventure Works database with the ‘aw_dev’ SQL Server user account, using SQL Authentication. This account was used to perform schema comparisons and publish changes from the Visual Studio project. Although SQL Authentication is an acceptable means of accessing SQL Server, Windows Authentication is more common in corporate and enterprise software environments, especially where Microsoft’s Active Directory is used. Windows Authentication with Active Directory (AD) provides an easier, centralized user account security model. It is considered more secure.
With Windows Authentication, we associate a SQL Server Login with an existing Windows user account. The account may be local to the SQL Server or part of an Active Directory domain. For this post, instead of using SQL Authentication and passing the ‘aw_dev’ user’s credentials to SQL Server in the database project’s connection strings, we will switch to Windows Authentication. Using Windows Authentication will allow Jenkins to connect directly to SQL Server.
Setting up the Jenkins Windows User Account
Let’s outline the process of creating a Jenkins Windows user account and using Windows Authentication with our Adventure Works project:
- Create a new ‘jenkins’ Windows user account.
- Change the Jenkins Windows service Log On account to the ‘jenkins’ Windows account.
- Create a new ‘jenkins’ SQL Server Login, associated with the ‘jenkins’ Windows user account, using Windows Authentication.
- Provide privileges in SQL Server to the ‘jenkins’ user identical to the ‘aw_dev’ user.
- Change the connection strings in the publishing profiles to use Windows Authentication.
First, create the ‘jenkins’ Windows user account on the computer where you have SQL Server and Jenkins installed. If they are on separate computers, you will need to create the account on both computers, or use Active Directory. For this demonstration, I have both SQL Server and Jenkins installed on the same computer. I gave the ‘jenkins’ user administrative-level rights on my machine by assigning it to the Administrators group.
Next, change the ‘Log On’ Windows user account for the Jenkins Windows service to the ‘jenkins’ Windows user account. Restart the Jenkins Windows service to apply the change. If the service fails to restart, it is likely you did not give the user enough rights. I suggest adding the user to the Administrators group to check whether the problem you are encountering is permissions-related.
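If you prefer to script this step, the service account can also be changed from an elevated PowerShell console. The sketch below is not from the original post; it assumes the Jenkins Windows service is registered under the name ‘Jenkins’ and that the local account is named ‘jenkins’. Adjust the names and supply the real password for your environment.

# Hedged sketch: point the Jenkins service at the local 'jenkins' account (names assumed).
# Note the space after 'obj=' and 'password=' - sc.exe requires it.
sc.exe config Jenkins obj= ".\jenkins" password= "[jenkins account password]"

# Restart the service so the new Log On account takes effect, then verify it is running.
Restart-Service -Name Jenkins
Get-Service -Name Jenkins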
Setting up the Jenkins SQL Server Login
Next, to use Windows Authentication with SQL Server, create a new ‘jenkins’ Login for the Production instance of SQL Server and associate it with the ‘jenkins’ Windows user account. Replicate the ‘aw_dev’ SQL user’s various permissions for the ‘jenkins’ user. The ‘jenkins’ account will be performing tasks similar to ‘aw_dev’, but this time initiated by Jenkins, not Visual Studio. Repeat this process for the Development instance of SQL Server.
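For those who like to script their SQL Server administration, a rough PowerShell sketch of this step is shown below. It is not from the original post: the machine name, instance names, and database name are assumptions, Invoke-Sqlcmd requires the SQL Server PowerShell module (SQLPS) or the older SQL Server snap-ins, and the permissions you grant should mirror whatever rights ‘aw_dev’ actually has in your environment.

# Hedged sketch - create the Windows-authenticated 'jenkins' login and database user.
Import-Module SQLPS -DisableNameChecking -ErrorAction SilentlyContinue

$query = @"
USE master;
CREATE LOGIN [YOURMACHINE\jenkins] FROM WINDOWS;
USE AdventureWorks;
CREATE USER [jenkins] FOR LOGIN [YOURMACHINE\jenkins];
-- Grant the same permissions here that the 'aw_dev' user already has.
"@

# Run once per instance (Development and Production).
Invoke-Sqlcmd -ServerInstance "YOURMACHINE\Development" -Query $query
Invoke-Sqlcmd -ServerInstance "YOURMACHINE\Production" -Query $query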
Windows Authentication with the Publishing Profile
In Visual Studio, switch the connection strings in the Development and Production publishing profiles in both the server project and database projects to Windows Authentication with Integrated Security. They should look similar to the code below. Substitute your server name and SQL instance for each profile.
Data Source=[SERVER NAME]\[INSTANCE NAME];Integrated Security=True;Pooling=False
An important note here: once you switch the profile’s connection string to Windows Authentication, the Windows user account that you logged into your computer with is the account Visual Studio will now use to connect to the database. Make sure your Windows user account has at least the same level of permissions as the ‘aw_dev’ and ‘jenkins’ accounts. As a developer, you would likely have greater permissions than these two accounts.
Configuring Jenkins for Delivery of Script to Release
In many production environments, delivering or ‘turning over’ release-ready code to another team for deployment, as opposed to deploying the code directly, is common practice. A developer starts, or ‘kicks off’, a build of the job in Jenkins, which generates artifact(s). Artifacts are usually logical collections of deployable code and other associated components and files, constituting the application being built. Artifacts are often separated by type, such as database, web, Windows services, web services, configuration files, and so forth. Each type may be deployed by a different team or to a different location. Our project will only have one artifact to deliver, the sql change script.
This first Jenkins job we create will just generate the change script, which will then be delivered to a specific remote location for later release. We start by creating what Jenkins refers to as a parameterized build job. It allows us to pass parameters to each build of our job. We pass the name of the configuration (same as our environment name) we want our build to target. With this single parameter, ‘TARGET_ENVIRONMENT’, we can use a single Jenkins job to target any environment we have configured by simply passing its name to the build; a very powerful, time-saving feature of Jenkins.
Let’s outline the steps we will configure our Jenkins job with, to deliver a change script for release:
- Copy the Solution from its current location to the Jenkins job’s workspace.
- Accept the target environment as a parameterized build parameter (ex. ‘Production’ or ‘Development’).
- Build the database project and its dependencies based on the environment parameter.
- Generate the sql change script based on the environment parameter.
- Compress and name the sql change script based on the environment parameter and build id.
- Deliver the compressed script artifact to a designated release location for deployment.
- Notify release team that the artifact is ready for release.
- Archive the build’s artifact(s).
Copy the Solution to Jenkins
I am not using a revision control system, such as TFS or Subversion, for our example. The Adventure Works Solution resides in a file directory on my development machine. To copy the entire Solution from its current location into the job’s workspace, we add a step in the Jenkins job to execute a simple xcopy command. With source control, you would replace the xcopy step with a similar step to retrieve the project from a specific branch within the revision control system, using one of Jenkins’ many revision control plug-ins.
echo 'Copying Adventure Works Solution to Jenkins workspace...'
xcopy "[Path to your Project]\AdventureWorks2008" "%WORKSPACE%" /S /E /H /Y /R /EXCLUDE:[Path to exclude file]\[name of exclude file].txt

echo 'Deleting artifacts from previous builds...'
del "%WORKSPACE%\*_publish.zip" /F /Q
Excluding Solution files that are unnecessary for the job to succeed from the Jenkins job’s workspace is good practice. Excluding files saves time during the xcopy and can make troubleshooting build problems easier. To exclude unneeded Solution files, use the xcopy command’s ‘exclude’ parameter. To use exclude, we must first create an exclude text file listing the directories we don’t need copied, and then reference it with the exclude parameter of the xcopy command. Make sure to change the path shown above to reflect the location and name of your exclude file. Here is a list of the directories I chose to exclude. They are either unused by the build, or created as part of the build, for example the sql directories and their subdirectories.
\AdventureWorks2008\sql\
\AdventureWorks2008\Sandbox\
\AdventureWorks2008\_ConversionReport_Files\
\Development\sql\
\Development\Sandbox\
\Development\_ConversionReport_Files\
Build the Solution with Jenkins
Once the Solution’s files are copied into the Jenkins job’s workspace, we perform a build of the database project with an MSBuild build step, using the Jenkins MSBuild Plug-in. Jenkins executes the same MSBuild command Visual Studio would execute to build the project. Jenkins calls MSBuild, which in turn calls the MSBuild ‘Build’ target with parameters that specify the Solution configuration and platform to target.
Generate the Script with Jenkins
After building the database project, in the same step as the build, we perform a publish of the database project. MSBuild calls the new SSDT ‘Publish’ target with parameters that specify the Solution configuration, target platform, publishing profile to use, and whether to only generate a sql change script or publish the project’s changes directly to the database. In this first example, we are only generating a script. Note the use of the build parameter (%TARGET_ENVIRONMENT%) and environmental variables (%WORKSPACE%) in the MSBuild command. Again, a very powerful feature of Jenkins.
"%WORKSPACE%\AdventureWorks2008\AdventureWorks2008.sqlproj" /p:Configuration=%TARGET_ENVIRONMENT% /p:Platform=AnyCPU /t:Build;Publish /p:SqlPublishProfilePath="%WORKSPACE%\AdventureWorks2008\%TARGET_ENVIRONMENT%.publish.xml" /p:UpdateDatabase=False
Compressing Artifacts with Apache Ant
To streamline the delivery, we will add a step to compress the change script using Jenkins Apache Ant Plug-in. Many consider Ant strictly a build tool for Java development. To the contrary, there are many tasks that can be automated for .NET developers with Ant. One particularly nice feature of Ant is its built-in support for zip compression.
configuration=$TARGET_ENVIRONMENT
buildNo=$BUILD_NUMBER
The Ant plug-in calls Ant, which in turn calls an Ant buildfile, passing it the properties we supply. First, create an Ant buildfile with a single task to zip the change script. To avoid confusion during release, Ant will also append the configuration name and unique Jenkins job build number to the filename. For example, ‘AdventureWorks.publish.sql’ becomes ‘AdventureWorks_Production_123_publish.zip’. This is accomplished by passing the configuration name (Jenkins parameterized build parameter) and the build number (Jenkins environmental variable) as parameters to the buildfile (shown above). The parameters, in the form of key-value pairs, are treated as properties within the buildfile. Using Ant to zip and name the script literally took one line of Ant code. The contents of the build.xml buildfile are shown below.
<?xml version="1.0" encoding="utf-8"?>
<project name="AdventureWorks2008" basedir="." default="default">
    <description>SSDT Database Project Type ZIP Example</description>

    <!-- Example configuration ant call with parameter:
         ant -Dconfiguration=Development -DbuildNo=123 -->

    <target name="default" description="ZIP sql deployment script">
        <echo>$${basedir}=${basedir}</echo>
        <echo>$${configuration}=${configuration}</echo>
        <echo>$${buildNo}=${buildNo}</echo>
        <zip basedir="AdventureWorks2008/sql/${configuration}"
             destfile="AdventureWorks_${configuration}_${buildNo}_publish.zip"
             includes="*.publish.sql" />
    </target>
</project>
Delivery of Artifacts
Lastly, we add a step to deliver the zipped script artifact to a ‘release’ location. Ideally, another team would retrieve and execute the change script against the database. Delivering the artifact to a remote location is easily accomplished using the Jenkins Artifact Deployer Plug-in. First, if it doesn’t already exist, create the location where you will deliver the scripts. Then, ensure Jenkins has permission to manage the location’s contents. In this example, the ‘release’ location is a shared folder I created. In order for Jenkins to access the ‘release’ location, give the ‘jenkins’ Windows user Read/Write (Change) permissions to the shared folder. With the deployment plug-in, you also have the option to delete the previous artifact(s) each time there is a new deployment, or leave them to accumulate.
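Creating the ‘release’ share and granting the ‘jenkins’ account Change rights can also be scripted. The following is only a sketch, not part of the original post; the folder path, share name, and account name are assumptions, and it must be run from an elevated PowerShell console.

# Hedged sketch: create the release folder, share it, and give 'jenkins' modify rights.
New-Item -ItemType Directory -Path "C:\Release" -Force | Out-Null

# Share the folder with Change (read/write) permission for the 'jenkins' account.
net share Release=C:\Release /GRANT:jenkins,CHANGE

# Also grant modify rights on the underlying NTFS folder.
icacls "C:\Release" /grant "jenkins:(OI)(CI)M"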
Email Notification
Finally, we want to alert the right team that artifacts have been turned over for release. There are many Jenkins plug-ins for communicating with end-users or other systems. We will use the Jenkins Email Extension Plug-in to email the release team. Configuring dynamic messages to include the parameterized build parameters and Jenkins’ environmental variables is easy with this plug-in. My sample message includes several variables in the body of the message, including target environment, target database, artifact name, and Jenkins build URL.
I had some trouble passing the Jenkins parameterized build parameter (‘TARGET_ENVIRONMENT’) to the email plug-in, until I found this post. The format the plug-in requires for this type of variable is a bit obscure compared to Ant, MSBuild, or other plug-ins.
Artifact: AdventureWorks_${ENV,var="TARGET_ENVIRONMENT"}_${BUILD_NUMBER}_publish.zip
Environment: ${ENV,var="TARGET_ENVIRONMENT"}
Database: AdventureWorks
Jenkins Build URL: ${BUILD_URL}

Please contact Development for questions or problems regarding this release.
Publishing Directly to the Database
As the last demonstration in this series of posts, we will publish the project’s changes directly to the database. Good news: we have done 95% of the work already. We merely need to copy the Jenkins job we already created, change one step, remove three other steps, and we’re publishing! Start by creating a new Jenkins job by copying the existing script delivery job. Next, drop the Invoke Ant, Artifact Deployer, and Archive Artifacts steps from the job’s configuration. Lastly, set the last parameter of the MSBuild task, ‘UpdateDatabase’, from False to True. That’s it! Instead of creating the script, compressing it, and sending it to a location to be executed later, the changes are generated and applied to the database in a single step.
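For reference, the MSBuild arguments for the direct-publish job end up looking like the sketch below. This is simply the earlier command with the final flag flipped; the project path and publishing-profile naming convention are carried over from the script-delivery job and remain assumptions about your workspace layout.

"%WORKSPACE%\AdventureWorks2008\AdventureWorks2008.sqlproj" /p:Configuration=%TARGET_ENVIRONMENT% /p:Platform=AnyCPU /t:Build;Publish /p:SqlPublishProfilePath="%WORKSPACE%\AdventureWorks2008\%TARGET_ENVIRONMENT%.publish.xml" /p:UpdateDatabase=True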
Hybrid Solution
If you are not comfortable with the direct approach, there is a middle ground between only generating a script and publishing directly to the database. You can keep a record of the changes made to the database as part of publishing. To do so, change the ‘UpdateDatabase’ parameter to True, and only drop the Artifact Deployer step; leave the Invoke Ant and Archive Artifacts steps. The resulting job generates the change script, publishes the changes to the database, and compresses and archives the script. You now have a record of the changes made to the database.
Conclusion
In this last of three posts, we demonstrated the use of Jenkins and its plug-ins to create three jobs, representing three possible SSDT publishing workflows. Using the parameterized build feature of Jenkins, each job is capable of being executed against any database environment for which we have a configuration and publishing profile defined. Hopefully, one of these three workflows will fit your particular release methodology.
Convert VS 2010 Database Project to SSDT and Automate Publishing with Jenkins – Part 2/3
Posted by Gary A. Stafford in .NET Development, Software Development, SQL Server Development, Team Foundation Server (TFS) Development on August 1, 2012
Objectives of 3-Part Series:
Part I: Setting up the Example Database and Visual Studio Projects
- Set up and configure a new instance of SQL Server 2008 R2
- Set up and configure a copy of Microsoft’s Adventure Works database
- Create and configure both a Visual Studio 2010 server project and Visual Studio 2010 database project
- Test the project’s ability to deploy changes to the database
Part II: Converting the Visual Studio 2010 Database and Server Projects to SSDT
- Convert the Adventure Works Visual Studio 2010 database and server projects to SSDT projects
- Create a second Solution configuration and SSDT publish profile for an additional database environment
- Test the converted database project’s ability to publish changes to multiple database environments
Part III: Automate the Building and Publishing of the SSDT Database Project Using Jenkins
- Automate the build and delivery of a sql change script artifact, for any database environment, to a designated release location using a parameterized build.
- Automate the build and publishing of the SSDT project’s changes directly to any database environment using a parameterized build.
Part II: Converting the Visual Studio 2010 Database and Server Projects to SSDT
Picking up where part one of this three-part series left off, we are ready to convert the Adventure Works VS 2010 database project and associated server project to the new SSDT project type. Once converted, we will create an additional Solution configuration for Production. Finally, we will publish (vs. deploy) changes to the database project’s schema to both the Development and Production environments. Note that Microsoft refers to the new format as either the SSDT project type or a SQL Server Database Project. I chose the former in this post; it seems clearer.
Convert the Projects to SSDT
Microsoft could not have made the conversion to the new SSDT project-type any simpler. Right-click on the Development server project and select ‘Convert to SQL Server Database project’. Select ‘Yes’, select ‘Backup originals for converted files’, and click ‘OK’. The conversion process should take only a minute or two. Following that, you are presented with a Conversion Report when the process is complete. The report should show the successful conversion to the SSDT project type.
Repeat this process for the AdventureWorks2008 database project. Again, you see a Conversion Report when complete. It should also not contain any errors, nor files marked as ‘Not converted’.
The New Project File Format
Reviewing the Conversion Report for the database project, note the change to the primary project file. This is the first key difference between the VS 2010 project types and the new SSDT project types. The project file was converted from ‘AdventureWorks2008.dbproj’ to ‘AdventureWorks2008.sqlproj’ (see the Conversion Report screen grab, above). Although the earlier project file with the ‘.dbproj’ file suffix is still in the project’s file directory, the Visual Studio Solution is now associated with the new ‘.sqlproj’ project file. This is the same for the server project. The ‘.dbproj’ files are no longer needed. You can drop them from the project’s file directory or from your source control system. This will prevent any confusion going forward.
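If you want to double-check that no stray ‘.dbproj’ files are left behind after cleaning up, a quick PowerShell one-liner such as the sketch below will list them. The Solution path shown is an assumption; substitute your own.

# Hedged sketch: list any leftover VS 2010 project files under the Solution folder.
Get-ChildItem -Path "C:\Projects\AdventureWorks2008" -Recurse -Filter *.dbproj |
    Select-Object FullName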
Publishing Profiles
The second change you will note after the conversion is in the Solution Explorer tab. Each project has three items with the file suffix ‘.publish.xml’. These are publishing profiles. There is a profile for each Solution configuration – Debug, Release, and Development. A publishing profile has all the settings necessary to publish changes made to the SSDT database project to a specific database environment. As part of the conversion to SSDT, all existing project settings migrated into the current project. Portions of the configuration-specific settings stay in the converted Solution configurations, while publish-specific settings are in the publishing profiles. Publishing profiles, like pre- and post-deployment scripts, are not part of the build. Select a profile in the Solution Explorer tab; note the ‘Build Action’ property in the Properties tab is set to ‘None’.
Additional Project and Profile Settings
There are also new settings in the converted projects. They support newer technologies like SSDT, SQL Server 2012, and Azure. As part of our first major conversion to SSDT, we took the opportunity to review all project and publish settings with our database developers and DBAs. We strove to understand each setting’s purpose and make sure they were correctly configured and documented for each of our many database environments.
Testing the Converted Projects
To test the successful conversion of both projects to the SSDT project type, select the Development Solution configuration and perform a Rebuild on the Solution. In the Build section of the Output tab, you should see that both projects built successfully.
Development Publishing Profile
Right-click on the ‘Development.publish.xml’ file in the AdventureWorks2008 project and select ‘Publish…’. Wait for the project to build; selecting Publish or opening a publishing profile causes the project to build. Afterwards, you should see the ‘Publish Database’ window appear. Here is where you change the settings of the Development profile. When converting the Adventure Works project to SSDT, I’ve found the database connection information does not migrate to the profile. Set up the ‘Target database connection’ information in the ‘Connection Properties’ pop-up window by clicking ‘Edit…’. When finished, click ‘OK’ to return to the ‘Publish Database’ window. Finally, save the revised Development publishing profile by clicking ‘Save Profile As…’. I will not cover the specific profile settings, accessed by clicking ‘Advanced…’. Many of these settings will be specific to your environments and workflows; they can be left at their defaults for this demonstration.
Generate Script for Adventure Works Database
Without leaving the ‘Publish Database’ window, click ‘Generate Script’. As in the first post, this action will initiate a schema comparison, resulting in the generation of a script that aggregates all the schema changes in the project not already reflected in the database. The script represents the schema ‘delta’ (the difference) between the project and the database. The script will automatically open in Visual Studio’s main window after being created. In the ‘Data Tools Operations’ tab you should see messages indicating generation of the script was successful.
Also included in the script, along with the schema changes, are any pre- and post-deployment scripts. You should see the single post-deployment script that we created in part one of this series. Pre- and post-deployment scripts are always included in the script, whether or not they have already been executed. This is why it is imperative that pre- and post-deployment scripts are re-runnable. Scripts must have the ability to be executed more than once without producing unintended changes to the database.
Publishing to the Development Database
Next, right-click on the ‘Development.publish.xml’ file, and select ‘Publish…’. This will return you to the same window you were just in to generate the script. Click ‘Publish’. Again, in the ‘Data Tools Operations’ tab you should see messages that the publish operation completed successfully.
Congratulations, you have completed and tested conversion of the Adventure Works database project to SSDT project type.
Note that with SSDT, the term ‘Deploy’, which refers to a specific MSBuild target, is replaced with ‘Publish’, an SSDT-specific build target. Instead of deploying changes to the database, as we did in the first post, we will publish changes with SSDT. To understand how MSBuild is able to call the new SSDT Publish target, open the AdventureWorks2008.sqlproj file by right-clicking on the project and selecting ‘Edit Project File’. In the project file’s xml you will find an ‘Import’ tag that imports the SSDT targets into the project, making them accessible to MSBuild.
<!--Import the settings-->
<Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v$(VisualStudioVersion)\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets" />
New Production Environment
Between posts, I installed another instance of SQL Server 2008 R2, named ‘Production’. Into this instance I installed another copy of the Adventure Works database. I added the same ‘aw_dev’ user that we used in Development, with the same permissions. This SQL Server instance and Adventure Works database simulate a second database environment, Production. Normally, this instance would be installed on a separate server, but for simplicity’s sake I installed the Production instance on the same physical server as the Development instance. It makes no difference for the purposes of this post.
If you wish to follow all examples presented in the next two posts, you will need to install and configure the Production instance of SQL Server. Otherwise, you can disregard the portions of the two posts on publishing to Production, and just stick with the single Development environment. The conversion to SSDT doesn’t require the added Production environment.
New Production Configuration and Publish Profile
Next, we will create a new configuration in the SSDT project’s Solution and configure the resulting publishing profile, targeting the Production environment. We will use this to publish changes from the project to the Production environment. Using the Solution’s Configuration Manager, create a new Solution configuration. This process is unchanged from the VS 2010 database project-type.
Right click on the AdventureWorks2008 project and select ‘Publish…’. This will return us to the ‘Publish Database’ window. Like before, with the Development publishing profile, complete the connection string information, this time targeting the Production instance. Change the ‘Publish script name’ setting to ‘Production.sql’. Click ‘Save Profile As…’, and save this profile configuration into the project file path as ‘Production.publish.xml’. Repeat this process for the Development SSDT server project.
Database Project
Server Project
We now have a new Production Solution configuration and corresponding publishing profiles in each of our two projects.
We can now target two different database environments from our AdventureWorks2008 project: Development and Production. In a typical production workflow, as a developer, you would make changes to the database project directly, or to a local copy obtained from a source control system like TFS. After testing your changes locally, you execute the publish task to send your schema changes and/or pre- and post-deployment scripts to the Development database instance. This process is repeated by other developers in your department.
After successfully testing your application(s) against the Development database, you are ready to release the database changes to Testing, or in this example, directly to Production. You execute the Publish task again, this time choosing the Production Solution configuration and Production publishing profile. The schema changes and any pre- and post-deployment scripts are now executed against the Production database. You would follow the same process for other environments, such as Testing or Staging.
Making Schema Changes to Multiple Environments
For this test, we will make schema changes to the ‘Employee’ table, part of the ‘HumanResources’ schema. In Visual Studio, open the Employee table and add two new columns to the end of the table, as shown below. If you have not worked with the SSDT project type before, the view of the table will look very different to you. Microsoft has changed the earlier table view to include a friendlier design view, as seen in SSMS, versus the earlier sql create statement-only view. There is also a window which details all the keys, indexes, and triggers associated with the table. I consider this light years better in terms of usability from the developer’s standpoint. Save the changes to the table object and close it.
Select the Development Solution configuration. Right-click on the Development profile in the AdventureWorks2008 project and click ‘Publish…’ Wait for the project to build. When the ‘Publish Database’ window appears, click ‘Publish’. You have just deployed the Employee table schema changes to the Development instance of the database.
Repeat this same process for Production. Don’t forget to switch to the Production Solution configuration and select the Production publish profile. You have now applied the same schema changes to the Production environment. Your customer will be happy they can now track the drug testing of their employees.
There are other methods available with SSDT to deploy changes to the database. Using a script is the method I have chosen to show in this post.
Conclusion
In this post we converted the Adventure Works database project and Development server project to SSDT project-types. We created a new Solution Configuration and publishing profiles, targeting Production. We made schema changes to the SSDT database project. Finally, we deployed those changes to both the Development and Production database environments.
In Part III of this series, I will show how to use Jenkins CI Server to automate building, testing, delivering scripts, and publishing to a database from the SSDT database project.
Convert VS 2010 Database Project to SSDT and Automate Publishing with Jenkins – Part 1/3
Posted by Gary A. Stafford in .NET Development, Software Development, SQL Server Development, Team Foundation Server (TFS) Development on July 31, 2012
Objectives of 3-Part Series:
Part I: Setting up the Example Database and Visual Studio Projects
- Set up and configure a new instance of SQL Server 2008 R2
- Set up and configure a copy of Microsoft’s Adventure Works database
- Create and configure both a Visual Studio 2010 server project and Visual Studio 2010 database project
- Test the project’s ability to deploy changes to the database
Part II: Converting the Visual Studio 2010 Database and Server Projects to SSDT
- Convert the Adventure Works Visual Studio 2010 database and server projects to SSDT projects
- Create a second Solution configuration and SSDT publish profile for an additional database environment
- Test the converted database project’s ability to publish changes to multiple database environments
Part III: Automate the Building and Publishing of the SSDT Database Project Using Jenkins
- Automate the build and delivery of a sql change script artifact, for any database environment, to a designated release location using a parameterized build.
- Automate the build and publishing of the SSDT project’s changes directly to any database environment using a parameterized build.
Background
Microsoft’s Visual Studio 2010 (VS 2010) IDE has been available to developers since April 2010. Microsoft’s SQL Server 2008 R2 (SQL 2008 R2) has also been available since April 2010. If you are a modern software development shop or in-house corporate development group using the .NET technology stack, you probably use VS 2010 and SQL 2008 R2. Moreover, odds are pretty good that you’ve implemented a Visual Studio 2010 Database Project (SQL Server Project) to support what Microsoft terms a Database Development Life Cycle (DDLC).
Now, along comes SSDT. Recently, along with the release of SQL Server 2012, Microsoft released SQL Server Data Tools (SSDT). Microsoft refers to SSDT as “an evolution of the existing Visual Studio Database project type.” According to Microsoft, SSDT offers an integrated environment within Visual Studio 2010 Professional SP1 or higher for developers to carry out all their database design work for any SQL Server platform (both on and off premises). The term ‘off premises’ refers to SSDT’s ability to handle development in the Cloud – SQL Azure. SSDT supports all current versions of SQL Server, including SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, and SQL Azure.
SSDT offers many advantages over the VS 2010 database project type. SSDT provides a more database-centric user experience, similar to SQL Server Management Studio (SSMS). Anyone who has used the VS 2010 database project type knows the Visual Studio experience offered a less sophisticated user interface than SSMS or similar database IDEs, like Toad for SQL. After installing Microsoft’s free SSDT component, and making sure you have SP1 installed for VS 2010 Professional or higher, you can easily convert your VS 2010 database projects to the new SSDT project type. The conversion offers a better development and deployment experience, and prepares you for an eventual move to SQL Server 2012.
Part I: Setting up the Example Database and Visual Studio Projects
Setting up the Example
To avoid learning SSDT with a copy of your client’s or company’s database, I suggest taking the same route I’ve taken in this post. To demonstrate how to convert from a VS 2010 database project to SSDT, I am using a copy of Microsoft’s Adventure Works 2008 database. Installed, the database only takes up about 180 MB of space, but it is well designed and has enough data to serve as a good training tool, as Microsoft intended. There are several versions of the AdventureWorks2008 database available for download, depending on your interests – OLTP, SSRS, Analysis Services, Azure, or SQL 2012. I chose to download the full database backup of AdventureWorks2008R2 without filestream for this post.
Creating the SQL Server 2008 R2 Instance and Database
Before installing the database, I used the SQL Server Installation Center to install a new instance of SQL Server 2008 R2, which I named ‘Development’. This represents a development group’s database environment. Other environments in the software release life-cycle commonly include Testing, Staging, and Production. Each environment usually has its own servers with their own instances of SQL Server, with its own copy of the databases. Environments can also include web servers, application servers, firewalls, routers, and so forth.
After installing the Development instance, I logged into it as an Administrator and created a new, empty database, which I named ‘AdventureWorks’. I then restored the downloaded backup copy of Adventure Works 2008 R2 to the newly created Adventure Works database.
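If you prefer to script the restore rather than use the SSMS GUI, a rough PowerShell sketch is shown below. It is not from the original post: the instance name, backup path, data file paths, and logical file names are all assumptions (run RESTORE FILELISTONLY against your backup to confirm the logical names), and Invoke-Sqlcmd requires the SQL Server PowerShell module or snap-ins.

# Hedged sketch: restore the downloaded Adventure Works backup to the Development instance.
Import-Module SQLPS -DisableNameChecking -ErrorAction SilentlyContinue

Invoke-Sqlcmd -ServerInstance "YOURSERVER\Development" -Database "master" -QueryTimeout 600 -Query @"
RESTORE DATABASE AdventureWorks
FROM DISK = N'C:\Backups\AdventureWorks2008R2.bak'
WITH MOVE N'AdventureWorks2008R2_Data' TO N'C:\SQLData\AdventureWorks.mdf',
     MOVE N'AdventureWorks2008R2_Log'  TO N'C:\SQLData\AdventureWorks_log.ldf',
     REPLACE;
"@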
You may note some differences between the configuration settings displayed below in the screen grabs and the default configuration of the Adventure Works database. This post is not intended to recommend configuration settings for your SQL Server databases or database projects. Every company is different in the way it configures its databases. The key is to make sure that the configuration settings in your database align with the equivalent configuration settings in the database project in Visual Studio 2010. If not, when you initially publish changes to the database from the database project, the project will script the differences and change the database to align to the project.
Creating the Server Login and Database User
Lastly in SSMS, I added a new login account to the Development SQL Server instance and a user to the Adventure Works database, named ‘aw_dev’. This user represents a developer who will interact with the database through the SSDT database project in VS 2010. For simplicity, I used SQL authentication for this user versus Windows authentication. I gave the user the minimal permissions necessary for this example. Depending on the types of interactions you have with the database, you may need to extend the rights of the user.
Two key, explicit permissions must be assigned to the user for SSDT to work properly. The first is the ‘view any definition’ permission on the Development instance. The second is the ‘view definition’ permission on the Adventure Works database. These enable the SSDT project to perform a schema comparison against the Adventure Works database, explained later in the post. Lack of the view definition permission is one of the most common errors I’ve seen during deployments. These errors usually occur after adding a new database environment, database, database user, or continuous integration server.
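A short sketch of granting those two permissions with PowerShell and Invoke-Sqlcmd follows. It is not from the original post; the server and instance names are assumptions, and the server-level grant is issued in the master database because server-scoped permissions must be granted there.

# Hedged sketch: grant the two explicit permissions the 'aw_dev' user needs for SSDT.
$instance = "YOURSERVER\Development"

# Server-scoped permission, granted while connected to master.
Invoke-Sqlcmd -ServerInstance $instance -Database "master" -Query "GRANT VIEW ANY DEFINITION TO [aw_dev];"

# Database-scoped permission on the Adventure Works database.
Invoke-Sqlcmd -ServerInstance $instance -Database "AdventureWorks" -Query "GRANT VIEW DEFINITION TO [aw_dev];"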
Setting up Visual Studio Database Project
In VS 2010, I created a new SQL Server 2008 database project, named ‘AdventureWorks2008’. In the same Visual Studio Solution, I also created a new SQL Server 2008 server project, named ‘Development’. The database project mirrors the schema of the Adventure Works database, and the server project mirrors the instance of SQL Server 2008 R2 on which the database is housed. The exact details of creating and configuring these two projects are too long for this post, but I have a set of screen grabs, hyperlinked below, to aid in creating these two projects. For the database project, I only included the screens that are significantly different from the server project screens, to save some space.
Server Project
Database Project
Reference from Database Project to Server Project
After creating both projects, I created a reference (dependency) from the Adventure Works 2008 database project to the Development server project. The database project reference to the server project creates the same parent-child relationship that exists between the Development SQL Server instance and the Adventure Works database. Once both projects are created and the reference made, your Solution should look like the second screen grab, below.
Creating the Development Solution Configuration
Next, I created a new ‘Development’ Solution configuration. This configuration defines the build and deployment parameters for the database project when it targets the Development environment. Again, in a normal production environment, you would have several configurations, each targeting a specific database environment. In the context of this post, a database environment refers to a unique combination of servers, server instances, databases, data, and users. For this first example we are only setting up one database environment, Development.
The configuration specifies the parameters specific to the Adventure Works database in the Development environment. The connection string, containing the server, instance, and database names, user account, and connection parameters, is specific to the Development environment. These values differ in the other environments – Testing, Staging, and Production.
Testing the Development Configuration
Once the Development configuration was completed, I ran the ‘Rebuild’ command on the Solution, using the Development configuration, to make sure there are no errors or warnings. Next, with the Development configuration set to only create the deployment script, not to create and deploy the changes to the database, I ran the ‘Deploy’ command. This created a deployment script, entitled ‘AdventureWorks2008.sql’, in the ‘sql\Development’ folder of the AdventureWorks2008 database project.
Since I had just created both the Adventure Works database and the database project based on that database, there are no schema changes in the deployment script. You will see ‘filler’ code for error checking and so forth, but no real executable schema changes to the database are present at this point. If you do see initial changes included in the script, usually database configuration changes, I suggest modifying the settings of the database project and/or the database to align them with one another. For example, you may see code in the script to change the database’s default cursor from global to local, or vice versa. Or, you may see code in the script to change the database’s recovery model from full to simple, or vice versa. You should decide whether the project or the database is correct, and change the other one to match. If necessary, re-run the ‘Deploy’ command and re-check the deployment script. Optionally, you can always execute the script with the changes, thus changing the database to match the project, if the project settings are correct.
Testing Deployment
After successfully testing the Development configuration and the deployment script, and making any configuration changes necessary to the project and/or the database, I then tested the project’s ability to successfully execute a Deploy command against the database. I changed the Development configuration’s deploy action from ‘create a deployment script (.sql)’ to ‘create a deployment script (.sql) and deploy to the database’. I then ran the ‘Deploy’ command again, and this time the script was created and executed against the database. Since I still had not made any changes to the project, there were no schema changes made to the database. I just tested the project’s ability to create and deploy to the database at this point. Most errors at this stage are caused by insufficient database user permissions (see example, below).
Testing Changes to the Project
Finally, I tested the project’s ability to make changes to the database as part of the deployment. To do so, I created a simple post-deployment script that changes the first name of a single, existing employee. After adding the post-deployment script to the database project and adding the script’s path to the post-deployment script file, I ran the ‘Deploy’ command again, still using the Development configuration. This time the deployment script contained the post-deployment script’s contents. When deployed, one record was affected, as indicated in the VS 2010 Output tab. I verified the change was successful in the Adventure Works database table, using SSMS.
Conclusion
We now have a SQL Server 2008 R2 database instance representing a Development environment, and a copy of the Adventure Works database, being served from that instance. We have corresponding VS 2010 database and server projects. We also have a new Development Solution configuration, targeting the Development environment. Lastly, we tested the database project’s capability to successfully build and deploy a change to the database.
In Part II of this series, I will show how to convert the VS 2010 database and server projects to SSDT.
Automating Work Item Creation in TFS 2010 with PowerShell, Continued
Posted by Gary A. Stafford in .NET Development, PowerShell Scripting, Software Development, Team Foundation Server (TFS) Development on July 18, 2012
In a previous post, Automating Task Creation in Team Foundation Server with PowerShell, I demonstrated how to automate the creation of TFS Task-type Work Items using PowerShell. After writing that post, I decided to go back and further automate my own processes. I combined two separate scripts that I use on a regular basis, one that creates the initial Change Request (CR) Work Item, and a second that creates the Task Work Items associated with the CR. Since I usually run both scripts successively and both share many of the same variables, combining the scripts made sense. I now have a single PowerShell script that will create the parent Change Request and the associated Tasks in TFS. The script reduces my overall time to create the Work Items by a few minutes for each new CR. The script also greatly reduces the risk of input errors from typing the same information multiple times in Visual Studio. The only remaining manual step is to link the Tasks to the Change Request in TFS.
The Script
Similar to the previous post, for simplicity’s sake, I have presented a basic PowerShell script. The script could easily be optimized by wrapping the logic into a function with input parameters, further automating the process. I’ve placed a lot of comments in the script to explain what each part does and to help make customization easier. The script explicitly declares all variables, adhering to PowerShell’s Strict Mode (Set-StrictMode -Version 2.0). I feel this makes the script easier to understand and reduces the possibility of runtime errors.
#############################################################
#
# Description: Automatically creates
# (1) Change Request-type Work Item and
# (5) Task-type Work Items in TFS.
#
# Author: Gary A. Stafford
# Created: 07/18/2012
# Modified: 07/18/2012
#
#############################################################

# Clear Output Pane
clear

# Loads Windows PowerShell snap-in if not already loaded
if ( (Get-PSSnapin -Name Microsoft.TeamFoundation.PowerShell -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PSSnapin Microsoft.TeamFoundation.PowerShell
}

# Set Strict Mode - optional
Set-StrictMode -Version 2.0

#############################################################

# Usually remains constant
[string] $tfsServerString = "http://[YourServerNameGoesHere]/[PathToCollection]"
[string] $areaPath = "Development\PowerShell"
[string] $workItemType = "Development\Change Request"
[string] $description = "Create Task Automation PowerShell Script"

# Usually changes for each Sprint - both specific to your environment
[string] $iterationPath = "PowerShell\TFS2010"

# Usually changes for each CR and Tasks
[string] $requestName = "Name of CR from Service Manager"
[string] $crId = "000000"
[string] $priority = "1"
[string] $totalEstimate = "10" # Total of $taskEstimateArray
[string] $assignee = "Doe, John"
[string] $testType = "Unit Test"

# Task values represent units of work, often 'man-hours'
[decimal[]] $taskEstimateArray = @(2,3,10,3,.5)
[string[]] $taskNameArray = @("Analysis", "Design", "Coding", "Unit Testing", "Resolve Tasks")
[string[]] $taskDisciplineArray = @("Analysis", "Development", "Development", "Test", $null)

#############################################################

Write-Host `n`r**** Create CR started...`n`r

# Build string of field parameters (key/value pairs)
[string] $fields = "Title=$($requestName);Description=$($description);CR Id=$($crId);"
$fields += "Estimate=$($totalEstimate);Assigned To=$($assignee);Test Type=$($testType);"
$fields += "Area Path=$($areaPath);Iteration Path=$($iterationPath);Priority=$($priority);"

#For debugging - optional console output
Write-Host `n`r $fields

# Create the CR (Work Item)
tfpt workitem /new $workItemType /collection:$tfsServerString /fields:$fields

Write-Host `n`r**** Create CR completed...`n`r

#############################################################

# Loop and create each of the (5) Tasks in prioritized order
[int] $i = 0

Write-Host `n`r**** Create Tasks started...`n`r

# Usually remains constant
$workItemType = "Development\Task"

while ($i -le 4)
{
    # Concatenate name of task with CR name for Title and Description fields
    $taskTitle = $taskNameArray[$i] + " - " + $requestName

    # Build string of field parameters (key/value pairs)
    [string] $fields = "Title=$($taskTitle);Description=$($taskTitle);Assigned To=$($assignee);"
    $fields += "Area Path=$($areaPath);Iteration Path=$($iterationPath);Discipline=$($taskDisciplineArray[$i]);Priority=$($i+1);"
    $fields += "Estimate=$($taskEstimateArray[$i]);Remaining Work=$($taskEstimateArray[$i]);Completed Work=0"

    #For debugging - optional console output
    Write-Host `n`r $fields

    # Create the Task (Work Item)
    tfpt workitem /new $workItemType /collection:$tfsServerString /fields:$fields

    $i++
}

Write-Host `n`r**** Create Tasks completed...`n`r
Deleting Work Items with PowerShell
Team Foundation Server Administrators know there is no delete button for Work Items in TFS. So, how do you delete (destroy, as TFS calls it) a Work Item? One way is from the command line, as demonstrated in the previous post. You can also call the witadmin command-line tool from within PowerShell, as follows:
[string] $tfsServerString = "http://[YourServerNameGoesHere]/[PathToCollection]"
[string] $tfsWorkItemId = "00000"

$env:path += ";C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE"

witadmin destroywi /collection:$tfsServerString /id:$tfsWorkItemId /noprompt
First, use PowerShell to set your path environmental variable to include your local path to witadmin.exe. Then set your TFS Server path and the TFS Work Item ID of the Work Item you want to delete. Alternatively, you can call witadmin by its full file path, avoiding setting the path environmental variable altogether (see the sketch below). True, you could simplify the above to a single line of code, but I feel using variables is easier for readers to understand than one long line of code.
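For example, the alternative just mentioned (calling witadmin.exe by its full file path instead of modifying the path environmental variable) might look something like this. The Visual Studio 2010 install path is the same one used above; adjust it for your own machine:

[string] $tfsServerString = "http://[YourServerNameGoesHere]/[PathToCollection]"
[string] $tfsWorkItemId = "00000"

# Call witadmin.exe directly by its full path using the call operator (&)
& "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\witadmin.exe" `
    destroywi /collection:$tfsServerString /id:$tfsWorkItemId /noprompt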
Consuming Cross-Domain WCF REST Services with jQuery using JSONP
Posted by Gary A. Stafford in .NET Development, Software Development, SQL Server Development on September 25, 2011
Introduction
In a previous article, Interactive Form Functionality on the Client-Side Using jQuery, I demonstrated the use of HTML, JavaScript, jQuery, and jQuery's AJAX API to create a simple restaurant menu/order form. Although the previous article effectively demonstrated the use of these client-side technologies, the source of the restaurant's menu items, a static XML file, was not intended to represent a true 'production-class' data source. Nowadays, to access data and business logic across the Enterprise or across the Internet, developers are more apt to build service-oriented applications that expose RESTful web services, and client applications that consume those services. RESTful services are services which conform to the REST (Representational State Transfer) architectural pattern. More information on REST can be found in Chapters 5 and 6 of Roy Fielding's doctoral dissertation, in which he defined REST. Most modern web technologies can communicate with RESTful web services, including Microsoft's Silverlight, ASP.NET Web Forms and MVC, JavaFX, Adobe Flash, PHP, Python, and Ruby on Rails.
This article will expand on the restaurant menu/order form example from the previous article, replacing the static XML file with a WCF Service. The article will demonstrate the following:
- Use of jQuery's AJAX API to bidirectionally communicate with WCF Services
- Cross-domain communication with WCF Services using JSONP
- Serialization of complex, nested .NET objects into JSONP-format HTTP Response Messages
- Deserialization of JSONP-format HTTP Request Messages into complex, nested .NET objects
- Optimization of JavaScript and the use of caching to maximize the speed of content delivery to the Client
Source code is now available on GitHub. As of May 2014, there is a revised version of the project on the 'rev2014' branch on GitHub. The post below describes the original code on the 'Master' branch. All details are posted on GitHub.
Background
WCF
For .NET developers, Windows Communication Foundation (WCF), Microsoft’s platform for Service Oriented Architecture (SOA), is the current preferred choice for building service-oriented applications. According to Microsoft, WCF is part of the .NET Framework that provides a unified programming model for rapidly building service-oriented applications that communicate across the web and the enterprise.
Prior to WCF, Microsoft offered ASP.NET XML Web Services, or ASP.NET Web Services for short. ASP.NET Web Services send and receive messages using Simple Object Access Protocol (SOAP) via HTTP. Data is serialized from instances of .NET objects into XML-format SOAP messages (or, 'XML in a SOAP envelope' as they are also known), and vice versa. Metadata about an ASP.NET Web Service is contained in the Web Services Description Language (WSDL). Although still prevalent, ASP.NET Web Services is now considered a legacy technology with the advent of WCF, according to Microsoft. SOAP, a protocol for accessing a Web Service, does not conform to REST architecture guidelines.
Hosted on Microsoft's IIS (Internet Information Services) Web Server, WCF is a complex, yet robust and flexible service-oriented framework. By properly configuring WCF Services, developers can precisely expose business logic and data sources to clients in a variety of ways. WCF Services can send and receive messages as XML in a SOAP envelope, as well as in RESTful formats, including POX (plain old XML), ATOM (an XML language used for web feeds), and JSON (JavaScript Object Notation).
JSON/JSONP
The example in this article uses JSON, more specifically JSONP (JSON with Padding), a specialized type of JSON, to exchange information with WCF Services. JSON is an open, text-based data exchange format that is better suited to AJAX-style web applications than XML. Compared to XML, JSON-formatted messages are smaller in size. For example, the restaurant menu used in this article, formatted as XML, is 927 bytes. The same message, formatted as JSONP, is only 311 bytes, about one-third the size. The savings when transmitting JSON-format messages over slow connections, to mobile devices, or to potentially millions of simultaneous web browsers, are significant.
Since the WCF Service will be hosted in a different domain (a different port in the example) than the web site with the restaurant menu and order form, we must use JSONP. JSONP is based on JSON and allows a page to request data from a server in a different domain, something normally disallowed due to the 'same origin policy'. The same origin policy is an important security concept for browser-side programming languages, such as JavaScript. According to Wikipedia, the same origin policy permits scripts running on pages originating from the same site to access each other's methods and properties with no specific restrictions, but prevents access to most methods and properties across pages on different sites. JSONP takes advantage of the open policy for HTML <script> elements.
Below is an example of the article's restaurant menu formatted in JSONP, as returned by the WCF Service in the HTTP Response to the client's HTTP GET Request.
RestaurantMenu([
    {"Description":"Cheeseburger","Id":1,"Price":3.99},
    {"Description":"Chicken Sandwich","Id":4,"Price":4.99},
    {"Description":"Coffee","Id":7,"Price":0.99},
    {"Description":"French Fries","Id":5,"Price":1.29},
    {"Description":"Hamburger","Id":2,"Price":2.99},
    {"Description":"Hot Dog","Id":3,"Price":2.49},
    {"Description":"Ice Cream Cone","Id":9,"Price":1.99},
    {"Description":"Soft Drink","Id":6,"Price":1.19},
    {"Description":"Water","Id":8,"Price":0}
]);
AJAX (well, not really…)
AJAX (Asynchronous JavaScript and XML) asynchronously exchanges data between the browser and web server, avoiding page reloads, using the XMLHttpRequest object. Despite the name XMLHttpRequest, AJAX can work with JSON in addition to XML message formatting. Other formats include JSONP, JavaScript, HTML, and text. Using jQuery's AJAX API, we will make HTTP Requests to the server using the GET method. Other HTTP methods include POST, PUT, and DELETE. To access cross-domain resources, in this case the WCF Service, the client makes an HTTP Request using the GET method.

Writing this article, I discovered that using JSONP technically isn't AJAX, because it does not use the XMLHttpRequest object, a primary requirement of AJAX. JSONP-format HTTP Requests are made by dynamically inserting an HTML <script> tag into the DOM. The Content-Type of the HTTP Response from the WCF Service, as seen with Firebug, is application/x-javascript, not application/json as with regular JSON. I'm just happy if it all works, AJAX or not.
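To make the 'script tag' mechanism concrete, below is a minimal, framework-free sketch of roughly what jQuery does for us when dataType is set to 'jsonp'. The callback name and service URL match this article's example, but the helper function itself is purely illustrative:

// Minimal JSONP sketch - jQuery performs the equivalent of this for us
function requestJsonp(url, callbackName, onSuccess) {
    // The server wraps its JSON response in a call to this global function
    window[callbackName] = function (data) {
        onSuccess(data);
        window[callbackName] = undefined; // simple cleanup
    };

    // Dynamically insert a <script> element; <script> is exempt from the
    // same origin policy, which is what makes JSONP work across domains
    var script = document.createElement("script");
    script.src = url + "?callback=" + callbackName;
    document.getElementsByTagName("head")[0].appendChild(script);
}

// Usage, against this article's service (illustrative only)
requestJsonp(
    "http://localhost/MenuWcfRestService/RestaurantService.svc/GetCurrentMenu",
    "RestaurantMenu",
    function (menu) { alert("Received " + menu.length + " menu items"); }
);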
Using the Code
The Visual Studio 2010 Solution used in this article contains the (3) projects shown below. All code for this article is available for download on The Code Project.
- Restaurant – C# Class Library
- RestaurantWcfService – C# WCF REST Service Application
- RestaurantDemoSite – Existing Web Site
Restaurant Class Library
The C# Class Library Project, Restaurant, contains the primary business objects and business logic. Classes that will be instantiated to hold the restaurant menu and restaurant orders include RestaurantMenu, MenuItem, RestaurantOrder, and OrderItem. Both RestaurantMenu and RestaurantOrder inherit from System.Collections.ObjectModel.Collection<T>. RestaurantMenu contains instances of MenuItem, while RestaurantOrder contains instances of OrderItem.
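The class definitions themselves are not reproduced in this article, but based on the description above and the fields visible in the JSONP output (Id, Description, Price), a simplified sketch of MenuItem and RestaurantMenu might look something like the following. Everything beyond those three properties, including the constructors, is an assumption; RestaurantOrder and OrderItem follow the same pattern:

using System.Collections.Generic;
using System.Collections.ObjectModel;

namespace Restaurant
{
    // Single menu item, matching the fields seen in the JSONP response
    public class MenuItem
    {
        public int Id { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
    }

    // RestaurantMenu is a collection of MenuItem objects
    public class RestaurantMenu : Collection<MenuItem>
    {
        // In the real project the default constructor presumably
        // populates the menu items; this sketch leaves it empty
        public RestaurantMenu() { }

        // Convenience constructor, as used in RestaurantService.GetCurrentMenu
        public RestaurantMenu(IList<MenuItem> items) : base(items) { }
    }
}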
The business logic for deserializing the JSON-format HTTP Request containing the restaurant order is handled by the ProcessOrder class. I struggled with deserializing the JSONP-formatted HTTP Request into an instance of RestaurantOrder with the standard .NET System.Web.Script.Serialization.JavaScriptSerializer class. I solved the deserialization issue by using Json.NET. This .NET framework, described as a flexible JSON serializer to convert .NET objects to JSON and back again, was created by James Newton-King. It was a real lifesaver. Json.NET is available on CodePlex. Before passing the raw JSONP-format HTTP Request to Json.NET, I still had to clean it up using the NormalizeJsonString method I wrote.
Lastly, ProcessOrder includes the method WriteOrderToFile, which writes the restaurant order to a text file. This is intended to demonstrate how orders could be sent from the client to the server, stored, and then reloaded and deserialized later, as needed. In order to use this method successfully, you need to create the 'c:\RestaurantOrders' folder path and add permissions for the IUSR user account to read and write to the RestaurantOrders folder.

The ProcessOrder class (note the reference to Json.NET: Newtonsoft.Json):
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

namespace Restaurant
{
    public class ProcessOrder
    {
        public const string STR_JsonFilePath = @"c:\RestaurantOrders\";

        public string ProcessOrderJSON(string restaurantOrder)
        {
            if (restaurantOrder.Length < 1)
            {
                return "Error: Empty message string...";
            }

            try
            {
                var orderId = Guid.NewGuid();
                NormalizeJsonString(ref restaurantOrder);

                // Json.NET: http://james.newtonking.com/projects/json-net.aspx
                var order = JsonConvert.DeserializeObject<RestaurantOrder>(restaurantOrder);

                WriteOrderToFile(restaurantOrder, orderId);

                // Return the same orderId used for the saved file
                return String.Format(
                    "ORDER DETAILS{3}Time: {0}{3}Order Id: {1}{3}Items: {2}",
                    DateTime.Now.ToLocalTime(), orderId, order.Count(),
                    Environment.NewLine);
            }
            catch (Exception ex)
            {
                return "Error: " + ex.Message;
            }
        }

        private void NormalizeJsonString(ref string restaurantOrder)
        {
            restaurantOrder = Uri.UnescapeDataString(restaurantOrder);

            int start = restaurantOrder.IndexOf("[");
            int end = restaurantOrder.IndexOf("]") + 1;
            int length = end - start;

            restaurantOrder = restaurantOrder.Substring(start, length);
        }

        private void WriteOrderToFile(string restaurantOrder, Guid OrderId)
        {
            // Make sure to add permissions for IUSR to folder path
            var fileName = String.Format("{0}{1}.txt", STR_JsonFilePath, OrderId);

            using (TextWriter writer = new StreamWriter(fileName))
            {
                writer.Write(restaurantOrder);
            }
        }
    }
}
Restaurant WCF Service
If you've built WCF Services before, you'll be familiar with the file structure of this project. RestaurantService.svc, the WCF Service file, contains no actual code, only a pointer to the code-behind RestaurantService.cs file. That file contains each method which will be exposed to the client through the WCF Service. The IRestaurantService.cs Interface file defines the Service Contract between the RestaurantService class and the WCF Service. The IRestaurantService Interface also defines an Operation Contract for each of the class's methods. The Operation Contract includes attributes which define how the Service Operation (a method with an Operation Contract) will operate as part of the WCF Service. In this example, those attributes include the required invocation (the HTTP GET method), the format of the HTTP Request and Response (JSON), and caching (for the restaurant menu). The WCF Service references (has a dependency on) the Restaurant Class Library.
The WCF Web Service Project, RestaurantWcfService, contains two methods that are exposed to the client. The first, GetCurrentMenu, serializes an instance of RestaurantMenu, containing nested instances of MenuItem. It returns the JSONP-format HTTP Response to the client. There are no parameters passed to the method by the HTTP Request.

The second method, SendOrder, accepts the JSONP-format order, through an input parameter of the string data type, from the client's HTTP Request. SendOrder then passes the order to the ProcessOrderJSON method, part of the Restaurant.ProcessOrder class. ProcessOrderJSON returns a string to SendOrder, containing some order information (Order Id, date/time, and number of order items). This information is serialized and returned in the JSONP-format HTTP Response to the client. The Response verifies that the order was received and understood.
Lastly, the web.config file contains the WCF bindings, behaviors, endpoints, and caching configuration. I always find configuring this file properly to be a challenge due to the almost-infinite number of WCF configuration options. There are many references available on configuring WCF, but be careful, many were written prior to .NET Framework 4. Configuring WCF for REST and JSONP became much easier with .NET Framework 4. Make sure you refer to the latest materials from MSDN on WCF for .NET Framework 4.
The IRestaurantService.cs Interface:
using Restaurant;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.ServiceModel;
using System.ServiceModel.Web;

namespace RestaurantWcfService
{
    [ServiceContract]
    public interface IRestaurantService
    {
        [OperationContract]
        [Description("Returns a copy of the restaurant menu.")]
        [WebGet(BodyStyle = WebMessageBodyStyle.Bare,
            RequestFormat = WebMessageFormat.Json,
            ResponseFormat = WebMessageFormat.Json)]
        [AspNetCacheProfile("CacheFor10Seconds")]
        RestaurantMenu GetCurrentMenu();

        [OperationContract]
        [Description("Accepts a menu order and returns an order confirmation.")]
        [WebGet(BodyStyle = WebMessageBodyStyle.Bare,
            RequestFormat = WebMessageFormat.Json,
            ResponseFormat = WebMessageFormat.Json,
            UriTemplate = "SendOrder?restaurantOrder={restaurantOrder}")]
        string SendOrder(string restaurantOrder);
    }
}
The RestaurantService.cs Class (inherits from IRestaurantService.cs):
using Restaurant;
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;
using System.ServiceModel.Activation;

namespace RestaurantWcfService
{
    [AspNetCompatibilityRequirements(RequirementsMode =
        AspNetCompatibilityRequirementsMode.Allowed)]
    public class RestaurantService : IRestaurantService
    {
        public RestaurantMenu GetCurrentMenu()
        {
            // Instantiates new RestaurantMenu object and
            // sorts MenuItem objects by Description using LINQ
            var menuToReturn = new RestaurantMenu();

            var menuToReturnOrdered = (
                from items in menuToReturn
                orderby items.Description
                select items).ToList();

            menuToReturn = new RestaurantMenu(menuToReturnOrdered);

            return menuToReturn;
        }

        public string SendOrder(string restaurantOrder)
        {
            // Instantiates new ProcessOrder object and
            // passes JSON-format order string to ProcessOrderJSON method
            var orderProcessor = new ProcessOrder();
            var orderResponse = orderProcessor.ProcessOrderJSON(restaurantOrder);

            return orderResponse;
        }
    }
}
The WCF Service’s web.config File:
<?xml version="1.0"?>
<configuration>
  <system.web>
    <compilation debug="false" targetFramework="4.0" />
    <caching>
      <outputCacheSettings>
        <outputCacheProfiles>
          <add name="CacheFor10Seconds" duration="10" varyByParam="none" />
        </outputCacheProfiles>
      </outputCacheSettings>
    </caching>
  </system.web>
  <system.serviceModel>
    <bindings>
      <webHttpBinding>
        <binding name="webHttpBindingWithJsonP"
                 crossDomainScriptAccessEnabled="true" />
      </webHttpBinding>
    </bindings>
    <behaviors>
      <endpointBehaviors>
        <behavior name="webHttpBehavior">
          <webHttp helpEnabled="true"/>
        </behavior>
      </endpointBehaviors>
      <serviceBehaviors>
        <behavior>
          <serviceMetadata httpGetEnabled="true" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true"
                               multipleSiteBindingsEnabled="true" />
    <services>
      <service name="RestaurantWcfService.RestaurantService">
        <endpoint address=""
                  behaviorConfiguration="webHttpBehavior"
                  binding="webHttpBinding"
                  bindingConfiguration="webHttpBindingWithJsonP"
                  contract="RestaurantWcfService.IRestaurantService" />
      </service>
    </services>
  </system.serviceModel>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true"/>
  </system.webServer>
</configuration>
WCF Web HTTP Service Help
Once you have the article’s code installed and running, you can view more details about the WCF Service’s operations (methods) using the new .NET Framework 4 WCF Web HTTP Service Help Page feature. Depending on your IIS configuration, the local address should be similar to: http://localhost/MenuWcfRestService/RestaurantService.svc/Help.
Restaurant Demo Site
RestaurantDemoSite is a non-ASP.NET website, just HTML and JavaScript. For this article, I chose to host the RestaurantDemoSite web site on a different port (2929) than the WCF Service, which sits on the default port 80. I did this to demonstrate the necessity of JSONP for cross-domain scripting. Hosting them on two different ports is considered hosting on two different domains. Port 2929 is a randomly-selected open port on my particular development machine. Both the WCF Service and the website were set up as Virtual Directories in IIS, and then added to the Visual Studio 2010 Solution, along with the Restaurant Class Library.
Following the format of the first article, the website contains two identical pages, each with the same restaurant menu/order form. The ‘Development’ version is optimized for debugging and demonstration. The other, ‘Production’, with the JavaScript and CSS files minified and packed, is optimized for use in production. The demo uses the latest available jQuery JavaScript Library (jquery-1.6.3.js) and the jQuery plug-in, Format Currency (jquery.formatCurrency-1.4.0.js).
The page contains the new HTML5 <!DOCTYPE> declaration. I used HTML5's new numeric input type for inputting the number of items to order, and defined min and max values, also a new HTML5 feature. You can see these HTML features working in the latest version of Google Chrome.
All of the client-side business logic is contained in the restaurant.js JavaScript file. This file makes calls to jQuery and Format Currency. I chose the sometimes controversial static code analysis tool JSLint to help debug and refactor my JavaScript code. Even if you don't agree with all of JSLint's warnings, understanding the reason for them will really enhance your overall knowledge of JavaScript. A good alternative to JSLint, which I've also tried, is JSHint, a fork of the JSLint project. JSHint advertises itself as a more configurable version of JSLint.
The restaurant.js JavaScript file:
var addMenuItemToOrder, calculateSubtotal, clearForm, clickRemove,
    formatRowColor, formatRowCurrency, getRestaurantMenu, handleOrder,
    orderTotal, populateDropdown, tableToJson, sendOrder, wcfServiceUrl;

// Populate drop-down box with JSON data (menu)
populateDropdown = function () {
    var id, price, description;
    id = this.Id;
    price = this.Price;
    description = this.Description;

    $("#select_item")
        .append($("<option></option>")
        .val(id)
        .html(description)
        .attr("title", price));
};

// Use strict for all other functions
// Based on post at:
// http://ejohn.org/blog/ecmascript-5-strict-mode-json-and-more/
(function () {
    "use strict";

    wcfServiceUrl = "http://localhost/MenuWcfRestService/RestaurantService.svc/";

    // Execute when the DOM is fully loaded
    $(document).ready(function () {
        getRestaurantMenu();
    });

    // Add selected item to order
    $(function () {
        $("#add_btn").click(addMenuItemToOrder);
    });

    // Place order if it contains items
    $(function () {
        $("#order_btn").click(handleOrder);
    });

    // Retrieve JSON data (menu) and loop for each menu item
    getRestaurantMenu = function () {
        $.ajax({
            cache: true,
            url: wcfServiceUrl + "GetCurrentMenu",
            data: "{}",
            type: "GET",
            jsonpCallback: "RestaurantMenu",
            contentType: "application/javascript",
            dataType: "jsonp",
            error: function () {
                alert("Menu failed!");
            },
            success: function (menu) {
                $.each(menu, populateDropdown); // must call function as var
            }
        });
    };

    // Add selected menu item to order table
    addMenuItemToOrder = function () {
        var order_item_selected_quantity, selected_item,
            order_item_selected_id, order_item_selected_description,
            order_item_selected_price, order_item_selected_subtotal;

        // Limit order quantity to between 1-99
        order_item_selected_quantity = parseInt($("#select_quantity").val(), 10);
        if (order_item_selected_quantity < 1 ||
                order_item_selected_quantity > 99 ||
                isNaN(order_item_selected_quantity)) {
            return;
        }

        // Can't add 'Select an Item...' to order
        if ($("#select_item").get(0).selectedIndex === 0) {
            return;
        }

        // Get values
        selected_item = $("#select_item option:selected");
        order_item_selected_id = parseInt(selected_item.val(), 10);
        order_item_selected_description = selected_item.text();
        order_item_selected_price = parseFloat(selected_item.attr("title"));

        // Calculate subtotal
        order_item_selected_subtotal =
            calculateSubtotal(order_item_selected_price, order_item_selected_quantity);

        // Write out menu selection to table row
        $("<tr class='order_row'></tr>").html("<td>" +
            order_item_selected_quantity + "</td><td class='order_item_id'>" +
            order_item_selected_id + "</td><td class='order_item_name'>" +
            order_item_selected_description + "</td><td class='order_item_price'>" +
            order_item_selected_price + "</td><td class='order_item_subtotal'>" +
            order_item_selected_subtotal +
            "</td><td><input type='button' value='remove' /></td>")
            .appendTo("#order_cart").hide();

        // Display grand total of order_item_selected_id
        $("#order_cart tr.order_row:last").fadeIn("medium", function () {
            // Callback once animation is complete
            orderTotal();
        });

        formatRowCurrency();
        formatRowColor();
        clickRemove();
        clearForm();
    };

    // Calculate subtotal
    calculateSubtotal = function (price, quantity) {
        return price * quantity;
    };

    // Create alternating colored rows in order table
    formatRowColor = function () {
        $("#order_cart tr.order_row:odd").css("background-color", "#FAF9F9");
        $("#order_cart tr.order_row:even").css("background-color", "#FFF");
    };

    // Format new order item values to currency
    formatRowCurrency = function () {
        $("#order_cart td.order_item_price:last").formatCurrency();
        $("#order_cart td.order_item_subtotal:last").formatCurrency();
    };

    // Bind a click event to the correct remove button
    clickRemove = function () {
        $("#order_cart tr.order_row:last input").click(function () {
            $(this).parent().parent().children().fadeOut("fast", function () {
                $(this).parent().slideUp("slow", function () { // the row (tr)
                    $(this).remove(); // the row (tr)
                    orderTotal();
                });
            });
        });
    };

    // Clear order input form and re-focus cursor
    clearForm = function () {
        $("#select_quantity").val("");
        $("#select_item option:first-child").attr("selected", "selected");
        $("#select_quantity").focus();
    };

    // Calculate new order total
    orderTotal = function () {
        var order_total = 0;

        $("#order_cart td.order_item_subtotal").each(function () {
            var amount = ($(this).html()).replace("$", "");
            order_total += parseFloat(amount);
        });

        $("#order_total").text(order_total).formatCurrency();
    };

    // Call functions to prepare order and send to WCF Service
    handleOrder = function () {
        if ($("#order_cart tr.order_row:last").length === 0) {
            alert("No items selected...");
        } else {
            var data = tableToJson();
            sendOrder(data);
        }
    };

    // Convert HTML table data into an array
    // Based on code from:
    // http://johndyer.name/post/table-tag-to-json-data.aspx
    tableToJson = function () {
        var data, headers, orderCartTable, myTableRow, rowData, i, j;

        headers = ["Quantity", "Id"];
        data = [];
        orderCartTable = document.getElementById("order_cart");

        // Go through cells
        for (i = 1; i < orderCartTable.rows.length - 1; i++) {
            myTableRow = orderCartTable.rows[i];
            rowData = {};
            for (j = 0; j < 2; j++) {
                rowData[headers[j]] = myTableRow.cells[j].innerHTML;
            }
            data.push(rowData);
        }

        return data;
    };

    // Convert array to JSON and send to WCF Service
    sendOrder = function (data) {
        var jsonString = JSON.stringify({ restaurantOrder: data });

        $.ajax({
            url: wcfServiceUrl + "SendOrder?restaurantOrder=" + jsonString,
            type: "GET",
            contentType: "application/javascript",
            dataType: "jsonp",
            jsonpCallback: "OrderResponse",
            error: function () {
                alert("Order failed!");
            },
            success: function (confirmation) {
                alert(confirmation.toString());
            }
        });
    };
}());
Using Firebug to Look Behind the Scenes
In real life, a restaurant's menu changes pretty infrequently. Therefore, to speed page delivery, I chose to cache the restaurant's menu on the client-side. Caching is configured as part of the Operation Contract in IRestaurantService, as well as in the jQuery AJAX call to GetCurrentMenu in restaurant.js. In this example, I set the cache to 10 seconds, which can be confirmed by looking at the Cache-Control property in the HTTP Response Header of the call to GetCurrentMenu, using Firebug.
Below is a screen grab of the initial load of the restaurant menu/order form page in Firefox with Firebug running. Note the 'Domain' of the AJAX call is different than the page and associated files. Also, both the 'Status' and 'Remote IP' indicate the HTTP Response to GetCurrentMenu (the restaurant's menu) is cached, along with the page and associated files. Firebug is an invaluable tool in the development and debugging of JavaScript, especially when working with AJAX.
Points of Interest
Several things stood out to me as a result of writing this article:
- WCF – No matter how many times I work with WCF Services, getting them configured properly seems like 90% technical knowledge and 10% luck. Ok, maybe 20% luck! Seriously, there are a lot of great resources on the web regarding WCF configuration issues. If you have a specific problem with WCF, odds are someone else already had it and has published a solution. Make sure the information is current to the .NET Framework you are working with.
- Third-party Libraries, Plug-ins, and Frameworks – Don't confine yourself to using the out-of-the-box .NET Framework, JavaScript, or jQuery to solve all your coding challenges. There is an endless variety of Frameworks, JavaScript Libraries, and jQuery Plug-ins available. Being a good developer is about providing the best solution to a problem, not necessarily writing each and every line of code yourself. A few minutes of research can be worth hours of coding!
- Refactoring – Refactoring your code is critical. Just making it work is not good enough. Added bonus? I've personally gained a considerable amount of knowledge about software development through refactoring. Forcing yourself to go back and optimize code can be a tremendous learning opportunity. Using third-party refactoring tools such as JSLint/JSHint, FxCop, Refactor! Pro, CodeRush, ReSharper, and others is a great way to improve both your refactoring and coding skills. I use all these tools as much as possible.
- Cross-Domain with JSONP – Using JSONP is one technique to get around the limitations imposed by the same origin policy. JSONP has its pros and cons. Spend some time to research other methods that might better benefit your project requirements.
Interactive Form Functionality on the Client-Side Using jQuery
Posted by Gary A. Stafford in Software Development on April 3, 2010
Introduction
Many of us have used ASP.NET Web Forms for years, combined more recently with ASP.NET AJAX, to build robust web-solutions for our clients. Although Web Forms are not going away, it is also not the only technology available to ASP.NET developers to build web-solutions, or necessarily always the best. A developer’s ability to understand and implement multiple development technologies is critical to ensuring the best solution for the client.
Recently, the popularity of serious client-side development with JavaScript, jQuery, and AJAX has exploded. Much of the server-side processing required with ASP.NET Web Forms can easily be moved to the client-side with the help of increasingly sophisticated scripting tools such as jQuery and Ajax. The following article demonstrates and discusses a simple client order form, built using HTML, JavaScript, jQuery, AJAX, XML, and CSS. This example demonstrates many basic as well as some advanced capabilities of jQuery, including:
- Asynchronous HTTP (Ajax) request to populate a drop-down menu with XML data
- jQuery animation and CSS manipulation to enhance the client UI experience
- Use of jQuery plug-ins, specifically FormatCurrency to format text
- JavaScript and CSS minification to increase performance and obfuscate client-side code
- Use of Content Delivery Networks (CDN) to further optimize performance through web caching
In this example, a user individually chooses products from a drop-down menu, inputs the desired quantity, and adds the selection to their order. The selections along with a subtotal of their costs are displayed in the order table. Items can be removed from the order and additional items added. The order’s total cost is updated and displayed as items are added and removed. All events are handled on the client-side, without any server-side processing. A working example of this form can be accessed here.
About the Code
The files which make up the web directory of the order form example are as follows: (2) versions of the HTML order form, (1) XML data file with menu items, (6) JavaScript files, and (2) versions of the Style Sheets. Shown below is the directory of those files as seen in Visual Studio 2008. All code for this article is available for download on The Code Project.
The order form example comes in two flavors – an easy-to-understand, development-oriented copy (order_dev.htm), and a production-oriented copy (order_prd.htm), optimized for faster web-serving. The development version has all my JavaScript left in the bottom of the HTML file. The Style Sheet, jQuery library, and FormatCurrency jQuery plug-in scripts are externally linked to non-minified sources. Conversely, the production version has the Style Sheet and all JavaScript externally linked to minified files. I created two versions of the order form in order to compare the effects of optimization techniques on web-serving performance.
Code Optimization and Obfuscation
Using CSS Drive's CSS Compressor online utility, I decreased the size of my externally-linked Style Sheet file by 26%. I selected the 'Super Compact' and 'Strip ALL Comments' options. Using Google's Closure Compiler online utility, I decreased the size of my JavaScript by 43%. I selected the 'Simple' Optimization option. The more aggressive 'Advanced' option resulted in JavaScript errors. I did not select a Formatting option. According to the results from Firefox using Yahoo! YSlow, externally linking to minified copies of my Style Sheet and JavaScript files reduced the total size of the information sent to the browser from 175.8K to 79.9K, a savings of nearly 55%.
You can further test page performance by replacing the local link to the jQuery script file with a link to the minified copy of jQuery on Google's Content Delivery Network (CDN). The current link is commented out within order_prd.htm. For an explanation of the advantages and disadvantages of using a CDN, I recommend Dave Ward's post on Encosia.com, entitled 3 reasons why you should let Google host jQuery for you.
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
jQuery IntelliSense in Visual Studio 2008
The obvious advantage of keeping the JavaScript in the HTML page, at least during development, is the ability to take advantage of IntelliSense in Visual Studio 2008 with jQuery. IntelliSense makes the jQuery learning process much quicker! See a good post on this topic, jQuery Intellisense in VS 2008, by Scott Guthrie, at ScottGu’s Blog. Note, as of the date of publication of this article, the latest version of jQuery to have the necessary ‘-vsdoc’ file available for use with IntelliSense was version 1.4.1. I used this for the development version of the example. The production copy uses a later, minified version of jQuery 1.4.2, which is notably faster than 1.4.1.
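If you later move the JavaScript out of the HTML page and into its own .js file, you can keep jQuery IntelliSense by adding a reference directive at the top of that file. A minimal sketch, assuming the '-vsdoc' file sits in the same scripts folder as jquery-1.4.1.js:

/// <reference path="jquery-1.4.1-vsdoc.js" />

// With the reference directive above, Visual Studio 2008 offers jQuery
// IntelliSense inside this external .js file as well
$(document).ready(function () {
    $("#select_quantity").focus();
});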
Placing the Order
The form contains a button to place the final order. In this example, pressing the button returns a simple JavaScript alert(), depending on the contents of the order. In actual production, the order page could submit form data to a secondary page or code-behind class (ASP.NET Web Forms) for order processing. Alternatively, data could be formatted and sent directly back to an XML file or to a database using Ajax (a rough sketch follows below). Order processing could be done on the client- or server-side, depending on the technology implemented.
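As a rough illustration of that last option, the contents of the order table could be collected and posted to a server-side page with jQuery's AJAX API. The page name 'order_processor.aspx' and the response handling below are hypothetical, not part of this article's code:

// Hypothetical sketch only - 'order_processor.aspx' is not part of this article's code
function submitOrder() {
    // Collect each order row's quantity and item ID from the order table
    var order = [];
    $("#order_cart tr.order_row").each(function () {
        var cells = $(this).children("td");
        order.push({
            quantity: $(cells[0]).text(),
            id: $(cells[1]).text()
        });
    });

    // Post the order to a server-side page for processing
    // (older browsers may need json2.js for JSON.stringify)
    $.post("order_processor.aspx",
        { order: JSON.stringify(order) },
        function (response) {
            alert("Order placed: " + response);
        });
}

The alert() in the existing #order_btn click handler could then simply be replaced with a call to submitOrder().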
Using the Code
The order page contains two HTML tables. One table holds the menu selection elements and the other table displays the current order. Since jQuery so elegantly handles all interactions within the UI, there is very little HTML code to write.
<table id="select">
    <caption>Menu</caption>
    <tr>
        <td>Qnt.:</td>
        <td><input id="select_quantity" type="text" /> (*1-99)</td>
        <td>
            <select id="select_item">
                <option selected="selected">Select an Item...</option>
            </select>
        </td>
        <td><input id="add_btn" type="button" value="Add" /></td>
    </tr>
</table>
<br /><br />
<table id="order_cart">
    <caption>Order</caption>
    <thead>
        <tr>
            <th>Qnt.</th>
            <th>ID</th>
            <th>Description</th>
            <th>Price</th>
            <th>Subtotal</th>
            <th>Remove</th>
        </tr>
    </thead>
    <tbody>
    </tbody>
    <tfoot>
        <tr>
            <th colspan="4">Total:</th>
            <th id="order_total">$0.00</th>
            <th><input id="order_btn" type="button" value="Order!" /></th>
        </tr>
    </tfoot>
</table>

<script src="scripts/jquery-1.4.1.js" type="text/javascript"></script>
<script src="scripts/jquery.formatCurrency-1.3.0.js" type="text/javascript"></script>
The JavaScript contained in order_dev.htm immediately precedes the closing </body> tag. Keeping the JavaScript at the bottom of the page whenever possible allows the CSS and DOM elements to load first. I have included a large number of comments detailing much of the functionality contained in each part of the JavaScript.
<script type="text/javascript">
    // Retrieve XML document and loop for each item
    jQuery(function($) { // just like $(document).ready()
        $.ajax({
            type: "GET",
            url: "data/menu.xml",
            dataType: "xml",
            error: function() {
                $("<p>Error loading XML file...</p>")
                    .replaceAll("#order_form");
            },
            success: function(xml) {
                $(xml).find("item").each(fWriteXML); // must call function as var
            }
        });
    });

    // Populate drop-down box with XML contents
    var fWriteXML = function writeXML() {
        var id = $(this).attr("id");
        var cost = $(this).attr("cost");
        var item = $(this).text();

        $("#select_item")
            .append($("<option></option>")
            .val(id) // same as .attr("value", id)
            .html(item)
            .attr("title", cost));
    };

    // Add selected item to order
    $(function() {
        $("#add_btn").click(function() {
            var order_item_selected_quantity = $("#select_quantity").val();
            var selected_item = $("#select_item option:selected");
            var order_item_selected_id = selected_item.val();
            var order_item_selected_name = selected_item.text();
            var order_item_selected_cost = selected_item.attr("title");
            var pattern = new RegExp("^[1-9][0-9]?$"); // Select between 1-99 items

            // Do not proceed if input is incorrect
            if (pattern.test(order_item_selected_quantity) &&
                    order_item_selected_cost != "") {
                // Calculate subtotal
                var order_item_selected_subtotal =
                    parseFloat(order_item_selected_cost) *
                    parseInt(order_item_selected_quantity);

                $("<tr class='order_row'></tr>").html("<td>" +
                    order_item_selected_quantity + "</td><td>" +
                    order_item_selected_id + "</td><td class='order_item_name'>" +
                    order_item_selected_name + "</td><td class='order_item_cost'>" +
                    order_item_selected_cost + "</td><td class='order_item_subtotal'>" +
                    order_item_selected_subtotal + "</td><td>" +
                    "<input type='button' value='remove' /></td>")
                    .appendTo("#order_cart").hide();

                $("#order_cart tr.order_row:last").fadeIn("medium", function() {
                    orderTotal(); // Callback once animation is complete
                });

                // Format new order item values to currency
                $("#order_cart td.order_item_cost:last").formatCurrency();
                $("#order_cart td.order_item_subtotal:last").formatCurrency();

                clickRemove();
                clearForm();
            }
        });
    });

    // Bind a click event to the correct remove button
    function clickRemove() {
        $("#order_cart tr.order_row:last input").click(function() {
            $(this).parent().parent().children().fadeOut("fast", function() {
                $(this).parent().slideUp("slow", function() { // the row (tr)
                    $(this).remove(); // the row (tr)
                    orderTotal();
                });
            });
        });
    };

    // Clear order input form and re-focus cursor
    function clearForm() {
        $("#select_quantity").val("");
        $("#select_item option:first-child").attr("selected", "selected");
        $("#select_quantity").focus();
    };

    // Calculate new order total
    function orderTotal() {
        var order_total = 0;

        $("#order_cart td.order_item_subtotal").each(function() {
            var amount = ($(this).html()).replace("$", "");
            order_total += parseFloat(amount);
        });

        $("#order_total").text(order_total).formatCurrency();

        // Create alternating colored rows in order table
        $("#order_cart tr.order_row:odd").css("background-color", "#F0F0F6");
        $("#order_cart tr.order_row:even").css("background-color", "#FFF");
    };

    // Pretend to place order if it contains items
    $(function() {
        $("#order_btn").click(function() {
            if ($("#order_cart tr.order_row:last").length == 0) {
                alert("No items selected...");
            } else {
                alert("Order placed...");
            }
        });
    });
</script>
Additional Resources
I highly recommend the following resources to both beginner and intermediate jQuery developers who want to learn more about this great client-side development tool: