Using Windows Azure Service Management REST API in automated tasks

The Windows Azure platform opens up a lot of new possibilities, from auto-scaling instances of a deployed application to performing on-demand automated changes. When handling multiple applications deployed to the cloud, it pays to automate daily processes in order to save development time.

In this article I will show you how to automate the process of creating a new cloud service using the Windows Azure REST API. In our example we will create a custom API helper that instantiates a request object, which is then used to invoke the Azure REST API operation.

Accessing the WA API does not require the Azure subscription user name and password; all you need is the subscription ID and the management certificate. This makes it possible to delegate administrative tasks to people in the company who do not have access to the subscription account.
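Under the hood, every management call is a plain HTTPS request carrying the client certificate, an x-ms-version header and, for create operations, an XML body. To make that anatomy explicit outside of .NET, here is a minimal Python sketch that builds (but does not send) such a request; the pem_cert_path argument is an assumption of this sketch — the management certificate exported again in PEM form, since Python's ssl module cannot load .cer/.pfx files directly.

```python
import ssl
import urllib.request

# URL format as used later in the article's configuration
SERVICE_MGMT_FORMAT = "https://management.core.windows.net/{0}/services/hostedservices"

def build_create_service_request(subscription_id, api_version, payload_xml, pem_cert_path=None):
    """Build (but do not send) the Create Hosted Service request.

    No user name or password is involved: the management certificate attached
    to the SSL context authenticates the call. pem_cert_path is a hypothetical
    path to the certificate converted to PEM form.
    """
    url = SERVICE_MGMT_FORMAT.format(subscription_id)
    request = urllib.request.Request(
        url,
        data=payload_xml.encode("utf-8"),
        headers={
            "x-ms-version": api_version,        # requested API version
            "Content-Type": "application/xml",  # the payload is XML
        },
        method="POST",
    )
    context = ssl.create_default_context()
    if pem_cert_path:
        context.load_cert_chain(pem_cert_path)  # attach the client certificate
    return request, context

# sending would then be: urllib.request.urlopen(request, context=context)
```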

The first thing to do is to create a management certificate and upload it to the WA Management Portal. One way to create the certificate is from within Visual Studio: right-click the cloud project and open the remote desktop configuration wizard, then select “create new” from the menu. After the certificate is created, we can view it and export it to a .cer file. At this stage we also need to note the certificate’s thumbprint, which will be used to find it in the local store.

The image below shows the process of configuring a new RDP connection and creating a new certificate:

cert-config-azure

After the certificate has been created and exported to a file, we can upload it to the WA Management Portal as shown below:

azure-management-certificate

Please note that the certificate thumbprint is the same as our local one.

We also need to make sure that our API helper will find the certificate in the local store. To check its location, open the Microsoft Management Console (mmc) and add the certificates snap-in for the current user and the local computer. Then copy the certificate as depicted below:

certificates
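One gotcha worth knowing: Find(X509FindType.FindByThumbprint, …) matches against the bare hex thumbprint, so a value pasted from the certificate dialog with spaces (or with the invisible Unicode character the dialog sometimes prepends) will silently match nothing. A small normalization step avoids that — sketched here in Python purely for illustration:

```python
def normalize_thumbprint(raw):
    """Keep only hex digits and upper-case them, dropping spaces and any
    invisible characters pasted in from the certificate dialog."""
    return "".join(ch for ch in raw if ch in "0123456789abcdefABCDEF").upper()

# the spaced form shown in the certificate UI becomes store-searchable
print(normalize_thumbprint("3a f6 67 24 d8 d8 b3 71 b0 c4 d3 00 c2 04 0d 62 e5 30 76 1c"))
# → 3AF66724D8D8B371B0C4D300C2040D62E530761C
```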

At this stage we can start implementing our API request helper. Let’s first create a custom PayLoadSettings class that will hold the basic request settings.

 public class PayLoadSettings
 {
    public string CloudServiceUrlFormat { get; set; }
    public string SubscriptionId { get; set; }
    public string Thumbprint { get; set; }
    public string ServiceName { get; set; }
    public string Label { get; set; }
    public string Description { get; set; }
    public string Location { get; set; }
    public string AffinityGroup { get; set; }
    public string VersionId { get; set; }
 }

Next, let’s create a function that retrieves our newly created (and uploaded to the WAM portal) certificate from the local machine store:

/// <summary>
/// Get certificate from the local machine by thumbprint
/// </summary>
/// <returns></returns>
private X509Certificate2 GetX509Certificate()
{
    X509Certificate2 x509Certificate = null;
    var certStore = new X509Store(StoreName.My, StoreLocation.LocalMachine);
    try
    {
        certStore.Open(OpenFlags.ReadOnly);

        var x509CertificateCollection = certStore.Certificates.Find(X509FindType.FindByThumbprint, this.PayLoadSettings.Thumbprint, false);

        if (x509CertificateCollection.Count == 0)
        {
            throw new InvalidOperationException("Certificate not found in the LocalMachine\\My store - check the thumbprint.");
        }

        x509Certificate = x509CertificateCollection[0];
    }
    finally
    {
        certStore.Close();
    }

    return x509Certificate;
 }

Next, we want a function that attaches the certificate to a new request object used to execute the remote action. We also set the requested API version in the x-ms-version header.

/// <summary>
/// Create http request object with the certificate added
/// </summary>
/// <param name="uri"></param>
/// <param name="httpWebRequestMethod"></param>
/// <returns></returns>
private HttpWebRequest CreateHttpWebRequest(Uri uri, string httpWebRequestMethod)
{
    var x509Certificate = GetX509Certificate();
    var httpWebRequest = (HttpWebRequest)HttpWebRequest.Create(uri);

    httpWebRequest.Method = httpWebRequestMethod;
    httpWebRequest.Headers.Add("x-ms-version", this.PayLoadSettings.VersionId);

    httpWebRequest.ClientCertificates.Add(x509Certificate);
    httpWebRequest.ContentType = "application/xml";

    return httpWebRequest;
}

The next step is to create the payload document containing the parameters of the operation we want to execute. The element names are self-explanatory.

//XML namespace used by the Service Management API elements
private static readonly XNamespace azureNamespace = "http://schemas.microsoft.com/windowsazure";

/// <summary>
/// Create payload document
/// </summary>
/// <returns></returns>
private XDocument CreatePayload()
{
    var base64LabelName = Convert.ToBase64String(Encoding.UTF8.GetBytes(this.PayLoadSettings.Label));

    var xServiceName = new XElement(azureNamespace + "ServiceName", this.PayLoadSettings.ServiceName);
    var xLabel = new XElement(azureNamespace + "Label", base64LabelName);
    var xDescription = new XElement(azureNamespace + "Description", this.PayLoadSettings.Description);
    var xLocation = new XElement(azureNamespace + "Location", this.PayLoadSettings.Location);
    var xAffinityGroup = new XElement(azureNamespace + "AffinityGroup", this.PayLoadSettings.AffinityGroup);
    var createHostedService = new XElement(azureNamespace + "CreateHostedService");

    createHostedService.Add(xServiceName);
    createHostedService.Add(xLabel);
    createHostedService.Add(xDescription);
    createHostedService.Add(xLocation);
    createHostedService.Add(xAffinityGroup);

    var payload = new XDocument();
    payload.Add(createHostedService);

    payload.Declaration = new XDeclaration("1.0", "UTF-8", "no");

    return payload;
}
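For reference, the wire format this produces is a CreateHostedService document in the http://schemas.microsoft.com/windowsazure namespace, with the Label element base64-encoded. The same payload can be sketched in a language-neutral way — here with Python's xml.etree, as an illustration only:

```python
import base64
import xml.etree.ElementTree as ET

AZURE_NS = "http://schemas.microsoft.com/windowsazure"

def create_payload(service_name, label, description, location):
    ET.register_namespace("", AZURE_NS)  # emit the Azure namespace as the default one
    root = ET.Element("{%s}CreateHostedService" % AZURE_NS)
    values = {
        "ServiceName": service_name,
        # the Label element must be base64-encoded
        "Label": base64.b64encode(label.encode("utf-8")).decode("ascii"),
        "Description": description,
        "Location": location,
    }
    for tag, text in values.items():
        ET.SubElement(root, "{%s}%s" % (AZURE_NS, tag)).text = text
    return ET.tostring(root, encoding="unicode")

xml_payload = create_payload("newService1234567", "newService1234567",
                             "My new cloud service", "North Europe")
```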

With the payload document created, we can send the request and retrieve the request id if the operation succeeds.

/// <summary>
/// Invoke Api operation by sending payload object
/// </summary>
/// <param name="uri"></param>
/// <param name="payload"></param>
/// <returns></returns>
private string InvokeAPICreateRequest(string uri, XDocument payload)
{
    string requestId;
    var operationUri = new Uri(uri);

    var httpWebRequest = CreateHttpWebRequest(operationUri, "POST");

    using (var requestStream = httpWebRequest.GetRequestStream())
    {
        using (var streamWriter = new StreamWriter(requestStream, UTF8Encoding.UTF8))
        {
            payload.Save(streamWriter, SaveOptions.DisableFormatting);
        }
    }

    using (var response = (HttpWebResponse)httpWebRequest.GetResponse())
    {
        requestId = response.Headers["x-ms-request-id"];
    }

    return requestId;
}

The final function puts it all together:

/// <summary>
/// Execute create cloud service request
/// </summary>
/// <returns></returns>
public string CreateCloudService()
{
    var cloudServiceUrl = string.Format(this.PayLoadSettings.CloudServiceUrlFormat, this.PayLoadSettings.SubscriptionId);
    
    var payload = CreatePayload();

    var requestId = InvokeAPICreateRequest(cloudServiceUrl, payload);

    return requestId;
}

When invoked from a console application, the code looks as below:

 static void Main(string[] args)
 {
    //load this from your configuration file
    var payLoadSettings = new PayLoadSettings()
    {
        CloudServiceUrlFormat = "https://management.core.windows.net/{0}/services/hostedservices",
        SubscriptionId = "92533879-88c9-41fe-b24e-5251bcf49a8f",//fake subscription id - please provide yours
        Thumbprint = "3a f6 67 24 d8 d8 b3 71 b0 c4 d3 00 c2 04 0d 62 e5 30 76 1c", //fake cert thumbprint - please provide yours
        ServiceName = "newService1234567",//name your new service
        Label = "newService1234567", //give it a tracking label
        Description = "My new cloud service", //service description
        Location = "North Europe",//select centre
        AffinityGroup = "", //not created yet
        VersionId = "2011-10-01"//api version
    };

    var api = new RestAPIHelper(payLoadSettings);

    try
    {
        var requestId = api.CreateCloudService();

        Console.WriteLine("Cloud service has been created successfully :)" + Environment.NewLine + "Request id: " + requestId);
        Console.ReadLine();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        Console.ReadLine();
    }
}

Let’s run the console application now:

cloud-service-created

After executing the above, we can check in the WA Management Portal that the cloud service has been created. It should look like the image below:

newcloudservice

I have attached the project files for your tests. Please note that you need to set your own configuration settings for it to work. You can also use the above example to create your own automated tasks for Windows Azure – simply implement other operations in a similar way. You can then use, for example, TeamCity to run them automatically when needed. This gives you a lot of possibilities and simply saves your precious development time.

AzureRestAPI


Using automation for gathering release changes

If you need to gather release changes when creating installers, the script below can help. It simply gets all, or only the current, database release changes stored in files. I use it to automatically gather the latest changes after TeamCity builds the release installer. The changes can then be sent to PR for documentation. You can of course use more complicated patterns when extracting the script text if required. To execute it, just pass in the update-script location, the version number and the email addresses.

release_changes

 

 param (
 [string]$ScriptFolder = "C:\Database\Updates",
 [string]$Version = "2.11.0.0",
 [string]$SendEmailsTo = "test@test.com",
 [boolean]$GetAllChanges = 0
)

#sends notification emails
 Function SendEmail($SendEmailsTo_, $EmailSubject_, $changes_)
 {
   $emailFrom = "teamcity@test.com" 
   $smtpserver="smtp.test.com" 
   $smtp=new-object Net.Mail.SmtpClient($smtpServer) 

   foreach ($email in $SendEmailsTo_.split(';'))
   {   
      $smtp.Send($emailFrom, $email, $EmailSubject_, $changes_)
   }
 }

 #get file name by version number
 Function GetReleaseFileName($Version_)
 {
     #format e.g. v12.12.00.10.sql
     $VersionParts_ = $Version_.split('.') | ForEach-Object { ([int]$_).ToString("00") };
     $VersionFilename_ = "v" + ($VersionParts_ -join '.') + ".sql";

     return $VersionFilename_;
 }

#return if no emails added
if($SendEmailsTo.trim() -eq "") { "no emails defined!"; return;}

$VersionFilename =  GetReleaseFileName $Version;

$EmailSubject = "Release $Version database changes";
$changes = "Changes in release file $VersionFilename `r`n `r`n";
$hasChanges = 0;

if(!$GetAllChanges) {
    #only the release file matching the requested version is inspected
    $releaseFile = Join-Path $ScriptFolder $VersionFilename;

    if(Test-Path $releaseFile)
    {
        $fileContent = get-content $releaseFile -Raw;

        $patterns = 'alter','drop','insert','delete','update','create'; #patterns/strings to find
        foreach ($pattern in $patterns)
        {
           $fileContent.split("`r`n") | ?  { $_ -match $pattern } | % { $changes += $_.trim() + "`r`n"; $hasChanges = 1; } 
        }
    }

    if($hasChanges -eq 0) { $changes += "`r`nNo database changes for release $Version !"}

    write-output $changes

    SendEmail $SendEmailsTo $EmailSubject $changes;
}
else
{
  #parse all files for all previous releases
  $EmailSubject = "All database changes";
  $changes = "All database changes: `r`n `r`n";

  (dir $ScriptFolder) | sort CreationTime -Descending |
    ForEach-Object {  
        $fileContent = get-content ($_.Directory.FullName + "\" + $_.name) -Raw;

        $patterns = 'alter','drop','insert','delete','update','create';
        $changes += "`r`nChanges in release file $_`r`n";
        $hasChanges = 0;

        foreach ($pattern in $patterns)
        {          
           if($fileContent)
           {
              $fileContent.split("`r`n") | ?  { $_ -match $pattern } | % { $changes += $_.trim() + "`r`n"; $hasChanges = 1; }  
           } 
        } 

        if($hasChanges -eq 0) { $changes += "`r`nNo database changes for release " + $_.name.Replace(".sql","").Replace("v0","");}

        $changes += "`r`n-------------------------------`r`n";      
    }

    write-output $changes 

    SendEmail $SendEmailsTo $EmailSubject $changes;
}
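The naming convention the script documents (each version part zero-padded to two digits, e.g. v12.12.00.10.sql) can be expressed compactly. A Python equivalent of GetReleaseFileName, for illustration:

```python
def release_filename(version):
    """Map a release version like '2.11.0.0' to its update-script file name,
    zero-padding each numeric part to two digits (format: v02.11.00.00.sql)."""
    parts = ["%02d" % int(p) for p in version.split(".")]
    return "v" + ".".join(parts) + ".sql"

print(release_filename("2.11.0.0"))   # → v02.11.00.00.sql
```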

I have included full script sample below
release_changes_notifier


Getting data from TeamCity Rest API – PowerShell

A simple script that gets build data from the TeamCity server REST API. When executed as a build step, you can pass the current build id to it. With that, we can get the previous build and its data, and proceed with further processing – for example gathering documentation, statistics, etc. Please make sure you have enabled running PowerShell scripts on your server.

Enjoy!

teamcity_api

 param (
    [int]$BuildId = 2274
 )

#gets last build date by build id
Function GetLastBuildDateByBuildId($LastBuildId_)
{
  $secpasswd = ConvertTo-SecureString "mypassword" -AsPlainText -Force
  $cred = New-Object System.Management.Automation.PSCredential("username", $secpasswd)

  $build = (Invoke-RestMethod -Credential $cred -Uri "https://teamcity:88/httpAuth/app/rest/builds/id:$LastBuildId_").build;
  return [DateTime]::ParseExact($build.startdate.split("T")[0],'yyyyMMdd', $null) 
}

#get last build date
$lastBuildDate = GetLastBuildDateByBuildId  ($BuildId -1)

$lastBuildDate
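The only non-obvious part is TeamCity's timestamp format: the REST API returns values like 20140819T113215+0100, which the script splits on "T" and parses as yyyyMMdd. The same parsing step, sketched in Python for illustration:

```python
from datetime import datetime

def parse_teamcity_start_date(start_date):
    """TeamCity's REST API reports dates like '20140819T113215+0100';
    keep the day portion and parse it."""
    return datetime.strptime(start_date.split("T")[0], "%Y%m%d").date()

print(parse_teamcity_start_date("20140819T113215+0100"))  # → 2014-08-19
```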

AutoBuilds and deployment using TeamCity CI server

When developing software solutions, sometimes there is a need to automate builds, deployments and testing tasks. One possible solution is the TeamCity server. This Continuous Integration server is especially useful when managing multiple SVN branches that need to be deployed to multiple web applications whenever a predefined change occurs.

One of the triggers we can use to start the build and deployment is an SVN change. In this article I will show you how to configure the auto build and deployment of a .NET web application triggered by SVN changes.

After installing TeamCity, we need to create a project and add a VCS root (I won’t be describing that as it is quite straightforward). After that we have to define an SVN trigger that detects repository changes and triggers the build. The configuration below triggers a build when one of 3 developers commits his/her changes (to the main trunk).

svntrigger

The next thing to do is to configure the Visual Studio project file (.csproj) to auto build and deploy the application in an AfterBuild event task. To do it, please open the .csproj in notepad and add the code below:

 <PropertyGroup>
    <DeployTarget>0</DeployTarget>
    <DeployTargetFolder>\\TestServer\MyApplication</DeployTargetFolder>
    <PublishTarget>0</PublishTarget>
    <PublishFolder>..\Deployment\MyApplication</PublishFolder>
  </PropertyGroup>
  <Target Name="PublishProperties">
    <CreateProperty Value="$(PublishFolder)">
      <Output TaskParameter="Value" PropertyName="WebProjectOutputDir" />
    </CreateProperty>
    <CreateProperty Value="$(PublishFolder)\bin\">
      <Output TaskParameter="Value" PropertyName="OutDir" />
    </CreateProperty>
  </Target>
  <Target Name="WebPublish" DependsOnTargets="BeforeBuild;PublishProperties">
    <RemoveDir Directories="$(PublishFolder)" ContinueOnError="true" />
    <CallTarget Targets="ResolveReferences;_CopyWebApplication" />
  </Target>
  <Target Name="Deploy" DependsOnTargets="WebPublish">
    <CreateProperty Value="$(DeployTargetFolder)" Condition="$(DeployFolder) == ''">
      <Output TaskParameter="Value" PropertyName="DeployFolder" />
    </CreateProperty>
    <RemoveDir Directories="$(DeployFolder)" Condition="$(CleanDeploy) == 1" />
    <ItemGroup>
      <DeploymentFiles Include="$(PublishFolder)\**\*.*" />
    </ItemGroup>
    <Copy SourceFiles="@(DeploymentFiles)" DestinationFolder="$(DeployFolder)\%(RecursiveDir)" />
  </Target>
  <Target Name="AfterBuild">
    <CallTarget Targets="WebPublish" Condition="$(PublishTarget) == 1" />
    <CallTarget Targets="Deploy" Condition="$(DeployTarget) == 1" />
  </Target>

Notice that the PropertyGroup contains some params the build will be configured with. In our case we set DeployTarget and PublishTarget to 0 because we don’t want Visual Studio to trigger the deployment each time we hit F5. We will override these params from the TeamCity-triggered build. See the configuration below:

autobuildconfig

The diagram below illustrates the whole process:
ci-build-and-deploy

P.S. If you want to run FxCop as a build step, an additional change is required. After installing FxCop you need to manually adjust the build agent properties in the following location: C:\TeamCity\buildAgent\conf\buildAgent.properties

The following configuration needs to be added:

 system.FxCopRoot=C\:\\Program Files (x86)\\Microsoft Fxcop 10.0
 system.FxCopCmdFileVersion=10.0.30319.1
 system.WindowsSDK7.1

Otherwise you will get a “no suitable Build Agent” error.

Good luck with your configuration 🙂


Live database restoration to test server using Powershell

It is a very common requirement during the development stage to copy the live database to a test server on a daily/weekly basis. In such a scenario there is often also a need to clean the database and perform some operations before restoring it in the test environment.

For example, if the live database is very big and you don’t need all the tables on your test server, you can drop them during the copy process.
The same applies if you need to adjust permissions, change the database name, optimize indexes or just shrink the database file.

CopyLiveDBToTest-300x137

The script below will help you automate the copy-database process. On every run, it checks for the latest database backup file in a defined location.
Next, it copies that file to your local temp folder. After that, it restores the database under a temp name, so you can continue working while the restoration lasts (useful for big databases).
When the database is restored, it drops the old test database and renames the current one to the same name. The old database files are also deleted to free up disk space. The database files will have names like “TestDB_DEV_14_08_19.mdf”.

As a result, you will have up-to-date live database restored on your test server.

 param (
#local test database server to restore your live database 
[string]$TargetSQLServer= ".\MSSQLSERVER_2012",

#name of your live database
[string]$LiveSQLDatabase= "TestDB",

#name of your final test database being set after restoration
[string]$TargetSQLDatabase= "TestDB_DEV",

#data root of mssql server - containing .mdf files of your test database
[string]$TargetSQLDatabaseDataRoot= "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER_2012\MSSQL\DATA",

#folder where current backup file will be copied and processed - must be local folder
[string]$TempLocalFolder= "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER_2012\MSSQL\Backup_temp",

#backup location of your live database - can be located on the network drive
[string]$BackupLiveFolder = "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER_2012\MSSQL\Backup",

#show copy progress or copy in background
[bool]$silentCopying = 0,

#log location
[string]$Logfile = "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER_2012\MSSQL\Log\CopyLiveDBToTestLog.txt"
)

Function LogWrite
{
   Param ([string]$logstring)
   $currentdate = get-date -format "M-d-yyyy HH:mm:ss"
   Add-content $Logfile -value "$currentdate | $logstring"
   write-host "$currentdate | $logstring"
}

try
{ 
    LogWrite "acquiring backup file..."
    
    #get most recent backup file
    $liveBackupFile  = (dir -Path $BackupLiveFolder) | where {$_.extension -eq ".bak" }  | 
       sort lastwritetime -Descending | select -first 1;
      
    LogWrite "validating paths..."
    
    if($liveBackupFile -eq $null -or -not (Test-path $liveBackupFile.FullName)){ 
       LogWrite "Could not find the backup file, check the backup folder location!";
       return;}

    if(-not(Test-path $TempLocalFolder)) { LogWrite "Incorrect local backup location!"; return;}

    #copy backup file to temp location
    if(-not(Test-Path "$TempLocalFolder\$liveBackupFile"))
    {
       #remove all backups in temp location   
        dir -path $TempLocalFolder |  Foreach-Object {
        get-childitem -Path $_.FullName |  
         where-object { 
            ($_.FullName.tolower().EndsWith(".bak") -and $_.FullName.tolower().contains($LiveSQLDatabase.ToLower()))
          } | 
         Foreach-Object {
            LogWrite "removing file: $($_.FullName)"
            remove-item -path $_.FullName
          } 
        }

        LogWrite "copying backup file $liveBackupFile ..."
        
        if(-not($silentCopying -eq 1))
        {
            #use shell to see progress
            $FOF_CREATEPROGRESSDLG = "&H0&"
            $objShell = New-Object -ComObject "Shell.Application"
            $objFolder = $objShell.NameSpace($TempLocalFolder) 
            $objFolder.CopyHere($liveBackupFile.FullName, $FOF_CREATEPROGRESSDLG)
        }
        else
        {
           copy-item -Path $liveBackupFile.FullName -Destination "$TempLocalFolder\$liveBackupFile" -PassThru -Verbose
        }

        LogWrite "copying backup file $liveBackupFile finished"
    }
    else
    {
       LogWrite "backup file already exists: $TempLocalFolder\$liveBackupFile"
    }

    #----------------------------------------------------------

    LogWrite "connecting to $TargetSQLServer"

    $TargetSQLDatabase_temp = $TargetSQLDatabase + "_"+ (get-date).tostring("yy_MM_dd");
    
    $mdfCurrentPath = "$TargetSQLDatabaseDataRoot\$TargetSQLDatabase_temp.mdf"
    if(Test-Path $mdfCurrentPath)
    {
        LogWrite "database $TargetSQLDatabase_temp.mdf already exists!"
        return;
    }

    LogWrite "restoring database from file: $TempLocalFolder\$liveBackupFile to temp database $TargetSQLDatabase_temp ..."

    $DBLogicalFileName = $LiveSQLDatabase;
    $DBLogicalFileName_log = $LiveSQLDatabase + "_Log";

    $query = "RESTORE DATABASE [$TargetSQLDatabase_temp] FROM   
    DISK = N'$TempLocalFolder\$liveBackupFile' WITH  FILE = 1,  
    MOVE N'$DBLogicalFileName' TO N'$TargetSQLDatabaseDataRoot\"+$TargetSQLDatabase_temp+".mdf',   
    MOVE N'$DBLogicalFileName_log' TO N'$TargetSQLDatabaseDataRoot\"+$TargetSQLDatabase_temp+".ldf', 
    NORECOVERY,  NOUNLOAD,  REPLACE,  STATS = 10";


    SQLCMD -S $TargetSQLServer -E -Q $query 

    Start-Sleep -s 10

    #bring it to live from restoring mode
    $query = "RESTORE DATABASE "+$TargetSQLDatabase_temp+" WITH RECOVERY"
    SQLCMD -S $TargetSQLServer -E -Q $query 
    
    LogWrite "dropping old $TargetSQLDatabase database..."

    #drop current database
    $query = "ALTER DATABASE [$TargetSQLDatabase]
    SET OFFLINE WITH ROLLBACK IMMEDIATE
    DROP DATABASE [$TargetSQLDatabase]"
    SQLCMD -S $TargetSQLServer -E -Q $query 
    
    Start-Sleep -s 10
    
    LogWrite "renaming temp database $TargetSQLDatabase_temp to $TargetSQLDatabase ..."

    #rename temp database 
    $query = "ALTER DATABASE ["+$TargetSQLDatabase_temp+"] SET SINGLE_USER WITH ROLLBACK IMMEDIATE    
    ALTER DATABASE ["+$TargetSQLDatabase_temp+"] MODIFY NAME = [$TargetSQLDatabase]
    ALTER DATABASE [$TargetSQLDatabase] SET MULTI_USER"
    SQLCMD -S $TargetSQLServer -E -Q $query 

    LogWrite "recreating db permissions..."
    
    #recreate permissions
    $query = "ALTER AUTHORIZATION ON DATABASE::[$TargetSQLDatabase] TO [sa]
    use  [$TargetSQLDatabase]
    ALTER User [domain\Administrators] with LOGIN = [domain\Administrators]"

    #enable if used
    #SQLCMD -S $TargetSQLServer -E -Q $query 
           
    LogWrite "cleaning and shrinking database $TargetSQLDatabase ..."
    
    $query = "use $TargetSQLDatabase;
    ALTER DATABASE $TargetSQLDatabase SET RECOVERY SIMPLE;   
    truncate table dbo.TestTable;

    DBCC SHRINKFILE('$DBLogicalFileName_Log');
    DBCC SHRINKFILE('$DBLogicalFileName')"
    SQLCMD -S $TargetSQLServer -E -Q $query 
    
    
    LogWrite "optimizing table indexes..."
    
     $query = "USE [$TargetSQLDatabase];

        EXEC [dbo].[IndexOptimize]
            @IndexFragmentationMin = 5,
            @IndexFragmentationRebuildMin = 30,
            @skiplargetables = 0,
            @PrintInformationOnly = 0"
 
    #enable if used
    #SQLCMD -S $TargetSQLServer -E -Q $query 
    
    
    LogWrite "removing old database files..."
       
    #remove all old database files   
    $fileEntries = [IO.Directory]::GetFiles("$TargetSQLDatabaseDataRoot");
    $todayDatePart = (get-date).tostring("yy_MM_dd");

     foreach($fileName in $fileEntries) 
     { 
        if($fileName.tolower().contains("$TargetSQLDatabase".tolower()) -and -not($fileName.tolower().contains($todayDatePart)))
        {
           LogWrite $fileName
           #remove-item -path $fileName
        }
     }    
      
    LogWrite "Done!"

} 
catch { 
     $msg = "Unexpected Error. Error details: $($_.Exception.Message)"
     LogWrite $msg 
     Write-Host $msg -foreground Red 
}
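The clean-up step at the end relies on a simple predicate: a data file is stale when it belongs to the test database but does not carry today's yy_MM_dd suffix. The same rule, isolated as a Python sketch for clarity:

```python
from datetime import date

def is_stale_db_file(file_name, db_name, today):
    """True when the file belongs to the test database (name match) but was
    created on an earlier day, i.e. lacks today's yy_MM_dd suffix."""
    today_part = today.strftime("%y_%m_%d")
    name = file_name.lower()
    return db_name.lower() in name and today_part not in name

# an old copy is flagged for deletion, today's copy is kept
print(is_stale_db_file("TestDB_DEV_14_08_18.mdf", "TestDB_DEV", date(2014, 8, 19)))  # → True
print(is_stale_db_file("TestDB_DEV_14_08_19.mdf", "TestDB_DEV", date(2014, 8, 19)))  # → False
```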

Enjoy!
CopyLiveDBToTest


Auto creating SVN tags and custom release installers

Automation within a software development company saves a tremendous amount of time in the long run. The time spent on the initial configuration of automated tasks is almost nothing compared to running all processes manually. In this article I will show you how to configure automatic SVN tagging and the creation of custom installation files each time a TeamCity build is performed.

In our scenario we want to copy our current branch files to the https://svn.mycompany.com/Releases/Tags/ReleaseVersion location, and from that location create an installation file that we will deploy on the client server.

The first step is to create a new TeamCity build configuration and define the following configuration as the first build step:

svn_tags

Let’s create our svn-create-TAGS.bat file. The implementation is quite simple. There are 3 parameters that we pass in from the TeamCity step: ReleaseVersion (%1), SVNRevision (%2) and BranchSVNRoot (%3). Based on these we trigger the script below.

Because the subversion copy function has no replace option, we first delete the current tag if it exists. If the tag doesn’t exist, svn reports an error that could be passed to the TeamCity log. In our case we use the -q param to suppress error messages, as this script runs multiple times and the tag has to be replaced each time the same release version is used.

After running this script we will have the following tag created, e.g. https://svn.mycompany.com/Releases/Tags/Release.2.8.0.3

 echo version %1
 echo revision %2
 echo branch path %3

  echo delete first if exists as there is no overwrite function for svn copy
  "C:\Program Files\TortoiseSVN\bin\svn.exe" delete "https://svn.mycompany.com/Releases/Tags/Release.%1" -q -m "TeamCity generated: Deleting TAG branch for the release: %1"

 echo copy to tag
 "C:\Program Files\TortoiseSVN\bin\svn.exe" copy -r%2 "%3/MyMainProjectFolder" "https://svn.mycompany.com/Releases/Tags/Release.%1" -m "TeamCity generated: Creating TAG branch for the release: %1"

We also need to configure TeamCity parameters as follows:
svntags_params

Once the release tag is created, we can make the release installer that will be deployed to the client. To do that we define a new build step (the configuration is similar to the step above – only the script name and params differ).

Let’s create the Installer_build.bat script then. We pass in 2 parameters: ReleaseVersion (%1) and SVNRevision (%2). The script creates a folder structure in the Windows temp location that we use to import and compress our release files. When done, it clears that location and moves the executable file to the final release location.

The script basically exports the files from the tag we created earlier; the files are then compressed using the 7za.exe command-line utility. We could re-build the project in between, but in our case the project was already built by another TeamCity process before it was moved to the release tag. After creating the zip package you can build a custom installer based on that file – this depends on your solution. In our example we simply output the final zip file to be deployed to the client. Please see the in-line comments for details:

 echo params
 echo version %1
 echo revision %2

 echo top level folder structure
 set root=%TEMP%\MyProjectBuild
 set export="%root%\Export"
 set build=%root%\Build
 set installer=%root%\Installer

 echo make sure nothing already exists
 rmdir /s /q "%root%"

 mkdir "%root%"
 mkdir "%build%"

 echo Create the installer
 echo %installer%
 mkdir %installer%

 svn export -r%2 "https://svn.mycompany.com/Releases/Tags/Release.%1/" "%export%"

 echo you may build your project here or proceed
 set MSBUILD_PATH=C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe

 echo Copy files ready to be compressed
 set InstallerFiles=%build%

 echo copy from export folder to the final build folder
 xcopy "%export%\*.*" "%InstallerFiles%\" /E /Y

 echo set tools location that contains your compressor file, same path as the executing batch file
 set Tools=%~dp0
 set FinalExecutable=MyProject.%1.zip
 set BuildZip=%installer%\%FinalExecutable%

 echo current path %Tools%

 echo build your custom installer or export just the zip file
 "%Tools%7za.exe" a -tzip -r "%BuildZip%" "%build%\*.*"

 echo copy it to the releases location and clear up
 set Releases=C:\Installers

 copy "%installer%\%FinalExecutable%" "%Releases%\%FinalExecutable%"

 if exist "%Releases%\%FinalExecutable%" (
	rmdir /s /q "%root%"
	echo The build is complete.
 ) else (
	echo Unable to copy to final destination. The installer is here: %installer%\%FinalExecutable%
 )

After creating the above TeamCity build steps, we can configure the release installer build to be triggered on a successful main project build.

svntag_build_trigger

To get the above scripts to work you will need to adjust them to your requirements, as I took them from a running solution and for obvious reasons had to change the paths and names.
I hope this helps you be more productive in your daily programming 🙂


Programmatically creating scheduled task in Windows Task Scheduler

If you are creating a Windows Forms application that has to perform some tasks on a regular basis, a good solution is to use the built-in Windows Task Scheduler. It is very stable and reliable, as long as the user’s computer is running.

In this article I will show you how to create a scheduled task using Microsoft.Win32.TaskScheduler.dll (a managed wrapper dll).

Let’s create our Scheduler class with a function that sets up the Windows scheduler. In our example we will run either an external file or the test application, depending on the parameters passed in.

After creating a new instance of TaskService, we set up some basic parameters (the names and description shown in the Windows scheduler UI). Next we create the trigger; in our case a weekly trigger, as the task will be performed on the days of the week the user configures.

After that we set up the execution path and additional startup arguments if required. This is useful so the target application can detect that it was run automatically and apply a different layout, colors, buttons, etc.

 public class Scheduler
 {
    public static string TaskName = "MyTask";

    public static void SetTask(string filePath, DateTime startDate, DaysOfTheWeek daysOfWeek, bool enabled)
    {
        var ts = new TaskService();
        var td = ts.NewTask();
        td.RegistrationInfo.Author = "My company";
        td.RegistrationInfo.Description = "Runs test application";
        td.Triggers.Add(new WeeklyTrigger { StartBoundary = startDate, DaysOfWeek = daysOfWeek, Enabled = enabled });

        //run this application or setup path to the file
        var action = new ExecAction(Assembly.GetExecutingAssembly().Location, null, null);
        if (filePath != string.Empty && File.Exists(filePath))
        {
            action = new ExecAction(filePath);
        }
        action.Arguments = "/Scheduler";
        td.Actions.Add(action);

        ts.RootFolder.RegisterTaskDefinition(TaskName, td);
    }
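On the receiving side, the /Scheduler switch set via ExecAction.Arguments above can be detected at startup. A minimal sketch (the StartupMode class and StartedByScheduler name are mine, not part of the wrapper library):

```csharp
using System;
using System.Linq;

static class StartupMode
{
    // True when the process was launched with the /Scheduler switch
    // that SetTask passes via ExecAction.Arguments.
    public static bool StartedByScheduler(string[] args)
    {
        return args.Any(a => a.Equals("/Scheduler", StringComparison.OrdinalIgnoreCase));
    }
}
```

In Program.Main you could then branch on StartupMode.StartedByScheduler(args) to apply the alternative layout mentioned above.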

Next let’s create two other basic functions to get and delete the current task. This is quite straightforward.

 public static void DeleteTask()
 {
    var ts = new TaskService();
    var task = ts.RootFolder.GetTasks().Where(a => a.Name.ToLower() == TaskName.ToLower()).FirstOrDefault();
    if (task != null)
    {
        ts.RootFolder.DeleteTask(TaskName);
    }
 }

 public static Task GetTask()
 {
    var ts = new TaskService();
    var task = ts.RootFolder.GetTasks().Where(a => a.Name.ToLower() == TaskName.ToLower()).FirstOrDefault();

    return task;
 }

Now, let’s create a function to display the next scheduled run date. It is quite useful to notify the user about the next planned execution. In our implementation we will use the NextRunTime property of the task we created. We also have to check that the trigger is enabled.

 public static string GetNextScheduleTaskDate()
 {
    try
    {
        var task = Scheduler.GetTask();
        if (task != null)
        {
            var trigger = task.Definition.Triggers.FirstOrDefault();
            if (trigger != null)
            {
                if (trigger.TriggerType == TaskTriggerType.Weekly)
                {
                    if (trigger.Enabled)
                    {
                        return task.NextRunTime.ToString("yyyy MMM. dd dddd 'at ' HH:mm");
                    }
                }
            }
        }
    }
    catch (Exception)
    {
        //ignore and fall through to the default message below
    }

    return "no scheduled date!";
 }

Within the UI, we need some basic helper methods to read and apply the interface values. In our case we will use checkboxes to set up and enable the task. The user will be able to schedule the task on any day of the week, disable it, or delete it permanently. We will also display the next run time information.

 void RefreshNextRunInfo()
 {
    lblNextRun.Text = Scheduler.GetNextScheduleTaskDate();
 }

 //Load selected days from checkboxes
 DaysOfTheWeek GetTaskDays()
 {
    var daysOfWeek = (DaysOfTheWeek)0; //start with no days selected

    if (chkbMonday.Checked) { daysOfWeek = daysOfWeek | DaysOfTheWeek.Monday; }
    if (chkbSaturday.Checked) { daysOfWeek = daysOfWeek | DaysOfTheWeek.Saturday; }
    if (chkbSunday.Checked) { daysOfWeek = daysOfWeek | DaysOfTheWeek.Sunday; }
    if (chkbThursday.Checked) { daysOfWeek = daysOfWeek | DaysOfTheWeek.Thursday; }
    if (chkbTuestday.Checked) { daysOfWeek = daysOfWeek | DaysOfTheWeek.Tuesday; }
    if (chkbWednesday.Checked) { daysOfWeek = daysOfWeek | DaysOfTheWeek.Wednesday; }
    if (chkbFriday.Checked) { daysOfWeek = daysOfWeek | DaysOfTheWeek.Friday; }

    return daysOfWeek;
 }
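The checkbox mapping above is plain bitwise OR accumulation on a [Flags] enum. As a self-contained illustration (with a minimal stand-in enum, since the real DaysOfTheWeek lives in the wrapper dll; the values below match the pattern of a weekday flags enum but are my assumption):

```csharp
using System;

[Flags]
enum DaysOfTheWeek   // minimal stand-in for the wrapper dll's enum
{
    Sunday = 1, Monday = 2, Tuesday = 4, Wednesday = 8,
    Thursday = 16, Friday = 32, Saturday = 64,
    AllDays = 127
}

static class DayFlags
{
    // OR the selected days together into one flags value.
    public static DaysOfTheWeek Combine(params DaysOfTheWeek[] selected)
    {
        var result = (DaysOfTheWeek)0;   // start empty
        foreach (var d in selected)
        {
            result |= d;
        }
        return result;
    }
}
```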

 void SetTaskForm(bool enabled)
 {
    timePicker.Enabled = enabled;
    chkbFriday.Enabled = enabled;
    chkbMonday.Enabled = enabled;
    chkbSaturday.Enabled = enabled;
    chkbSunday.Enabled = enabled;
    chkbThursday.Enabled = enabled;
    chkbTuestday.Enabled = enabled;
    chkbWednesday.Enabled = enabled;
 }

 //reload current task state (if exists)
 public void RefreshSchedulerSettings()
 {
    //set initial time to 5 minutes ahead for testing
    timePicker.Value = DateTime.Now.AddMinutes(5);

    try
    {
        SetTaskForm(false);
        chkbFriday.Checked = false;
        chkbMonday.Checked = false;
        chkbSaturday.Checked = false;
        chkbSunday.Checked = false;
        chkbThursday.Checked = false;
        chkbTuestday.Checked = false;
        chkbWednesday.Checked = false;
        chkbScheduler.Checked = false;

        var task = Scheduler.GetTask();
        if (task != null)
        {
            var trigger = task.Definition.Triggers.FirstOrDefault();
            if (trigger != null)
            {
                if (trigger.TriggerType == TaskTriggerType.Weekly)
                {
                    if (trigger.Enabled) { SetTaskForm(true); }

                    chkbScheduler.Checked = trigger.Enabled;
                    var weeklyTrigger = (WeeklyTrigger)trigger;
                    timePicker.Value = trigger.StartBoundary;

                    if (weeklyTrigger.DaysOfWeek.HasFlag(DaysOfTheWeek.Friday)) { chkbFriday.Checked = true; }
                    if (weeklyTrigger.DaysOfWeek.HasFlag(DaysOfTheWeek.Monday)) { chkbMonday.Checked = true; }
                    if (weeklyTrigger.DaysOfWeek.HasFlag(DaysOfTheWeek.Saturday)) { chkbSaturday.Checked = true; }
                    if (weeklyTrigger.DaysOfWeek.HasFlag(DaysOfTheWeek.Sunday)) { chkbSunday.Checked = true; }
                    if (weeklyTrigger.DaysOfWeek.HasFlag(DaysOfTheWeek.Thursday)) { chkbThursday.Checked = true; }
                    if (weeklyTrigger.DaysOfWeek.HasFlag(DaysOfTheWeek.Tuesday)) { chkbTuestday.Checked = true; }
                    if (weeklyTrigger.DaysOfWeek.HasFlag(DaysOfTheWeek.Wednesday)) { chkbWednesday.Checked = true; }
                }
            }
        }
    }
    catch (Exception)
    { }
 } 

When saving the task we will use the following method, which validates and gathers the user input. Your implementation will of course depend on your requirements.

 private void btnSave_Click(object sender, EventArgs e)
 {
    var filePath = string.Empty;//set path to the exe file or leave it empty to run this app for test
    var daysOfWeek = GetTaskDays();

    if (chkbScheduler.Checked && daysOfWeek == 0)
    {
        MessageBox.Show(this, "Select at least one day or turn off scheduler!", "Warning", MessageBoxButtons.OK, MessageBoxIcon.Warning);
        return;
    }

    try
    {
        if (daysOfWeek != 0)
        {
            var startDate = DateTime.Now.Date.AddHours(timePicker.Value.Hour).AddMinutes(timePicker.Value.Minute);
            Scheduler.SetTask(filePath, startDate, daysOfWeek, chkbScheduler.Checked);
        }
        else
        {
            Scheduler.DeleteTask();
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(this, "An error occurred " + ex.Message, "Error", MessageBoxButtons.OK, MessageBoxIcon.Warning);
    }
    RefreshNextRunInfo();
 }

I have included a fully working example below for your tests.

TaskScheduler


Using automation when executing SQL scripts stored in separate files

When working on big projects, sometimes there is a need to update a database from SQL files scattered across multiple files or folders, especially when using source control to maintain database changes. This functionality is also useful when running automated builds on a CI server.

In this article I will show you how to automate the process of executing multiple database SQL files, so you can do it in one go, like this:

  //this is command line executed by TeamCity build (change params with your build configuration values)
 c:\\UpdateScripts.exe "\scripts\Procedures" %Server% %Database% %ToBeReplacedSelectUser% %DatabaseSelectUser% "exec dbo.proc_GrantExecutePermissionsOnProcedures '%DatabaseUser%'"

teamCitySqlCmd

Let’s create a C# console application first. Once that is done, we can implement basic validation of the parameters being passed in. In our example we will support 6 parameters: ScriptPath, Server, Database, StringToFind, StringToReplace and FinalQuery (we will use a Windows integrated database connection).

    if (args.Length == 0)
    {
        Console.WriteLine("No arguments passed!");
        Console.WriteLine(Environment.NewLine);
        Console.WriteLine(getHelp());
        Console.ReadLine();
        return 1;
    }

    if (args.Length < 5)
    {
        if (!silent)
        {
            Console.WriteLine("Not all required arguments passed!");
            Console.WriteLine(Environment.NewLine);
            Console.WriteLine(getHelp());
            Console.ReadLine();
        }
        return 1;
    } 
    ///////////////
    var scriptPath = args[0];
    var server = args[1];
    var database = args[2];
    var findString = args[3];
    var replaceString = args[4];
    var endQuery = args.Length > 5 ? args[5] : string.Empty;
    /////////////

    if (!Directory.Exists(scriptPath) || Directory.GetFiles(scriptPath).Length == 0)
    {
        if (!silent)
        {
            Console.WriteLine("Directory does not exist or is empty!");
            Console.WriteLine(Environment.NewLine);
            Console.WriteLine(getHelp());
            Console.ReadLine();
        }
        return 1;
    }

The code below gets all .sql files from the directory and executes the queries they contain. If a file has “GO” statements inside, it cannot be executed as a single command without errors. Normally SQL Server Management Studio takes care of that when executing such scripts; in our case we simply need to split the content on the “GO” separators.

Console.WriteLine("params to find: " + findString);
Console.WriteLine("replace string: " + replaceString);

using (var con = new SqlConnection("Server=" + server + ";Database=" + database + ";Trusted_Connection=True;"))
{
    con.Open();

    var files = Directory.GetFiles(scriptPath).Where(f => f.ToLower().EndsWith(".sql"));

    foreach (var filePath in files)
    {
        #region execute query from file
        var query = File.ReadAllText(filePath);

        if (findString.Length > 0)
        {
            query = query.Replace(findString, replaceString);
        }
        Console.WriteLine("Executing: " + Path.GetFileName(filePath));

        query = query.Replace("go\r\n", "GO\r\n");
        query = query.Replace("go\t", "GO\t");
        query = query.Replace("\ngo", "\nGO");

        //divide into GO groups to avoid errors
        var itemQuery = query.Split(new string[] { "GO\r\n", "GO\t", "\nGO" }, StringSplitOptions.None);

        foreach (var queryItem in itemQuery)
        {
            if (queryItem.Trim().Length < 3) { continue; }

            using (var cmd = new SqlCommand(queryItem, con))
            {
                try
                {
                    cmd.ExecuteNonQuery();
                }
                catch (Exception ex)
                {
                    if (!silent)
                    {
                        Console.WriteLine("Error: " + ex.Message);
                    }
                    exitResult = 1;
                }
            }
        }
        #endregion
    }
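The string replacements above only catch a few spellings of “GO”. A more robust (though still simplified) alternative is a case-insensitive regex split on separator lines consisting solely of GO, which is roughly how Management Studio treats the batch separator. This is a sketch of my own, not taken from the original solution:

```csharp
using System;
using System.Linq;
using System.Text.RegularExpressions;

static class SqlBatchSplitter
{
    // Split a script on batch-separator lines that contain only "GO"
    // (any casing, any surrounding whitespace).
    public static string[] SplitBatches(string script)
    {
        return Regex.Split(script, @"^\s*GO\s*$",
                           RegexOptions.Multiline | RegexOptions.IgnoreCase)
                    .Select(b => b.Trim())
                    .Where(b => b.Length > 0)
                    .ToArray();
    }
}
```

Each returned batch could then be executed with its own SqlCommand, exactly as in the loop above. Note this still does not handle “GO” appearing inside string literals or comments; a full parser would be needed for that.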

At the end we can execute a final SQL statement. This is useful when, for example, there is a need to assign database permissions after the update is performed.

 //////////////////////execute final query (e.g permission update)
if (endQuery.Trim().Length > 3)
{
    using (var cmd = new SqlCommand(endQuery, con))
    {
        try
        {
            Console.WriteLine("Executing final query: " + endQuery);
            cmd.ExecuteNonQuery();
        }
        catch (Exception ex)
        {
            Console.WriteLine("Error: " + ex.Message);
            return 1;
        }
    }
}

The Console.WriteLine output will also be picked up by the TeamCity build log, so you can see all the update information when needed. I have attached the solution files; feel free to adjust them to your needs 🙂
UpdateDBScripts


Executing SQL scripts in a directory using a .bat file

If you are in a situation where you have to execute all the SQL scripts located as separate files in one folder, running a .bat file is one of the options you can use. A batch command is quite a flexible solution, as you don’t have to recompile anything every time a script changes.

The code below loops through all files within the specified directory, loads each of them into a variable and replaces the parameter string, if present, before execution. This is helpful if you need to replace database-specific users (e.g. execute with ‘username’) etc.

Please note that variables within the loop are accessed as !myVar! instead of %myVar% because delayed expansion is enabled.

The command cd /d %1 sets the current directory to the one specified by the user in the first parameter.

 @ECHO OFF
 echo script folder: %1
 echo server: %2
 echo database: %3
 echo database user: %4
 echo ----------------

setlocal enabledelayedexpansion
cd /d %1

for %%d in (*.sql) do (
   set filecontent=
   for /f "delims=" %%a in (%%d) do (set "filecontent=!filecontent!%%a")

   set "temp=!filecontent!"

   set "filecontent=!temp:#PARAM#=%4!"

   echo executing %%d
   echo ----------
   sqlcmd -S%2 -E -Q"!filecontent!" -d%3 
)
pause

The above code can be executed as follows (a trusted database connection is used):

 C:\updateDB.bat "C:\scripts\" "myServer" "databaseName" PARAM_TO_REPLACE