Proxmedia – Software Outsourcing & Nearshore Development
https://pxm-software.com

Automating Microsoft Team Foundation (TFS) check-ins
https://pxm-software.com/automating-microsoft-team-foundation-tfs-check-ins/
Have you ever wanted to automate code check-ins to TFS, e.g. at the end of the day or at a custom interval you define? That’s easy to do using the TFS remote API provided by Microsoft.

In our exercise we are going to use the following packages to make the solution work:

<packages>
 <package id="Microsoft.AspNet.WebApi.Client" version="5.2.2" targetFramework="net452" />
 <package id="Microsoft.AspNet.WebApi.Core" version="5.2.2" targetFramework="net452" />
 <package id="Microsoft.IdentityModel.Clients.ActiveDirectory" version="2.16.204221202" targetFramework="net452" />
 <package id="Microsoft.TeamFoundationServer.Client" version="14.95.3" targetFramework="net452" />
 <package id="Microsoft.TeamFoundationServer.ExtendedClient" version="14.95.3" targetFramework="net452" />
 <package id="Microsoft.VisualStudio.Services.Client" version="14.95.3" targetFramework="net452" />
 <package id="Microsoft.VisualStudio.Services.InteractiveClient" version="14.95.3" targetFramework="net452" />
 <package id="Microsoft.WindowsAzure.ConfigurationManager" version="1.7.0.0" targetFramework="net452" />
 <package id="Newtonsoft.Json" version="6.0.8" targetFramework="net452" />
 <package id="System.IdentityModel.Tokens.Jwt" version="4.0.0" targetFramework="net452" />
 <package id="WindowsAzure.ServiceBus" version="3.3.1" targetFramework="net452" />
</packages>

As an authorization mechanism we will use a personal access token, which can be generated in your Visual Studio Online account.
Tokens are more secure and flexible than standard user/password credentials, as we can share them safely and let them expire after a predefined period of time.

So, in our app.config we will have the following settings:

 <add key="TfsURL" value="https://your_name.visualstudio.com/DefaultCollection/"/>
 <add key="workspaceName" value="AUTOMATED_CHECKINS"/>
 <add key="tfsServerFolderPath" value="$/remote_folder"/>
 <add key="localWorkingPath" value="D:\MyCodefolder"/>
 <add key="tfsUser" value="not-used"/>
 <add key="tfsPass" value="-your token--"/>
 <add key="tfsDomain" value=""/>

Our main function will look as follows (see inline comments):

static void TFSCheckIn(string comment = "")
{
  var TfsURL = new Uri(ConfigurationManager.AppSettings["TfsURL"]);

  //use it for standard user/pass authentication
  //var credential = new NetworkCredential(ConfigurationManager.AppSettings["tfsUser"], ConfigurationManager.AppSettings["tfsPass"], ConfigurationManager.AppSettings["tfsDomain"]);
  //var collection = new TfsTeamProjectCollection(TfsURL, credential);
  //collection.EnsureAuthenticated();

  //token-based authentication - the personal access token is passed as the password
  var networkCredential = new NetworkCredential(ConfigurationManager.AppSettings["tfsUser"], ConfigurationManager.AppSettings["tfsPass"]);
  
  var basicCredential = new BasicAuthCredential(networkCredential);
  var tfsCredentials = new TfsClientCredentials(basicCredential);
  tfsCredentials.AllowInteractive = false;
  var collection = new TfsTeamProjectCollection(TfsURL, tfsCredentials);
  collection.EnsureAuthenticated();

  var versionControl = (VersionControlServer)collection.GetService(typeof(VersionControlServer));

  Workspace WS = null;
  try
  {
    //Get the current workspace
    WS = versionControl.GetWorkspace(ConfigurationManager.AppSettings["workspaceName"], versionControl.AuthorizedUser);
  }
  catch (Exception)
  {
    //ignore - the workspace does not exist yet and will be created below
  }

  //create workspace if not yet created
  if (WS == null)
  {
     WS = versionControl.CreateWorkspace(ConfigurationManager.AppSettings["workspaceName"], versionControl.AuthorizedUser);
  }

  //map local folder if not already mapped
  if (!WS.IsLocalPathMapped(ConfigurationManager.AppSettings["localWorkingPath"]))
  {
    //Mapping TFS Server and code generated
    WS.Map(ConfigurationManager.AppSettings["tfsServerFolderPath"], ConfigurationManager.AppSettings["localWorkingPath"]);
  }
 
  //download remote changes (check-out)
  WS.Get();

 //auto-resolve conflicts
 Conflict[] conflicts = WS.QueryConflicts(new string[] { ConfigurationManager.AppSettings["tfsServerFolderPath"] }, true);
  
  foreach (Conflict conflict in conflicts)
  {
    if (WS.MergeContent(conflict, false))
    {
     conflict.Resolution = Resolution.AcceptTheirs;
     WS.ResolveConflict(conflict);
    }

   Console.WriteLine("conflict: " + conflict.FileName);
 }

 //Add all files just created to pending change
 int NumberOfChange = WS.PendAdd(ConfigurationManager.AppSettings["localWorkingPath"], true);

 //Get the list of pending changes
 PendingChange[] pendings = WS.GetPendingChanges(ConfigurationManager.AppSettings["tfsServerFolderPath"], RecursionType.Full);

  if (pendings.Any())
  {
    //Auto check in code to Server
     WS.CheckIn(pendings, "Auto check-in code (#changes: " + pendings.Count() + "). " + comment);
  }
 }
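To run this at a custom interval, e.g. once at the end of each day, we can call the method from a simple scheduler. Below is a minimal sketch, assuming TFSCheckIn lives in a console application; the 24-hour interval is just an example.

static void Main(string[] args)
{
    //run the check-in immediately, then repeat every 24 hours
    var interval = TimeSpan.FromHours(24);

    using (var timer = new System.Threading.Timer(_ => TFSCheckIn("Scheduled check-in"), null, TimeSpan.Zero, interval))
    {
        Console.WriteLine("Automated check-ins running, press enter to stop...");
        Console.ReadLine();
    }
}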

Need a custom solution tailored to your business needs? Contact us today!

Auto publishing reports to Tableau Server
https://pxm-software.com/auto-publishing-reports-to-tableau-server/
Tableau is a great tool for data visualization; however, if you use it a lot, you may want to automate some things. One of them is publishing/updating reports to Tableau Server. This is where the Tableau command line utility (tabcmd) comes in handy: you can install it on your development server and use it in a PowerShell script. One solution is to use a CI server for auto deployments; you then only need to give git/svn access to the users changing the reports or adding new ones.


The following script can be run as a build step in TeamCity to detect workbooks that have changed recently and publish them automatically to the Tableau server. The parent folder of each workbook will be used as the project name when publishing. In order to run it, just pass in the email notification list and the server password – of course, you need to configure the params (server URL, SMTP etc.).

param (
[string]$cmddir = "C:\Program Files\Tableau\Tableau Server\8.2\extras\Command Line Utility", #location where tabcmd has been installed
[string]$server = "https://tableau:81", #this is url of the Tableau server 
[string]$currentDir = (split-path -parent $MyInvocation.MyCommand.Definition) +"\", #current script location
[string]$notificationEmailList = "test1@test.com,test2@test.com", #send email notifications if successful
[string]$admin = "user", #admin account for the server
[string]$pass = "" #to be passed in as param
)
 
function SendEmail($emailTo,$title,$body)
{ 
   $smtp=new-object Net.Mail.SmtpClient("my_smtp_server"); $smtp.Send("sentAs@mydomain.com", $emailTo, $title, $body);
}
 
$global:temp_ = "";
 
#login to Tableau
cd  $cmddir
.\tabcmd login -s $server -u $admin -p $pass
 
 get-childitem -Path $currentDir -Recurse | where-object { 
    $_.LastWriteTime -gt (get-date).AddMinutes(-10) -and $_.FullName.EndsWith(".twb")
  } | 
  Foreach-Object {
 
       [string]$projectName = [System.IO.DirectoryInfo]$_.Directory.Name;
        $global:temp_ += [string][System.IO.Path]::GetFileName($_.FullName) + " | ";
 
       #publish or overwrite workbook on the server
       .\tabcmd publish $_.FullName -r $projectName  -o  
  } 
 
 
#more commands
#.\tabcmd publish "workbook.twbx" -r "project name" -n "Workbook Name" --db-user "" --db-password "" -o
 
 
#log out to release the session
.\tabcmd logout
 
if ($global:temp_ -ne "")
{
   SendEmail $notificationEmailList "Tableau report published" "Following report(s) has just been successfully published to Tableau Server: $global:temp_"
}
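To trigger this from TeamCity, add a command line (or PowerShell) build step that invokes the script and passes the password from a protected build parameter. A minimal sketch, assuming the script above is saved as publish-tableau.ps1 and a tableau.password parameter is defined in TeamCity:

powershell.exe -NoProfile -ExecutionPolicy Bypass -File "publish-tableau.ps1" -pass "%tableau.password%" -notificationEmailList "test1@test.com,test2@test.com"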

Enjoy!

10 reasons why to outsource in-house software development
https://pxm-software.com/10-reasons-why-to-outsource-in-house-software-development/
Software development outsourcing has always been a very attractive option for many IT companies operating in developed countries, and is perceived as a way to boost competitiveness on the local market. A lot of businesses operating on the global market have already proved the effectiveness of software outsourcing by reducing internal costs while maintaining high quality of products and services.

An offshore development team can greatly increase your productivity by allowing your in-house staff to focus on your core business, while you keep delivering products and services to your clients in the usual manner. At the same time, your payroll costs may decrease greatly, so you will have more money for future investments.

 

What are the main reasons for software outsourcing?

 

1 – Cost Savings

Outsourcing may reduce your costs by up to 50% while reducing the workload on your in-house employees. The fact that outsourcing usually does not require any up-front costs makes it even more attractive. This is especially true for the UK, as the British pound is currently at a seven-year high against the Euro and other European currencies.

 

2 – Time Savings

Outsourcing will save you a lot of time, as you don’t need to directly manage the team of dedicated developers, so you can deliver your products to the market more quickly and below budget.

 

3 – Extending in house capabilities

Companies struggling on the global market may lack expertise in some areas necessary to develop innovative products. Software development outsourcing can bridge this gap by providing the missing in-house skills.

 

4 – Flexibility

With an outsourced team you are more flexible, being able to quickly increase or decrease the team size when necessary. You will also save time on recruiting new in-house employees. This is especially true for short-term projects, where local, short-term freelancers may cost you a fortune.

 

5 – Talented IT Professionals

You will have much broader access to talented IT professionals by going overseas, bypassing the gaps in the hiring pools of more developed countries.

 

6 – Risk Mitigation

You can mitigate risk by choosing an outsourcing company that specializes in your area of business, has a good track record and operates in a well-managed software development environment.

 

7 – Enhanced Accuracy

A dedicated team of developers is more likely to meet the pre-defined project deadlines, as they are more flexible in scaling out their internal team.

 

8 – Focused Strategy

A dedicated team of developers will streamline your business strategy by allowing you to focus on the core of your business.

 

9 – Technological Advances

By using a dedicated team of developers specialized in a specific area of technology, you will gain a competitive advantage over your competitors.

 

10 – Increased Availability

A dedicated team of developers operating in a different time zone, e.g. GMT+2, will increase your availability, for example by handling urgent tasks before or after your client’s business hours.

 

Finally

Outsourcing software development may become a game changer for companies operating in highly competitive markets in the age of globalization and technological advances.

 

Please contact us to find out what we can do for you.
Using Windows Azure Service Bus Topics in distributed systems
https://pxm-software.com/using-windows-azure-service-bus-topics-in-distributed-systems/
Service Bus topics allow one-way communication using the publish/subscribe model. In that model the Service Bus topic can be treated as an intermediary queue that multiple users/components can subscribe to.

When publishing a message we can choose to route it to all subscribers or to apply filters for each subscription, resulting in each subscriber receiving only the messages addressed to it. With Service Bus topics we can easily scale distributed applications communicating with each other within or across multiple networks.

In this article I will show you how to build and test Service Bus topic on your local computer. In our example we will simulate sending messages from the web, mobile and service application to the Service Bus Topic.

These messages will then be routed to the relevant subscriptions based on the filters we defined for each of them. The subscription for messages from the web application will use multiple auto-scalable worker roles to process the business logic. The same will apply to service messages. If we don’t expect a lot of traffic from the mobile application, we can use a single worker role for it (with failover settings).

Autoscaling of worker roles can be performed using the Enterprise Library 5.0 Autoscaling Application Block (aka WASABi). This will ensure that an appropriate number of worker roles is automatically started when traffic increases and stopped when the traffic eases.

See the high-level architecture diagram below:

[Image: ServiceBusTopic]

 

In order to start, we first need to install “Service Bus 1.0 for Windows Server” (it runs on Windows 7 as well). After installation, go to Start > Service Bus 1.0 > Service Bus Configuration. Then use the wizard to set up the web farm first and, after that, join your computer to that farm – this will essentially create the service namespace on your local machine.

[Image: ServiceBus-configuration]

After you configure the Service Bus installation, you will get the endpoint address that your local application will use to connect to the Service Bus. You may notice that after the installation there are two databases created on your local MSSQL server, see the image below:

[Image: ServiceBus-tables]

In order to connect to our Service Bus we will use the function below. This will create the connection string to pass to the NamespaceManager class.

  //use this setting when deploying to Windows Azure
  <add key="Microsoft.ServiceBus.ConnectionString" value="Endpoint=sb://[your namespace].servicebus.windows.net;SharedSecretIssuer=owner;SharedSecretValue=[your secret]" />

 public static string getLocalServiceBusConnectionString()
 {
    var ServerFQDN = System.Net.Dns.GetHostEntry(string.Empty).HostName;
    var ServiceNamespace = "ServiceBusDefaultNamespace";
    var HttpPort = 9355;
    var TcpPort = 9354;

    var connBuilder = new ServiceBusConnectionStringBuilder();
    connBuilder.ManagementPort = HttpPort;
    connBuilder.RuntimePort = TcpPort;
    connBuilder.Endpoints.Add(new UriBuilder() { Scheme = "sb", Host = ServerFQDN, Path = ServiceNamespace }.Uri);
    connBuilder.StsEndpoints.Add(new UriBuilder() { Scheme = "https", Host = ServerFQDN, Port = HttpPort, Path = ServiceNamespace }.Uri);

    return connBuilder.ToString();
 }

Within the worker role we will use the NamespaceManager to create the Service Bus topic (if it does not exist yet). We will also create the subscriptions and associated filters.
Please note that each subscription will filter messages using the MessageOrigin property. This property is assigned to the message in the send method of each application (web, mobile, service).

 public void CreateServiceBusTopicAndSubscriptions(NamespaceManager namespaceManager)
 {
    #region Configure and create Service Bus Topic
    var serviceBusTestTopic = new TopicDescription(TopicName);
    serviceBusTestTopic.MaxSizeInMegabytes = 5120;
    serviceBusTestTopic.DefaultMessageTimeToLive = new TimeSpan(0, 1, 0);

    if (!namespaceManager.TopicExists(TopicName))
    {
        namespaceManager.CreateTopic(serviceBusTestTopic);
    }
    #endregion

    #region Create filters and subscriptions
    //create filters
    var messagesFilter_Web = new SqlFilter("MessageOrigin = 'Web'");
    var messagesFilter_Mobile = new SqlFilter("MessageOrigin = 'Mobile'");
    var messagesFilter_Service = new SqlFilter("MessageOrigin = 'Service'");

    if (!namespaceManager.SubscriptionExists(TopicName, "WebMessages"))
    {
        namespaceManager.CreateSubscription(TopicName, "WebMessages", messagesFilter_Web);
    }

    if (!namespaceManager.SubscriptionExists(TopicName, "MobileMessages"))
    {
        namespaceManager.CreateSubscription(TopicName, "MobileMessages", messagesFilter_Mobile);
    }

    if (!namespaceManager.SubscriptionExists(TopicName, "WCfServiceMessages"))
    {
        namespaceManager.CreateSubscription(TopicName, "WCfServiceMessages", messagesFilter_Service);
    }
    #endregion
}

We also need to create the subscription clients in the “OnStart” method, giving each subscriber a unique name.

 public override bool OnStart()
 {
    // Set the maximum number of concurrent connections 
    ServicePointManager.DefaultConnectionLimit = 12;

    // Create the queue if it does not exist already
    //string connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
    var connectionString = getLocalServiceBusConnectionString();
    var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

    //create topic and subscriptions
    CreateServiceBusTopicAndSubscriptions(namespaceManager);

    // Initialize subscription for web, mobile and service
    SubscriptionClients.Add(SubscriptionClient.CreateFromConnectionString(connectionString, TopicName, "WebMessages"));
    SubscriptionClients.Add(SubscriptionClient.CreateFromConnectionString(connectionString, TopicName, "MobileMessages"));
    SubscriptionClients.Add(SubscriptionClient.CreateFromConnectionString(connectionString, TopicName, "WCfServiceMessages"));

    IsStopped = false;
    return base.OnStart();
}

Inside the Run method we will use Parallel.ForEach to create a separate task for each subscriber listening for incoming messages.
This is only to simulate multiple subscribers in one place. Normally each worker role would connect to the topic, listening for the messages appropriate for its type (separate for web, mobile and service).

public override void Run()
{
 Parallel.ForEach(SubscriptionClients, currentSubscription =>
 {
    while (!IsStopped)
    {
        #region Receive messages
        try
        {
            // Receive the message
            var receivedMessage = currentSubscription.Receive();
         
            if (receivedMessage != null)
            {
                var messageFrom = receivedMessage.Properties["MessageOrigin"].ToString();

                switch (messageFrom)
                {
                    case "Web":
                        //send it to web processing logic

                        break;
                    case "Mobile":
                        //send it to mobile processing logic

                        break;
                    case "Service":
                        //send it to service processing logic

                        break;
                    default:
                        break;
                }

                // Process the message
                Trace.WriteLine(Environment.NewLine + "--------------------------" + Environment.NewLine);
                Trace.WriteLine(string.Format("{0} message content: {1}", messageFrom, receivedMessage.GetBody<string>()));

                receivedMessage.Complete();
            }
        }
        catch (MessagingException e)
        {
            if (!e.IsTransient)
            {
                Trace.WriteLine(e.Message);
                throw;
            }

            Thread.Sleep(10000);
        }
        catch (OperationCanceledException e)
        {
            if (!IsStopped)
            {
                Trace.WriteLine(e.Message);
                throw;
            }
        }
        #endregion
    }
});
}

Finally, we can simulate sending messages from the MVC application. We will use three different buttons to create and send messages.

 [HttpPost]
 public ActionResult SendWebMessage()
 {
    SendMessage("Web");

    return RedirectToAction("Index", "Home");
 }

 [HttpPost]
 public ActionResult SendMobileMessage()
 {
    SendMessage("Mobile");

    return RedirectToAction("Index", "Home");
 }

 [HttpPost]
 public ActionResult SendServiceMessage()
 {
    SendMessage("Service");

    return RedirectToAction("Index", "Home");
 }

See the image below:

[Image: ServiceBusTopic-subscriptions]

Please note that when sending messages we have to assign a value to message.Properties[“MessageOrigin”]. This will be used by the Service Bus topic to route messages to the appropriate subscriptions.

 void SendMessage(string type)
 {
    var connectionString = getLocalServiceBusConnectionString();

    var Client = TopicClient.CreateFromConnectionString(connectionString, "ServiceBusTestTopic");

    var message = new BrokeredMessage("test message");
    message.Properties["MessageOrigin"] = type;

    Client.Send(message);
 }

As usual, I have attached working project files for your tests 🙂

[Download: AzureServiceBus]

Social media Sentiment Analysis using C# API
https://pxm-software.com/social-media-sentiment-analysis-using-c-api/
Sentiment analysis in social media has become very important for many companies over the last few years, as more and more people use this communication channel not only to exchange messages, but also to express their opinions about the products and services they use or used in the past.

In order to start proper sentiment analysis, we need to use correct text analytics and text mining techniques. The fastest way is to use an external API to get the information we need and then import that data on a daily basis into our system database, where we can visualize and track the results.

In this article I will show you how you can convert unstructured data, e.g. tweets, into structured information using the text2data.org API, which you can then use to track sentiment towards a brand, product or company.

The first step is to register with the service and get the private key to be used in your API calls. Next, you need to download the API SDK from here.

The code below will create a request object containing your information and post it to the platform in JSON or XML format. The response object will contain the information you need: detected entities, themes, keywords, citations, slang words and abbreviations. For each of these you can get the sentiment result as a double, its polarity and the score (positive, negative or neutral). The received information can be formatted the way you want. Below is a complete sample (except for the private key):

  static void Main(string[] args)
   {
        var inputText = "I am not negative about the LG brand.";
        var privateKey = "-------------"; //add your private key here (you can find it in the admin panel once you sign-up)
        var secret = ""; //this should be set-up in admin panel as well.

       var doc = new Document()
       {
          DocumentText = inputText,
          IsTwitterContent = false,
          PrivateKey = privateKey,
          Secret = secret
       };

       var docResult = API.GetDocumentAnalysisResult(doc); //execute request

       if (docResult.Status == (int)DocumentResultStatus.OK) //check status
       {
        //display document level score
        Console.WriteLine(string.Format("This document is: {0}{1} {2}", docResult.DocSentimentPolarity, docResult.DocSentimentResultString, docResult.DocSentimentValue.ToString("0.000")));

        if (docResult.Entities != null && docResult.Entities.Any())
        {
         Console.WriteLine(Environment.NewLine + "Entities:");
         foreach (var entity in docResult.Entities)
         {
           Console.WriteLine(string.Format("{0} ({1}) {2}{3} {4}", entity.Text, entity.KeywordType, entity.SentimentPolarity, entity.SentimentResult, entity.SentimentValue.ToString("0.0000")));
          }
       }
 
      if (docResult.Keywords != null && docResult.Keywords.Any())
      {
         Console.WriteLine(Environment.NewLine + "Keywords:");
         foreach (var keyword in docResult.Keywords)
          {
             Console.WriteLine(string.Format("{0} {1}{2} {3}", keyword.Text, keyword.SentimentPolarity, keyword.SentimentResult, keyword.SentimentValue.ToString("0.0000")));
            }
        }

        //display more information below if required 

      }
      else
      {
          Console.WriteLine(docResult.ErrorMessage);
      }

       Console.ReadLine();
  }

You can always validate the results by checking the demo site (on the text2data.org platform):

[Image: sentiment-analysis]

Enjoy 🙂


Creating custom WCF message router – load balancer
https://pxm-software.com/creating-custom-wcf-message-router-load-balancer/
A very common requirement is to route WCF messages from a publicly exposed front-end service (perimeter network) to other services sitting within the local network. This is a good security model that lets us implement routing or load balancing if needed. The front-end service will also act as a firewall in this scenario. The routing may be performed based on the requested service method, the content or a custom algorithm.

The image below shows a typical high-level service infrastructure within an organization.

[Image: wcf_router]

Let’s start by defining the service that the client will use to send requests. The “Calculate” method will be used to perform a time-consuming operation that requires more CPU usage.

  [ServiceContract]
    public interface IMessageService
    {
        [OperationContract]
        string SendMessage(string value);

        [OperationContract]
        int Calculate(string value); 
    }

  public class MessageService : IMessageService
  {
        public string SendMessage(string value)
        {
            return string.Format("You have sent: {0}", value);
        }

        public int Calculate(string value)
        {
            //do calculation
            return 999;
        }
    }

Next, we need to define the IRouter interface. In our example we will use two methods: RouteMessage, which will process and route the messages, and AddAvailableEndPoints, which will be used to assign the available endpoints that requests can be routed to. Please note that our router does not use the service contracts of the target services, which is very useful, as we don’t want to recompile the router service each time our message service interface changes. You may also implement a function to assign routing rules; in our example we will hard-code the rules for better clarity.

  [ServiceContract]
    public interface IRouter
    {
        [OperationContract(Action = "*", ReplyAction = "*")]
        Message RouteMessage(Message message);

        void AddAvailableEndPoints(List<EndpointAddress> addressList);        
    }

Let’s create our router class now. Please note that we are using a static constructor, as we will only use one instance of the channel factory (creating and destroying a channel factory is a very costly operation).

 static IChannelFactory<IRequestChannel> factory;
 List<EndpointAddress> addressList;
 int routeCount;

  //create only one instance
  static Router()
  {
    try
    {
        var binding = new BasicHttpBinding();
        factory = binding.BuildChannelFactory<IRequestChannel>(binding);

        factory.Open();
    }
    catch (Exception e)
    {
        Console.WriteLine("Exception: {0}", e.Message);
    }
  }
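The AddAvailableEndPoints implementation is not shown in the snippet above; it simply stores the endpoint list that the routing logic will pick from. A trivial version could look like this:

public void AddAvailableEndPoints(List<EndpointAddress> addressList)
{
    //endpoints that RouteMessage will choose between
    this.addressList = addressList;
}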

Now we need to implement the actual routing algorithm. In our case, when a message arrives, we will check the action method that the proxy object will execute on the target service. We have hard-coded the “Calculate” method to be routed to service2; the other requests will be routed equally among the available services. You can also use the XmlDocument class to parse the message content if needed.
Of course, in a production environment you may want to pass in the routing rules as an object and examine them when processing an arriving message, prior to routing it to the appropriate service.

 public Message RouteMessage(Message message)
 {
    IRequestChannel channel = null;
    Console.WriteLine("Action {0}", message.Headers.Action);
    try
    {
        //Route based on custom conditions
        if (message.Headers.Action.ToLower().EndsWith("calculate")) //or use custom routing table stored in other location
        {
            //use second endpoint as it has more resources for time consuming operations
            channel = factory.CreateChannel(this.addressList.Skip(1).First());
            Console.WriteLine("Routed to: {0}\n", this.addressList.Skip(1).First().Uri);
        }
        else
        {
            //or
            #region Route other requests equally
            //we assume only 2 endpoints for this example 
            if (routeCount % 2 == 0)
            {
                channel = factory.CreateChannel(this.addressList.First());
                Console.WriteLine("Routed to: {0}\n", this.addressList.First().Uri);
            }
            else
            {
                channel = factory.CreateChannel(this.addressList.Skip(1).First());
                Console.WriteLine("Routed to: {0}\n", this.addressList.Skip(1).First().Uri);
            }
            //reset route counter
            if (routeCount > 10000) { routeCount = 0; } else { routeCount++; }
            #endregion           
        }

        //remove context as the message will be redirected
        message.Properties.Remove("ContextMessageProperty");

        channel.Open();

        var reply = channel.Request(message);

        channel.Close();

        return reply;
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
        return null;
    }
}

At this point we can create a client application that will call our front-end service (router) multiple times for our test.

 static void Main(string[] args)
 {
    Console.Write("Press enter to send multiple messages");
    Console.ReadLine();

    var baseFrontOfficeAddress = new Uri("https://localhost:81/FrontOffice");

    try
    {
        var binding = new BasicHttpBinding();
        var endpoint = new EndpointAddress(baseFrontOfficeAddress);

        var factory = new ChannelFactory<IMessageService>(binding, endpoint);
        var channel = factory.CreateChannel();

        //simulate multiple requests
        for (var i = 0; i < 10; i++)
        {
            var reply = channel.SendMessage("test message");
            Console.WriteLine(reply);

            var replyCalulated = channel.Calculate("test message");
            Console.WriteLine("Calculated value: " + replyCalulated);
        }

        Console.ReadLine();
        factory.Close();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        Console.ReadLine();
    }
}

In order to receive client requests we need to start our services. In our example we will host the router and the target services within the same application. Normally we would deploy the router service within the DMZ zone and the target services on separate computers within our local network. For security reasons, only the router service would be exposed to the public network.

 static void Main(string[] args)
 {
    //execute this first using cmd (win7)
    //netsh http add urlacl url=https://+:81/FrontOffice user=Domain(or PC name)\UserName
    //netsh http add urlacl url=https://+:81/service1 user=Domain(or PC name)\UserName
    //netsh http add urlacl url=https://+:81/service2 user=Domain(or PC name)\UserName

    //front office endpoint - all clients will call this address before requests will be routed
    var baseFrontOfficeAddress = new Uri("https://localhost:81/FrontOffice");

    #region Target service endpoints that requests can be routed to (can be more than 2)
    var baseEndPointAddress1 = new Uri("https://localhost:81/service1");
    var baseEndPointAddress2 = new Uri("https://localhost:81/service2");

    var endPoint1 = new EndpointAddress(baseEndPointAddress1);
    var endPoint2 = new EndpointAddress(baseEndPointAddress2); 
    #endregion

    #region These services should normally be deployed to different servers within the organization
    //start service 1
    var hostService1 = new ServiceHost(typeof(MessageService1), baseEndPointAddress1);
    hostService1.AddServiceEndpoint(typeof(IMessageService), new BasicHttpBinding(), "");
    hostService1.Open();
    Console.WriteLine("MessageService1 service running");

    //start service 2
    var hostService2 = new ServiceHost(typeof(MessageService2), baseEndPointAddress2);
    hostService2.AddServiceEndpoint(typeof(IMessageService), new BasicHttpBinding(), "");
    hostService2.Open();
    Console.WriteLine("MessageService2 service running");        
    #endregion

    #region Start router service
    var router = new Router();

    //add available service enpoints
    router.AddAvailableEndPoints(new List<EndpointAddress>() { endPoint1, endPoint2 });

    var routerHost = new ServiceHost(router);
    routerHost.AddServiceEndpoint(typeof(IRouter), new BasicHttpBinding(), baseFrontOfficeAddress);
    routerHost.Open();

    Console.WriteLine("Router service running");           
    #endregion           
    
    Console.WriteLine("---Run client to send messages---");  
    Console.ReadLine(); 
}

The results of our test are shown below. In this example we chose to route all requests for the “Calculate” method to “service2” (as it is located on a faster machine within the network); the other requests are routed equally between service1 and service2.

[Image: routed_messages]

The above way of routing messages is very flexible, as it is implemented at a very low level. Another way of routing messages is to use the RoutingService class. In this approach you define the routing table, endpoints and filters in your configuration file. Based on these settings you add a service behavior that uses the RoutingService class to route messages.
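For illustration, a minimal configuration sketch for the RoutingService approach is shown below. It routes the Calculate action to service2 and everything else to service1; the action namespace and endpoint names are assumptions that must match your actual contract and addresses.

<system.serviceModel>
  <services>
    <service name="System.ServiceModel.Routing.RoutingService" behaviorConfiguration="routingBehavior">
      <endpoint address="https://localhost:81/FrontOffice" binding="basicHttpBinding"
                contract="System.ServiceModel.Routing.IRequestReplyRouter" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="routingBehavior">
        <routing filterTableName="routingTable" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <routing>
    <filters>
      <!-- route the Calculate action to the faster machine -->
      <filter name="calculateFilter" filterType="Action" filterData="http://tempuri.org/IMessageService/Calculate" />
      <filter name="allFilter" filterType="MatchAll" />
    </filters>
    <filterTables>
      <filterTable name="routingTable">
        <add filterName="calculateFilter" endpointName="service2" priority="1" />
        <add filterName="allFilter" endpointName="service1" priority="0" />
      </filterTable>
    </filterTables>
  </routing>
  <client>
    <endpoint name="service1" address="https://localhost:81/service1" binding="basicHttpBinding" contract="*" />
    <endpoint name="service2" address="https://localhost:81/service2" binding="basicHttpBinding" contract="*" />
  </client>
</system.serviceModel>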

I have attached the project files below for your tests. Please execute the commands below before testing:

netsh http add urlacl url=https://+:81/FrontOffice user=Domain(or PC name)\UserName
netsh http add urlacl url=https://+:81/service1 user=Domain(or PC name)\UserName
netsh http add urlacl url=https://+:81/service2 user=Domain(or PC name)\UserName

[Download: WCF_Router]

Integrating automated Selenium tests with TeamCity
https://pxm-software.com/integrating-automated-selenium-tests-with-teamcity/
Automated tests are crucial these days for achieving high-quality software products, as software is prone to errors, especially during an agile process where changes can occur at any stage and cause our solution to stop functioning properly. Apart from standard testing using TDD, mocking etc., there is always a need to perform interface tests simulating a user using our website. The ideal solution in that case is automated Selenium tests.

 

Selenium tests give us the possibility to create use-case scenarios that we can test either on our dev applications or on live websites deployed to clients. This gives us a chance to run all test cases after a build happens, either on demand or when we deploy a live version of our product.

Selenium tests can be created by a developer, business analyst or tester, as this does not require programming skills. In order to start creating tests we need to download the Selenium IDE plug-in for Firefox: https://docs.seleniumhq.org/download/

Creating tests is similar to recording macros in Excel; all we need to do is record the actions that we want to test. The image below shows an example:

[Image: selenium-ide]

Please note that you need to install Firefox version 21.0; newer versions do not work with Selenium (a Firefox bug).

After our test is created, we need to export it to a .cs C# file, as pictured below:

[Image: selenium-export]

The next step is to create a small class library project. We will use this project to compile our test files exported from the Selenium IDE. Once the project is created, we simply copy the exported .cs file into the project and compile the dll.

[Image: selenium-project-files]
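For reference, a trimmed-down version of what an exported Selenium RC test typically looks like is shown below; the page URL and element locators are hypothetical:

using NUnit.Framework;
using Selenium;

[TestFixture]
public class LoginTest
{
    private ISelenium selenium;

    [SetUp]
    public void SetupTest()
    {
        //connect to the Selenium server and launch the browser
        selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://my-app-under-test/");
        selenium.Start();
    }

    [Test]
    public void TheLoginTest()
    {
        selenium.Open("/login");
        selenium.Type("id=username", "testuser");
        selenium.Type("id=password", "secret");
        selenium.Click("id=submit");
        selenium.WaitForPageToLoad("30000");
        Assert.IsTrue(selenium.IsTextPresent("Welcome"));
    }

    [TearDown]
    public void TeardownTest()
    {
        selenium.Stop();
    }
}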

If we want to integrate Selenium tests with the TeamCity server, we can do it by including the previously created project in our development project files. After that, we can configure a TeamCity build step to run our Selenium tests on every triggered build. To configure the build step, choose the NUnit runner type (or another test framework of your choice):

[Image: buildstep]

If we have configured everything correctly, the tests will run on every application change. Of course, the Selenium tests must run at the end, when the project is compiled and deployed on the server (if this is our local dev environment).

When the tests finish, we can check the status in the TeamCity admin panel, configure email notifications or use the TeamCity Tray Notifier to track the results.

[Image: test-results]

The final thing that needs to be set up is the Selenium server. You can run the Selenium server locally, turning it on when needed and running the NUnit application to check the tests on your machine. In our case we want Selenium to run on the server as a Windows Service.

After downloading Selenium RC (https://docs.seleniumhq.org/projects/remote-control/) we unpack the files to the following location: C:\Selenium\selenium-server-1.0.3. You don’t necessarily need all of these files, as the download also includes web drivers for other languages. In any case, our folder structure should look similar to the following:

[Image: selenium-files]

We also need to download nssm-2.16 (the “Non-Sucking Service Manager”); we will use it to install the Windows Service running our Selenium server. When we run run-server-manually.bat, the Selenium server is launched and stops after you close the console window – this helps when testing on your local machine.
The script looks like this:

 java -jar selenium-server-standalone-2.33.0.jar -port 4444

I have also created two scripts, install-windows-service.bat and remove-windows-service.bat, that install and remove the Windows Service on the target server. The logic is as follows:

:: install the Windows Service
C:\Selenium\selenium-server-1.0.3\nssm-2.16\win64\nssm.exe install Selenium-Server "C:\Program Files (x86)\Java\jre7\bin\java.exe" "-jar C:\Selenium\selenium-server-1.0.3\selenium-server-standalone-2.33.0.jar"

:: uninstall the Windows Service
C:\Selenium\selenium-server-1.0.3\nssm-2.16\win64\nssm.exe remove Selenium-Server

After installing the Windows Service, please make sure it is running by going to Control Panel > Administrative Tools > Services. You also need to check the Java path, as it differs on x64 systems.
That’s it. I have included configuration files that may help you. Hope you will be successful with your configuration 🙂

[Download: Selenium-VisualStudioApp]

Auto creating installers using WIX (Open source) and CI server
https://pxm-software.com/auto-creating-installers-using-wix-open-source-and-ci-server/
Creating installer packages has become more complicated recently, as Microsoft decided not to support installation projects (.vdproj) from VS 2012 upwards. On top of that, there is an additional difficulty if we want to auto-create installers on a server that hasn’t got a commercial Visual Studio installed – we just cannot do it 🙁

The perfect solution for that is to use the WIX installer (open source). WIX enables you to build installation packages from XML files without relying on Visual Studio. The only requirement is to install the WIX toolset on the build server (wixtoolset.org).

The main WIX file is the .wixproj file that builds the required package, either from inside Visual Studio or from the command line – without VS. In that project file we can find commands that use the WIX heat, candle and light modules to create the required msi files. See the samples below.

The harvesting module creates a dynamic list of all files that need to be included in the installer. This list is gathered after our website app is published to a local folder:

  <Exec Command="&quot;$(WixPath)heat.exe&quot; dir $(PublishF) -dr INSTALLLOCATION -ke -srd -cg MyWebWebComponents -var var.publishDir -gg -out WebSiteContent.wxs" ContinueOnError="false" WorkingDirectory="." />

After harvesting, we can finally build the installer. Please note that we can pass in custom parameters when invoking the candle module; each parameter must be preceded with the letter “d”, e.g. -dMyParam. In our sample we will use this to pass in command line arguments that define the installer’s name, version, website root etc.

 <Exec Command="&quot;$(WixPath)candle.exe&quot; -ext WixIISExtension -ext WixUtilExtension -ext WiXNetFxExtension -dProductName=$(ProductName) -dWebFolderName=$(WebFolderName) -dProductVersion=$(ProductVersion) -dUpgradeCode=$(UpgradeCode) -dProductCode=$(ProductCode) -dpublishDir=$(PublishF) -dMyWebResourceDir=. @(WixCode, ' ')" ContinueOnError="false" WorkingDirectory="." />

 <Exec Command="&quot;$(WixPath)light.exe&quot; -ext WixUIExtension -ext WixIISExtension -ext WixUtilExtension -ext WiXNetFxExtension -out $(MsiOut) @(WixObject, ' ')" ContinueOnError="false" WorkingDirectory="." />

For the purpose of this article we will create a very simple installer without configuring IIS settings, deploying SQL scripts etc. All of the installer’s basic configuration can be found in UIDialogs.wxs, MyWebUI.wxs and IisConfiguration.wxs – you may want to adjust them for your needs. The file Product.wxs is the main entry point where you can define installation folders, actions, validations etc.

Please note that adding the UpgradeCode and MajorUpgrade configuration elements will cause any older installed version to be automatically uninstalled prior to the installation. Also, any custom params you want to use inside your installer configuration must be referenced like this: $(var.MyParamName).

 <Product Id="$(var.ProductCode)" 
			 Name="$(var.ProductName)" 
			 Language="1033" 
			 Version="$(var.ProductVersion)" 
			 Manufacturer="MyCompany" 
			 UpgradeCode="$(var.UpgradeCode)" >
 <MajorUpgrade DowngradeErrorMessage="A newer version of Product Name Removed is already installed."/> 
 
 .....

After defining our WIX configuration files, we can finally build our installer. We can do it using a batch or PowerShell script, or use Visual Studio, because the .wixproj file is just a normal MSBuild file.

When using a CI server, we can simply run the PowerShell installation process, passing in our version number (build number, revision etc.) and any other parameters we need.

This is how we can run it with PowerShell:

[Image: WIX_installer-powershell]
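A minimal sketch of such a call is shown below; the project file name is hypothetical and the properties correspond to the $(ProductName), $(ProductVersion) etc. parameters used in the candle command above:

& "C:\Windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe" "MyWebsite.Setup.wixproj" `
  /p:ProductName="MyWebsite" `
  /p:ProductVersion="1.0.123" `
  /p:UpgradeCode="{PUT-YOUR-UPGRADE-GUID-HERE}" `
  /p:ProductCode="{PUT-YOUR-PRODUCT-GUID-HERE}"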

We can still use Visual Studio to do the same, as presented below:

[Image: WIX_installer-visualstudio]

I have attached a working sample below. Please don’t forget to install the WIX toolset before running it. Also please check that all paths to the WIX and msbuild folders are correct on your server (or use environment variables instead).

Enjoy!
[Download: MyWebsite]

Using presentation model components and entities in n-tier architecture
https://pxm-software.com/using-presentation-model-components-and-entities-in-ntier-architecture/
When designing applications that will run in an n-tier environment, there are some additional factors to consider. First of all, it is very likely that your business entities will be located on a separate machine. In this scenario there is a need to create objects in your UI layer that will have the functionality of the business objects sitting on the separate tier. Those objects should also include business validation logic and should be easy to bind to UI components.

When transferring business entity data between the tiers, there is a need to create “data transfer objects” (DTOs) that transport object information across the network (if it is not invoked directly). DTOs also allow us to separate the layers and expose only the data that we will use (encapsulation). This is especially useful when using RESTful services.
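As a minimal illustration (with hypothetical types), a DTO exposes only the fields the presentation tier actually needs:

//server-side business entity - never exposed directly to the UI tier
public class CustomerEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string InternalNotes { get; set; } //stays on the server
}

//data transfer object - only the data the presentation tier will use
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerMapper
{
    public static CustomerDto ToDto(CustomerEntity entity)
    {
        return new CustomerDto { Id = entity.Id, Name = entity.Name };
    }
}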

The diagram below shows a typical n-tier architecture using presentation components and entities.

[Image: pres_model_components]

Diagram 1. Presentation model components and entities in a multi-tier application (please refer to the “Microsoft Application Architecture Guide, 2nd Edition” for more details).

Please note that using DTOs may incur some performance loss, as the objects need to be mapped and transformed between the layers/tiers. A solution for that could be executing batch commands at once and sending the final result as a combined object, to avoid round trips within the network.

If your tiers are located within your local network, it is advisable to use the TCP protocol (more efficient); otherwise use HTTP/HTTPS when transferring data across a public network.

TOGAF vs COBIT, PRINCE2 and ITIL – Webinar
https://pxm-software.com/togaf-vs-cobit-prince2-and-itil-webinar/
TOGAF is currently the most popular open architecture framework and is widely implemented in medium and big organizations. Its ADM lets us operate within a generic, proven framework when defining the enterprise architecture.

If you are an Architect or PM, it is very useful to know how TOGAF relates to other frameworks like COBIT, ITIL and PRINCE2.

The diagram below shows how those frameworks overlap with each other at the operation, governance and change management level.

[Image: togaf_process_change_frameworks]
We can clearly see that TOGAF overlaps with COBIT in architecture governance, with ITIL in the service/operation area and with PRINCE2 in change management.

Further interaction is shown in the next diagram:

[Image: Togaf_cobit_itil_prince2]

The process chain for the above diagrams is depicted in the following image:

[Image: process_change_detials]

ITIL itself can be described as follows:

[Image: service_level_knowhow_itil]

Finally, the last diagram shows how PRINCE2 relates to architecture at its operational level:

[Image: architecture_in_prince2]

For more information, please check the following video:

