What are microservices?

If you pick up the list of talks for any developer conference, you will find at least one talk related to microservices. Like many, I have been fascinated by them for some time now. The obvious question to me was: what are microservices? I started reading up on the topic. I came across Martin Fowler's post. I listened to industry experts such as Clemens Vasters. Of the books to read on the subject, I started with the Building Microservices book. Before we proceed further, it is necessary to understand what a service is.

What's a service?

The SOA Patterns book says:

Service should provide a distinct business function and it should be a coarse-grained piece of logic. One of the characteristics of the service is autonomy, which means the service should be mainly self-sufficient.

I like this definition. The main point to notice here is autonomy: a service can be deployed at any time, multiple times a day, without the burden of affecting its consumers. It also keeps failures isolated to itself. Loose coupling between services is another hallmark of service-oriented architecture. So, what are microservices then? What's new about them? Are they just SOA done right?

Microservices definition or properties?

There is no industry-wide, fully agreed-upon, one-sentence definition of microservices. However, Building Microservices lays out a set of properties:

  • They are autonomous.
  • Their boundaries align with business boundaries.
  • They are small.

Now, how small is small enough? Until it doesn't feel too big? To me, a service should be small enough to maintain cohesion. Business boundaries also give us a good idea of its size. If your service spans multiple business areas, it is too big. If services are too small and there are too many boundaries, you may end up with a death-star architecture like this:

To further the point on autonomy, I should be able to scale these services independently. They should also have their own data sources rather than sharing a monolithic one; I can then call this data independence. One good point I've heard is:

You shouldn't make a query/get call to another microservice.

You can hear Scott Bellware and Scott Hanselman discussing the details here.

So, microservices are a combination of these properties. Here are a few other great talks that made things very clear to me:

Analogy

I tend to shy away from playing the analogy card too much, but if I get pushed hard for one on this topic, I am likely to use this.
The drawing refers to a gas stove. Let's say we have to make chicken curry, steam some veggies, and make Indian roti (flatbread) for dinner. I can cook all of these on separate burners without one really affecting another. If I mess one up, the others are not affected, and I can take any of them off the burner at any time. That's autonomy. Each dish gets a steady supply of gas and has its own ingredients. That's data independence. The ingredients stay in the pot and don't leave it until I put them on my plate (a composite UI). My boundaries are clearly defined. I can use an extra burner if I need more of something. Now I am realizing the scalability aspect of the system.

In the real world

Let's apply this to my favorite tennis equipment site.
All the areas in red rectangles could be services of their own, each with its own database. They could be scaled differently and have different deployment policies.

One of the highly scaled backend applications I worked on had 12 different NServiceBus endpoints. They were scaled differently, at least in terms of instance counts. They were deployed separately. They were reasonably fault tolerant. They processed tens of millions of records every day.
A couple of components in the chain of endpoints made direct calls to SOAP services backed by monolithic databases. The stored procedures involved were in the range of 5k-7k lines; no one knew how to optimize them or what rules they contained. Those components took longer to process than everything else, and if they failed, they couldn't work through their backlog at all. Was this a microservices system? Not really, because the components shared a monolithic database through RPC calls. Breaking that up was not a trivial task. Moreover, the endpoints did not represent business boundaries very well.

Avoid the tech evangelism trap

A good chunk of folks in our industry are trying to promote something. More often than not, it is their product FOO, and it is better than everyone else's. It is the exact thing you are going to need for the foreseeable future. How many times do we hear this kind of pitch? Container products are pushed very hard these days in the microservices space. The evangelists will tell you what's right about them but won't go into where they don't apply. They tell you enough to push you off the cliff, but leave the flying part to you. Very often, people get hung up on these things, and when you ask which problem they actually solve, you get crickets back. I am not saying these are bad ideas, but you don't have to have one to have microservices. For an exhaustive list of what does or doesn't make a microservice, I'd refer to Jimmy Bogard's post.

Where do I begin?

It is usually easier to break up a monolith into microservices than to start out with a greenfield project. So, how do you divide these monoliths? There is no silver-bullet answer; you have to take it on a case-by-case basis. A good place to start is monitoring your monolith and developing a heat map. Kelsey Hightower says, "If you tell me that the app is slow, you got to be able to tell me why," on this episode of Hanselminutes. They touch on a number of topics in this area, along with horizontal scalability. This is where containers come into the picture.

Another place to start is digging deeper into the business domain and developing some consensus around boundaries there. I've seen implementations go completely awry when this was missed.

Parting words

I hope this cleared some of the clouds around the definition of microservices. They are not a solution that you can slap on every problem out there; I wouldn't break my blog up into a microservices architecture. We will cover more topics in this area later.

Continuous integration For a Xamarin App with Jenkins and deployments with Hockey App

In the previous post, we saw why Jenkins makes sense for Xamarin builds, at least for now. If you have previously worked with continuous integration systems such as TeamCity, the steps in Jenkins will look familiar to you. The steps follow a logical flow, so even if you're brand new to this, the learning curve is not that steep.

Jenkins supports a master/slave setup. If there is already a master running on Windows, a Mac slave may do the trick. For simplicity, we are going to look at running the master on a Mac.

Jenkins provides downloads for Mac and Windows at jenkins.io. However, I am not going to recommend that route. The installer creates a new shared user, jenkins, on the Mac. This complicates things if Jenkins needs to access anything from the Keychain of the logged-in user (likely to be a service account or your own). If you are working with TFS Git or any Git repo over HTTPS, you are probably using the osxkeychain helper or Microsoft's Credential Manager, and the jenkins user simply can't access those easily. Creating another user just for Jenkins was not an option for me. We will now see how to get around this issue.

We are going to need Homebrew installed on the Mac. It is a package manager for the Mac, the Mac equivalent of Chocolatey from the Windows world.
After installation, simply run the commands below:

brew install jenkins
brew services start jenkins

This way, Jenkins gets installed for the currently logged-in user, and the credentials/Keychain access problem does not come up.

In order to make the Jenkins site available from other machines, follow the instructions below:

  • Open homebrew.mxcl.jenkins.plist located in /usr/local/Cellar/jenkins/2.7 (your installed version could be different)
  • Change --httpListenAddress=127.0.0.1 to --httpListenAddress=0.0.0.0
  • Fix the url in Jenkins Configuration
  • Restart Jenkins using commands below

    brew services stop jenkins
    brew services start jenkins
    

And that's it. You should be able to hit the site on port 8080, provided your firewall allows traffic on that port.

Afterwards, I recommend following these very detailed instructions by Jeffry van de Vuurst, except for the build script. My modified version of the build script is available in my GitHub repo. I have updated it to work with the new folder structure Xamarin uses for the .IPA files and added a command to restore packages from sources other than the nuget.org feed.

Xamarin also provides documentation on setting up Jenkins. They recommend downloading the Jenkins app, which uses the installer instead of Homebrew, so you may run into the credential issues described above.

There are two ways of triggering the builds: polling and adding Git hooks. This post goes into the details of both. For simplicity, I use polling with a cron schedule of * * * * *.

HockeyApp provides a way of pushing builds onto devices. It is fairly cheap and simple to set up. You can create multiple teams and push different builds onto their devices. In the Jenkins world, "There is a plugin for that!" is very true, and of course there is a plugin for HockeyApp. The setup is very straightforward, and the details can be found in another post by Jeffry.

At this point, the setup is pretty much complete. It works just fine if you have downloaded the provisioning profile on the build machine using Xcode. While setting that up in the Apple Developer Portal, it is very important to remember to create the certificate on the build machine; otherwise the build fails because it can't create an .IPA file due to private key issues.

Whenever I ran into issues, I went to the Jenkins workspace folder, opened the solution in Xamarin Studio, and built it. That revealed a lot of detail about the problems and led to a resolution.

Continuous Integration For a Xamarin app, a Microsoft update

Recently, Microsoft bought Xamarin. It was very much anticipated; it always felt more like a "when" than an "if," and finally it happened. Microsoft made the Xamarin suite (most of it) available with MSDN licenses, which removed a barrier for a lot of companies and independent developers. Now, if you're a .NET developer and want to build a cross-platform application, you may not have to look beyond Xamarin.

In this series on mobile app development, we are going to tackle continuous integration first.

In today's enterprise world, the use of TFS is prevalent, and it usually tends to be on-prem. Thankfully, TFS allows us to create Git repositories. For now, it only supports HTTP; SSH is coming to TFS with Update 3. The timeline for features is here. I have also seen that the on-prem TFS version is slightly behind the cloud one: the features mentioned here are not available in the on-prem version. There is some starter documentation by Xamarin, but it may not be enough to see you through turbulence.

For Xamarin apps, the Mac TFS build agent has gone through a lot of churn. It used to be the VSO agent, which is being deprecated in favor of the VSTS agent. The new agent didn't support on-prem TFS until version 2.101.0, which I learned the hard way. It is in a preview state as of June 20th, 2016, and it lacks community support. If you hit a roadblock, you may have to dig into the code yourself (yay, OSS!) or open a ticket on GitHub and wait for the team's reply.

Based on the feature gaps between Visual Studio Online and on-prem TFS, the lack of community support, confusing MSDN articles, and issues with the preview version of the Mac build agent, I'd recommend not using TFS for continuous integration, at least for now. They are moving in the right direction but are not fully there yet.

Instead, the better option is Jenkins. It is free, it has a great community behind it, and it has a rich plugin ecosystem. It also has nice upgrade and downgrade functionality: if, for some reason, an upgraded plugin or Jenkins itself is not working the way the previously installed version did, you can simply go back to that version without too much hassle. In other words, it just works!

The next post in the series is available at http://aradhye.com/continuous-integration-for-xamarin-app-with-jenkins-and-deployments-with-hockey-app/

Git Links

Why DVCS

Atlassian Post on Centralized vs DVCS

Why DVCS? (Stack Overflow question)

SoftwareWhys

Windows Setup

Dan's Powershell scripts

Dan's Windows Setup post

Mac Setup

GitHub help

Set SSH on Mac

Setup Git on Mac

Mobile

iOS app

Android App

Git With TFS

Using Git as a TFS source control

Git bridge for TFS

Workflows

GitFlow

Atlassian Post

Gitflow by Github

Git Tutorials

Atlassian Git Tutorial

.Net team's Git Training on Channel9

Git rebase

The Golden Rule of rebase

Merge or Rebase

Some more..

Safely store secrets in repo

My List of Useful commands

Git in the real world talk

My list of useful Git commands

After using Git extensively for the last three years, I have compiled this list of commands I use frequently. Occasionally, I rely on posts like this one to get me out of a pickle.

Help

git help <commandName> (don't put git in front of the command name, e.g. git help remote)

Git Staging

Stage new and modified files -> git add .
Stage modified/deleted tracked files -> git add -u
Remove untracked files -> git clean -f

Undo

Discard working-tree changes -> git reset --hard
Undo last commit -> git reset --soft HEAD~1 (use --hard to also discard the changes)

Clone a Repo

clone  -> git clone [ssh/https Url]

Stash

stash -> git stash
apply -> git stash apply
drop -> git stash drop
clear -> git stash clear
pop and apply -> git stash pop
list -> git stash list

Tag

Create -> git tag [tagName]
Show -> git show [tagName]
Push to remote -> git push --tags
                  git push origin [tagName]

Log

Fancy log -> git log --graph --oneline --all --decorate
Reference log (shows where HEAD was; useful for recovering deleted commits or branches) -> git reflog

Alias

Ex:
Run -> git config --global alias.lga "log --graph --oneline --all --decorate"
Use -> git lga

Branching

Create -> git branch [branchName]
Create and checkout -> git checkout -b [branchName]
Checkout remote branch -> git checkout [remoteBranchName]
Rename branch -> git branch -m [oldBranchName] [newBranchName]
Delete branch -> git branch -d [branchName]
Delete remote branch -> git push origin --delete [branchName]
Recover deleted branch -> git branch [branchName] [SHA1Hash]
From tag -> git branch [branchName] [tagName]
Create from SHA1 hash -> same as recovering a deleted branch

Push changes

git push <originname> <branchname>

Merge

Steps:
    1) Checkout the branch you want to merge into
    2) git merge [branchToMerge]

conflict -> git mergetool

Rebasing

Replays your commits one at a time on top of another branch, making it look like they were always part of it.
Steps:
    1) Checkout the branch you want to rebase, e.g. Feature1
    2) Run -> git rebase [branchNameYouWantToApplyTheFeatureOn]

Cherry-pick

Steps:
    1) Checkout the branch you want to apply the commit to
    2) Run -> git cherry-pick [SHA1]

Add multiple origins

Add a new remote and fetch it -> git remote add --fetch <originName> <fetchUrl>
Set a push URL for the remote -> git remote set-url --push <originName> <pushUrl>
Show all remotes -> git remote -v show
Fetch all the remotes -> git remote update
Remove stale remote-tracking branches -> git remote prune origin

Misc (not all are Git related)

Create a file -> touch fileName
Remove a file -> rm fileName
See Git config -> cat ~/.gitconfig

Upgrading NServiceBus to V5 from V4 - Part2

This is a continuation of my previous NServiceBus upgrade post.

Logging

The logging functionality that used to be in NServiceBus.Core has moved to a separate set of NuGet packages such as NServiceBus.CommonLogging, NServiceBus.Log4Net, and NServiceBus.NLog.

SetLoggingLibrary from V4 is removed in V5. LogManager.Use<Log4NetFactory>() from the NServiceBus.Log4Net package will get the job done for Log4net implementations. The obsolete error message clearly states that.
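
For context, here is a minimal self-hosting sketch of the V5 logging setup. The surrounding Program class and the endpoint name are made up for illustration, and log4net itself is assumed to be configured elsewhere (for example, via app.config):

using System;
using NServiceBus;
using NServiceBus.Logging;

class Program
{
    static void Main()
    {
        // Configure logging before anything else touches NServiceBus.
        // Log4NetFactory ships with the NServiceBus.Log4Net package.
        LogManager.Use<Log4NetFactory>();

        var configuration = new BusConfiguration();
        configuration.EndpointName("MyEndpoint"); // made-up endpoint name

        using (var bus = Bus.Create(configuration).Start())
        {
            Console.ReadLine();
        }
    }
}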

EndpointName

In order to stop the machine name from being appended, I thought the line below was sufficient.

configuration.ScaleOut().UseSingleBrokerQueue();

It didn't work. Then I stumbled on this Stack Overflow post and this GitHub issue. It looks like the RabbitMQ, SQL Server, and ActiveMQ transports override that setting and try to create a queue with the machine name at the end of it, even though one without the machine name exists.

To disable this behavior, you can do something like the line below (I am using SqlServerTransport for the sake of the example; the same should work for RabbitMQ and ActiveMQ).

configuration.UseTransport<SqlServerTransport>().DisableCallbackReceiver();

StructureMap

It is a good idea to upgrade the dependent packages if they are being used. So, for StructureMap, after upgrading the NServiceBus.StructureMap package, the configuration that looked like this in V4

Configure.StructureMapBuilder(ObjectFactory.Container)

is like this in V5

configuration.UseContainer<StructureMapBuilder>(b=>b.ExistingContainer([container]));

It is clearly stated in the error message from the StructureMapBuilder extension method. This is an example of a good message.
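
Wired into an endpoint configuration class, and assuming the application already builds its own StructureMap container (registrations omitted below), the V5 version looks roughly like this:

using NServiceBus;
using StructureMap;

public class EndpointConfig : IConfigureThisEndpoint, AsA_Server
{
    public void Customize(BusConfiguration configuration)
    {
        // Build (or reuse) the application's container, then hand it to NServiceBus.
        var container = new Container(); // registrations omitted for brevity
        configuration.UseContainer<StructureMapBuilder>(c => c.ExistingContainer(container));
    }
}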

Assembly Scanning

NServiceBus scans the directory where the assembly containing the class that implements IConfigureThisEndpoint resides. This has to be handled with care because it is very easy to fall into dependency hell: the endpoint may not come up because it starts scanning too many assemblies and their dependencies. It can instead be configured with a finite set of assemblies.

In V4, Configure.With([ListOfAssemblies]) was the way to pass the list of assemblies. In V5, configuration.AssembliesToScan(listOfAssemblies) gets the job done, where configuration is an instance of BusConfiguration. The name makes more sense in V5. It can take an IEnumerable<Assembly>, an IIncludesBuilder, or an IExcludesBuilder, as sketched below.
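
A minimal sketch of the IEnumerable<Assembly> flavor, using made-up marker types to locate the assemblies you actually want scanned:

// MyEndpointConfig and MyMessageAssemblyMarker are placeholders for types
// that live in the assemblies you want NServiceBus to scan.
var assembliesToScan = new[]
{
    typeof(MyEndpointConfig).Assembly,
    typeof(MyMessageAssemblyMarker).Assembly
};
configuration.AssembliesToScan(assembliesToScan);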

I think the IIncludesBuilder approach is handy because the list of assemblies stays finite and the rest are excluded when the endpoint comes up. You can also do some pattern matching:

var listOfAssemblies = AllAssemblies.Matching("YourNameSpace.").And("SomethingElse");
configuration.AssembliesToScan(listOfAssemblies);
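
For completeness, the IExcludesBuilder flavor scans everything except the matches; the assembly name patterns below are made up:

var listOfAssemblies = AllAssemblies.Except("ThirdParty.").And("Legacy.Integration");
configuration.AssembliesToScan(listOfAssemblies);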

More on assembly scanning can be found at this Particular documentation link. This page makes me optimistic that the documentation will eventually catch up.

If too many assemblies are included, the dependency check can spiral out of control. If too few are included, you may see errors like "No handlers found for the message type" or "Could not determine type for node", as in this Google Groups discussion.

MSMQ utilities

The MsmqUtilities class is not public anymore in V5; I don't think it was ever meant to be. You are, however, allowed to copy it if needed. The V4 source is here and the latest is here.

Persistence and features

Particular provides three persistence implementations. InMemory persistence comes with the core NuGet package. NHibernate has its own NServiceBus.NHibernate package, and so does RavenDB with NServiceBus.RavenDB.

When configuring persistence, the order is important: the last option wins. It is highly recommended to take a look at this documentation link.
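
As a small illustration of that rule, and going by my reading of the documentation, if two persistence packages are configured for overlapping storage types, the one declared last is the one that gets used:

// Both calls are valid; because it is declared last, InMemory wins for any
// storage types that both persistences cover.
configuration.UsePersistence<NHibernatePersistence>();
configuration.UsePersistence<InMemoryPersistence>();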

If you want to roll your own, you can't do it as a BusConfiguration extension; that won't work. It has to be implemented as a set of features instead. The endpoint below uses MyFancyPersistence.

public class MyEndpointConfig :
    IConfigureThisEndpoint,
    AsA_Server,
    IWantToRunWhenConfigurationIsComplete
{
    public void Customize(BusConfiguration configuration)
    {
        // Use the custom persistence for subscriptions and timeouts only.
        configuration.UsePersistence<MyFancyPersistence>().For(Storage.Subscriptions, Storage.Timeouts);
    }

    public void Run(Configure config)
    {
        // Read settings here, e.g.:
        // var settings = config.Settings;
    }
}

We are demanding that MyFancyPersistence provide implementations at least for subscriptions and timeouts. The Storage enum looks like this in the NSB codebase as of today:

public enum Storage
{
    Timeouts = 1,
    Subscriptions = 2,
    Sagas = 3,
    GatewayDeduplication = 4,
    Outbox = 5,
}

(source - NSB Storage enum (subject to change))

MyFancyPersistence (besides inheriting from PersistenceDefinition) declares its default features with the Defaults method and declares what it can support with the Supports method.

public class MyFancyPersistence : PersistenceDefinition
{
    public MyFancyPersistence()
    {
        Defaults(s => s.EnableFeatureByDefault<MyDefaultFeature>());
        Supports(Storage.Timeouts, s => s.EnableFeatureByDefault<MyTimeoutsFeature>());
        Supports(Storage.Subscriptions, s => s.EnableFeatureByDefault<MySubscriptionFeature>());
    }
}

The individual features look like the examples below. Again, this is a simplistic implementation.

public class MyDefaultFeature : Feature
{
    protected override void Setup(FeatureConfigurationContext context)
    {
        var settings = context.Settings;   // instance of ReadOnlySettings, to read the endpoint name, etc.
        var pipeline = context.Pipeline;   // instance of PipelineSettings, to register steps in the NSB pipeline
        var container = context.Container; // instance of IConfigureComponents, to register components
    }
}

public class MyTimeoutsFeature : Feature
{
    protected override void Setup(FeatureConfigurationContext context)
    {
        // Configure the components that implement timeout storage for your favorite store.
    }
}

public class MySubscriptionFeature : Feature
{
    protected override void Setup(FeatureConfigurationContext context)
    {
        // Configure the components that implement subscription storage for your favorite store.
    }
}

For more detailed implementations, please take a look at NHibernate, InMemory or RavenDb.

Conclusion

In my opinion, most of these changes are good and make sense. They provide more flexibility, as this Stack Overflow post shows. If you are an early adopter, you will have to deal with documentation that is still catching up and with a lot of changes in the public API. The open-source nature of the project makes up for all of that, even if it can be a little time consuming to dig for small changes. I hope these posts help and save you some time and grief.
