How to Allow HTTP Method Override ASP Dotnet Core

Freaking easy, isn't it? Well, not always. It is, in fact, easier in .NET Core. I have submitted a GitHub PR to update the official documentation to include the HttpMethodOverride middleware.

What is it?
Good question. Why would you need it? There are various REST clients that are not capable of sending PUT/PATCH/DELETE requests to your REST API. Why should you bother? Because you want your REST API to be usable on most platforms, if not all. Such clients still send a POST call to your API, but with "X-HTTP-Method-Override: PUT" as a header, which makes your API treat the request as a PUT.

So how do you do it? If you are using MVC 4 on the full ASP.NET Framework, please refer to this nice article by Scott Hanselman: Use X-HTTP-Method-Override for your REST Service with ASP.NET Web API.

However, if you want to do the same in .NET Core, it is even easier.

As of this writing, among the built-in middleware documentation only the "Forwarded Headers" page is readily reachable. I then found out that there is another middleware, HttpMethodOverride, which can be plugged into the ASP.NET Core MVC pipeline to convert POST methods to PUT/DELETE as needed.

Here is how you do it in the Startup.Configure method of your .NET Core Web API:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    app.UseHttpMethodOverride();
    app.UseMvc();
}

Once you send a POST request via Postman with the correct HTTP request header, the request goes through as shown in the snapshot below; a curl equivalent follows.

Postman Put as POST
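If you prefer the command line, the same behavior can be verified with curl. This is just a sketch; the URL and JSON body are placeholders for your own API.

# Send a logical PUT as a POST with the override header
# (the endpoint and payload below are placeholders)
curl -X POST "http://localhost:5000/api/values/1" \
     -H "X-HTTP-Method-Override: PUT" \
     -H "Content-Type: application/json" \
     -d '{"name":"updated value"}'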

EDIT 1: You can also create your own POST tunneling middleware by following this article by Tomasz Pęczek: https://www.tpeczek.com/2017/12/post-tunneling-middleware-for-aspnet.html, or write your own custom middleware; refer to the official documentation.
Happy coding!
Like always, feel free to provide any feedback.

Install omnisharp-win-x64-1.30.1.zip & VS Debugger for VS Code manually

If you are behind a corporate proxy, or your developer machine runs offline, the IntelliSense features of Visual Studio Code / C# won't work out of the box. For these features you need to install the OmniSharp server inside the VS Code C# plugin. Moreover, you will probably want debugging functionality as well; for that you need the CoreCLR debugging release.

Here are brief steps to do it.

Install OmniSharp 

  1. Download the OmniSharp release yourself via a browser from one of the paths below, depending on your Windows architecture:
     X64: https://omnisharpdownload.blob.core.windows.net/ext/omnisharp-win-x64-1.30.1.zip
     X86: https://omnisharpdownload.blob.core.windows.net/ext/omnisharp-win-x86-1.30.1.zip

  2. Note that the VS Code C# plugin is required for OmniSharp to function, so install the plugin first.
  3. Extract the zip matching your architecture into the plugin folder; OmniSharp goes straight into the VS Code C# extension directory. The full path of `OmniSharp.exe` should look like

     C:\Users\yourusername\.vscode\extensions\ms-vscode.csharp-1.15.2\.omnisharp\1.30.1\OmniSharp.exe

     so that VS Code can pick it up and launch it. As the versioned folder suggests, the plugin allows multiple OmniSharp versions to be installed side by side.

Install Core CLR Debugger

  1. Download the Core CLR debugger for 64-bit here: https://download.visualstudio.microsoft.com/download/pr/12267706/d27a74d91a12c0e78222081afdf8e0bb/coreclr-debug-win7-x64.zip
  2. Extract the zip into the VS Code C# plugin folder so that vsdbg.exe is reachable at this path:
     C:\Users\yourusername\.vscode\extensions\ms-vscode.csharp-1.15.2\.debugger\vsdbg.exe
  3. Force VS Code to believe that the dependencies have already been installed: create a new, blank install.lock file in the folder below.

    C:\Users\yourusername\.vscode\extensions\ms-vscode.csharp-1.15.2

Once you have installed both dependencies, restart VS Code. IntelliSense should work seamlessly; if needed, invoke the "OmniSharp: Restart OmniSharp" command inside VS Code.

As always, feel free to shoot any comments.

How to Use Font-Awesome in ASPNET dotnetCore Project


I started by following this article from Microsoft on how to get a Bower/Gulp task runner installed in Microsoft Visual Studio; however, my environment is Visual Studio Code + .NET Core SDK + ASP.NET Core.

Step 1 – Install NodeJS / NPM package manager

How to Install NPM /Node
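The link above covers the details. As a rough sketch, if your development box happens to be Ubuntu/Debian (the same OS used in the Jenkins posts on this blog), a minimal install looks like this:

# Install Node.js and npm from the distribution repositories (Ubuntu/Debian only)
sudo apt-get update
sudo apt-get install -y nodejs npm

# Verify the installation (on some distributions the binary is "node" rather than "nodejs")
nodejs --version
npm --version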

Step 2 – Install Task Runners

Once you are done installing NPM, you can install the Task Runners needed to install Font-Awesome.

    npm install -g bower
    npm install -g gulp

The commands above install the Gulp and Bower runners globally, in the location where your global npm packages are usually stored.

Step 2a – Configuration of proxy settings (optional).

If you are running Bower behind a corporate proxy, you have to use a .bowerrc file in the current directory to override the settings.
Create a new file with the contents below.

{
  "directory": "wwwroot/lib",
  "proxy": "http://yourProxy:yourPort",
  "https-proxy": "http://yourProxy:yourPort",
  "no-proxy": "myserver.mydomain.com"
}

Step 3 – Install the Font-Awesome library.

Issue the command below from the directory where `bower.json` resides.

bower install components-font-awesome --save

This should run as below.

bower cached        https://github.com/twbs/bootstrap.git#3.3.7
bower validate      3.3.7 against https://github.com/twbs/bootstrap.git#3.3.7
bower cached        https://github.com/components/font-awesome.git#4.7.0
bower validate      4.7.0 against https://github.com/components/font-awesome.git
bower install       components-font-awesome#4.7.0
bower install       bootstrap#3.3.7

components-font-awesome#4.7.0 wwwroot/lib/components-font-awesome
bootstrap#3.3.7 wwwroot/lib/bootstrap
└── jquery#2.2.0

Step 4 – Add the Font-Awesome lib in _Layout.cshtml

_Layout.cshtml has two sections for importing stylesheets and JavaScript. The one at the top loads stylesheets; the one at the bottom runs only after the whole page has loaded and triggers the JavaScript. The section at the top has separate blocks for Development and non-Development environments. This is how it should look when you are using a local copy of Font-Awesome in the app.

    <environment include="Development">
        <link rel="stylesheet" href="~/lib/bootstrap/dist/css/bootstrap.css" />
        <link rel="stylesheet" href="~/css/site.css" />
        <link rel="stylesheet" href="~/lib/components-font-awesome/css/font-awesome.css" />
    </environment>
    <environment exclude="Development">
        <link rel="stylesheet" href="https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.7/css/bootstrap.min.css"
              asp-fallback-href="~/lib/bootstrap/dist/css/bootstrap.min.css"
              asp-fallback-test-class="sr-only" asp-fallback-test-property="position" asp-fallback-test-value="absolute" />
        <link rel="stylesheet" href="~/css/site.min.css" asp-append-version="true" />
        <link rel="stylesheet" href="~/lib/components-font-awesome/css/font-awesome.min.css" />
    </environment>

If you plan to use the CDN-based Font-Awesome libraries, register for a CDN account at https://cdn.fontawesome.com and generate an embed code like the one below:

<script src="https://use.fontawesome.com/yourembedcode.js"></script>

Since the CDN-based version is actually a pointer to a JavaScript file, it is inserted at the bottom of _Layout.cshtml as below. This loads the CSS and other JavaScript at runtime when the page loads.

    <script src="https://use.fontawesome.com/xxxxxxxxx.js" type="text/javascript" ></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mdbootstrap/4.4.3/js/mdb.min.js"  type="text/javascript" asp-append-version="false"> </script>
    @RenderSection("Scripts", required: false)
</body>
</html>

Step 5 – Use FontAwesome Code in HTML

Use the code below to create Google Plus / LinkedIn icons that link to your pages; the ones below point to mine.

   <!--Google +-->
   <a class="icons-sm gplus-ic" href="https://plus.google.com/u/1/102321429971962784339"><i class="fa fa-google-plus-square fa-lg white-text mr-md-4"> </i></a>
   <!--Linkedin-->
   <a class="icons-sm li-ic" href="https://www.linkedin.com/in/avinunderscorebarnwal/"><i class="fa fa-linkedin-square fa-lg white-text mr-md-4"> </i></a>

This is how it looks.

That is it for today; stay subscribed for new articles next week.
As always, let me know if you have any queries or suggestions.

Part 3 – Storing Jenkins output to AWS S3 bucket

This is the third in a series of articles on the Jenkins continuous integration tool. So far we have set up Jenkins, the Android SDK, the Gradle home, and a test Jenkins build that archives the artifacts.

In this tutorial I am going to set up AWS S3 integration for the same build, so that it can archive its artifacts to an S3 bucket.

Here is the list of topics we will cover in this tutorial to achieve S3 archiving:

  1. Create an S3 bucket.
  2. Create an IAM user and access key, and assign a managed policy to read/write the specific folder.
  3. Install the S3 plugin on Jenkins.
  4. Configure the S3 profile.
  5. Configure a post-build step to upload the output to the S3 bucket.

Let's start now!

Step 1 – Create an S3 Bucket

What is an S3 bucket and why is it needed? Before you can upload data into Amazon S3, you need to create a bucket to store it. Buckets have configuration properties, including their geographical region, who has access to the objects in the bucket, and other metadata such as the storage class of the objects.

Create a Bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.

2. Click "Create Bucket".

bucket1

3. Select a bucket name and region. The name you choose must be unique across all existing bucket names in Amazon S3; stick to lowercase characters, since certain combinations are not accepted.
4. Create the bucket with or without logging, as you prefer.
5. Create a folder (all lowercase, to avoid any 'Access Denied' errors). We will use 'apkarchive'.
6. If all went well, the AWS console should look as below (a CLI alternative follows the screenshot).

bucket2
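If you prefer the command line, roughly the same result can be achieved with the AWS CLI; the bucket name and region below are placeholders, and "folders" in S3 are simply zero-byte keys ending in a slash.

# Create the bucket (bucket names are globally unique; pick your own)
aws s3 mb s3://your-bucket-name --region us-east-1

# Create the "apkarchive" folder by putting an empty object whose key ends in /
aws s3api put-object --bucket your-bucket-name --key apkarchive/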

 

Step 2 – Create an IAM User and assign a Group & Policy to Read/Write to the specific folder.

Step 2A – To Create IAM User(s) with AWS IAM console
1. Sign in to the Identity and Access Management (IAM) console at https://console.aws.amazon.com/iam/.
2. In the navigation pane, choose Users and then choose Create New Users.
3. Type in user name = jenkinsuploader
4. Since our user needs to access the AWS API from the S3 plugin, we need to generate access keys. To generate access keys for the new user at this time, select Generate an access key for each user. Remember that you will not have access to the secret access key again after this step; if you lose it, you will need to create a new access key for this IAM user.
5. Choose Create and then either show the keys or download the credentials as a CSV.
6. Since we want to use just this one IAM user for the POC, we will assign the managed policy directly to the user. However, it is recommended to assign managed policies to groups and then map users to the group. Proceed to the next step (Step 2B) to create and assign a policy.

iamuser1
iamuser2

Step 2B – Create a customer managed policy and attach it to the user

  1. Sign in to the Identity and Access Management (IAM) console at https://console.aws.amazon.com/iam/.
  2. In the navigation pane, choose Policies and then choose Create Policy.
  3. Select 'Create Your Own Policy', set the policy name to 'apkUploaders', and paste the JSON below as the policy document.
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowUserToReadWriteObjectDataInapkarchive",
        "Action": [
          "s3:PutObject",
          "s3:GetObject"
        ],
        "Effect": "Allow",
        "Resource": [
          "arn:aws:s3:::bucketname/apkarchive",
          "arn:aws:s3:::bucketname/apkarchive/*"
        ]
      }
    ]
  }

4. Go to the "Attached Entities" tab and attach the policy to the IAM user we created in Step 2A above (the CLI equivalent is shown below).

policyiamusermapping
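For reference, the equivalent steps with the AWS CLI look roughly like this; the account id in the policy ARN is a placeholder, since create-policy returns an account-specific ARN.

# Create the managed policy from the JSON document saved as policy.json
aws iam create-policy --policy-name apkUploaders --policy-document file://policy.json

# Attach it to the jenkinsuploader user (replace the account id in the ARN)
aws iam attach-user-policy --user-name jenkinsuploader \
    --policy-arn arn:aws:iam::123456789012:policy/apkUploaders

# Generate the access key pair the Jenkins S3 plugin will use
aws iam create-access-key --user-name jenkinsuploader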

PS – Since the jenkinsuploader user will not be logging in through the AWS Console, it doesn't need a password, nor most of the privileges required for browsing documents/objects via the AWS S3 console. If you need a set of users who can view the documents/objects/build outputs via the S3 console, you are better off creating a group and mapping a more liberal policy document to it (I will try to cover that in a later post).

With this, the S3 bucket and our IAM user's access key id and secret key are ready to be configured on Jenkins, so let's proceed to the next step.

Step 3 – Install S3 Plugin

Log on to the Jenkins dashboard with an administrative id and perform the steps below to download the S3 plugin automatically.

  • Navigate to Jenkins dashboard -> Manage Jenkins -> Manage Plugins, select the Available tab, look for "S3 plugin" and install it.
  • Alternatively, download the HPI file from the S3 plugin URL and drop it into the plugins directory of the Jenkins installation.
  • Once the installation is done, restart Jenkins for it to take effect (a scripted alternative is shown below).
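The plugin install can also be scripted with the Jenkins CLI jar, which is handy when provisioning build servers; the credentials below are placeholders, and the -auth option requires a reasonably recent Jenkins CLI.

# Install the S3 plugin via the Jenkins CLI and restart afterwards
java -jar jenkins-cli.jar -s http://localhost:8080/ \
     -auth admin:yourApiToken install-plugin s3 -restart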

 

Step 4 – Configure the S3 profile

Go to Manage Jenkins, select "Configure System", and look for the "Amazon S3 Profiles" section. Provide a profile name, the access key, and the secret access key of the jenkinsuploader account we created above.

S3 profile.png

Step 5. Configure a Post-Build Step to upload APK to S3 bucket.  

Head to the existing build configuration; we will use the 'trav' configuration built in my previous tutorial, Part 2 – Jenkins – Setting up Android build. Navigate down to "Post-Build Actions", click "Add Post-Build Action", and select the "Publish Artifacts to S3 Bucket" step. Provide the parameters as below.

Source – **/*.apk (it does accept the GLOB format wildcard)
Destination – bucketname/foldername  format (The plugin accepts bucketname followed by absolute path to the folder in which the build output has to be archived)
Storage Class – Standard
Bucket Region – Depending on your bucket’s region.
Manage artifacts – true  (This would ensure the S3 Plugin manages and keeps the build outputs as per the Jenkins archival policy)
Server side encryption – True / False (as per your bucket’s encryption policy)

Publish artifacts to S3 job.png

Now click on Save and you are done!

All your build artifacts will get uploaded to the Amazon S3 bucket.

S3 Upload finished.png

As always, happy reading and feel free to provide feedback

References 

  1. Writing IAM Policies: Grant Access to User-Specific Folders in an Amazon S3 Bucket
  2. Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket
  3. IAM Policy Variables Overview
  4. Specifying Permissions in a Policy
  5. AWS Policy Generator Tool
  6. An Example Walkthrough: Using user policies to control access to your bucket
  7. Working with Managed Policies

 

Part 2 – Setting up Android build on Jenkins

This is the second article in the series on setting up an Android build. I hope you have read the previous one: Part 1 – Setting up Android build on Jenkins.

The requirement for this tutorial is a working Android project which builds fine inside Android Studio. The source code for this sample is at https://github.com/vnextcoder/trav. So let's crack on!

Step 1 – Login (obviously): log in to the Jenkins CI server.

Step 2 – Click "Create New Jobs" on the home screen below. Once you do that, select Freestyle Project and click OK.

jenkins-ci-1
jenkins ci 2.png

Step 3 – Select GitHub project and set the Project URL to the root where gradlew or gradlew.bat resides; in my case it is https://github.com/vnextcoder/trav. Additionally, select Git under Source Code Management and set the Git repository URL to https://github.com/vnextcoder/trav. In Branches to build, you can choose master for now ("*/master"), which is just a descendant of the Git repo root.

jenkins ci 3.png

Step 4 – Build triggers – these are the conditions that trigger the build. If you don't set up any, the build has to be triggered manually. We will set up SCM polling to check every 10 minutes for SCM changes and trigger the build; "H/10 * * * *" means it runs every 10 minutes.

jenkins ci 4.png

Step 5 – Build step – Update the Android SDK. Jenkins doesn't prescribe pre-build steps the way CircleCI or Travis-CI do, if you are a big fan of those cloud solutions; however, Jenkins offers a whole lot of customization over and above those YML files. Under Jenkins, you can run virtually any shell script or tool.

The first build step for any Android build should be to fetch all the build-tools, platform-tools, etc. that the project build needs, so let's create a shell script build step.

Add build step –> Execute Shell

Add the line below to the step:

echo yes | android update sdk --no-ui  --all --filter tools,platform-tools,build-tools-24.0.3,android-24,extra-google-m2repository,extra-google-google_play_services,extra-android-support

This step makes sure all the build tools, source libraries, and the m2 repositories are up to date in the $ANDROID_HOME folder before the Gradle build actually kicks in. Do note that --filter only works with the --all option. The "echo yes" makes sure the license agreements are accepted automatically by the build. So far I haven't found a way to make this task skip the update when the Android SDK is already up to date; this seems to be a feature request, and I have filed one at https://code.google.com/p/android/issues/detail?id=224879. Many Stack Overflow questions on this topic are still unanswered.

Step 6 – Add one more "Execute shell" step to accept the licenses automatically:

mkdir "$ANDROID_HOME/licenses" || true
echo -e "\n8933bad161af4178b1185d1a37fbf41ea5269c55" > "$ANDROID_HOME/licenses/android-sdk-license"
echo -e "\n84831b9409646a918e30573bab4c9c91346d8abd" > "$ANDROID_HOME/licenses/android-sdk-preview-license"

Step 7 – Add a Gradle build step. The Gradle wrapper plugin works the same way as the Gradle wrapper from within Android Studio. Do tick "Make gradlew executable"; this makes sure execute permissions are applied to the gradlew script after SCM checkout. Under Tasks, add the string "clean build" to execute the clean and build tasks. A command-line equivalent of this step is shown after the screenshots.

jenkins-ci-5

2016-10-16-4
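For reference, the Gradle wrapper build step is roughly equivalent to running the following by hand from the checked-out workspace:

# What the Gradle wrapper step effectively does
chmod +x ./gradlew          # "Make gradlew executable"
./gradlew clean build       # the tasks configured in the step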

Step 8 – Post-build actions. Jenkins allows you to archive the artifacts or even push them to the Play Store (custom plugins needed). For now we will archive the apk files. Click Post-build Actions -> Add post-build action -> Archive the artifacts, and in Files to archive enter "**/*.apk". This is a glob which finds any *.apk files at any depth and archives them.

Save the build now.

2016-10-16 (5).png

Step 9 – Trigger the build. Click Build Now on the build configuration screen, then go to Console Output, which should show the real build output.

2016-10-16 (6).png

2016-10-16 (7).png

If all goes well, you should see a "BUILD SUCCESSFUL" message at the end, like below.

Trav finished

Ref

  1. Accepting android SDK licenses
  2. Android update feature
  3. http://stackoverflow.com/questions/5730815/unable-to-locate-tools-jar – in case you run into 'tools.jar' not found errors, remember that a JDK is required for the builds rather than a JRE.

Part 1 – Setting up Android build on Jenkins

This tutorial is the first in a series of articles on setting up CI for Android. I will cover the pre-configuration tasks that satisfy all the requirements for configuring an Android build. The tutorial assumes you have a working Jenkins installation, a Jenkins user id that allows you to configure new build jobs, and access to the Ubuntu machine for creating the Gradle/Android homes. I hope you have already read my earlier tutorial on building a Jenkins CI server on Ubuntu: Installing Jenkins on Ubuntu.

Since Jenkins has already been set up, we will continue with the Android build setup:

Step 1 – Install Android SDK 

Go to the Android developer tools page at https://developer.android.com/studio/index.html, scroll to the bottom of the page, and find the SDK section.

sdk-tools

Copy the link for the Linux version and use it to download the Android SDK:

wget https://dl.google.com/android/android-sdk_r24.4.1-linux.tgz

It should take a few seconds or minutes to download depending on your network speed. Once downloaded, extract the tarball as below:

tar zxvf android-sdk_r24.4.1-linux.tgz
# move the folder extracted to a different partition/ volume to avoid filling up your root partition.
mv android-sdk-linux /opt2/android-sdk-linux
# You may remove the original tarball after the Sdk is extracted.
rm android-sdk_r24.4.1-linux.tgz

This installs the Android SDK in the /opt2/android-sdk-linux directory, which becomes your ANDROID_HOME.

Step 2 – Set up $ANDROID_HOME

Create a new file /etc/profile.d/android.sh and enter the lines below:

export ANDROID_HOME="/opt2/android-sdk-linux"
export PATH="$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools:$PATH"

This makes sure the ANDROID_HOME and PATH variables are set globally for the Ubuntu server itself rather than just for Jenkins.

Now you just need to log out and log in on the terminal again. If you are using the Ubuntu desktop terminal, you need to log out fully from the desktop and log in again; this is because the desktop terminal starts a non-login shell and does not source the global profile when launched from an existing session.

Once you are done setting the Android variables in the global profile, restart Jenkins.
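A minimal sanity check before kicking off builds, assuming the paths above:

# Confirm the profile script is picked up in a fresh login shell
echo $ANDROID_HOME           # should print /opt2/android-sdk-linux
which android                # should resolve to $ANDROID_HOME/tools/android

# Restart Jenkins so the build processes inherit the new environment
sudo service jenkins restart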

Step 3 – Configure Jenkins Plugins

For Jenkins to build an Android app properly, it needs at least a few plugins to be installed:

  • Gradle Plugin – Android build scripts are mostly Gradle based so you need this plugin foremost (required)
  • Android Lint plugin (recommended)
  • Google Play Android Publisher plugin – in case you want to automate app publishing (this needs some extra security steps to keep signing keys secure); I will try to cover it in a later post
  • Android Emulator Plugin – if you want to run tests on emulator (automated by Jenkins runner)
  • JSLint plugin – for coverage analysis (recommended but not mandatory)

So let's go to the Plugin Manager URL directly: http://192.168.0.32:8080/pluginManager/

Go to the Available section and select all the plugins you need to install. Jenkins may already have some, based on previous projects it has built.

Step 4 – Configure Gradle parameters

There are two major ways to configure the Gradle parameters: either we pass each parameter in the GRADLE_OPTS global variable, or we configure GRADLE_OPTS to point at the Gradle user home and keep all the parameters in $GRADLE_USER_HOME/gradle.properties.

I will explain both approaches.
Option 1. Configure all params in GRADLE_OPTS. This is the preferred option if you don't have a lot of parameters to pass to Gradle / the Gradle wrapper. For this, go to Jenkins -> Manage Jenkins -> Configure System.

Create a new global variable GRADLE_OPTS under the Global Properties section and click Save. If the variable already exists, just append the property; it takes effect for all future builds.

gradleopts

Now we will configure these parameters to make sure Gradle runs as a daemon process, and set any proxy settings if needed.

GRADLE_OPTS = -Dorg.gradle.daemon=true -Dhttp.proxyHost=corphttpproxyhost.yourdomain -Dhttp.proxyPort=3128 -Dhttps.proxyHost=corphttpsproxyhost.yourdomain -Dhttps.proxyPort=3129

Option 2. Create a Gradle user home. We can use this option to make sure that all the configuration and Gradle build tooling go into one directory, and we only have to point GRADLE_OPTS at that directory as the Gradle user home. The directory can live anywhere on the build machine, but it needs to be secured so that only the jenkins user can read and write it. The gradle.properties file in this folder can also be used to keep credentials in encrypted form. Let's create the directory and make jenkins:jenkins its owner:

$ mkdir /opt2/.gradlehome
$ cat <<EOF >> /opt2/.gradlehome/gradle.properties
org.gradle.daemon=true
systemProp.http.proxyHost=proxy.company.net
systemProp.http.proxyPort=8181
systemProp.https.proxyHost=proxy.company.net
systemProp.https.proxyPort=8181
systemProp.https.nonProxyHosts=*.company.net|localhost
EOF
$ sudo chown -R jenkins:jenkins /opt2/.gradlehome
$ sudo chmod -R 770 /opt2/.gradlehome

Configure GRADLE_OPTS to use the Gradle user home we defined above; this takes effect for future builds without any restart. Create a new global variable GRADLE_OPTS under the Global Properties section (or append to the existing one) and click Save:

-Dgradle.user.home=/opt2/.gradlehome

gradlehome

With this we have met all the requirements for configuring an Android build job and are ready to jump to the next part (Part 2).

Like always, any comments or feedback are welcome.

 

References

Installing Jenkins on Ubuntu

After having set up Travis-CI and CircleCI builds for my sample Android app, today I decided to install Jenkins on Ubuntu (giving more control over signing, authority, dex, etc.). Although it is pretty straightforward, I wanted to post my learnings in the form of a tutorial.

Step 1 – Preparing your Ubuntu

The first step of any installation on Ubuntu is to update it to the latest system packages by running the commands below:

$ sudo apt update
$ sudo apt upgrade
$ sudo apt dist-upgrade

The steps below are only needed if your Ubuntu box is behind a corporate proxy; in that case you need to make apt use the proxy by creating a 95proxies file as follows.

$ sudo vi /etc/apt/apt.conf.d/95proxies

Content would be like below

Acquire::http::Proxy "http://x.y.z.a:9090";
Acquire::https::Proxy "http://x.y.z.a:9090";
Acquire::ftp::Proxy "http://x.y.z.a:9090";

Step 2 – Install Jenkins
Next we need to fetch the repository key from the Jenkins distribution server and add it to the local apt keyring, add the Jenkins repo to sources.list.d, and then install Jenkins.

wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins

Once the Jenkins install has finished, start the Jenkins process by issuing:

$ sudo service jenkins start

The Jenkins process should display the port it is using when you run "sudo lsof -i | grep jenkins". We use 'sudo' because Jenkins runs under its own user id, and no other user normally has access to its processes.

ubuntu:~$ sudo lsof -i | grep jenkins
java      27888 jenkins  158u  IPv6  62767      0t0  TCP *:http-alt (LISTEN)
java      27888 jenkins  176u  IPv6  62954      0t0  TCP *:44305 (LISTEN)
java      27888 jenkins  178u  IPv6  62994      0t0  UDP *:33848
java      27888 jenkins  179u  IPv6  63010      0t0  UDP *:mdns
ubuntu:~$

Step 3 – Initial Setup with onetime Secret

Open the URL http://192.168.0.32:8080 and it should display a prompt asking for a secret stored by the Jenkins process. This makes sure nobody else can go in, create the admin user, and take control of the Jenkins installation. So check the content of this initialAdminPassword file and paste it into the web form.
PS: Usually Jenkins listens on port 8080, which can be changed by modifying `HTTP_PORT` in the /etc/init.d/jenkins file.
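On a standard Ubuntu package install the secret lives under the Jenkins home directory; a quick way to read it (the path may differ if you changed JENKINS_HOME):

# Print the one-time admin secret to paste into the unlock form
sudo cat /var/lib/jenkins/secrets/initialAdminPassword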

unlock-jenkins

Once you have entered the initialAdminPassword secret, Jenkins asks whether you want to install plugins manually or the recommended set. Choose the recommended set for now; you can install further plugins later from the admin page.

Once Jenkins finishes installing the recommended plugins, it prompts you to create a new admin user, like below.

createfirstadminuser

 

Enter the details; Jenkins should accept them and show that setup is complete, like below.

jenkins-ready
welcome-jenkins

Thanks for reading. Feel free to post comments or feedback.

 

References –

Installing Jenkins on Ubuntu

 

HAProxy – Mysql cluster on Docker


In this tutorial I am going to set up an HAProxy-based cluster (layer 4) in Docker which load-balances across a set of MySQL nodes (also running on Docker).

Before I jump into how to get this done, I would like to explain a few important terms:

  • Docker – well, everybody knows this one. An excellent enabler of microservices architectures; nearly everybody is adopting it, whether Red Hat, Oracle, Microsoft, or Apple, and it can containerize workloads on nearly any platform.
  • HAProxy – stands for High Availability Proxy; it is an open-source TCP/HTTP load balancer and reverse proxy that runs on Linux, Solaris, and FreeBSD. The most common use case is to improve the performance and availability of servers by distributing the workload across multiple servers (web, application, database – yes, even databases). So why do we need HAProxy? Availability, reliability, stability – we have all the reasons to have it. You could also use NGINX for this, or a network-based balancer such as an F5 device.

A setup without load balancing would look more or less like the architecture below:

no-balancing

In the example above, users connect directly to the web server and there is no load balancing. If this single web server goes down, users no longer have access, eventually causing a service disruption. Additionally, if many users try to access your app/web server simultaneously and it cannot handle the load, they may get a slow experience or may not be able to connect at all.

Layer 4 Load Balancing

A simple way to load balance across multiple servers is layer 4 (transport layer) load balancing. The load balancer forwards user traffic based on IP/IP range and port (i.e. if a request comes in for http://somedomain.com/something, the traffic is forwarded to the backend that handles all requests for somedomain.com). It is not intelligent enough to inspect the content headers and redirect the user to a different set of servers for a specific subdirectory of the domain.

Layer 7 Load Balancing

Another, more flexible way to load balance network traffic is layer 7 (application layer) load balancing. Layer 7 allows HAProxy to forward requests to different backend servers based on the content of the user's request, which lets you run multiple web application servers under the same domain and port. An example setup would be a portal server in the backend serving only the portal frames, reached via "/portal", and another set of backend servers running the actual app, receiving traffic when the portal server invokes /app/someportlet.

Here is a diagram of a simple example of layer 7 load balancing:

load-balancing

  • Rsyslogd – something the HAProxy container needs to send its logs to. HAProxy doesn't write logs to physical files but relies on an rsyslogd daemon. Since rsyslogd is not installed in the official HAProxy image, we can use any other rsyslogd image and launch a container to receive the logs.
  • MySQL – of course, you need at least 2 MySQL nodes set up in master-master replication to have a replicated and fully available cluster. More details are in my previous post, Mysql Master-Master Replication setup on Docker.

So, what does it look like once we have built this HAProxy-based MySQL DB cluster? Here it is:

docker-containers-links

Let's start now:

Step 1 – Master-Master replicated Docker containers 

The full tutorial is in my previous article, Mysql Master-Master Replication setup on Docker; here are the quick steps.


#!/bin/bash
#title			: Launch 2 MYSQLnodes2
#description	        : This script creates 2 MySQL containers and launches them assuming the data /log/backup/conf.d are present in /opt2/mysql/<node_prefix><nodenumber> folders.
#author		 	: Avinash Barnwal
#date			: 22092016
#version		: 0.1
#usage			: bash Launch2MySqlnodes
#=============================================================================

DB_NAME=mydata
ROOT_PASS=roo235t
MYSQL_IMAGE='mysql:latest'
NODE_PREFIX=mysql

docker run --name ${NODE_PREFIX}1 \
       -e MYSQL_ROOT_PASSWORD=$ROOT_PASS \
       -e MYSQL_DATABASE=$DB_NAME -dit \
       -v /opt2/mysql/${NODE_PREFIX}1/conf.d:/etc/mysql/mysql.conf.d/ \
       -v /opt2/mysql/${NODE_PREFIX}1/data:/var/lib/mysql \
       -v /opt2/mysql/${NODE_PREFIX}1/log:/var/log/mysql \
       -v /opt2/mysql/${NODE_PREFIX}1/backup:/backup \
       -p 3306 \
       -h ${NODE_PREFIX}1 $MYSQL_IMAGE

NODE1_PORT=$(docker inspect --format='{{(index (index .NetworkSettings.Ports "3306/tcp") 0).HostPort}}' ${NODE_PREFIX}1)
 # https://docs.docker.com/engine/reference/commandline/inspect/

docker run --name ${NODE_PREFIX}2 \
       -e MYSQL_ROOT_PASSWORD=$ROOT_PASS \
       -e MYSQL_DATABASE=$DB_NAME -dit \
       --link ${NODE_PREFIX}1:${NODE_PREFIX}1cl \
       -v /opt2/mysql/${NODE_PREFIX}2/conf.d:/etc/mysql/mysql.conf.d/ \
       -v /opt2/mysql/${NODE_PREFIX}2/data:/var/lib/mysql \
       -v /opt2/mysql/${NODE_PREFIX}2/log:/var/log/mysql \
       -v /opt2/mysql/${NODE_PREFIX}2/backup:/backup \
       -p 3306 \
       -h ${NODE_PREFIX}2 $MYSQL_IMAGE

NODE2_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${NODE_PREFIX}2)
# This would add the second node's IP in the Host file of mysql first node.
docker exec -i ${NODE_PREFIX}1 sh -c "echo '$NODE2_IP ${NODE_PREFIX}2 ${NODE_PREFIX}2' >> /etc/hosts"

Step 2 – Launch the rSyslogD container.

I have taken an existing Docker image from Docker Hub: voxxit/rsyslog.

Pull the image and launch it:

docker pull voxxit/rsyslog
docker run --name haproxy-logger -dit -h haproxy-logger -v /opt2/mysql/logs:/var/log/ voxxit/rsyslog

Step 3 – Prepare HAProxy

I am going to use the official HAProxy Docker image from Docker Hub. As this image doesn't have a mysql client inside, we have to set it up as a layer 4 load balancer. If you are interested in setting it up as a layer 7 load balancer, you would need to build your own image from the official one with the MySQL client installed.

Here is the configuration I have used:

global
    log haproxy-logger local0 notice
    # user haproxy
    # group haproxy
defaults
    log global
    retries 2
    timeout connect 3000
    timeout server 5000
    timeout client 5000
listen mysql-cluster
    bind 0.0.0.0:3306
    mode tcp
    #option mysql-check user haproxy_check  (This is not needed as for Layer 4 balancing)
    option tcp-check
    balance roundrobin
    # The below nodes would be hit on 1:1 ratio. If you want it to be 1:2 then add 'weight 2' just after the line.
    server mysql1 mysql1:3306 check
    server mysql2 mysql2:3306 check
# Enable cluster status
listen mysql-clusterstats
    bind 0.0.0.0:8080
    mode http
    stats enable
    stats uri /
    stats realm Strictly\ Private
    stats auth status:keypas5

Step 4 – Launch HAProxy container

We are going to launch the HAProxy Docker image with all the Volume mounts

docker run --name mysql-cluster -dit \
    -h mysql-cluster \
    --link mysql1:mysql1cl  --link mysql2:mysql2cl \
    --link haproxy-logger:haproxy-loggercl \
    -v /opt2/mysql/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
    -p 33060:3306 -p 38080:8080 \
    haproxy:latest

# check the docker container status
# There should be 4 containers there
# mysql1, mysql2, mysql-cluster, haproxy-logger
docker ps -a
/opt2/mysql$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                              NAMES
f1343662ce3f        haproxy:latest      "/docker-entrypoint.s"   9 minutes ago       Up 9 minutes        0.0.0.0:33060->3306/tcp, 0.0.0.0:38080->8080/tcp   mysql-cluster
c005e3275745        voxxit/rsyslog      "rsyslogd -n"            10 minutes ago      Up 10 minutes       514/tcp, 514/udp                                   haproxy-logger
59ed9026944d        mysql:latest        "docker-entrypoint.sh"   25 hours ago        Up 21 hours         0.0.0.0:9010->3306/tcp                             mysql2
1bff0bc121bc        mysql:latest        "docker-entrypoint.sh"   25 hours ago        Up 23 hours         0.0.0.0:9004->3306/tcp                             mysql1

If everything is good, you should see all the Docker containers live and HAProxy listening on 33060 for MySQL traffic and 38080 for the status page.
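A quick way to confirm the stats endpoint is up, using the credentials defined in the haproxy.cfg above:

# The stats page is protected by the "stats auth" line in haproxy.cfg
curl -u status:keypas5 http://localhost:38080/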

Step 5 – MySQL client verification

If you are thinking of verifying the MySQL connections by passing passwords on the command line, that's a bad idea, because from MySQL 5.6 onward the client spits out a warning when a password is given as a command-line parameter: "Warning: Using a password on the command line interface can be insecure."

To keep your password secure and avoid such warnings, you can use the mysql_config_editor tool like below:


mysql_config_editor set --login-path=local --host=localhost --user=username --password

#Now you can use below
mysql --login-path=local -e "statement"
# old usage was
mysql -u username -p pass -e "statement"

Additionally, the MySQL client tries to connect over the Unix socket, which is not available on the Docker host, so you need to override that behavior with "--protocol tcp" and also provide the port: "-P 33060".

To test your MySQL cluster with 10 consecutive connections, you may use the script below:

$ mysql_config_editor set --login-path=local --host=localhost --user=root --password
# Password:
# Password Stored
$ for i in `seq 1 10`
  do
  mysql --login-path=local -P 33060 --protocol tcp -e "show variables like 'server_id'"
  done

Output would be like below : –

+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 101   |
+---------------+-------+
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 102   |
+---------------+-------+
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 101   |
+---------------+-------+
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 102   |
+---------------+-------+
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 101   |
+---------------+-------+
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 102   |
+---------------+-------+
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 101   |
+---------------+-------+
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 102   |
+---------------+-------+
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 101   |
+---------------+-------+
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 102   |
+---------------+-------+

This shows that HAProxy is running and balancing the traffic 1:1 between the two backend nodes.

If you need the load balancing to use a different ratio, you can add a 'weight 2' parameter in haproxy.cfg when defining the backend MySQL nodes.

Here is a snapshot of the status page. Note the LastChk column, which shows and stays at success as long as the node is UP.

haproxy-status

Let's take one of the containers down and see what happens.

Step a – Stop Container 


$ docker stop mysql2

Step b – Check status Page 

The Status page shows MySQL node2 has gone down .

HAProxy Status failed.png

Step c – Verify the MySQL client way 

for i in `seq 1 10`
do
mysql --login-path=local -P 33060 --protocol tcp -e "show variables like 'server_id'"
done
# These would all go to MySQL 1, showing server_id as 101

Bringing the mysql2 node back up allows HAProxy to load-balance between the nodes again. Remember that the mysql2 node will pick up all the changes from the mysql1 node, since the nodes are set up in master-master replication.

Hope you enjoyed!  Happy learning !

Any feedback, you are more than welcome.

Mysql Master-Master Replication setup on Docker

Ever wondered why Docker is so popular? Here is why: Docker makes it much easier to spin up nodes as required and then wire them all together in a quick, easy way.

In this tutorial I am going to set up master-master replication between two MySQL nodes, both of them running on Docker on Ubuntu 16 LTS.

Requirements:

  • A Docker setup. Refer to the Docker article for installing Docker.
  • docker pull mysql:latest from the Docker Hub repo.

Let's crack on.

Step 1  – Prepare the configurations / data folders 

The best thing about this Docker MySQL image is that you can supply your own data, logs, config, and passwords based on your requirements. So first of all we create the directory structure below for each node we want to spin up.

~/server#/backup – This will contain any backups and initialization scripts (we place an initdb.sql here later).

~/server#/data – The starting point from where the data is mounted/created; the data persists across restarts since it is a host-mounted volume.

~/server#/log – For storing any log files and persisting them.

~/server#/conf.d – For mounting custom configuration files.

Make sure the owner of the folders/files above is set to 999:999.
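For example, assuming the /opt2/mysql layout used in the docker run commands later in this post:

# Give ownership of the host-mounted directories to uid/gid 999
sudo chown -R 999:999 /opt2/mysql/server1 /opt2/mysql/server2

Now let's create the two configuration files, one per node. The content would be like below.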

~/server1/conf.d/server1.cnf

[mysqld]
 server-id = 101
 log_bin = /var/log/mysql/mysql-bin.log
 binlog_do_db = mydata
 bind-address = 0.0.0.0 # make sure to bind it to all IPs, else mysql listens on 127.0.0.1
 character_set_server = utf8
 collation_server = utf8_general_ci

[mysql]
 default_character_set = utf8

~/server1/backup/initdb.sql

use mysql;
create user 'replicator'@'%' identified by 'repl1234or';
grant replication slave on *.* to 'replicator'@'%';
# do note that the replicator permission cannot be granted on single database.
FLUSH PRIVILEGES;
SHOW MASTER STATUS;
SHOW VARIABLES LIKE 'server_id';

~/server2/conf.d/server2.cnf

[mysqld]
server-id = 102 # Remember this is only Integer per official documentation
log_bin = /var/log/mysql/mysql-bin.log
binlog_do_db = mydata
bind-address = 0.0.0.0 # make sure to bind it to all IPs, else mysql listens on 127.0.0.1
character_set_server = utf8
collation_server = utf8_general_ci
[mysql]
default_character_set = utf8

~/server2/backup/initdb.sql

use mysql;
create user 'replicator'@'%' identified by 'repl1234or';
grant replication slave on *.* to 'replicator'@'%';
# do note that the replicator permission cannot be granted on single database.
FLUSH PRIVILEGES;
SHOW MASTER STATUS;
SHOW VARIABLES LIKE 'server_id';

Step 2 – Launch the Nodes with the configurations

With the files above created, we are ready to create the containers with those configurations and data folders.

# Launch node1

docker run --name mysql1 -e MYSQL_ROOT_PASSWORD=mysql1pass -e MYSQL_DATABASE=mydata -dit -p 33061:3306 -v /opt2/mysql/server1/conf.d:/etc/mysql/mysql.conf.d/   -v /opt2/mysql/server1/data:/var/lib/mysql -v /opt2/mysql/server1/log:/var/log/mysql -v /opt2/mysql/server1/backup:/backup -h  mysql1 mysql

# Launch node2

docker run --name mysql2 --link mysql1 -e MYSQL_ROOT_PASSWORD=mysql2pass -e MYSQL_DATABASE=mydata -dit -p 33062:3306 -v /opt2/mysql/server2/conf.d:/etc/mysql/mysql.conf.d/   -v /opt2/mysql/server2/data:/var/lib/mysql -v /opt2/mysql/server2/log:/var/log/mysql -v /opt2/mysql/server2/backup:/backup -h  mysql2 mysql

Give the nodes some time to boot up and make their services available. Also note that we linked the mysql2 node to the mysql1 node at "docker run" time itself.

Step 3 – Link Node1 with node2 (unofficial way)

The link in the other direction is not officially possible, as I read in some articles and Stack Overflow posts, but I have found a workaround to link mysql1 to mysql2 over the docker0 interface. The key point is that Docker simply creates a hosts entry for the linked container, and we can achieve the same by modifying the hosts file inside the running container. Beware that Docker may change this IP if your container restarts.

So we find out the runtime IP of the mysql2 node and then create a hosts entry inside the mysql1 node pointing to the correct IP of mysql2. Here are the steps:

# find out IP Address of mysql2

mysql2ip=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' mysql2)

#Append the new IP as new host entry in mysql1's host file.

docker exec -i mysql1 sh -c "echo '$mysql2ip mysql2 mysql2' >> /etc/hosts"

# Check if the above command worked

docker exec -i mysql1 sh -c "cat /etc/hosts"

Here are steps to verify connectivity both ways.

docker exec -ti mysql2 sh -c "ping mysql1"
docker exec -ti mysql1 sh -c "ping mysql2"

ping-verify

Now that the nodes are up, it is time to set up the replication.

Step 4 – Initialize the nodes to create the replication users, check the master log file/position, and verify server_id

Node1 

Connect to node1  and run the /backup/initdb.sql

/opt2/mysql$ docker exec -ti mysql1 sh -c "mysql -uroot -p"
Enter password:
mysql> source /backup/initdb.sql
Database changed
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000003 |      154 | mydata       |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 101   |
+---------------+-------+
1 row in set (0.01 sec)

Node2

Connect to node2  and run the /backup/initdb.sql

/opt2/mysql$ docker exec -ti mysql2 sh -c "mysql -uroot -p"
Enter password:
mysql> source /backup/initdb.sql
Database changed
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000003 |      154 | mydata       |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 102   |
+---------------+-------+
1 row in set (0.01 sec)

Now both nodes show the same file name and position. Also note that the server-id displayed must be unique, which is why server1.cnf and server2.cnf used different server-id values.

Step 5 – Set up the replication source for both nodes.

Node2 .

/opt2/mysql$ docker exec -ti mysql2 sh -c "mysql -uroot -p"
Enter password:
mysql> stop slave;
mysql> CHANGE MASTER TO MASTER_HOST = 'mysql1', MASTER_USER = 'replicator',
    -> MASTER_PASSWORD = 'repl1234or', MASTER_LOG_FILE = 'mysql-bin.000003',
    -> MASTER_LOG_POS = 154;
mysql> start slave;
mysql> show slave status\G

Node1 .

/opt2/mysql$ docker exec -ti mysql1 sh -c "mysql -uroot -p"
Enter password:
mysql> stop slave;
mysql> CHANGE MASTER TO MASTER_HOST = 'mysql2', MASTER_USER = 'replicator',
    -> MASTER_PASSWORD = 'repl1234or', MASTER_LOG_FILE = 'mysql-bin.000003',
    -> MASTER_LOG_POS = 154;
mysql> start slave;
mysql> show slave status\G

Step 6 – Testing Master-Master Replication

We are going to test the replication. To do this, we will create a table in our mydata database on node 1 and check node 2 to see whether it gets reflected. Then we will remove the table from node 2, and ideally it should no longer show up on node 1.

Let's create a table on node 1:

use mydata;
create table students (`id` int, `name` varchar(20));

Now we check node 2 to see whether our table exists:

show tables in mydata;

We should see output similar to the following:

+------------------+
| Tables_in_mydata |
+------------------+
| students         |
+------------------+
1 row in set (0.00 sec)

The last test is to delete the table from node 2; it should also be deleted from node 1. We can do this by entering the following at the node 2 mysql prompt:

DROP TABLE students;

To confirm this, running the “show tables” command on node1 will show no tables:

Empty set (0.00 sec)

That's it! A completely working MySQL master-master replication setup on Docker.

Happy Reading !  Enjoy !

Some References

https://github.com/besnik/tutorials/tree/master/docker-mysql

http://stackoverflow.com/questions/17157721/getting-a-docker-containers-ip-address-from-the-host

Source code / configurations I used – https://github.com/vnextcoder/docker/tree/master/mysql

Generating Self Signed Certificates using Powershell

I have been working on making a bot use SSL certificates to encrypt the traffic to and from the bot when communicating with its clients. So far I have generated certs/CSRs using OpenSSL, but there are also a few utilities in PowerShell which do a very similar job.

 

The PowerShell cmdlets below are useful for handling jobs like generating CSRs, private keys, etc. A simple help *Certificate reveals a load of cmdlets available in the PKI module. In case you don't see this list of cmdlets, you may need to import the PKI module by running the command below:

PS  > Import-Module PKI

7bc8721fbe0944c5818a6970a3eeba00

 

We are going to use mainly the commands below:

New-SelfSignedCertificate
Export-PfxCertificate
Export-Certificate

 

  1. New-SelfSignedCertificate – generates a self-signed certificate along with a key. These are stored in the local certificate store on Windows.

2

Note down the thumbprint output by the command above; it is unique and is required when exporting with the Export-PfxCertificate cmdlet.

 

 

  2. Once you have created the certificate, it is ready to be exported to a PFX file. Note that PFX files contain both the private key and the certificate and therefore need password protection, so when you export the cert to PFX the cmdlet asks you for a SecureString password. Exporting the certificate in PFX format is a two-step process:

    # Create a secure string (use single quotes so the $ characters are not expanded)
    $CertPwd = ConvertTo-SecureString -String 'pa$$w0rd' -Force -AsPlainText

    # Export the PFX
    Export-PfxCertificate -Cert cert:\localMachine\my\25F6AF52512C99DF62A3AB1A4EF7308139F55714 -FilePath C:\temp\mycompany.pfx -Password $CertPwd

3

This creates a mycompany.pfx file, which can be used to host any SSL-based IIS site, or a chat server for example (my next blog post).

3. If you just want to export the certificate for importing on your client program/machine, you can export the certificate alone, without the private key:

export-certificate -cert Cert:\LocalMachine\My\25F6AF52512C99DF62A3AB1A4EF7308139F55714 -filepath C:\temp\mycompany.cer

4

Listing C:\temp for the cert and pfx shows

5

Happy Reading !