Terraform EKS module on AWS.

In one of my recent projects I had the opportunity to work with the EKS module using Terraform. It was quite fun to spin up an EKS cluster with Terraform in a matter of minutes. I am sharing my experience and code here, trimmed for sharing purposes.

Prerequisites to use this EKS module

  •    VPC
  •  Subnet IDs

GitHub link to access the code:

https://github.com/nshah14/tf-eks-module.git

How to Use this module
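A rough usage sketch, since the full walkthrough continues in the linked post; the variable names here (vpc_id, subnet_ids) are hypothetical, so check the repository's variables.tf for the actual inputs:

git clone https://github.com/nshah14/tf-eks-module.git
cd tf-eks-module
terraform init
# pass your existing VPC and subnet IDs (placeholder values shown)
terraform plan  -var 'vpc_id=vpc-0123456789abcdef0' -var 'subnet_ids=["subnet-aaa","subnet-bbb"]'
terraform apply -var 'vpc_id=vpc-0123456789abcdef0' -var 'subnet_ids=["subnet-aaa","subnet-bbb"]'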

Continue reading “Terraform EKS module on AWS.”

Linkerd for K8S Canary Deployment/Traffic splitter.

Reference: Jira ticket CTDS-234. The diagram below shows an upgrade from v1 to v2 while serving traffic.
Screenshot 2019-12-10 at 08.31.27

What is Canary deployment (in k8s)?

Canary (a.k.a. incremental rollout) is a deployment strategy in which the new version of the application is gradually deployed to the Kubernetes cluster while receiving a very small amount of live traffic (i.e. a subset of live users connect to the new version while the rest still use the previous version).

How to achieve canary deployment in Linkerd?

The answer to the above question is Flagger. Flagger is a Kubernetes operator that automates the promotion of canary deployments, using Istio, Linkerd, App Mesh, NGINX or Gloo routing for traffic shifting and Prometheus metrics for canary analysis. The canary analysis can be extended with webhooks for running system integration/acceptance tests, load tests, or any other custom validation. Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators like HTTP request success rate, average request duration and pod health. Based on the analysis of the KPIs, the canary is promoted or aborted, and the analysis result is published to Slack or MS Teams.

How to setup Flagger?

It is very simple, but it depends on the version of Kubernetes you are currently working with. At the time of documenting there were two Kubernetes cluster versions in play, 1.13 and 1.14; the project decided to use version 1.14 in the UAT and PROD environments, so to set up Flagger you just need to fire this command (needs kubectl version 1.14):
kubectl apply -k github.com/weaveworks/flagger//kustomize/linkerd
This installs Flagger in the linkerd namespace.
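Optionally, you can verify that the operator came up in that namespace:

kubectl -n linkerd rollout status deploy/flagger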
Example implementation (referenced from Flagger):-
Steps to follow:-

  1. Set up a namespace for the implementation >> kubectl create ns test (can be any namespace).
  2. Inject the Linkerd proxy into the newly created namespace >> kubectl annotate namespace test linkerd.io/inject=enabled
  3. This is optional, but it is good to have a horizontal pod autoscaler; refer to metrics-server for setting up the metrics server/Heapster.
    kubectl apply -k github.com/weaveworks/flagger//kustomize/podinfo
  4. Create a custom resource "canary" for the deployment object which needs a canary deployment. Please refer to the attached file and replace the parameters below (a sketch of the filled-in file follows the list):

Placeholder and its description:

  1. __NameOfYourChoice__ : Name of your canary object (i.e. podinfo).
  2. __NameOfYourNameSpace__ : Name of the namespace where the deployment lives and where the canary deployment will also live.
  3. __NameOfYourDeployment__ : Name of the target deployment (i.e. podinfo).
  4. __NameOfYourDeployment__ : Optional; name of the target deployment (i.e. podinfo).
  5. __ClusterIPPORTNumber__ : Port number of the ClusterIP service deployed.
  6. __PODPortNumber__ : Port number of the pod deployed underneath the service (optional).
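For reference, here is a minimal sketch of what the filled-in canary-podinfo.yaml could look like for the podinfo example. The analysis values and ports are assumptions based on Flagger's Linkerd documentation of that time, not this project's actual file, so adjust them to your application:

cat > ./canary-podinfo.yaml <<'EOF'
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo                    # __NameOfYourChoice__
  namespace: test                  # __NameOfYourNameSpace__
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo                  # __NameOfYourDeployment__
  autoscalerRef:                   # optional HPA reference
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898                     # __ClusterIPPORTNumber__
    targetPort: 9898               # __PODPortNumber__ (optional)
  canaryAnalysis:
    interval: 30s                  # how often the KPIs are checked
    threshold: 5                   # failed checks before rollback
    maxWeight: 50                  # max traffic routed to the canary
    stepWeight: 10                 # traffic increase per interval
    metrics:
    - name: request-success-rate   # built-in Linkerd/Prometheus metric
      threshold: 99
      interval: 1m
    - name: request-duration       # p99 latency in milliseconds
      threshold: 500
      interval: 1m
EOF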

It is good to have a test which can send requests and keep checking that the pod deployments are going well, though this is optional.
kubectl apply -f ./canary-podinfo.yaml
On execution of the above command a few objects will be applied and a few will be generated.

# applied
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
ingresses.extensions/__NameOfYourDeployment__
canary.flagger.app/__NameOfYourDeployment__
# generated
deployment.apps/__NameOfYourDeployment__-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/__NameOfYourDeployment__
service/__NameOfYourDeployment__-canary
service/__NameOfYourDeployment__-primary
trafficesplits.split.smi-spec.io/__NameOfYourDeployment__

Here is the tricky bit which actually sets up the canary deployment: after bootstrapping, the actual deployment is scaled down to zero and another (primary) deployment comes up and starts serving on this address. The canary deployment setup is now ready to cater for requests. Below is a link to a video which shows how a canary deployment happens for a sample app. The clarity is not at its best, but it gives some idea of the objects moving in the process. The video shows a canary deployment from version 3.1.1 to 3.1.2.
podinfo-primary-858fdd8d-grf7m.mp4
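To trigger and follow a rollout like the one in the video, the commands below are a reasonable starting point. They assume the podinfo example above; the container name podinfod and the stefanprodan/podinfo image come from Flagger's sample app, so substitute your own deployment, container and image:

kubectl -n test set image deployment/podinfo podinfod=stefanprodan/podinfo:3.1.2   # start the canary
kubectl -n test get canary podinfo -w                                              # watch the analysis progress
kubectl -n test get trafficsplit podinfo -o yaml                                   # see the backend weights shift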
Below is the traffic split resource which gets generated as the kubectl apply command above runs:

Name:         podinfo
Namespace:    test
Labels:
Annotations:
API Version:  split.smi-spec.io/v1alpha1
Kind:         TrafficSplit
Metadata:
  Creation Timestamp:  2019-12-08T19:06:15Z
  Generation:          67
  Owner References:
    API Version:           flagger.app/v1alpha3
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Canary
    Name:                  podinfo
    UID:                   a57bc070-4a23-42aa-9c35-1d556f8c97de
  Resource Version:        270255
  Self Link:               /apis/split.smi-spec.io/v1alpha1/namespaces/test/trafficsplits/podinfo
  UID:                     24bd60f7-336e-4ad9-8774-8413b8ef361f
Spec:
  Backends:
    Service:  podinfo-canary
    Weight:   0
    Service:  podinfo-primary
    Weight:   100
  Service:    podinfo
Events:

NOTE:- The traffic splitter can also be modified and created as a custom resource for traffic splitting, pointing to different services.

What is a TrafficSplit?

This resource allows users to incrementally direct percentages of traffic between various services. It is used by clients such as ingress controllers or service mesh sidecars to split the outgoing traffic to different destinations. For example, with two versions of a deployment, V1 and V2, you can deploy both versions and split traffic between them: V1 takes half the traffic and V2 takes the other half, or ratios of 10/90, 20/80, 30/70 and so on. When a specific deployment version is preferred, all the traffic can be routed to that version with 0/100. Sample file for the traffic splitter:-

apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: service
spec:
  service: service
  backends:
  - service: service-V1
    weight: 50
  - service: service-V2
    weight: 50
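Assuming the sample above is saved as trafficsplit.yaml (a file name chosen here for illustration), it can be applied and inspected like any other resource:

kubectl apply -f trafficsplit.yaml
kubectl describe trafficsplit service     # shows the current backend weights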
Output of the sample implementation:

Screenshot 2019-12-10 at 15.14.33

Challenges to be addressed while implementing:-

Database changes and their impact on the old version of the deployment.
Unlike blue/green deployments, canary releases are based on the following assumptions:
Multiple versions of your application can exist together at the same time, getting live traffic.
If you don't use some kind of sticky-session mechanism, some customers might hit a production server in one request and a canary server in another. Something like the user-agent can be used to identify the source of a request and point it to the respective server.

References

https://linkerd.io/2/tasks/canary-release/ for setup.
https://docs.flagger.app/usage/linkerd-progressive-delivery For Canary deployment using linkerd.
https://github.com/weaveworks/flagger Flagger code base.

Integrate Jira with Jenkins in continuous delivery Environment

In a continuous delivery/deployment environment it is not always 100% automation; only a few projects developed on the latest tech stacks manage to achieve that. Most projects based on a legacy tech stack still require manual intervention to deliver/release. Here I am sharing my experience from one such project, which had a legacy tech stack and needed manual intervention to deploy or release the end application. The tech stack used to achieve this automation involved Jira, Jenkins, Nexus, Ansible, VirtualBox/OpenStack cloud, and software developed and built using Java + Maven.

Here I won't be discussing how the artifact was built using Maven, uploaded to Nexus and deployed using an Ansible script; this article is about how the whole process was implemented using Jenkins and Jira, not the technical nitty-gritty of writing Ansible and Maven scripts. It is more about the setup of the Jira add-on, the Jenkins plugins and a bit of Jenkins pipeline scripting.

Technologies used

  1. Jenkins
  2. JIRA
  3. Git/Svn
  4. Nexus/Artifactory
  5. Confluence

Let's start with deploying a microservice developed as a Java REST service.

Jenkins 2.0 pipeline +Git+ Maven + Nexus Repository 3.12 with Rest-Java-app

In one of my recent pieces of work I demoed a sample Java REST-based application deployed using a Jenkins 2.0 pipeline. To achieve this I created a Java REST app and built it with Maven; after building, the artifact gets uploaded to the Nexus repository. So to start, let's create a Java REST service. Here is the sample code I used.

The first service is HelloWorldService.java:

package com.taa.rest;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/HelloWorld")
public class HelloWorldService {
    @GET
    public Response getMsg(@PathParam("param") String msg) {
        String output = "Hello world: from TAA";
        return Response.status(200).entity(output).build();
    }
}

The second service is ByeByeService.java:

package com.taa.rest;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/ByeWorld")
public class ByeByeService {
    @GET
    public Response getMsg(@PathParam("param") String msg) {
        String output = "Bye world: from TAA";
        return Response.status(200).entity(output).build();
    }
}
Once you have created the services we need to make sure the project builds. Since I created this project in Eclipse as a Maven Java project, it generates a default pom.xml where I need to fill in my dependencies and Nexus repository URLs. Here is the sample pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.naved.common</groupId>
    <artifactId>RESTfulExample</artifactId>
    <packaging>war</packaging>
    <version>1.5</version>
    <name>RESTfulExample Maven Webapp</name>
    <!-- <repositories>
        <repository>
            <id>maven2-repository.java.net</id>
            <name>Java.net Repository for Maven</name>
            <layout>default</layout>
        </repository>
    </repositories> -->
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.8.2</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>com.sun.jersey</groupId>
            <artifactId>jersey-server</artifactId>
            <version>1.19.4</version>
        </dependency>
        <dependency>
            <groupId>com.sun.jersey</groupId>
            <artifactId>jersey-bundle</artifactId>
            <version>1.19.4</version>
        </dependency>
        <dependency>
            <groupId>com.sun.jersey</groupId>
            <artifactId>jersey-core</artifactId>
            <version>1.19.4</version>
        </dependency>
    </dependencies>
    <distributionManagement>
        <repository>
            <id>releases</id>
            <name>Releases Repository</name>
            <url>
            http://nexusRepo:8081/repository/maven-releases/
            </url>
        </repository>
        <snapshotRepository>
            <id>snapshots</id>
            <name>Snapshot Repository</name>
            <url>
            http://nexusRepo:8081/repository/maven-snapshots/
            </url>
        </snapshotRepository>
    </distributionManagement>
    <build>
        <finalName>RESTfulExample</finalName>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.7.0</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
            <plugin>
             <artifactId>maven-clean-plugin</artifactId>
                <version>3.1.0</version>
                <executions>
                    <execution>
                        <id>auto-clean</id>
                        <phase>initialize</phase>
                        <goals>
                        <goal>clean</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-release-plugin</artifactId>
                <version>2.5.3</version>
                <configuration>
                    <branchName>master</branchName>
                    <pushChanges>false</pushChanges>
                    <localCheckout>true</localCheckout>
                    <checkModificationExcludes>
                        <checkModificationExclude>pom.xml</checkModificationExclude>
                        <checkModificationExclude>**</checkModificationExclude>
                    </checkModificationExcludes>
                </configuration>
            </plugin>
        </plugins>
    </build>
    <scm>
        <tag>HEAD</tag>
    </scm>
</project>
You can log in through the console/command line and run "mvn clean install" to see it building the required artifact in the target directory. Since the app now builds successfully, we need to make it part of a CD pipeline so any change is built automatically.
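For example, from the project root:

mvn clean install
ls target/RESTfulExample.war     # finalName from the pom above, packaged as a war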
You need to create a Jenkinsfile (the name is your choice; I prefer JenkinsFile). Here is a sample JenkinsFile with each step explained.
pipeline {
Choose an agent
agent { label 'build' }
Set up the environment (it is optional; ignore the parts not relevant at this stage).
environment {
       IMAGE = readMavenPom().getArtifactId()
       VERSION = readMavenPom().getVersion()
       BUILD_RELEASE_VERSION = readMavenPom().getVersion().replace("-SNAPSHOT", ".1.1")
       IS_SNAPSHOT = readMavenPom().getVersion().endsWith("-SNAPSHOT")
       GIT_TAG_COMMIT = sh(script: 'git describe --tags --always', returnStdout: true).trim()
       NEW_VERSION = readMavenPom().getVersion()
}
Set up tools (depends on whether you want a specific version installed during the Jenkins kick-off).
tools {
    jdk 'jdk'
    maven 'maven 3.5.3'
}
The pipeline contains multiple stages, which gives you the benefit of narrowing down a failure.

stages{

I have called the first stage "Initialize"; it simply prints the PATH and M2_HOME of the tools already installed on the agent.
stage ('Initialize') {
   steps {
             sh '''
              echo "PATH = ${PATH}"
              echo "M2_HOME = ${M2_HOME}"
             '''
    }
}
After initialising, it's time to build:
stage ('Build') {
steps {
   // some statements to print values
      echo 'This is a minimal pipeline.'
      echo "Project version is ${VERSION}"
      echo "Artifact id is ${IMAGE}"
      echo "Build release version is ${BUILD_RELEASE_VERSION}"
      echo "Is it a snapshot? ${IS_SNAPSHOT}"
      echo "GIT_TAG_COMMIT is ${GIT_TAG_COMMIT}"
  // actual build command, also setting the version to build as
sh '''
    mvn versions:set -DnewVersion=1.0.2
    mvn clean install
'''
script {
      // note: environment {} cannot be nested inside script {}; set env vars directly instead
      env.NEW_VERSION = readMavenPom().getVersion()
      echo "Project new version is ${env.NEW_VERSION}"
         }
    }
}
Deploy the artifact into the Nexus repository. Here I have a few more steps which make the deployment a manual click of a button on the pipeline, so the user decides whether to deploy the artifact into the Nexus repository.
If you just want to deploy, remove everything from the steps except "mvn deploy".
stage('Deploy To Integration') {
         when { tag "release-*" }
             steps{
                       echo 'start'
                       script{
                       def userWantToKeepCluster = {
                            try {
                                    timeout(time: 1, unit: 'MINUTES') {
                                    def keep = input message: 'Deploy artifact ?',
                                      parameters: [booleanParam(defaultValue: true, description: 'Make sure to destroy the cluster manually after you are done', name: 'Deploy artifact')]
                                     return keep
                                 }
                                 } catch(e) {
                                     echo "Build Failed ::: User aborted build or it timed out"
                                      throw e
                                 }
                          }
                      def answer = userWantToKeepCluster()
                      echo "will keep cluster? $answer"
                       if(answer)
                       {
                        echo "answer is $answer"
                        sh '''
                               mvn deploy
                             '''
                      }
                     else{
                              echo "Build is aborted, the user does not want to deploy the artifact"
                           }
                     }
                 echo 'done'
              }
}
Close pipeline
  }
}
Here you can find the code
Now we need to configure the Jenkins pipeline.
Jenkins_pipelin_configure
Now the pipeline is ready for execution when you click Build.
PipelineView
For any issue, check the console output. Please feel free to raise any queries you have.

(Angular2+Nodejs) App+Docker+Jenkins2(Declarative pipeline)+Nexus OSS 3.12

Here is my guide to a quick setup of a microservice app developed using Angular2 + Node.js and running on the continuous integration build tool Jenkins, which interacts with Nexus to upload and download Docker images.

The first step is to set up a Docker registry on the Nexus repository. Follow the screenshots.

1) Log in to your Nexus setup and go to settings.

create_repo

 

2) When you click on create repository, it will open the window below with lots of options; please choose Docker (hosted).
select_repo_type

3) Follow the screenshots below to create your repository (note: you can choose whichever port you like, provided it is open, accessible and available).

1_create_repository

2_create_repository

3_create_repostiory

4) Docker repository is ready for use.

5) A challenge I faced was getting access to the Docker registry from my Docker setup, so I had to add it as a trusted registry. My Docker version didn't have the option of modifying the JSON file, so I had to do it via the Docker service. Here are the steps:-

1 vi /lib/systemd/system/docker.service

docker_service_with_custom_docker_registry
2 systemctl daemon-reload
3 systemctl restart docker
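As a sketch of what that edit amounts to (the Nexus host nexus.example.com and the Docker connector port 8082 below are placeholders for whatever you configured above):

# the ExecStart line in /lib/systemd/system/docker.service ends up looking like:
#   ExecStart=/usr/bin/dockerd --insecure-registry nexus.example.com:8082
# after the daemon-reload and restart, confirm Docker picked the registry up:
docker info | grep -A 2 'Insecure Registries'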

You can now access it from your Jenkins 2 Docker code.
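For instance, from a shell or a Jenkins pipeline sh step (again, the registry host/port and image name are placeholders):

docker login nexus.example.com:8082
docker build -t nexus.example.com:8082/angular-node-app:1.0 .
docker push nexus.example.com:8082/angular-node-app:1.0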

More to come…

Kubernetes cluster on vagrant Centos7 box (WIP).

Here are the basic steps to set up your local Kubernetes cluster on Vagrant CentOS boxes.

Prerequisite :-

  1.   Virtual Box 5.0+
  2.   vagrant (https://www.vagrantup.com/docs/installation/)

I used a Windows machine for setting this up; you can use any other OS.

  1. Create a new directory in the workspace you want to use; I am naming it vagrant. Now we have to create a Vagrantfile. I am keeping it simple so it is easy to understand: it defines the master and all its nodes, along with a developer (executor) machine to compile and run Docker images/containers.

Vagrant.configure("2") do |config|
  config.vm.define "executor", primary: true do |executor|
    executor.vm.box = "centos/7"
    executor.vm.hostname = "executor"
    executor.vm.network :private_network, ip: "192.168.2.103"
    executor.vm.provider :virtualbox do |v|
      v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      v.customize ["modifyvm", :id, "--memory", 512]
      v.customize ["modifyvm", :id, "--name", "executor"]
    end
  end

  config.vm.define "kube-master", primary: true do |master|
    master.vm.box = "centos/7"
    master.vm.hostname = "kube-master"
    master.vm.network :private_network, ip: "192.168.2.100"
    master.vm.provider :virtualbox do |v|
      v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      v.customize ["modifyvm", :id, "--memory", 512]
      v.customize ["modifyvm", :id, "--name", "kube-master"]
    end
  end

  config.vm.define "kube-node1", primary: true do |node001|
    node001.vm.box = "centos/7"
    node001.vm.hostname = "kube-node1"
    node001.vm.network :private_network, ip: "192.168.2.101"
    node001.vm.provider :virtualbox do |v|
      v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      v.customize ["modifyvm", :id, "--memory", 512]
      v.customize ["modifyvm", :id, "--name", "kube-node1"]
    end
  end

  config.vm.define "kube-node2", primary: true do |node002|
    node002.vm.box = "centos/7"
    node002.vm.hostname = "kube-node2"
    node002.vm.network :private_network, ip: "192.168.2.102"
    node002.vm.provider :virtualbox do |v|
      v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      v.customize ["modifyvm", :id, "--memory", 512]
      v.customize ["modifyvm", :id, "--name", "kube-node2"]
    end
  end
end

Here we are using two worker nodes and one master node; the master node does not run any pods or services, it is just the manager of the nodes and everything deployed on them. Run the command vagrant up.

2.  As those machines come up you can check their status with vagrant status (see the commands below).
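As a quick recap of the Vagrant commands used so far (box names come from the Vagrantfile above):

vagrant up                 # boots executor, kube-master, kube-node1 and kube-node2
vagrant status             # all four boxes should report "running"
vagrant ssh kube-master    # log in to a box; repeat per node for the common steps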

3.  Below are the common steps for each node; use vagrant ssh <node-name> (e.g. kube-master) to log in.

  • yum update -y
  • systemctl disable firewalld
  • vi /etc/sysconfig/selinux (set SELINUX=disabled)
  • yum remove chrony -y
  • yum install ntp -y
  • systemctl enable ntpd.service
  • systemctl start ntpd.service
  • vi /etc/hosts (add master and node names and ips)
  • 192.168.2.100 kube-master
  • 192.168.2.101 kube-node1
  • 192.168.2.102 kube-node2
  • cat /etc/hosts
  • ping master
  • ping node001
  • ping node002
  • vi /etc/yum.repos.d/virt7-docker-common-release.repo
  • [virt7-docker-common-release]
    • name=virt7-docker-common-release
    • baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os
    • gpgcheck=0
  •  yum install -y --enablerepo=virt7-docker-common-release kubernetes etcd flannel
  • vi /etc/kubernetes/config
    • KUBE_LOGTOSTDERR="--logtostderr=true"
    • KUBE_LOG_LEVEL="--v=0"
    • KUBE_ALLOW_PRIV="--allow-privileged=false"
    • KUBE_MASTER="--master=http://master:8080"

4.   Perform the steps below to set up the master.

 

  • [master] vi /etc/kubernetes/apiserver
    # The address on the local server to listen to.
    • KUBE_API_ADDRESS="--address=0.0.0.0"
    • # The port on the local server to listen on.
    • KUBE_API_PORT="--port=8080"
    • # Port minions listen on
    • KUBELET_PORT="--kubelet-port=10250"
    • # Comma separated list of nodes in the etcd cluster
    • KUBE_ETCD_SERVERS="--etcd-servers=http://master:2379"
    • # Address range to use for services
    • KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    • # default admission control policies
    • KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
    • # Add your own!
    • KUBE_API_ARGS=""

 

  • systemctl start etcd
  • etcdctl mkdir /kube-centos/network
  • etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": {\"Type\": \"vxlan\" } }"
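You can check that the flannel network configuration landed in etcd before moving on:

etcdctl get /kube-centos/network/config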

5. Set up the nodes and the master for flannel; flannel is used for communication within the cluster.

  • vi /etc/sysconfig/flanneld
    • # Flanneld configuration options
      # etcd url location. Point this to the server where etcd runs
    • FLANNEL_ETCD_ENDPOINTS="http://master:2379"
      # etcd config key. This is the configuration key that flannel queries
    • # For address range assignment
    • FLANNEL_ETCD_PREFIX="/kube-centos/network"
      # Any additional options that you want to pass
    • FLANNEL_OPTIONS="--iface=eth1"

6.  Set up the kubelet on each node.

  • vi /etc/kubernetes/kubelet
    • ### # kubernetes kubelet (minion) config
    • # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    • KUBELET_ADDRESS="--address=0.0.0.0"
      # The port for the info server to serve on
    • # KUBELET_PORT="--port=10250"
      # You may leave this blank to use the actual hostname
    • KUBELET_HOSTNAME="--hostname-override=node-name"
      # location of the api-server
    • KUBELET_API_SERVER="--api-servers=http://master:8080"
      # pod infrastructure container
    • #KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
      # Add your own!
    • KUBELET_ARGS=""

7.  Kick off all the services on the master with the command below:

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do systemctl restart $SERVICES; systemctl enable $SERVICES; systemctl status $SERVICES; done

8.  Kick off all the services on the nodes:

for SERVICES in kube-proxy kubelet flanneld docker; do systemctl restart $SERVICES; systemctl enable $SERVICES; systemctl status $SERVICES; done

9.  Set up the node cluster metadata (kubectl config):

  1. kubectl config set-cluster default-cluster --server=http://master:8080
  2. kubectl config set-context default-context --cluster=default-cluster --user=default-admin
  3. kubectl config use-context default-context

10.  Set up IP forwarding to communicate with the docker0 subnet:

echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/docker_network.conf

sysctl -p /etc/sysctl.d/docker_network.conf

11. To access the pods on the nodes from the master, run this command:

iptables -P FORWARD ACCEPT
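Finally, from the master you can do a quick sanity check that the workers registered with the API server:

kubectl get nodes      # kube-node1 and kube-node2 should eventually report Ready
kubectl cluster-info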

 

Mask userid and password in Jenkins.

How to hide the svnuser and password credentials from the console output.

  • Go to  Jenkins > credentials > system > Global credentials
  • Click on Add Credentials on left menu

1

  • Enter values in the Username and Password fields (they might appear pre-filled, but you can clear them and enter your own values) and save.
  • Click OK.
  • Now go to your Jenkins job.
  • Go to Build Environment >> select "Use secret text(s) or file(s)".
  • A Bindings section will appear; select "Username and password (separated)".

2

  • Enter your username variable and password variable.
  • Select your credentials for these variables from the credentials drop-down.
  • Now go to the Build section >> Advanced.
  • And set it up as below.

3

  • Click Apply and Save

Go to your Gradle code and modify it to retrieve the parameters like this:

      println USER_E_NAME
      svnuser = USER_E_NAME
      svnpassword = USER_E_PASSWORD
      println USER_P_PASSWORD

and in the console output the printed values will be masked:

****
****
In SvnInfo(svnuser, svnpassword, repoUrlForInfo)

You are good to go now.