admin

Nov 12 2019
 

To build a solution from the command line we need the executable MSBuild.exe. Normally it's part of the PATH environment variable, so open a command window, type MSBuild.exe, and hit Enter. If you see an error saying that the command is not found, it means that it's not part of the environment variable.

Normally it's located at this path, C:\Program Files (x86)\MSBuild\14.0\Bin, so either open that path and run the command from there, or add the directory to the environment variables.

After that it's really easy to run the build; all you have to do is run the command

 

MSBuild.exe solutionname.sln

 

This command will build the solution. It can come in really handy if you plan to make your own power tools.
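The build step can be sketched end to end as a dry run. The run() helper below is my own addition: it only prints each command so the sequence can be reviewed on any machine; remove the echo to execute for real on a box where MSBuild.exe is on the PATH.

```shell
#!/bin/sh
# Dry-run helper: print the command instead of executing it.
run() { echo "+ $*"; }

# On Windows, if MSBuild.exe is not found, first add its folder to PATH:
#   set PATH=%PATH%;C:\Program Files (x86)\MSBuild\14.0\Bin
# Then build the solution (solutionname.sln is a placeholder):
run MSBuild.exe solutionname.sln
```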

 Posted by at 10:31 pm
Nov 12 2019
 

Let's start by thinking about what we want to do here. There could be a requirement to log all the web calls a user makes, so that we can keep an audit trail.

Audit trails are good for keeping accountability in the application, so that problem and breach detection is easy and trackable.

If you want to add an audit trail at the request level, it's really easy. You can write a filter that intercepts each web request and logs the action. You can either log the action name directly, or decorate each action and use that decoration for logging.

Let's start by writing some code. First we need to write that attribute, or filter.

[screenshot: audit1 (the LogThisAttribute filter class)]

If you notice, this is a very basic class deriving from ActionFilterAttribute. By doing that we also get access to the overridable methods of ActionFilterAttribute, one of which is OnActionExecuted. This method is invoked when the action method has finished executing.

This method will be called on each web request and you can log/validate the action call here.

Let's say there are some actions that you don't want to log, like API calls. In those cases you can write exclusion classes, which are simple action attributes.

[screenshot: audit2 (the DoNotLogThis attribute)]

To use this we will go to our LogThisAttribute and add an exclusion check, so that we can skip logging if the action method is decorated with the DoNotLogThis attribute.

[screenshot: audit3 (LogThisAttribute with the exclusion check)]

Now that we have all the building blocks set up, let's start using them.

I am going to use it in my basic ASP.NET Core MVC web application. To use it I need to configure it in the Startup.cs class.

[screenshot: audit4 (registering the filter in Startup.cs)]

If you notice, on line 38 I added the filter to the global filters, so it will be called on every action call. This is added in the MVC middleware options.

NOTE: In the OnActionExecuted method, one could say: I only want to log on successful execution, otherwise I want to skip the logging. There is no direct way of handling that, except to throw a validation exception indicating that the method execution was unsuccessful. You then have to process that exception in the context and make decisions based on it.
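Since the screenshots (audit1 to audit4) may not render here, below is a minimal sketch of how the pieces could fit together. It is my own reconstruction under assumptions, not the exact code from the screenshots; only the names LogThisAttribute, DoNotLogThis, and OnActionExecuted come from the text above.

```csharp
using System;
using System.Linq;
using Microsoft.AspNetCore.Mvc.Controllers;
using Microsoft.AspNetCore.Mvc.Filters;

// Marker attribute: decorate any action you do NOT want to audit.
[AttributeUsage(AttributeTargets.Method)]
public class DoNotLogThisAttribute : Attribute { }

// Audit filter: runs after every action and logs the call,
// unless the action carries [DoNotLogThis].
public class LogThisAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext context)
    {
        var descriptor = context.ActionDescriptor as ControllerActionDescriptor;

        // Exclusion check: skip any action decorated with DoNotLogThis.
        bool excluded = descriptor != null && descriptor.MethodInfo
            .GetCustomAttributes(typeof(DoNotLogThisAttribute), true).Any();

        if (!excluded)
        {
            // Replace Console.WriteLine with your real audit sink (db, file, ...).
            Console.WriteLine($"Audit: {context.ActionDescriptor.DisplayName}");
        }

        base.OnActionExecuted(context);
    }
}

// Registration in Startup.cs, ConfigureServices, so it runs globally:
// services.AddMvc(options => options.Filters.Add(new LogThisAttribute()));
```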

You can find the code at

https://github.com/alineutron/Lab/tree/master/dotnet/HttpLogIntercepter

 

 Posted by at 10:28 pm
Nov 12 2019
 

Azure Resource Graph search is a search that you can do from the Azure CLI. Normally you can't just do it directly; you have to install the Azure Resource Graph extension first. To do that, open the CLI and add the extension.

>az extension add --name resource-graph

This will add the resource-graph extension to the Azure CLI.

You can view the list of installed extensions with this command.

>az extension list

After that you are ready to take advantage of graph queries. There are many commands you can run; you can find them by running

>az graph query -h

You can view what's under a certain subscription by running this command:

  • az graph query -q "project id" -s "subscriptionid"

This will give you a JSON representation of the output. If you want a more readable output, you need to pass the table output parameter.

  • az graph query -q "project id, name" -o table

If you want to add another operator to the query, you have to pipe it. The easiest example is to sort the output, so run this command:

  • az graph query -q "project id, name | order by name" -o table
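Putting the commands above together, here is a dry-run sketch of the whole session. The run() helper is my own addition and only prints each command; drop the echo to execute them against a real Azure CLI login.

```shell
#!/bin/sh
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

run az extension add --name resource-graph     # install the extension
run az extension list                          # confirm it is installed
run az graph query -q '"project id"' -s '"subscriptionid"'
run az graph query -q '"project id, name"' -o table
run az graph query -q '"project id, name | order by name"' -o table
```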
 Posted by at 10:20 pm
Nov 12 2019
 

Most of the configuration that we do on the Azure portal can also be generated in the form of ARM templates. Azure Resource Manager (ARM) templates dictate how the resources will be provisioned on Azure.

In this blog I will follow a very basic guide to creating an ARM template. This guide consists of 6 different steps that I will follow.

 

This command is used to validate the ARM template:

az group deployment validate --resource-group videolunch --template-file .\template.json --parameters .\parameters.json

These steps are used to make an ARM template:

Step 1: use git

Step 2: validate and then commit

Step 3: reduce the number of parameters. Remove them from the parameters JSON and add variables there instead

Step 4: use unique strings

Step 5: use variables. Use variables instead of constants

Step 6: use t-shirt sizes or smart options. Use readable words
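The validate-and-commit loop (step 2) can be sketched as a dry run. run() only prints the commands; the resource group videolunch and the file names come from the example above, and az group deployment create is the matching deploy command (my assumption that you deploy the same files).

```shell
#!/bin/sh
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

# Validate the template against the resource group before committing:
run az group deployment validate --resource-group videolunch \
    --template-file template.json --parameters parameters.json

# If validation passes, the same inputs can be deployed:
run az group deployment create --resource-group videolunch \
    --template-file template.json --parameters parameters.json
```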

Creating a template: you can use the template manager in the Azure portal, or in Visual Studio create a cloud project and then an ARM template. The template contains a list of parameters, variables, resources, and outputs.

 

Template format

In its simplest structure, a template has the following elements:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "apiProfile": "",
  "parameters": {},
  "variables": {},
  "functions": [],
  "resources": [],
  "outputs": {}
}

$schema (required): Location of the JSON schema file that describes the version of the template language.

For resource group deployments, use: https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#

For subscription deployments, use: https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#

contentVersion (required): Version of the template (such as 1.0.0.0). You can provide any value for this element. Use this value to document significant changes in your template. When deploying resources using the template, this value can be used to make sure that the right template is being used.

apiProfile (optional): An API version that serves as a collection of API versions for resource types. Use this value to avoid having to specify API versions for each resource in the template. When you specify an API profile version and don't specify an API version for the resource type, Resource Manager uses the API version for that resource type that is defined in the profile.

The API profile property is especially helpful when deploying a template to different environments, such as Azure Stack and global Azure. Use the API profile version to make sure your template automatically uses versions that are supported in both environments. For a list of the current API profile versions and the resource API versions defined in the profile, see API Profile.

For more information, see Track versions using API profiles.

parameters (optional): Values that are provided when deployment is executed to customize resource deployment.

variables (optional): Values that are used as JSON fragments in the template to simplify template language expressions.

functions (optional): User-defined functions that are available within the template.

resources (required): Resource types that are deployed or updated in a resource group or subscription.

outputs (optional): Values that are returned after deployment.

Each element has properties you can set. The official documentation describes the sections of the template in greater detail.

 Posted by at 10:19 pm
Nov 12 2019
 

Below are some of the useful Azure commands that I use. I will not explain them fully; I will just briefly mention their output.

Login:

First of all you need to connect to Azure, and to do that you need to run the command az login. When you run the command you will be prompted with this message:

az login

[screenshot: azlogin (the az login prompt)]

You will need to open a browser to authenticate yourself. Once you do that, the output in PowerShell will change to show the list of all the subscriptions you are allowed to see.

List subscriptions

Let's list all the subscriptions again. Run the following command to do that.

az account list

You will see the list of all the subscriptions you are allowed to see

 

Show active subscription

Now you need to verify which subscription you are on. If you have only one subscription, you don't have to worry about this. But if you have a dev and a production subscription, it's always good to make sure which one is active. The following command shows the active (default) subscription.

az account show

 

Change the subscription

All the commands that you run go against the active subscription. If you have more than one subscription, the following command can be used to switch.

az account set -s subscription_id

These are the basic commands used to connect to Azure and get yourself started. After that, pretty much everything that you do on the portal can be automated and written as commands in PowerShell.
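The whole getting-started sequence, collected as a dry run (the run() helper is my own addition and just prints each command; subscription_id is a placeholder):

```shell
#!/bin/sh
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

run az login                          # opens a browser prompt to authenticate
run az account list                   # list all subscriptions you can see
run az account show                   # show the active (default) subscription
run az account set -s subscription_id # switch the active subscription
```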

 Posted by at 10:17 pm
Nov 12 2019
 

This part of the build chain in DevOps was a huge riddle for me; I never quite got my head around it. But now that I have some idea, I will try to tell the story in a top-down approach.

So in a development process we start with the code on the developer machine. In order to share this code with the other developers we use code repositories, normally online/central repositories which all the developers can access. We can achieve this by using Git or VSTS or any other code repository.

Once the code is pushed to an online repository, it not only serves the purpose of sharing the code, it is also used to build the code. After a successful build, the application artifacts are generated and then published to a release. These artifacts could be deployable files or executables.

The release-ready artifacts are then directed towards the installation platform, which is normally a service on a machine, and the artifacts are deployed there. In our case this will be a web app in Azure cloud services.

So to achieve this process, I first created the 'code on the developer machine' part: I created a helloworld website in .NET Core. For my example I will use GitHub as the online repository; you can also use the Git repos available in VSTS, which can be used by VSTS directly.

  • GitHub: a website which uses git. Azure DevOps provides integration with GitHub, and in Visual Studio you can also use Git.
  • VSTS Git: this uses the Azure DevOps git; once you push the code here, all the code will be pushed to DevOps.

I am using GitHub to host my code, so I will push all my code to my GitHub repository. I also added a .gitignore file of type Visual Studio to the repository, as I don't want to push unwanted files. Finally my code is now on GitHub. But how can I introduce an end-to-end process where every code push re-deploys my website?

So this is the process that I followed.

First of all you need to create a new organization; I have already done that with the name Asadmirza0855. Once that is done you need to open it. To open an organization on VSTS, just click it and you will be in that context.

 

[screenshot: cicd4]

Once you open that organization, we need to create a new project in it.

[screenshot: cicd2]

You can see that you can create a public or private project by clicking 'new project' in the top left corner. I have already created a project called CICD3.

Now let's set up the build process. To do that, go to the Builds menu under Pipelines, and once you are there, hit 'New'. You will be able to create the build process here. This is a four-step process.

Step 1: Where is your code? In our case we will select GitHub. It's going to authenticate you on GitHub and then fetch all the repos from there.

[screenshot: cicd3]

Step 2: After that, you need to select which repo you want to build.

[screenshot: cicd4]

Step 3: In this step you configure the type of application you are trying to build. VSTS provides some predefined templates, so I will use those. In my case I will use ASP.NET Core.

[screenshot: cicd5]

In the review it is going to generate the YAML file that will be used for the build. In my case my YAML file looks like this.

[screenshot: cicd6 (the YAML build file)]

This is not the standard generated file; I have modified it a bit. There are two main tasks here: one to build the project, which generates the artifact, and the other to publish that artifact. You can also use variables in the YAML file. Now push the YAML file, and once it is pushed, the build will be triggered.
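My modified file is only visible in the screenshot, so as a reference, a hypothetical azure-pipelines.yml with the two tasks described (build the project into an artifact, then publish that artifact) could look like this. The task names are standard Azure Pipelines tasks, not necessarily my exact file:

```yaml
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
  buildConfiguration: 'Release'

steps:
# Task 1: build/publish the project output into the staging directory.
- task: DotNetCoreCLI@2
  displayName: 'Build the project'
  inputs:
    command: 'publish'
    arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'

# Task 2: publish the staged output as a pipeline artifact.
- task: PublishBuildArtifacts@1
  displayName: 'Publish the artifact'
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: 'drop'
```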

The next step is to create a release pipeline. But first let's create the Web App in the Azure portal. I will not explain how to create a web app here, so I assume you know that. Once the web app is created, we will create a release pipeline using 'Azure App Service deployment', which is a service template.

  1. Click New under Releases and select Azure App Service Deployment.

[screenshot: cicd7]

I have selected my project CICD3; then I need to select the latest version, so it knows which version to pick, and the source alias.

  2. You also need to select a trigger. I have selected the very basic trigger, that is, to release after every new build. You also need to update the task and deployment parameters.

[screenshot: cicd8]

  3. In the default stage, when you click on the task you will be presented with this page:

 

[screenshot: cicd9]

[screenshot: cicd10]

On this page you need to provide the web app name and also the Azure subscription under which that web app exists.

You can also view this configuration as a YAML file.

 

Once all of this setup is done, you can trigger the build using a fake push. This will perform the following steps.

[screenshot: cicd11]

 

 

 

 

 Posted by at 8:34 am
Nov 11 2019
 

OK, so this is something that I was not really aware of how to do. The assignment was to read data from Log Analytics in Azure and show it in one of our applications using C#.

The challenge was to understand how to do it. So I started searching Google and I found these very interesting links.

https://dev.loganalytics.io/documentation/Tools/CSharp-Sdk

This link mentions some SDKs, but they are limited due to limitations in the OpenAPI Specification. I was going to try it out to see if it works or not, but I couldn't quite understand the domain in this part. Maybe it's super easy and obvious, but as I said I am not fully aware, so I think I will skip that.

The other way that I found is mentioned on this link

https://docs.microsoft.com/en-us/rest/api/loganalytics/query/get

This is the more direct way of accessing it, but it is geared more towards access via PowerShell or Python. There is still no way mentioned here to access it via C#, so I found these two links:

https://stackoverflow.com/questions/53915236/querying-azure-log-analytics-from-c-sharp-application

https://blogs.technet.microsoft.com/livedevopsinjapan/2017/08/23/log-analytics-log-search-rest-api-for-c/

From the second link I was able to get the bearer token, and from the first link I checked how to use the API call.

Initially I was still getting a 403 Forbidden, so what I did was create a new app registration and allow that app access to Log Analytics. I did another run and it still didn't work. Then I realized that my Log Analytics workspace should also allow that app to access the logs, so I had to add a role assignment in the workspace for that app. I did that, and after that I got a 404 because the log table that I accessed doesn't exist; so I used another table named 'Usage' in the query and it worked fine.
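For reference, a minimal sketch of the resulting call could look like this. The token acquisition is omitted, the endpoint is the public Log Analytics query API, and the placeholder values are mine, not from my actual setup:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class LogAnalyticsQuery
{
    static async Task Main()
    {
        // Placeholders: obtain the bearer token for your AAD app registration
        // (see the second link above) and use your own workspace id.
        string token = "<bearer-token>";
        string workspaceId = "<workspace-id>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", token);

        // Query the 'Usage' table, which existed in my workspace.
        var body = new StringContent("{\"query\": \"Usage | take 5\"}",
            Encoding.UTF8, "application/json");

        var response = await client.PostAsync(
            $"https://api.loganalytics.io/v1/workspaces/{workspaceId}/query", body);

        // 403 here means the app is missing the role assignment on the workspace;
        // 404 means the table in the query does not exist.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```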

 Posted by at 10:22 pm
Nov 11 2019
 

I was unable to run the functions: the HostBuilder was working quite fine, but it was unable to connect with the Azure Function and I was getting a connection-refused exception. The thing that I was missing was the storage emulator.

I will start by going through making a hello world Azure Function.

[screenshot: azf1]

[screenshot: azf3]

We have now selected the HttpTrigger. Every time a HTTP call is made to the specified URL this function will be triggered. For this demo, we will select the authorization level anonymous meaning anyone can access the function.

[screenshot: azf2]

Once you start the function you will see a bulk of output in the console; I explain those options later. At the end of the output, you will be able to see the URL that is used to access the function.

Enter this URL in a browser or Postman and put a breakpoint at the beginning of the function. When you access the URL it will hit the breakpoint.
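For reference, the HttpTrigger function that the template generates looks roughly like this. This is a sketch from memory of the Visual Studio template, not a copy of the screenshot:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloWorldFunction
{
    // Anonymous authorization level: anyone who can reach the URL can call it.
    [FunctionName("HelloWorld")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req,
        ILogger log)
    {
        // A good place for the breakpoint mentioned above.
        log.LogInformation("C# HTTP trigger function processed a request.");

        string name = req.Query["name"];
        return new OkObjectResult($"Hello, {name ?? "world"}");
    }
}
```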

 

[screenshot: azf4]

Let's look at the logs in a bit more detail. We have:

LoggerFilterOptions: with level and rules. This is used for logging, if any is provided; the log level and the rules are mentioned here.

FunctionResultAggregatorOptions: options that control how function results are aggregated into batches.

SingletonOptions: these options ensure the singularity of the functions, i.e. that only one instance of the function is running.

HttpOptions: here we mention the maximum number of concurrent and outstanding requests. You can also define the route prefix here.

 Posted by at 2:42 pm
Nov 11 2019
 

In the Docker world, to play around, we have Docker for Mac and Docker for Windows; in the k8s world we have minikube. Minikube will give you a master and a node as a VM, and it will also give you a container runtime.

Now let's install minikube. You will also need kubectl, which you can download from here:

https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/windows/amd64/kubectl.exe

Before you install minikube, make sure that Hyper-V is enabled in the BIOS. Once you install minikube, these are the steps to follow.

Get the version of the kubectl client:

C:\WINDOWS\system32>kubectl version --client

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}

So at this point I have minikube installed and I have kubectl installed.

Now I need to start minikube. I used this link to start minikube, but it didn't work and I found the problem:

https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c

The problem was that minikube didn't start when the firewall was active, so I stopped the firewall, started minikube, and then ran this command:

>kubectl get nodes

to get the nodes. This will get the nodes for the active context; you can see all the available contexts in the .kube/config file.

Now what I want to see is where my kubectl is pointing, so I need to look at the current context. The following command will show me the context:

>kubectl config current-context

And then you can keep playing with minikube to try out k8s locally and see how it works in your dev environment.
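The local workflow above, sketched as a dry run. The run() helper is my own addition and only prints each command; minikube start is the standard command for bringing the cluster up, which the post does not show explicitly:

```shell
#!/bin/sh
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

run kubectl version --client        # verify the client install
run minikube start                  # bring up the local master + node VM
run kubectl get nodes               # nodes for the active context
run kubectl config current-context  # where kubectl is pointing
```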

 Posted by at 2:37 pm
Nov 11 2019
 

To understand Kubernetes better, let's discuss a football example. On a football field, the coach decides where each player will stand, based on their qualities. Let's say the person who is good at goalkeeping will be the goalkeeper, and the person who is good at attacking will play forward. All this orchestration of the players is done by the coach. Kubernetes is exactly the same: Kubernetes acts as an orchestrator to manage the Docker containers.

Kubernetes makes sure that all the required containers are available and where they are supposed to be to do their job. In Kubernetes we tell the framework the number of nodes that we want, and the rest is handled by Kubernetes itself. It makes sure the specified number of nodes are up and running; if any of the nodes go down or a scale-up is required, k8s takes care of that.

K8s has some basic topics that we need to know before moving forward. These topics are called K8s objects. In this post I will stick to the definitions and won't go into more detail.

Pods:

A Pod is the basic execution unit of a K8s application. The pod encapsulates the application, storage, network and other strategies that govern the containers. A pod can have one or more containers. The objective of a Pod is to run a single instance of the given application; if you want multiple instances of the application, you need to add more pods.

Service:

An object that acts as an abstraction for exposing a set of running pods as a network service. The service is responsible for ensuring that the network between the pods is available for proper communication, while also keeping them decoupled.

Namespace:

K8s provides multiple namespaces through which we can classify different resources based on the similar features they have. A namespace is a way to divide cluster resources between multiple users.

To connect to k8s

1. Before you do that, you need to activate PIM (Privileged Identity Management) if you have it; otherwise it won't work

2. Get the configuration from az; to do that you need to run this command. It will get the config from az and populate it in the config file as a context, so that later on you can switch the context in PowerShell and connect to it. The name of the resource group should be the same as where you created the cluster

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

 

3. Once you have done that, you can check whether the configuration was updated (it will be, but we will look at it to get more information). You can also verify it by running the command to get the current context, which is as follows:

kubectl config current-context

4. Now if you want to connect to the host, you first need to open a proxy with kubectl. The command to do that is:

kubectl proxy

5. After that, you can open any browser and go to:

http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/overview?namespace=_all

 

6. To get a list of all the contexts, the command is:

kubectl config get-contexts

7. kubectl exec -it {containerid} -n {namespace} powershell (to access PowerShell inside the container)
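Steps 1 to 7 above, collected as a dry-run script. The run() helper only prints the commands; the resource group, cluster name, container id and namespace are the placeholders from the text:

```shell
#!/bin/sh
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

run az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
run kubectl config current-context  # verify the context was added and selected
run kubectl proxy                   # open a local proxy to the cluster
run kubectl config get-contexts     # list every context in .kube/config
run kubectl exec -it {containerid} -n {namespace} powershell
```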

 Posted by at 2:35 pm