Behavior / Steps to reproduce

Running an ASP.NET Core application on the Kestrel web server. Every time the app runs, it makes a strange redirect to “https” and the call fails. In the console you see the following error.

 

image

Note: this error doesn’t occur when running on IIS Express.

Explanation

This happens if, in your startup file, you have the services configured with a filter that allows HTTPS requests only.

image
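A minimal sketch of what such a configuration typically looks like in Startup.ConfigureServices (a reconstruction, not the exact code in the screenshot):

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.Configure<MvcOptions>(options =>
            {
                // Forces every MVC request onto HTTPS, which causes the redirect described above
                options.Filters.Add(new RequireHttpsAttribute());
            });

            services.AddMvc();
        }
    }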

This requires a local certificate. IIS Express takes care of that for you, but other web servers such as Kestrel do not.

How to fix this

If you are just testing locally, you can simply comment this filter out and everything will work. However, if you have to use SSL, you should create a certificate and use it in your code. How to create a certificate is explained in this awesome post. To use it, load the certificate in your app and configure it in your WebHostBuilder as shown in this answer.
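For the Kestrel side, a rough sketch of binding a local certificate in the WebHostBuilder (ASP.NET Core 2.x style; the file name, password, and port are placeholders, and the linked answer may use a slightly different setup):

    using System.Net;
    using Microsoft.AspNetCore.Hosting;

    public class Program
    {
        public static void Main(string[] args)
        {
            new WebHostBuilder()
                .UseKestrel(options =>
                {
                    options.Listen(IPAddress.Loopback, 44300, listenOptions =>
                    {
                        // Point Kestrel at the certificate created for local HTTPS
                        listenOptions.UseHttps("localhost.pfx", "certificate-password");
                    });
                })
                .UseStartup<Startup>()
                .Build()
                .Run();
        }
    }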

 

I have been playing around with Cognitive Services for some time now, as I find them mind-blowing APIs that let you do a lot of artificial intelligence tasks without spending much time building AI algorithms yourself. Take the Face API, for example. The most common way to do face detection/recognition is to use the Eigenface classification approach. For this you need to know the basics of AI, such as regression and classification, as well as the basics of algorithms such as SVD and neural networks. Even if you use a library such as OpenCV, you still need some knowledge of artificial intelligence to make sure you use the correct set of parameters from the library.

Cognitive Services, however, makes this demand obsolete and puts the power of AI truly in the hands of the everyday developer. You no longer need to learn complicated mathematics to use AI tools. This is good: as AI becomes more and more common in the software world, it should also be accessible to developers who may not have advanced degrees in science and mathematics but who know how to code. The downside? The APIs are somewhat restricted in what they can do. But this might change in the future (or so we should hope).

Today I am going to talk about using the Face API from Microsoft Azure Cognitive Services to build a simple UWP application that can tell some characteristics of a face (such as age, emotion, smile, facial hair, etc.).

To do so, first open Visual Studio –> Create New Project –> select Windows Universal Blank App. This is your UWP application. To use Azure Cognitive Services in this project, you have to do two things.

1. Subscribe to the Face API in the Azure Cognitive Services portal. To do so, go to Cognitive Services Subscription, log on with your account, and subscribe to the Face API. Once you do, you will see the following:

image

You will need the key from here later in the code to initialize the Face API client.

2. Go to Solution Explorer in Visual Studio –> right-click on the project –> Manage NuGet Packages –> search for Microsoft.ProjectOxford.Face and install it. Then, in MainPage.xaml.cs, add the following lines of code at the top -

    using Microsoft.ProjectOxford.Face;
    using Microsoft.ProjectOxford.Face.Contract;
    using Microsoft.ProjectOxford.Common.Contract;

Now you are all set to create the Universal App for Face Detection using Cognitive Services.

To do so, you need to do the following things -

1. Access the device camera

2. Run the camera stream in the app

3. Capture the image

4. Call Face API.

We will look into them one by one.

First, let us access the camera and start streaming the video in the app. To do so, add a CaptureElement in MainPage.xaml. See the following code snippet -

    <CaptureElement Name="PreviewControl" Stretch="Uniform" Margin="0,0,0,0" Grid.Row="0"/>

Then create a Windows.Media.Capture.MediaCapture object and initialize it. Set it as the source of the CaptureElement and call the StartPreviewAsync() method. The following code snippet might make it clearer:

    try
    {
        m_mediaCapture = new MediaCapture();
        await m_mediaCapture.InitializeAsync();

        m_displayRequest.RequestActive();
        DisplayInformation.AutoRotationPreferences = DisplayOrientations.Landscape;

        PreviewControl.Source = m_mediaCapture;
        await m_mediaCapture.StartPreviewAsync();
        m_isPreviewing = true;
    }
    catch (Exception ex)
    {
        // Handle exception
    }

This will start video streaming from the camera in your application. Now, to capture the image and process it, create a button in MainPage.xaml and add an event handler to it. In the event handler, call the FaceServiceClient, which you initialize in your app's initialization code.

    FaceServiceClient fClient = new FaceServiceClient("<your subscription key here>");

Then use an InMemoryRandomAccessStream object to capture the photo with JPEG encoding, and call DetectAsync on the FaceServiceClient to get the information about the faces.

    using (var captureStream = new InMemoryRandomAccessStream())
    {
        await m_mediaCapture.CapturePhotoToStreamAsync(ImageEncodingProperties.CreateJpeg(), captureStream);
        captureStream.Seek(0);
        var faces = await fClient.DetectAsync(captureStream.AsStream(), returnFaceLandmarks: true, returnFaceAttributes: new FaceAttributes().GetAll());
    }

returnFaceLandmarks and returnFaceAttributes are two important parameters you need to set in order to get the full detection information from the API. When returnFaceLandmarks is set to true, you get the locations of facial features such as pupils, nose, mouth, and so on. The information that comes back looks like the following:

   1: "faceLandmarks": {
   2:       "pupilLeft": {
   3:         "x": 504.4,
   4:         "y": 202.8
   5:       },
   6:       "pupilRight": {
   7:         "x": 607.7,
   8:         "y": 175.9
   9:       },
  10:       "noseTip": {
  11:         "x": 598.5,
  12:         "y": 250.9
  13:       },
  14:       "mouthLeft": {
  15:         "x": 527.7,
  16:         "y": 298.9
  17:       },
  18:       "mouthRight": {
  19:         "x": 626.4,
  20:         "y": 271.5
  21:       },
  22:       "eyebrowLeftOuter": {
  23:         "x": 452.3,
  24:         "y": 191
  25:       },
  26:       "eyebrowLeftInner": {
  27:         "x": 531.4,
  28:         "y": 180.2
  29:       },
  30:       "eyeLeftOuter": {
  31:         "x": 487.6,
  32:         "y": 207.9
  33:       },
  34:       "eyeLeftTop": {
  35:         "x": 506.7,
  36:         "y": 196.6
  37:       },
  38:       "eyeLeftBottom": {
  39:         "x": 506.8,
  40:         "y": 212.9
  41:       },
  42:       "eyeLeftInner": {
  43:         "x": 526.5,
  44:         "y": 204.3
  45:       },
  46:       "eyebrowRightInner": {
  47:         "x": 583.7,
  48:         "y": 167.6
  49:       },
  50:       "eyebrowRightOuter": {
  51:         "x": 635.8,
  52:         "y": 141.4
  53:       },
  54:       "eyeRightInner": {
  55:         "x": 592,
  56:         "y": 185
  57:       },
  58:       "eyeRightTop": {
  59:         "x": 607.3,
  60:         "y": 170.1
  61:       },
  62:       "eyeRightBottom": {
  63:         "x": 612.2,
  64:         "y": 183.4
  65:       },
  66:       "eyeRightOuter": {
  67:         "x": 626.6,
  68:         "y": 171.7
  69:       },
  70:       "noseRootLeft": {
  71:         "x": 549.7,
  72:         "y": 201
  73:       },
  74:       "noseRootRight": {
  75:         "x": 581.7,
  76:         "y": 192.9
  77:       },
  78:       "noseLeftAlarTop": {
  79:         "x": 557.5,
  80:         "y": 241.1
  81:       },
  82:       "noseRightAlarTop": {
  83:         "x": 603.7,
  84:         "y": 228.5
  85:       },
  86:       "noseLeftAlarOutTip": {
  87:         "x": 549.4,
  88:         "y": 261.8
  89:       },
  90:       "noseRightAlarOutTip": {
  91:         "x": 616.7,
  92:         "y": 241.7
  93:       },
  94:       "upperLipTop": {
  95:         "x": 593.2,
  96:         "y": 283.5
  97:       },
  98:       "upperLipBottom": {
  99:         "x": 594.1,
 100:         "y": 291.6
 101:       },
 102:       "underLipTop": {
 103:         "x": 595.6,
 104:         "y": 307
 105:       },
 106:       "underLipBottom": {
 107:         "x": 598,
 108:         "y": 320.7
 109:       }
 110:     }

For returnFaceAttributes you pass the FaceAttributeType values you want, such as the following. In my application I created a class that returns all of them in a list via a method called GetAll() (see the sketch after the list).

    FaceAttributeType.Age,
    FaceAttributeType.Emotion,
    FaceAttributeType.FacialHair,
    FaceAttributeType.Gender,
    FaceAttributeType.Glasses,
    FaceAttributeType.HeadPose,
    FaceAttributeType.Smile
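A minimal sketch of that helper class (the name FaceAttributes and the method GetAll come from the DetectAsync call above; the exact implementation is assumed):

    using System.Collections.Generic;
    using Microsoft.ProjectOxford.Face;

    public class FaceAttributes
    {
        // Returns every attribute type we want DetectAsync to evaluate
        public IEnumerable<FaceAttributeType> GetAll()
        {
            return new List<FaceAttributeType>
            {
                FaceAttributeType.Age,
                FaceAttributeType.Emotion,
                FaceAttributeType.FacialHair,
                FaceAttributeType.Gender,
                FaceAttributeType.Glasses,
                FaceAttributeType.HeadPose,
                FaceAttributeType.Smile
            };
        }
    }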

The result will look similar to this

   1: "faceAttributes": {
   2:       "age": 23.8,
   3:       "gender": "female",
   4:       "headPose": {
   5:         "roll": -16.9,
   6:         "yaw": 21.3,
   7:         "pitch": 0
   8:       },
   9:       "smile": 0.826,
  10:       "facialHair": {
  11:         "moustache": 0,
  12:         "beard": 0,
  13:         "sideburns": 0
  14:       },
  15:       "glasses": "ReadingGlasses",
  16:       "emotion": {
  17:         "anger": 0.103,
  18:         "contempt": 0.003,
  19:         "disgust": 0.038,
  20:         "fear": 0.003,
  21:         "happiness": 0.826,
  22:         "neutral": 0.006,
  23:         "sadness": 0.001,
  24:         "surprise": 0.02
  25:       }
  26:     }

The emotion scores are float values between 0 and 1, with 1 being the maximum and 0 the minimum. In my application I wrote a basic threshold method that displays in the UI whether I am smiling, angry, happy, etc., based on these values.
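A minimal sketch of that kind of threshold check (the threshold value and helper name are assumptions, not the exact code from my app):

    using Microsoft.ProjectOxford.Common.Contract;

    public static class EmotionHelper
    {
        // Picks a label for the UI when one of the emotion scores is confident enough
        public static string Describe(EmotionScores emotion, double threshold = 0.5)
        {
            if (emotion.Happiness > threshold) return "Happy";
            if (emotion.Anger > threshold) return "Angry";
            if (emotion.Sadness > threshold) return "Sad";
            if (emotion.Surprise > threshold) return "Surprised";
            return "Neutral";
        }
    }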

The final result looks like this -

image

It failed to detect my age (when I tested it, I was using a warm light bulb for lighting. Tip: lighting matters a lot in age detection with the Face API; use cold lights if you want to look younger ;)). But apart from that, the other information was quite correct: I was indeed smiling, my face was happy, I wasn’t wearing glasses, and I do have some beard. ;)

At first look, the Face API is really interesting. You can do a lot with its other endpoints, such as verification and identification. I will try to cover these functions in upcoming posts.

Till then.

 

I come from a C# development background, and thus things like method overloading come to my mind naturally when tackling certain types of problems. However, when coding in languages like JavaScript, this becomes a problem.

As TypeScript compiles into JavaScript, this is a problem in TypeScript as well. One can eventually achieve function overloading in TypeScript, but it is quite some work and, frankly, a bit awkward.

However, if you absolutely have to do function overloading in TypeScript, here is how to do it. Let's say you have a function called testFunction and you want to overload it n times. Then you have to declare this function n+1 times: n overload signatures plus one implementation. Here is the code snippet of testFunction with two overloads.

    testFunction(param1: string, param2: string): void;
    testFunction(param1: string, param2: number): void;

    testFunction(param1: string, param2: any): void {
        if (typeof param2 == "string") {
            // Do what you want to do in the first overload
        }
        else if (typeof param2 == "number") {
            // Do what you want to do in the second overload
        }
    }

 

So basically, you first declare both overload signatures and then write a third function, the implementation, which checks which overload was called based on the type of the parameter.

Now let's say the overloads take one and two parameters respectively. That is also possible -

    testFunction(param1: number): void;
    testFunction(param1: string, param2: string): void;

    testFunction(param1: any, param2?: string): void {
        if (typeof param1 == "number") {
            // Do what you want to do in the first overload
        }
        else if (param2 && typeof param2 == "string") {
            // Do what you want to do in the second overload
        }
    }

 

A lot of times you might get stuck in a situation where you have to tell the parent component that something has happened in a child, and if you are new to Angular 2 (or coming from an Angular 1 background) you might be scratching your head over how to do this. It is not covered in the QuickStart guide of the Angular 2 tutorials, and if you do not want to spend significant time going through the documentation to figure it out yourself, this is the right place to catch up.

Angular 2 actually handles this quite elegantly, certainly better than Angular 1. Here is what I am going to do to show how it is achieved -

1. Create a parent component with some template

2. Create a child component that is used in the parent component's template

3. Add an EventEmitter in the child component that emits a value to the parent when a certain event is raised

So let’s begin -

First, let's create the parent component. Here I have created a sample component -

    import { Component, OnInit } from '@angular/core';
    import { TranslationService } from './service/translation.service';
    import { TestService } from './service/test.service';

    @Component({
        moduleId: module.id,
        selector: 'test2',
        templateUrl: './test2.html',
        providers: [TestService]
    })
    export class Test2 implements OnInit {

        constructor(private testService: TestService) {
        }

        ngOnInit(): void {
        }

        addNewEntry(event) {
            console.log(event);
        }
    }

 

For now, forget about the service providers; they are not in the scope of this article. Look at the addNewEntry function: it simply takes an event as input. Now let us dive into the HTML template of this component.

    <add-object (onNewEntryAdded)="addNewEntry($event)"></add-object>

So we have just one line here with the add-object element, which is our child component. Notice that we have bound an event on it called onNewEntryAdded, which is handled in our addNewEntry function, where we pass the event. Now let us go into the child component to see how we create and raise this event there.

This is our add-object HTML template -

    <div class="form-group">
        <div class="row">
            <div class="col-sm-12">
                <button type="button" class="btn btn-default" (click)="addNewEntry()">Add new entry</button>
            </div>
        </div>
    </div>

As you can see, we have a button whose click is handled by the addNewEntry method in the component. So far, so simple. Here is our child component -

    import { Component, OnInit, Output, Input, EventEmitter } from '@angular/core';

    @Component({
        moduleId: module.id,
        selector: 'add-object',
        templateUrl: './add-object.html',
        outputs: ['onNewEntryAdded']
    })
    export class AddObject implements OnInit {

        //@Input()
        public newObject: string;

        //@Output()
        public onNewEntryAdded = new EventEmitter();

        ngOnInit(): void {
            this.newObject = "Test";
        }

        addNewEntry(): void {
            this.onNewEntryAdded.emit({
                value: this.newObject
            });
        }
    }


The important things that need to be imported from @angular/core for this are Output and EventEmitter.

As you can see, we have a string property and an EventEmitter called onNewEntryAdded. If you look up, this is the same event that we handled in the parent component. It can be declared either with the @Output() decorator (which I have commented out in this snippet) or with the outputs property in the @Component decorator (which I have used), but remember: not both!

That’s basically all. When the button is clicked, the button's event handler simply calls the emit function of our EventEmitter with our string set as the value. This value can be read in the parent via event.value in the handler of the onNewEntryAdded event.

Tada! You are ready to handle child events in parent components. Quite simple!

 

I am a traditional software developer who depends a lot on source control systems like TFS and Git for project collaboration. Lately, however, my foray into augmented reality and HoloLens has taken me into the wild west of Unity and 3D game development, and source code management is not as simple there as one would like it to be.

Now, Unity is awesome and gives you a lot of control to play around with 3D models, prefabs, and whatnot. But as soon as you start checking your code into source control, you have a huge problem of missing files, missing scenes, and so on. I have spent hours just trying to get the latest version of my Unity project from TFS and still ended up missing vital files. So here is an attempt to help someone out there who might be facing the same issue in the same environment.

To start with, it is not possible to check the actual solution file into TFS, because every time Unity builds, it generates a new solution file. It doesn't need to be checked in anyway.

The second confusing roadblock: if you are a traditional Visual Studio user, you are used to just right-clicking the solution or project and choosing “Get Latest Version” or “Check In”. That is not how it works with Unity-bound projects. Because the project is recreated every time you build it from Unity, it is not bound to source control. So you have to go to Team Explorer and click on Pending Changes to check in the latest changes.

However, once you check in, more often than not some files, scripts, or assets are missing from source control, and this can be a real pain. This is because whenever you create an asset, prefab, or script in Unity, it is not added by default to your project in Visual Studio and, in turn, to Team Explorer. To fix this, you have to make sure that every file you add to the project in Unity is manually added to TFS. To do so, after adding the file in Unity, go to the respective folder in Source Control Explorer inside Team Explorer, right-click in the window, and click “Add Items to Folder”. Select the item you added in Unity and click Finish.

image

Then you will see the item in Source Control Explorer. It is added to source control but not yet checked in. To check it in, right-click on the item and select “Check In Pending Changes”. I know this is a tiresome method for every file you add through Unity, but that is how it is.

Now, another thing is the huge Library folder, which contains a large number of meta files, and you don't know what to do with them. Thankfully, you don't need to take care of them yourself, as Unity has an option that handles meta files and source control integration.

Go to Unity –> Edit –> Project Settings –> Editor. In the Inspector, under Version Control, set the mode to Visible Meta Files. While we are at it, also set Asset Serialization to Force Text. TFS handles text files in source control better than binary files.

image

This still doesn't ensure that you won't miss some binding in Unity when you get the latest version from TFS (I can tell you from my own experience), but it at least makes sure that all your files are on TFS. I finally found out why I was missing some bindings, and here is why: making the mode Visible Meta Files is not enough when you are dealing with TFS. It only ensures that you can see these meta files in your folder. To make sure that all bindings and information about your assets are available across all the systems you are working with, you need to check these meta files into source control as well. They are not automatically added to the source control solution of your project, so you have to add them manually. To do so, do what we have already done for normal asset files: go to Source Control Explorer in Visual Studio, go to the specific folder, right-click, and click “Add Items to Folder”. Select the meta files, add them, and then check them in. That's all. This is enough to make sure that your Unity 3D project is completely available on Team Foundation Server.

 

Hope it helps. :)

 

So last week I got the opportunity to talk about Industry 4.0 and the role of data analytics in it at the Data Visualization Rhein Main meetup (a very cool meetup; if you live around Frankfurt and are into data, go visit). I thought I should post a summary of my talk here.

The talk started with what Industry 4.0 is and why data scientists should pay attention to it. A lot has been said about the connectivity and sensor side of Industry 4.0, but the data part of it is still largely ignored, and that is what should be explored more.

Industry40Talk

Heavy industries have long been cut off from cutting-edge data analytics tools, and Industry 4.0 is their chance to use data analytics and data visualization to increase efficiency and make sure they stay ahead in this industrial revolution.

I talked about how Microsoft Azure is playing a vital role in achieving this step. Services like Microsoft Service Bus, Event Hubs, IoT Hub, and Azure Stream Analytics can streamline large chunks of data into appropriate data channels, and tools like Power BI can be used for data visualization with great efficiency and ease.

Cs0QRAFXYAAGrNK

I told the story of a fictitious person, Max Mustermann, who owns an SME that deals with inventory management and logistics. I then showed a live demonstration of how, using a simple Raspberry Pi, his machines can be connected to the cloud and the data from those machines can be used for analytics to support complex decisions like predictive analysis and predictive maintenance.

The second part of the talk focused on data visualization with augmented and virtual reality. I presented a sample of data visualization as graphs and charts with HoloLens. Devices like HoloLens allow greater interaction with data as well as a better understanding of visualized data.

You can find the presentation from my talk here. The video has now been uploaded to YouTube by the TEDx Rhein Main team (huge thanks to them); you can watch it here. :)

 

I have been writing about Sencha Touch and Cordova for some time now. In a sense, it is documentation of my learning of these two platforms over the last couple of months, which might help somebody out there struggling with the same issues I once had. I have never used Sencha Architect as my development environment; I use Visual Studio as both text editor and packager for Cordova applications, with the TACO extension.

On my path to mastering Cordova and Sencha Touch, I have stumbled upon building orientation-aware applications, and this is part 1 of a multi-part tutorial on how to create an orientation-aware cross-platform application using these two frameworks.

For any orientation-aware application, there are two things the developer should take care of -

1. Device orientation at the start of the application.

2. Orientation change events

But first we need to look into the application architecture to decide how to incorporate orientation change in the application. As you know, the Sencha Touch framework uses an MVC architecture with the slight twist of stores and profiles. If you are interested in the Sencha Touch architecture and how profiles work, this post might be useful for you. Right now our focus is not on models, controllers, and stores but purely on the views, as orientation only affects the views of the application. In the context of Sencha Touch, profiles also play a crucial role when designing orientation awareness: a lot of the time your views differ between profiles. For the simplicity of the tutorial, I am going to take two profiles, one tablet and one phone, and build orientation awareness for both of them.

Here are the steps that we are going to follow while building orientation aware application -

1. Create a sample application with two profiles – Tablet and Mobile

2. Install Orientation Plugin from Cordova plugin manager.

3. Create views and controller

4. Add views to viewport based on the orientation of the device at the start of the application

5. Handle the orientation change event

6. Take care of stylesheets.

To start with, I have created a sample Cordova application in Visual Studio 2015 named OrientationTutorial. I have added the Sencha-Touch-Debug.js and Sencha-Touch.css files to my application's root folder and loaded them in index.html. Pretty normal stuff, isn't it? The tricky part is that, unlike Sencha Architect, Visual Studio doesn't have a visual builder for Sencha Touch that lets you build your views, controllers, models, stores, and profiles in different folders and generates a nice app.js for you at the end. So we have to do it ourselves.

In app.js I create the application, while in the view, profile, and controller folders I create the respective views, profiles, and controllers. I load all the files in the index.html body and pray to God that everything works fine (kidding; when Sencha Touch breaks, even God cannot fix it). The Solution Explorer looks something like this -

Screenshot (312)

The Main view is basically just the toolbar at the top of the application. The Home view has some example buttons which basically do nothing (at least for now). So here is how we are going to do it: we create the Main view, which is a container with a toolbar on top, and stick it to the application forever. Then, based on the orientation of the device, we add the appropriate Home view to the application.

Go to config.xml, open the Plugins tab, and add the orientation plugin to your application. It should look something like this -

Screenshot (313)

I create the Main view at the launch of the respective profile. That means the Main view is added when the phone/tablet profile is launched.

All the default views, such as homePhone and homeTablet, are landscape views by default. We will create the portrait views separately. For that, add a folder called ‘portrait’ to the view folder. Inside it, create the homePortrait view.

Now, some important things. In our design, we are going to create views based on their aliases. Sencha Touch creates aliases based on xtypes in the background, which means we cannot use the same xtype for views that are essentially the same but are designed differently for different profiles and orientations. For example, the Home view has five buttons which all work the same, but depending on whether the view is rendered on a tablet or a phone, and whether the orientation is portrait or landscape, the positions of these buttons change. So for these different renderings we have different views. But we might need the same xtype in the controller for these views, to tie them to a model or load data from a store. To overcome this problem, we use two approaches. We tie the different profile views to the same widget by explicitly inserting the profile name into the alias at the initialization of the profile. This way the views can share the same xtype but are rendered based on their profile names. This works for profiles because only one profile is active on a given device. The code looks something like this -

Ext.ClassManager.setAlias('OrientationTutorialApp.view.tablet.Home', 'widget.homeView');

However, what about orientation? Orientation can change multiple times while the application is in use, so we cannot explicitly tie the orientation view to the alias. To fix this, we simply create the view by adding the widget.portrait.* prefix to the item id of the view whenever the portrait view is to be created; for the landscape view we just use the normal alias. Also, inside the controller we avoid using aliases in the configuration refs. That is because when you use aliases in the config refs and you have to use the views inside controllers, you have to reference them twice with two different aliases, which duplicates code. Instead, one can use either IDs or refs. IDs are advised against by the Sencha developer team, and rightfully so: IDs add a lot of confusion, and if you don't destroy views with the same ids, you are in deep trouble with multiple hours of debugging one line of code. Refs, however, are in my opinion the best approach to tackle this issue. The configuration inside the controller should look something similar to this -

Screenshot (336)

Notice the refs in the Config.
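Roughly, the controller config in the screenshot follows this shape (ref names and selectors here are assumptions):

    Ext.define('OrientationTutorialApp.controller.Main', {
        extend: 'Ext.app.Controller',

        config: {
            refs: {
                // Component-query refs instead of xtype aliases, so one ref covers
                // both the landscape and the portrait variant of the view
                homeView: '#homeView',
                mainView: '#mainView'
            }
        }
    });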

And here is how the landscape view of the home looks -

Screenshot (332)

I have created five sample buttons and arranged them in a big vbox container and then in smaller hbox containers. The arrangement is handled in the Sencha view, while positioning and sizes are configured in CSS.

When the application is launched, the showTab function is called with the id of homeView. The showTab function checks the current orientation of the device and accordingly adds a prefix to the id in order to create the correct widget alias. The code snippet is as follows -

Screenshot (337)
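A rough sketch of that showTab logic (the exact code is in the screenshot; the naming convention here simply follows the description above):

    showTab: function (viewId) {
        var orientation = Ext.Viewport.getOrientation(),   // 'portrait' or 'landscape'
            alias = (orientation === 'portrait')
                ? 'widget.portrait.' + viewId              // portrait views use the prefixed alias
                : 'widget.' + viewId;                      // landscape views use the plain alias

        // Create the right variant and keep the same itemId so the refs still match
        Ext.Viewport.add(Ext.create(alias, { itemId: viewId }));
    }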

Now, if we just handle the orientation change event without creating a portrait view of Home, this happens -

Screenshot (334)

As you can see, the view is totally distorted: buttons cannot be seen and the title bar cannot hold the complete text. Some of this can be handled with CSS (which I will cover in the next part of this tutorial); however, in order to give a seamless experience to the end user, one needs to design a different view altogether.

So I created a portrait version of the Home view. Notice that it has the same id and the same ref but NOT the same xtype. This is very important: you cannot have two views with the same xtype. It creates a hell of a confusion inside the framework, and more often than not the framework gives you the wrong view for the wrong orientation.

Screenshot (339) 

The view looks like this -

Screenshot (335)

As you can see, the view is still not perfect: the background is not exactly centered, nor fully visible, the buttons need better alignment, and the title bar could do with a smaller font to accommodate the complete text. However, we will take care of that with CSS.

Now the only thing that remains is handling the orientation change event. For that, we add an orientationchange listener to the viewport and handle it in an onOrientationChange function. There is actually not much to handle: just get the active item's id (which in this case is homeView), destroy the landscape component, and add the portrait one with the help of the showTab function I have already described above.

In the Main controller, I added control over the viewport inside the config, and the onOrientationChange function is created in the Main controller as given below -

Screenshot (339)
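In rough outline it does the following (the screenshot wires this through the controller's control config; this is a functionally equivalent sketch, not the exact code):

    launch: function () {
        // Listen for orientation changes on the viewport
        Ext.Viewport.on('orientationchange', this.onOrientationChange, this);
    },

    onOrientationChange: function (viewport, orientation) {
        var active = Ext.Viewport.getActiveItem(),
            viewId = active && active.getItemId();   // e.g. 'homeView'

        if (viewId) {
            active.destroy();        // drop the view built for the previous orientation
            this.showTab(viewId);    // showTab re-creates the right variant (see above)
        }
    }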

That’s all; the basic orientation awareness is implemented. In the next part, I will talk about creating different classes for different orientations and handling multiple views.

 

This is a very quirky issue that I faced recently while developing my Android application using Apache Cordova as the cross-platform application development environment. CSS is confusing enough at times, thanks to the many browser engines that web developers (and, with the rise of cross-platform application development, also mobile application developers) have to support.

My task was simple, or so I thought: assign a background image to the welcome page of my application. I set the background URL and then set background-size to cover, as I wanted the background to be full screen. Here is how my code looked -

Screenshot (287)

It worked well in the Ripple browser emulation, several emulators, and on Android phones. I tested it on an Android tablet and everything seemed fine in landscape mode, but as soon as I changed the orientation to portrait, it didn't work. Great: here start the long hours of debugging. Thankfully, this time I didn't have to spend too much time, as I accidentally found out how to fix the issue. While playing around with the code, I mistakenly deleted “fixed” from the property, and voila, it worked. I don't know how or why, but on this specific tablet that I use for testing (which runs Android OS 4.1.1, by the way), if you include “fixed” in the background property, it doesn't work.

So if you want the fixed behavior on your background, it is better to set it on a higher element (the div that holds this image, for example) to make it work across all devices in all orientations.
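Roughly, the change looks like this (selector and image path are placeholders, not my actual stylesheet):

    /* Problematic on the Android 4.1.1 tablet in portrait: 'fixed' in the shorthand */
    .welcome-page {
        background: url('images/welcome-bg.jpg') no-repeat center center fixed;
        background-size: cover;
    }

    /* Working variant: drop 'fixed' here (set it on a parent element if you need it) */
    .welcome-page {
        background: url('images/welcome-bg.jpg') no-repeat center center;
        background-size: cover;
    }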

I hope it helps somebody. :)

 

The version of your Cordova application is always specified in the application's config.xml file, inside the “widget” element as the “version” attribute.

You might need to read this application version inside the application code itself for various reasons, and to do so you need to read it from the config.xml file.

image

To do so, one option is to write your own function in plain JavaScript that walks through the XML elements and their attributes and reads the value of the version attribute. I will talk about that in my next post.

One can, however, also use an out-of-the-box plugin that does this task. I really like Cordova's plugin system and the ease with which it is integrated into the Visual Studio environment.

An open-source plugin called wizUtils provides general utility functions to access information from the native OS, including reading the application version. In this post, I am going to talk about how to use this plugin to read the application version -

1. Double-click the config.xml file in the Visual Studio Solution Explorer. Go to the Plugins tab, click on Custom, select Git, add this link in the address bar, and press Enter. The plugin will be added to your application.

image

2. Go to your JavaScript file. As I am using the Ext JS framework in my application, I am going to read this version inside the application's app.js file. Call the wizUtils.getBundleVersion(successCallback) method, where successCallback is the callback function that decides what to do with the version. In my case I simply log it to the console.

Here is how the code for this looks -

image
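A sketch of that call (assuming the single success-callback signature described above and the standard Cordova deviceready event):

    document.addEventListener('deviceready', function () {
        wizUtils.getBundleVersion(function (version) {
            // For now just log it; keep it on your app namespace if you need it elsewhere
            console.log('App version from config.xml: ' + version);
        });
    }, false);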

Warning: if you are using the Ripple browser to test your application, this plugin won't work. It has to be tested on a physical device or an Android emulator.

 

Setting the app icon is usually a very straightforward task in application development and should not require a dedicated blog post. However, when using Apache Cordova in the Visual Studio environment, it becomes slightly tricky.

So, why is it tricky? Because Visual Studio TACO (Tools for Apache Cordova) shows the config.xml file differently from how you usually see an XML file. It shows all the important configuration, such as plugins, version, and the name of the application, in a user-friendly interface -

image

However, this view doesn't show where the icon should be set. To set it, you need to view the actual XML code and make the changes there.

First, add the icon image to your application's res folder under the platform you are using; for me it is Android.

image

Once that is done, right-click on config.xml and click on View Code (this can also be done with F7). Here, inside each platform element, you can add an icon element. Add your icon path as the source. You can optionally specify a density (for Android) or a width and height (for iOS and Windows).

image
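The relevant part of config.xml ends up looking roughly like this (paths and densities are examples):

    <platform name="android">
        <icon src="res/icons/android/icon-48-mdpi.png" density="mdpi" />
        <icon src="res/icons/android/icon-72-hdpi.png" density="hdpi" />
        <icon src="res/icons/android/icon-96-xhdpi.png" density="xhdpi" />
    </platform>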

And that’s all. The icon is set for the application.

 

As you might already know, AForge is a .NET library for image processing. A lot of people prefer the .NET wrapper for OpenCV known as EmguCV, and no doubt it has stronger image processing capabilities. However, being a wrapper, it has its own limitations, and sometimes AForge proves to be a better choice than EmguCV, for example when processing an image in a Universal Windows application. I am not going to compare AForge and EmguCV here; I am just going to talk about how to use AForge with custom filters for object detection.

AForge comes with a lot of out-of-the-box shape detection methods, such as checking whether a blob is a circle, a triangle, and so on. It also has a BlobsFiltering filter that can select objects based on minimum width, minimum height, maximum width, and maximum height. But what if you have to detect a shape based on custom features, such as the ratio of height to width, a specific color, or anything else? In such cases, custom filters come in handy.

AForge provides a filtering class called BlobsFiltering. One can use this class directly to filter blobs by width and height, but you can also pass it a custom filter that implements the IBlobsFilter interface.

Before diving into the code, some background: I have used AForge not in the usual .NET environment but in the new .NET Core environment, which is the environment for UWP applications. The library for .NET Core is available on GitHub. Why is this important? Because it gives us the opportunity to use AForge not only in the usual Forms or backend applications, but also on phones, IoT devices such as the Raspberry Pi, and even HoloLens, to not only detect and identify objects but also apply machine learning algorithms such as supervised perceptron learning to make this vast variety of devices “intelligent”.

Now, I have created a sample filter called BlobFilter that implements the IBlobsFilter interface. Here is how it looks -

image

I check whether the detected blob is a triangle, circle, quadrilateral, or convex polygon, and if it is, I remove it from the image. If the Check method returns true, it means the object should be removed from the image.

Then I create a BlobsFiltering object with this filter and apply it to the image. That gives me the image without the shapes I mentioned.

image
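A minimal sketch of the idea (this one filters on the width-to-height ratio mentioned earlier rather than the shape checks in my screenshots; the BlobsFiltering constructor taking an IBlobsFilter is assumed):

    using AForge.Imaging;
    using AForge.Imaging.Filters;

    public class AspectRatioBlobFilter : IBlobsFilter
    {
        // Returning true marks the blob for removal, as described above
        public bool Check(Blob blob)
        {
            double ratio = (double)blob.Rectangle.Width / blob.Rectangle.Height;
            return ratio > 2.0 || ratio < 0.5;   // drop very elongated blobs
        }
    }

    // Usage: apply the filtering to a binarized bitmap
    // var filtering = new BlobsFiltering(new AspectRatioBlobFilter());
    // filtering.ApplyInPlace(binaryBitmap);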

To show the example, I loaded the sample image from the AForge website into my application -

image

And when I click the Remove Shapes button, I get this output -

image

The two circle-like objects are actually elliptical and thus are not removed from the image.

Using machine learning functions, this custom filter can be made really advanced, so that it can remove more complex figures such as numbers, characters, or even real-world objects like cars, chairs, etc.

Next time I will talk about how we can use a supervised learning algorithm in AForge to create custom filters that can remove more complex shapes.

Tip: if you are new to image processing and confused about what a blob is, a blob is simply any object detected by the algorithm inside the image.

 

Trade fairs, known as Messe in Germany, are a great way to push new technological innovations and present them to the world. So when daenet offered me the opportunity to attend the world's largest trade fair for the automation industry, Hannover Messe 2016, I grabbed the chance and hopped on the bus to Hannover.

This year, Hannover Messe was completely dedicated to Industry 4.0, which is seen as the fourth industrial revolution. Industry 4.0 is the way to connect IoT with industry and take automation to the next level.

IoT and cloud were the underlying themes of the fair, which was quite evident from the various IoT showcases on almost every stand of the big names of the software and automation industries, including but not limited to Microsoft, Kuka, Siemens, and T-Systems.

IMG-20160427-WA0043 

daenet, in collaboration with the University of Applied Sciences Frankfurt and Microsoft, presented some cool use cases that included drones, PLC machines, weather stations, the Microsoft Band, and so on. The theme of the daenet showcase was machine-to-machine and human-to-machine interfaces that can change the game for SMEs and OEMs in the automation industry.

I also got the chance to present our use cases in front of industry representatives from various organizations, which was an amazing experience in itself. And the highlight of the event: I was lucky enough to meet Microsoft CEO Satya Nadella, a very down-to-earth and friendly guy.

InstagramCapture_83e488fd-83d6-4305-b527-745ba0d107be

All in all, Hannover Messe underlined one thing: IoT and Industry 4.0 are here to stay. When used optimally, Industry 4.0 is capable of making SMEs in the automation industry far more productive than they are today.

Before wrapping up this summary of the Messe experience, here is the Microsoft blog post that covered our showcases and quotes our lead architect Damir Dobric on how our solutions can help the Mittelstand, or SMEs, stay in the front row of the technological race.

 

Windows 10 IoT Core pushes an update to its latest OS version once you install OS version 10.0.10586.0; the update is 10.0.10586.63. However, the device never asks whether you want to download the update and just re-flashes the card on its own. If your Pi is running headless over an internet connection doing some mission-critical work, this is a very unpleasant way to find out your Pi has stopped working. You can, however, stop the Pi from running the auto update. Here is a set of PowerShell commands that does this for you -

First, connect to your Pi through a remote session -

net start winrm

# $ipAddress is the IP address of your Pi
Set-Item WSMan:\localhost\Client\TrustedHosts -Value $ipAddress

Enter-PSSession -ComputerName $ipAddress -Credential $ipAddress\Administrator

Once you have established a remote session with the Pi, you can use these commands to check and disable auto updating:

sc.exe config wuauserv start=disabled

sc.exe query wuauserv

sc.exe stop wuauserv

sc.exe query wuauserv

REG.exe QUERY HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\wuauserv /v Start

  1. Line 1: disable the Windows Update service.
  2. Line 2: check the service status. If it's not running, skip lines 3 and 4.
  3. Line 3: stop the Windows Update service (if running).
  4. Line 4: check the service status again; it sometimes refuses to stop.
  5. Line 5: check that the service is really disabled (Start should be 0x4).

I hope it helps. :)

 

IoT has become a full-blown affair as of 2016. When I first heard about IoT in 2014, I was not sure how it was different from all those embedded projects that developers have been doing as a hobby for years. Soon, though, the “Internet” in IoT became quite clear, not only to me but to a lot of other people.

Although the Internet aspect of IoT has changed a lot for those small embedded projects, the core of IoT is still largely sensors and embedded components, and the sooner we (the non-embedded developer community) become familiar with these sensors, the better for the community, for IoT, and for us.

Today I will talk about ultrasonic sensors and using them with a Raspberry Pi to build a small IoT device that can measure distance quite accurately. This has a great set of applications, such as in smart cars, automation, storage and logistics, and so on.

I am using an HC-SR04 ultrasonic sensor, which measures distances accurately up to 4 meters, a Raspberry Pi 2 Model B, the latest stable build of Windows 10 IoT Core, and my favorite IDE, Visual Studio 2015. I have also successfully built this project on a Raspberry Pi 3 Model B with the Windows 10 IoT Core Insider Preview. The tutorial is generic, though, and can be used with other IoT devices and other operating systems as well.

First, how does an ultrasonic sensor work? It's quite simple. You send a high-to-low pulse of 10 microseconds to the sensor's trigger pin; this triggers a burst of 8 ultrasonic pulses from the sensor. The echo of these pulses is received on the echo pin, which is set high for exactly as long as it took the echo to return. So it is quite simple: trigger the sensor and listen to the echo pulse. As soon as the echo goes high, measure how long it stays high. Then convert this time to distance using the speed of sound. (Ultrasonic waves travel at approximately the speed of sound and cover twice the distance: once from the sensor to the object, and then back as the echo, so you need to divide by 2.) You can do the calculation yourself if you like, but I will save you some time and give you the direct formula in a short while.

So now we know how to calculate distance from the ultrasonic sensor. However, there is a catch: the sensor needs 5V VCC, and this 5V is returned on the echo pin to the Raspberry Pi. All the GPIOs on the Pi are designed for 3.3V, so you should build a voltage divider circuit between the echo pin and the GPIO of the Pi.

Here is a diagram of the voltage divider circuit and a photo of how I built it on a breadboard.

image

The input voltage V1 is 5V. Resistors R1 and R2 have to be chosen so that the output voltage V2 is 3.3V.

V2/V1 = R2/(R1+R2)

By selecting R1 = 2K ohms, we get R2 = 4K ohms (which gives V2 = 5V × 4/(2+4) ≈ 3.3V).

Here is a photo of my circuit

image 

Now, let’s dive into code.

Create a new UWP application in Visual Studio. In Solution Explorer, right-click on References, choose Add Reference, go to Extensions, and add the Windows IoT Extensions for the UWP. Once that is done, you can use all the GPIO functionality of the Raspberry Pi.

Once that is done, follow these steps -

1. Create a GpioController object and GpioPin objects. You need pins for the trigger and echo, and optionally for an LED and a button if you are using those.

image

2. Open the pins and set the drive mode on them. The trigger is an output, while the echo is an input (a sketch of steps 1 and 2 follows the screenshots).

image

image
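A sketch of steps 1 and 2 (pin numbers are examples; use whichever GPIOs you wired):

    using Windows.Devices.Gpio;

    private GpioPin m_Trigger;
    private GpioPin m_Echo;

    private void InitGpio()
    {
        var gpio = GpioController.GetDefault();   // null on devices without GPIO

        m_Trigger = gpio.OpenPin(23);
        m_Echo = gpio.OpenPin(24);

        m_Trigger.SetDriveMode(GpioPinDriveMode.Output);  // we drive the trigger
        m_Echo.SetDriveMode(GpioPinDriveMode.Input);      // we listen on the echo
        m_Trigger.Write(GpioPinValue.Low);                // keep the trigger low until we pulse it
    }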

3. Send a high-to-low pulse of 10 microseconds width on the trigger pin.

image
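A sketch of that pulse; Task.Delay is far too coarse for 10 microseconds, so a Stopwatch spin-wait is used here (an assumption about the implementation, not the exact code in the screenshot):

    using System.Diagnostics;
    using Windows.Devices.Gpio;

    private void SendTriggerPulse()
    {
        var sw = Stopwatch.StartNew();

        m_Trigger.Write(GpioPinValue.High);
        while (sw.Elapsed.TotalMilliseconds < 0.01) { }   // busy-wait roughly 10 microseconds
        m_Trigger.Write(GpioPinValue.Low);
    }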

This needs to be done on a separate thread so that you can continuously trigger the sensor at a specific interval.

4. Subscribe to the ValueChanged event on the echo pin.

m_Echo.ValueChanged += m_GpioEchoValueChanged;

5. Every time the ValueChanged event fires, check whether the transition is from low to high; if so, start the timer (which is basically a Stopwatch). If it is from high to low, stop the timer and read the elapsed ticks.

image

6. Here is the formula to calculate distance in centimeters from timer ticks -

image

First you convert the ticks into microseconds and then multiply by 0.01715 (half the speed of sound, in cm/µs), and that's how you get the distance in centimeters.
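A sketch of the echo handler (step 5) together with this conversion (the handler name follows the snippet in step 4; the rest is assumed):

    using System.Diagnostics;
    using Windows.Devices.Gpio;

    private readonly Stopwatch m_echoTimer = new Stopwatch();

    private void m_GpioEchoValueChanged(GpioPin sender, GpioPinValueChangedEventArgs args)
    {
        if (args.Edge == GpioPinEdge.RisingEdge)
        {
            m_echoTimer.Restart();               // echo went high: start timing
        }
        else
        {
            m_echoTimer.Stop();                  // echo went low: pulse width = round-trip time
            double microseconds = m_echoTimer.ElapsedTicks * 1000000.0 / Stopwatch.Frequency;
            double distanceCm = microseconds * 0.01715;   // half the speed of sound in cm/µs
            // use distanceCm here (e.g. marshal it to the UI thread and display it)
        }
    }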

Once you have the distance, you can use it to calculate a whole lot of other things, such as predicting storage capacity, finding the approximate velocity of a moving object or vehicle, etc.

 

For some time now I have been working on improving a cross-platform application built using Sencha Touch and Cordova. The application originally used the Sencha Touch 2.2 framework, and I managed to upgrade it to Sencha Touch 2.4.2. This improved some of the functions of the application; however, to my great surprise, it made the application's performance worse rather than better.

I realized it is because of the way some of the functions were originally written in the code; they are essentially hacks (not really hacks, but rather ways to work around limitations of the Sencha framework) and are sometimes not needed with the latest framework version.

There are native interactions that were previously written in the application because Sencha supported them there, and which have now shifted completely to Cordova. There are code snippets that exist only to support older Sencha syntax. And then there are loops that can be improved.

In this post, I am going to cover a hack that improved the performance of my application significantly.

JSON v/s Ext Encode/Decode

Ext.encode and Ext.decode are Sencha's versions of JSON.stringify and JSON.parse. They are necessary where the native methods are not available. However, with all the new browsers and compatible mobile web rendering engines, the native JSON methods are accessible, and on such platforms using Ext.encode and Ext.decode slows down the user's interaction with the application.

To overcome this, before loading any Ext JS code from the framework, we can declare the Ext object and set its native JSON flag to true. This allows the application to rely on the browser and the web rendering engine to make JSON encoding and decoding as fast as possible, rather than relying on the framework.

Here is the code snippet that should be added to the index.html file before any of the Ext JS framework files are loaded.

 

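A sketch of the idea (the flag name Ext.USE_NATIVE_JSON is what Ext.JSON checks; verify it against your framework version, and the script file name is a placeholder):

    <script type="text/javascript">
        // Declare Ext before the framework script loads and opt in to native JSON
        var Ext = Ext || {};
        Ext.USE_NATIVE_JSON = true;   // use the browser's JSON.parse / JSON.stringify
    </script>
    <script type="text/javascript" src="sencha-touch-debug.js"></script>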

 

In my application this made the performance visibly better.
