Contents tagged with Microsoft Azure

  • Using Device Codes to authenticate Bots with Azure AD

    Tags: Bot Framework, Microsoft Teams, npm, Microsoft Azure, Microsoft Graph, Azure AD

    I’ve been building chat-bots for a while now and I’m seeing more and more requests to build these bots for enterprises. For bots targeted at the enterprise, perhaps hosted in Microsoft Teams, one of the first requirements is that they should get data from internal systems, and most specifically from Office 365 through the Microsoft Graph. The problem here is that we need to authenticate and authorize the user, through Microsoft Azure AD, to be able to access these resources. A Microsoft Bot Framework bot does not inherit the credentials or security tickets from the application the bot is being invoked from, so we need to handle this ourselves. For instance, even though you have logged in to Microsoft Teams, or Skype for Business, or your intranet – your security token cannot (and should not) be passed to the bot.

    This is not mission impossible, and there are multiple ways of implementing it. For instance, if you’re building Bot Framework bots using .NET you can use AuthBot, and with node.js there’s the botauth module. There are also other (a bit weird and specialized) ways of doing this, such as using the backchannel.

    All of these are custom implementations that either send an already existing access token to the bot or use home-brewed magic number generators. But there’s a much simpler way of doing this – using the native, built-in features of the Azure Active Directory Authentication Library (ADAL), specifically the OAuth 2.0 Device Flow.

    In this post I will demonstrate how to create a bot from scratch and use the device flow to sign in and get data from Microsoft Graph. It will all be built using node.js and TypeScript – but the procedure is the same for any kind of environment.

    Creating the bot

    First of all we need to create a bot in the Bot Framework portal. Give the bot a name, handle, and description, and specify the messaging endpoint. You can use localhost for testing, but in the end you should have a publicly available URL to be able to use the bot in the different Bot channels. In this sample we need to make sure that the messaging endpoint ends with /api/messages. Then you need to create a Microsoft App ID and a password – just follow the wizard and take note of the ID and especially the password – you will only see it once. Once you’re done, save your bot.

    Configuring the App platform for the bot

    The bot created in the Bot Framework portal is essentially an Application in the Microsoft Application Registration Portal. In order to use this Application ID with Azure AD and Microsoft Graph, we need to log in to that portal, find our newly registered bot, and add a platform for it. In this case let’s add a Native Application. You don’t have to configure it in any way, it just needs to have a platform.

    Setting the platform for the App

    In this portal you can also add the delegated permissions for your bot, under Microsoft Graph Permissions. For the purpose of this demo we only need the User.Read permission.

    Let’s write some code

    The next step is to actually start writing some code. This will be done in node.js, using TypeScript and a set of node modules. The most important node modules used in this demo are:

    • webpack – bundles our TypeScript files
    • ts-loader – webpack plugin that transpiles TypeScript to JavaScript
    • express – node.js webserver for hosting our Bot end-point
    • adal-node – ADAL node.js implementation
    • @microsoft/microsoft-graph-client – a Microsoft Graph client
    • botbuilder – Bot Framework bot implementation

    All code in this sample is found in this GitHub repo: https://github.com/wictorwilen/device-code-bot. To use it, just clone the repo and run npm install. Then, to be able to run it locally or debug it, add a file called .env and in that file add your Application ID and password as follows:

    MICROSOFT_APP_ID=fa781336-3114-4aa2-932e-44fec5922cbd
    MICROSOFT_APP_PASSWORD=SDA6asds7aasdSDd7

    The hosting of the bot, using express, is defined in the /src/server.ts file. For this demo this file contains nothing specific, apart from starting the implementation of the bot – which is defined in /src/devicecodebot.ts.

    In the bot implementation you will find a constructor for the bot that creates two dialogs: the default dialog and a dialog for sign-ins. It also initializes the ADAL cache.

    constructor(connector: builder.ChatConnector) {
        this.Connector = connector;
        this.cache = new adal.MemoryCache();

        this.universalBot = new builder.UniversalBot(this.Connector);
        this.universalBot.dialog('/', this.defaultDialog);
        this.universalBot.dialog('/signin', this.signInDialog);
    }

    The implementation of the default dialog is very simple: it just checks whether we have already logged in. In this demo we never set that value, so the login flow will always be started by invoking the sign-in dialog.
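    The check itself can be sketched as follows – a hypothetical, library-free illustration of the routing logic (the repo’s actual dialog works against botbuilder’s session object):

```typescript
// Hypothetical sketch (not the repo's exact code): decide which dialog to
// route the user to, based on whether an access token is stored in userData.
function nextDialog(userData: { accessToken?: string }): string {
    // No stored token yet -> start the sign-in dialog.
    return userData.accessToken ? '/' : '/signin';
}
```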

    The sign-in dialog will create a new ADAL AuthenticationContext and then use that context to acquire a user code.

    var context = new AuthenticationContext('https://login.microsoftonline.com/common',
      null, this.cache);
    context.acquireUserCode('https://graph.microsoft.com',
      process.env.MICROSOFT_APP_ID, '',
      (err: any, response: adal.IUserCodeResponse) => {
        ...
      });

    The result of this operation (IUserCodeResponse) is an object with a set of values, of which we in this case should pay attention to:

    • userCode – the code to be used by the user for authentication
    • message – a friendly message containing the verification URL and the user code
    • verificationUrl – the URL where the end user should enter the user code (always aka.ms/devicelogin)

    We use this information to construct a Bot Builder Sign-In Card and send it back to the user:

    var dialog = new builder.SigninCard(session);
    dialog.text(response.message);
    dialog.button('Click here', response.verificationUrl);
    var msg = new builder.Message();
    msg.addAttachment(dialog);
    session.send(msg);

    This allows us to invoke the authorization flow for the bot from any Bot Framework channel. The end user should click the button, which opens a web browser (at aka.ms/devicelogin), and that page will ask for the user code. After the user has entered the user code, the user will be asked to authenticate and, if it is the first time, also to consent to the permissions asked for by the bot.

    In our code we then need to wait for this authorization, authentication and consent to happen. That is done as follows:

    context.acquireTokenWithDeviceCode('https://graph.microsoft.com',
      process.env.MICROSOFT_APP_ID, response,
      (err: any, tokenResponse: adal.IDeviceCodeTokenResponse) => {
        if (err) {
          session.send(DeviceCodeBot.createErrorMessage(err));
          session.beginDialog('/signin');
        } else {
          session.userData.accessToken = tokenResponse.accessToken;
          session.send(`Hello ${tokenResponse.givenName} ${tokenResponse.familyName}`);
          ...
        }
      });

    This operation can of course fail and we need to handle that; in this case we just send the error as a message and restart the sign-in flow. If it succeeds we get all the data we need to continue (IDeviceCodeTokenResponse), such as the access token, refresh token, user id, etc. In a real-world scenario you should of course store the refresh token, in case the access token expires. It is also here that we would tell our bot that the user is signed in and redirect subsequent dialogs to whatever we want the bot to do.
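    As a sketch of that real-world handling (my own illustration, not code from the repo), you could store both tokens together with their expiry, and renew via adal-node’s acquireTokenWithRefreshToken when the access token is about to expire. The expiry check could look like this:

```typescript
// Hypothetical shape for what we could persist after a successful device code flow.
interface StoredToken {
    accessToken: string;
    refreshToken: string;
    expiresOn: Date;  // taken from the token response
}

// Returns true when the access token should be renewed (with a small margin
// for clock skew); in that case adal-node's acquireTokenWithRefreshToken
// would be called with the stored refresh token.
function needsRefresh(token: StoredToken, now: Date = new Date()): boolean {
    const skewMs = 5 * 60 * 1000;  // renew five minutes early
    return now.getTime() >= token.expiresOn.getTime() - skewMs;
}
```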

    Now we can use this access token to grab some data from the Microsoft Graph. The following code takes a very simplistic approach, where we do not handle expired access tokens; we just grab the job title of the user and send it back:

    const graphClient = MicrosoftGraph.Client.init({
        authProvider: (done: any) => {
            done(null, session.userData.accessToken);
        }
    });
    graphClient
        .api('/me/jobTitle')
        .version('beta')
        .get((err: any, res: any) => {
            if (err) {
                session.send(DeviceCodeBot.createErrorMessage(err));
            } else {
                session.endDialog(`Oh, so you're a ${res.value}`);
            }
        });

    Run the application

    To run the application first we need to transpile and bundle it using webpack like this:

    npm run-script build

    Then we start the express server like this:

    npm run-script run
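    These two commands map to npm scripts in package.json. A minimal sketch of what that section might look like (the output path here is an assumption – the repo defines the exact scripts):

```json
{
  "scripts": {
    "build": "webpack",
    "run": "node dist/server.js"
  }
}
```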

    To test it locally we need to use the Bot Framework emulator. Download it, run it and configure it to run against http://localhost:3007/api/messages. Type anything in the emulator to start the sign-in experience.

    Testing the bot with the Bot Framework emulator

    As soon as you’ve written something, the Sign-In card will be displayed. When you click the button, a browser window opens and you will be asked to type the code. When you’ve done that you will be asked to sign in and consent. Shortly after that the bot will come alive again and type the user’s name and, if all works well, also the job title of the user.

    Consenting the device code bot

    If you decide to publish your bot (for instance to Azure; all the files needed to Git publish it to Azure are in the GitHub repo) you can also use the bot in other channels, for instance Skype:

    The device code bot in Skype

    Summary

    As you’ve now seen, it is very easy to create a simple and elegant sign-in flow for your bots, without sacrificing any security, all using standard features of ADAL and OAuth. This works nicely with any Azure AD account, with or without MFA.

  • Summing up the year of 2014 and embracing 2015

    Tags: Personal, SharePoint, Microsoft Azure, Office 365, SharePoint 2013

    The time has come for me to write my annual post summing up the year, as I’ve done now for eight years (2013, 2012, 2011, 2010, 2009, 2008, 2007 and 2006). It is always fun to look back at what happened the past 12 months. This past year has been a somewhat “in-between” year.

    We (me, my clients, colleagues etc.) are standing on the edge of something big, and the bridge over to the other side is really, really long. Some hesitate to cross the bridge, thinking it is too steep; some people are running across it in fear; some take it easy; and some cross halfway and then stall there, not knowing which direction to go. Microsoft has already crossed the bridge to the other side; they ran as fast as they could. But they dropped so many things on the way over, things that I and others need to pick up and fix, and very often we even have to remind Microsoft that they dropped anything at all!

    Confusing – yes, stressing – hell yea, annoying – yup, new opportunities – oh YEA, wanting to go back – nope!

    Writing

    I think I hit an all-time low in blog postings this year. Not that there has been little to write about, rather that I’ve had too little time. I have a bunch of posts in the works that have never been published, for various reasons.

    According to my telemetry this is what you peeps liked this year:

    I’m really glad that the last two of those posts ended up that high. Really liked working those scenarios out.

    Speaking

    I’ve been fortunate to be invited to a number of conferences the past year as well. The highlight of course is the SharePoint Conference 2014, where I had a total of three sessions. The most awesome experience from that conference was when people stayed in the room for an hour and a half after one of my sessions, just asking questions!

    See you in May at the new Microsoft Ignite conference. You can keep up to date on my past and future presentations on this page.

    MVP

    For the fifth time I was awarded the Microsoft MVP Award for my community contributions. Always an honor and passing the five year mark was a bit special.

    Predictions

    Each year I try to predict what is going to happen to us and our business in the future. Last year I talked a lot about SharePoint being a service (six years after the SharePoint Services announcement at PDC08), Azure dominating the cloud space and Microsoft focusing everything on services. SharePoint may not yet be dead, this product has more lives than a cat. Azure is still growing faster than I can keep up with and I like it! And the services piece – I think this is the most important of all my predictions last year. Microsoft is focusing on owning the services and the data – the device, product etc. is not the top priority. Take a look at the Microsoft Band – an awesome device, but the service behind it is what makes the big difference; no other vendor is even close to competing in that space.

    So, 2015, what will happen? I’ll stick to my services, services, services prediction. 2015 is all about the services! I’ll leave it at that. If you don’t understand how the services will change our business, you’d better look for a career change.

    What’s next?

    I have to admit that 2014 was not one of my favorite years, for multiple reasons. I’ve been pretty tired of this whole “SharePoint & Office 365” situation and it has taken me some deep reflection and analysis to get my inspiration back. But 2015 will be a really interesting year. To keep you on the hook a little bit longer, head back to this blog on Friday!

    Happy New Year!

    I wish all of you a Happy New Year and I hope that your 2015 will be an awesome ride!

  • Speaking at Share-The-Point Southeast Asia 2014

    Tags: Conferences, SharePoint, Microsoft Azure

    I’m so excited to once again be going to Singapore to speak at Share-The-Point Southeast Asia 2014, held November 25-26, 2014. It is one of my favorite conferences and this will be my third time in the awesome country and city of Singapore! Everything is just great about it: the people, the speakers, the attendees, the city, the food – you name it!

    This year I will have two sessions:

    • Using Microsoft Azure for your SharePoint and Office Apps
      One of my personal favorite sessions, scenario based and packed with demos showing you tips and tricks, awesome Azure features and lots of code.
    • Building out your SharePoint Infrastructure using Azure
      Another really interesting session where I’ll walk you through the pros and cons, the do’s and don’ts of hosting your SharePoint infrastructure in Azure.


    If you are planning to be near Singapore during those days, make sure to get your conference pass as soon as possible! OR, if you have trouble convincing your boss about what you and your company will miss if you bail out, leave a comment (with your e-mail) and the first three persons will get a free pass (full attendance to the two-day event, including catering and access to the exhibition area and all sessions) – what are you waiting for?

  • Microsoft Azure IAAS and SharePoint 2013 tips and tricks

    Tags: Microsoft Azure, IAAS, SharePoint 2013, SQL Server

    After doing the Microsoft Cloud Show interview with Andrew Connell I thought it might be a good idea to write down some of my tips and tricks for running SharePoint 2013 on Azure IAAS. Some of the things in this post are discussed in more depth in the interview, and some things we just didn’t have time to talk about (or I forgot). I really recommend listening to the podcast as well, and not just reading this post.

    Disks, disks and disks

    As mentioned on the Microsoft Cloud Show interview more than once, one of the first things you should look into is your disk configuration for your Azure VM’s.

    Use a lot of disks

    One of the first things you must look into is the performance of disks. As you should be aware, SharePoint and SQL Server require fast disks to operate with decent performance. When running an on-premises installation (virtual or physical) you have almost full control over disk performance – you can choose from fast spinning disks to SSDs to different RAID configurations. But you cannot do that in Microsoft Azure. The virtual disks in Azure use blob storage for storing the VHD files, as blobs. Disk performance is often measured in IOPS (input/output operations per second), and with the VHD files in Azure blob storage we are limited to 500 IOPS per disk. Also worth noting is that if you run the VM using the Basic tier offering, the limit is 300 IOPS per disk.

    TechNet contains a couple of articles about calculating IOPS and the need for IOPS. For instance, in the article called “Storage and SQL Server capacity planning and configuration (SharePoint Server 2013)” we can see that a content database requires up to 0.5 IOPS per GB. If we translate that into an Azure VHD, we see that we can only have about 5 content databases per disk, assuming that they grow to 200 GB each (a total of 1,000 GB). The Search Service Application databases, specifically the crawl and link databases, have high IOPS requirements, so they should be on dedicated disks, and so on. This is exactly the same as for any SharePoint installation – in the cloud or on your own metal. But the hard limit of 500 IOPS per VHD makes it even more obvious, given that we do not have a choice.
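    The arithmetic can be spelled out like this (a sketch using the numbers above: 0.5 IOPS per GB from the TechNet guidance, and the 500 IOPS Standard tier limit):

```typescript
// Worked version of the content database math above.
const IOPS_PER_GB = 0.5;   // TechNet guidance for content databases
const IOPS_PER_VHD = 500;  // Standard tier limit (300 for Basic tier)
const DB_SIZE_GB = 200;    // assumed maximum content database size

const iopsPerDb = DB_SIZE_GB * IOPS_PER_GB;              // 100 IOPS per database
const dbsPerDisk = Math.floor(IOPS_PER_VHD / iopsPerDb); // 5 databases per VHD
```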

    So, when deciding which machine you are going to use for SharePoint and SQL Server, choose one with a lot of disks. For instance, choose A4/A7 to get 16 disks or A3/A6 to get 8 disks.

    Read and Write caching

    A very important thing to configure for the Azure IAAS disks is read and write caching; it is turned off by default on data disks – leave it like that!

    SQL Server Filegroups and multiple data files

    SQL Server has a concept called filegroups, which can be used to increase database performance. For a database you can add multiple data files to the primary filegroup and have them reside on different disks. (Thanks to Trevor Seward for the tip.)

    Disk striping

    The maximum size of a VHD in Azure is 1 TB (1,000 GB); the limitation is in the Azure blob storage. If you need larger volumes/disks in your Azure VMs, you need to stripe multiple disks into a single volume. Just add the VHDs and then use the Disk Manager to configure the disk striping.

    Disk size

    Create large disk files! You only pay for the space you actually use: if you store 1 MB on a 500 GB disk, you pay for 1 MB. Reconfiguring disks later costs you more!

    SQL specific stuff

    When running SQL Server on an Azure VM, or on any kind of virtual or physical hardware, do not forget to format the disk using a 64K allocation unit size and do not forget to give the SQL Server service account the “Perform Volume Maintenance Tasks” right. These two things make a huge difference!

    Examples (and just examples nothing else!)

    For a single SQL Server this could look something like this:

    Disk/Volume | Purpose
    1 | OS (default)
    2 | Temporary (default)
    3 | SQL Server binaries
    4 | Temp DB
    5 | Temp DB log files
    6 | Default SQL Server DBs
    7 | Default SQL Server DB log files
    8 | Search DBs
    9 | Search DB log files
    10 | Content DBs 1
    11 | Content DBs 1 log files
    12 | Content DBs 2
    13 | Content DBs 2 log files
    14 | Service Application DBs
    15 | Service Application DB log files
    16 | Backup 1

    And for a SharePoint machine it could look like this:

    Disk/Volume | Purpose
    1 | OS (default)
    2 | Temporary (default)
    3 | SharePoint binaries
    4 | Blob cache
    5 | Log files
    6 | Index files
    7 | ASP.NET Temporary files
    8 | Tools
    9 | Visual Studio (dev machine)
    10 | Project files (dev machine)

    Note: do NOT store anything of value on the Temporary (default) disk; it will be wiped whenever Microsoft decides to. I would not even recommend storing SharePoint log files there (which I’ve heard recommended), since you might eventually want to go back in time and search the logs.

    Virtual Machine size

    Choosing the virtual machine size is one of the tricky questions: do you need RAM, CPU, disks etc.? “Fortunately” we do not have that much of a choice in Azure IAAS (compared to other vendors). We can choose from A0 to A9 – all the details are here: “Virtual Machines Pricing Details”.

    If we’re talking about SharePoint and SQL Server we’re even further limited – A3 to A7 are the ones with sufficient RAM/CPU for that scenario, but we can almost exclude A5, which only has two CPU cores and only supports 4 data disks. There is no “perfect machine” here; it depends. With regards to running SharePoint, you might want a lot of cores for search machines etc. A4, A6 and A7 are good candidates for SharePoint in my opinion. A3 might do for a simple dev machine.

    Plan for High Availability

    Let’s assume we’re setting up a SharePoint + SQL production environment in Azure; then you need to start thinking about HA (High Availability). Apart from the usual ways to do it (see my SPC 2014 presentation on the topic), there are a couple of other things that we must think of in Azure IAAS.

    Location

    Location should be fairly obvious. Do you want your stuff in North America, Asia, Northern Europe etc.? It doesn’t have much to do with HA, but more with latencies and costs! Yes, if you want to be cost efficient, check out the pricing in the different locations; there’s quite a difference.

    Affinity Groups

    Affinity Groups are a very important construct in Azure. Affinity Groups allow you to make sure that your cloud services, storage, virtual networks etc. are placed “together”. Remember, the data centers might be several football fields in size and you don’t want your machines scattered across all that space; that would only cause unwanted latencies. An Affinity Group makes sure that all your stuff in Azure is as close together as possible. This also reduces costs, since we don’t get any cross-datacenter communication. An Affinity Group exists in one Location.

    Cloud Services

    “Cloud Services” are used in Azure to “group” instances together. Cloud Services have features such as endpoints, load balancing and auto-scale. For SharePoint and SQL Server, though, don’t even think about auto-scale.

    It is very important to note that if you are using SQL Server Always-On Availability Groups, this SQL Server setup must be in its own Cloud Service, separate from the clients accessing it.

    Availability Sets

    Availability Sets are a logical construct that allows you to specify a group of instances/roles. Whenever Microsoft must reboot one of your VMs, for maintenance or other purposes, they will respect the Availability Sets and make sure that only one of the machines within an Availability Set is down at a time. For instance, grouping all SharePoint web servers into one Availability Set makes sense.

    An example

    Here is an example that I’ve shown when presenting this. It gives you an idea of how an Azure IAAS SharePoint and SQL Server infrastructure might look – once again, just an example!

    SharePoint and SQL Azure IAAS example

    SQL Server optimizations

    I will not dig too deep into tips and tricks with SQL Server specifically but instead urge you to read the really good article “Performance Best Practices for SQL Server in Azure Virtual Machines”. That article gives you all the details and a nifty check list.

    Short notes

    Here are some other small things for you to remember:

    • Office Web Apps 2013/Office Online/WAC are NOT supported on Azure IAAS at the moment – hybrid with a site-to-site VPN is the way to go if you want WAC.
    • Always set your machine to High Performance mode.
    • If you’re using the public IP of the Cloud Service, remember to always have a machine running in that Cloud Service, otherwise the IP will change. Or use the Azure static DNS offering.
    • For SharePoint development machines and SQL Server – do not use the Azure SQL Server images; instead use your MSDN SQL Server Developer Edition. If you don’t, you will be billed for the SQL Server resource usage, and that is even more expensive than running the actual VM.

    Summary

    That was a mouthful of tips and tricks and I hope you got something out of it. Of course there are plenty more; don’t be shy – use the comments for your best tips and tricks. I might update the post with your best tips or other things that I find. Also note that these are in no way the official tips and tricks from Microsoft and the Azure team, just my experience from working with it.

  • Interviewed on the Microsoft Cloud Show about Azure IAAS

    Tags: Microsoft Azure, SharePoint 2013, Interviews

    A couple of weeks back I was interviewed by Andrew Connell for the Microsoft Cloud Show. The Microsoft Cloud Show is an (almost) weekly podcast where Andrew (AC) and his wingman Chris Johnson (CJ) discuss everything related to the Microsoft cloud offerings, including comparisons with other cloud vendors. If you’re not already subscribing and listening to the show, I urge you to do so as soon as possible!

    Microsoft Cloud Show

    AC and I sat down for almost an hour discussing Microsoft Azure IAAS and specifically running SharePoint 2013 on that service. We had a great talk, as usual with AC, and I think we covered a lot of the issues, gotchas and things to think about when building a SharePoint 2013 infrastructure on Azure IAAS.

    You can download and listen to Episode 40 of the Microsoft Cloud Show here. Enjoy!

  • Announcing Azure Commander

    Tags: Windows Azure, Windows Phone 8, Windows 8, Microsoft Azure

    For no one out there, in the SharePoint space or any other space, has Microsoft Azure gone unnoticed. Microsoft Azure is a really great service, or rather set of services, that every (Microsoft or SharePoint) developer or IT pro should use and embrace. Personally I’ve been using Azure since the dawn of the service, and I’ve been using it more and more. I use it to host web sites, SharePoint and Office Apps, virtual machines, Access Control and lots of other things.

    Specifically, I’ve lately been using it for work more and more, and my customers and company are seeing huge benefits from using Microsoft Azure. We’re using it to host our development machines and demo environments, to host full SharePoint production environments (staging, CI etc.) and to host SharePoint and Office Apps.

    All these services, instances and configurations must be managed somehow. For that we have the Azure portal (new and old version) and PowerShell. Neither is optimal in my opinion, and I needed something more agile: something I could use on the run, something to start, stop or reset my environments before a meeting, on my way home etc. – so I can optimize my resource/cost usage for the services and save time.

    Introducing Azure Commander

    For all these purposes and reasons I have created a brand new Universal App for Windows 8.1 and Windows Phone 8.1 called – Azure Commander. Using Azure Commander I never forget to shut down a Virtual Machine since I can do it while commuting home, I can easily restart any of my web sites, when something hits the fan and more.

    Azure Commander is, in its initial release, focused on maintenance tasks for Virtual Machines (IAAS), Web/Worker Roles (PAAS) and Azure Web Sites. But you will see more features being added continuously! I personally like the option of easily firing up the app, choosing a virtual machine and starting an instant RDP session to it – from my laptop or Surface 2. For the full feature list, head over to the Azure Commander web site.

    The interface on Windows 8.1

    AZ-Screen-W8-1.1.1.2

    The interface on Windows Phone 8.1

    AZ-Screen-WP8-1.1.1.2

    The app has been available in both the Windows and Windows Phone stores since a couple of hours back. You can get it for only $3.99 – which you save in an instant the first time the app helps you remember to turn off a virtual machine, web site or service over the weekend. A good thing is that this is a Universal Windows App – which means that if you buy it on either platform you get it for free on the other one.

    Windows 8.1

    Download it from the Windows Store!

    Windows Phone 8.1

    Download it from the Windows Phone Store!

    Summary

    I hope you will enjoy the app as much as I do (both using it and building it). If you like it, please review it and follow Azure Commander on Twitter (@AzureCommander) or like it on Facebook.

About Wictor...

Wictor Wilén is the Nordic Digital Workplace Lead working at Avanade. Wictor has achieved the Microsoft Certified Architect (MCA) - SharePoint 2010, Microsoft Certified Solutions Master (MCSM) - SharePoint and Microsoft Certified Master (MCM) - SharePoint 2010 certifications. He has also been awarded Microsoft Most Valuable Professional (MVP) for seven consecutive years.
