8 May 2018

//build/ 2018 – Day 2

Category:Azure

Today I am attending Day 2 of //build/ 2018 and plan to give a quick summary of each session I attend. Let’s see if I can keep it manageable. I’ve designed my day around Azure, where amazing things are happening, enabling scenarios I could never have imagined working with until quite recently.

DNA Storage

I just had to start here. There is a team of Microsoft Research and University of Washington researchers working to store data in DNA. Yes, maybe we’ve read an article in Future magazine or something, but this is real.

My favorite quote from the session was, “We could put the entire contents of the Internet into a shoebox.” And it’s not vaporware. They are working on increasing read/writes to 200 megs of data instead of the 100K or so they can manage today. I don’t know the read/write times or if the DNA is stored in a petri dish-grown ear, but this is truly amazing.

Architecting and Building Hybrid Cloud Apps for Azure and AzureStack

It was very clear from the show of hands that few people have used AzureStack yet. When I first heard about it, it sounded like simply a solution for running Azure PaaS and IaaS services in your own datacenter. While that is true, the industry opportunities are very compelling, especially if we consider Azure Cloud and AzureStack to be intermittently connected partners.

Some scenarios:

  1. Keep data local in AzureStack when aboard a moving vehicle like an airplane, car, or freighter. Securely sync the data to the proper region in Azure Cloud when connectivity becomes possible again.
  2. A ship’s data is brought current in its onboard AzureStack when it pulls into port, providing more data to the applications aboard.
  3. An oil rig runs its own stack. The rig is equipped with sensors that are monitored for failure probability and predictive maintenance scenarios.

Beyond the myriad of scenarios we might envision, developers get to write for the same cloud application stack. The private cloud becomes no different than the public cloud. In fact, that is the first of the three pillars of hybrid stacks mentioned in the session:

  1. Consistent development
  2. Azure services available on premises
  3. Integrated delivery experience

There is a lot more to say, but I have more to cover. For more on AzureStack, check out the paper “AzureStack: An extension of Azure.”

5 Azure services every developer should know

A quick talk on this topic held in the expo area proposed a pretty good idea: here are 5 Azure services you should know about and try on your own.

(@AndrewBrianHall & @paulyuki99)

The goal of this session was to give developers some places to start working with Azure in scenarios that will draw their interest. No practical experience required yet.

  1. Azure App Service – Lets us put together a full application in Azure. It can host your web app, APIs, etc., and it’s deeply integrated into Visual Studio. You get fully managed servers with automatic update patching. App Service applications can be deployed directly from Visual Studio or from a CI/CD pipeline (preferred). (A minimal sketch follows this list.)


  2. Azure Functions provide event-driven execution of code in response to events from inside or outside Azure. You only pay for execution time on a Function App. These are serverless, meaning there are no VMs or clusters you must manage, yet they can auto-scale as needed. The example was to upload an image to Blob storage, which triggered a Function; the Function wrote the image back to Blob storage, this time with a watermark (see the sketch after this list).

    By the way, they are like turtles, asynchronous all the way down.


  3. The typical Azure storage options are here, too. These are automatically replicated, of course.
    1. Blobs
    2. Tables
    3. Queues
    4. Files


  4. Azure Cosmos DB enables document storage through many interfaces. That means it speaks MongoDB, Gremlin (its graph API), Azure Storage Tables, and of course, SQL. We’re encouraged here not just to store JSON objects, but also things like POJOs and POCOs.

    CRUD is async, and reads and writes are darn simple (see the sketch after this list).

    Public Preview: try using Azure Storage Explorer to see and query what’s in Cosmos DB.


  5. Put some time into learning logs and metrics. These plus Application Insights give us the ability to see clearly in our logs the data we need to debug our apps.

    Logging is a much bigger topic, but at least learn about App Insights (a starter sketch follows this list).
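
To ground item 1, here is about the smallest web API that App Service could host. Flask and the route are my choice for illustration, not anything from the session:

```python
# app.py – a minimal API that App Service can host as-is
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/ping")
def ping():
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run()  # local dev only; App Service runs it behind its own web server
```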
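
To make the Function example from item 2 concrete, here is a minimal sketch of the watermark flow using the Python Functions programming model. The container names, the bindings presumed in function.json, and the use of Pillow for the watermark are all my assumptions, not the presenters’ code:

```python
# __init__.py for a blob-triggered Function.
# Assumes function.json binds "inblob" to images/{name} (trigger)
# and "outblob" to images-watermarked/{name} (output).
import io
import logging

import azure.functions as func
from PIL import Image, ImageDraw  # Pillow, assumed installed

def main(inblob: func.InputStream, outblob: func.Out[bytes]) -> None:
    # Fires when an image lands in Blob storage
    img = Image.open(io.BytesIO(inblob.read()))
    ImageDraw.Draw(img).text((10, 10), "WATERMARK")
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    outblob.set(buf.getvalue())  # write the stamped copy back to Blob storage
    logging.info("Watermarked %s", inblob.name)
```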
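
For item 4, here is roughly how simple those Cosmos DB reads and writes are with the azure-cosmos Python package; the endpoint, key, and database/container names below are placeholders I made up:

```python
from azure.cosmos import CosmosClient

# Placeholders: point these at your own Cosmos DB account
client = CosmosClient("https://<account>.documents.azure.com", credential="<key>")
container = client.get_database_client("appdb").get_container_client("items")

# Create a document, then query it back with plain SQL
container.create_item({"id": "1", "category": "demo", "name": "hello"})
for doc in container.query_items(
        query="SELECT * FROM c WHERE c.category = 'demo'",
        enable_cross_partition_query=True):
    print(doc["name"])
```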
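
And for item 5, a first taste of Application Insights from Python via the applicationinsights package; the instrumentation key and event names are placeholders:

```python
from applicationinsights import TelemetryClient

tc = TelemetryClient("<instrumentation-key>")  # placeholder key
tc.track_trace("Order pipeline started")               # shows up as a trace log
tc.track_event("OrderProcessed", {"orderId": "1234"})  # custom event + properties
tc.flush()  # push buffered telemetry to App Insights
```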


Azure Storage – Foundation for Building Secure, Scalable Cloud Applications

There are several types of storage in Azure, from spinning disks and SSDs on VMs to PaaS offerings like Cosmos DB and SQL Azure. It turns out that most Azure storage services are built on top of Blob storage, so that’s where most of this session landed.

Immutable Storage Releases in May

This is a good one; our financial services clients will be particularly interested, I think. Wouldn’t it be great if I could just use Blob storage for Hot, Cool, and Archive data? That would mean I wouldn’t need to move it around, and things would just become simpler. That’s shipping. The pre-release demo showed a simple toggle to choose between the tiers for the Blob storage being used, AND it works at the container or storage account level.

Further, what if I could define my retention policies and just have them run automatically, trimming data on a policy I set? NICE!

Finally, what if we could do all this through the same SDK/API? Yup, that’s in there, too.
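
As a sketch of the SDK side, using the current azure-storage-blob Python package (the connection string, container, and blob names are placeholders):

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client("reports", "2018/q1.csv")
blob.upload_blob(b"...report bytes...", overwrite=True)  # lands in Hot by default

# Re-tier the same blob in place: no copying it to another service
blob.set_standard_blob_tier("Cool")
blob.set_standard_blob_tier("Archive")
```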

Pillars of Azure Storage

Even though Blob storage kind of “leads the way”, all storage technologies in Azure must adhere to these pillars:

  1. Durable / Available
  2. Secure / Compliant
  3. Manageable / Cost efficient
  4. Scalable / Performant
  5. Open / Interoperable

Replication

To deal with the durable and available part, here are some factoids:

  1. LRS – Locally redundant storage – 3 copies of all your data are kept, so there are always at least 2 fault-tolerant instances.
  2. GRS – Geo-redundant storage – 3 copies are kept in each region, so each region is fault tolerant on its own.
  3. RA-GRS – Read-access GRS – the copy in the secondary region can also be read directly.
  4. ZRS – Zone-redundant storage – synchronous copies across 3 availability zones.
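
Which of these you get is a deployment-time choice: the storage account’s SKU. A hedged sketch with the azure-mgmt-storage Python package (subscription, resource group, and account names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The SKU name picks the replication mode: Standard_LRS, Standard_GRS,
# Standard_RAGRS, or Standard_ZRS
poller = client.storage_accounts.begin_create(
    "my-resource-group", "mystorageacct",
    StorageAccountCreateParameters(
        sku=Sku(name="Standard_GRS"), kind="StorageV2", location="westus2"))
account = poller.result()  # waits for provisioning to finish
```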


You can read more about Azure Storage replication here.


Encryption & Security

RBAC storage authorization will premiere in June, meaning we won’t need keys or SAS tokens on every storage connection. Woot!


Admins can grant new, narrow permissions to specific users, even giving read and/or write access to a given user, which is perfect for service accounts.


This will also be available from the SDKs using a simple auth token.
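
A sketch of what that key-less world looks like in the Python SDKs (azure-identity plus azure-storage-blob); the account URL is a placeholder, and the signed-in identity would need an RBAC role such as Storage Blob Data Reader:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# No account key, no SAS token: the credential fetches an Azure AD token,
# and RBAC on the storage account decides what this identity may do.
service = BlobServiceClient(
    "https://mystorageacct.blob.core.windows.net",  # placeholder account URL
    credential=DefaultAzureCredential())

for container in service.list_containers():
    print(container.name)
```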

Lock down access to Azure with Identity

Arturo Lucatero (@ArLucaID) is a Program Manager in Active Directory and put on a quick clinic for attendees interested in identity and security within Azure. The demo gods weren’t in love with him, but some good points came through loud and clear.

Microsoft is committed to interoperability. He showed logging into a Linux machine using Active Directory, which isn’t a brand-new thing, but impressive nevertheless. Then he showed SSH login, both from the standpoint of a human being and from the view of a bot. Very cool stuff there.

A note of advice he offered is to establish minimum criteria for signing into a given resource (the principle of least privilege). To do this, define a Policy, which maps the identity to an Assignment (the who + the what app). In a very cool twist, you can now layer controls onto the Access Control you’ve created, with conditions like these (a rough sketch follows the list):

  1. Conditions with 2FA! Sign-in can be limited by conditions like, “is Bob working outside the network?” and, if he is, two-factor authentication is used to validate him into the application. It took little more than a drop-down to enable 2FA using Bob’s phone. Awesome.
  2. The rights on the given resource for the account.
  3. Timing can also be controlled: an account may have access to a resource for a given time window, either now or scheduled for the future. This can be useful for nightly jobs and similar situations.
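
As a rough sketch of the kind of policy being demoed, here is how one might define it programmatically through the Microsoft Graph conditional access API; the schema is to the best of my understanding, and the IDs and token are placeholders:

```python
import requests

policy = {
    "displayName": "Require MFA outside the network",
    "state": "enabled",
    "conditions": {
        "users": {"includeUsers": ["<bob-object-id>"]},         # the who
        "applications": {"includeApplications": ["<app-id>"]},  # the what app
        "clientAppTypes": ["all"],
        "locations": {"includeLocations": ["All"],
                      "excludeLocations": ["AllTrusted"]},      # outside the network
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": "Bearer <graph-token>"},
    json=policy)
resp.raise_for_status()
```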

All this enables just-in-time and just-enough access. Watch this space, as there is more to come from this team soon.

Conclusion

Today was a big data day for me, pardon the pun. It was well worth it. I now understand several storage products to a much deeper level than I did before.

You can too: choose a link from this article and dive in!