These are challenging times for the whole world, but to stay true to ourselves we are preparing the release of the 4th edition of the Inside Azure Management book. We expect it to be ready no later than the 15th of May, but as you can understand, many of us are busy supporting our customers from home. As always, the book will be a free download, but of course you will also be able to purchase it via Amazon if you want to. Return to this blog post in a few days to check for the Amazon link. We have worked hard to update the content to the latest changes in Azure and to add some new scenarios.
Naming resources has been part of IT for quite some time. In the early days, IT personnel used superhero names, constellation names, and so on to name their servers. That was when the number of servers was equal to or fewer than the fingers on your hands. Over the years the number of servers went up, which required a naming convention. Another driver for naming conventions was the different role each server had. With the coming of the cloud, even more resources are being generated, yet strangely we haven't changed our guidelines for naming resources much compared to how we did it on-premises. But maybe it is time to change them?
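To make the contrast with superhero names concrete, a role-based convention can be sketched as a simple formatter. The pattern and abbreviations below are purely illustrative, not an official standard:

```python
def resource_name(resource_type: str, workload: str,
                  environment: str, region: str, instance: int) -> str:
    """Compose a resource name from its role and context instead of a codename.

    All segment values here (vm, web, prod, weu) are made-up examples.
    """
    return f"{resource_type}-{workload}-{environment}-{region}-{instance:02d}"

# e.g. the first production web VM in West Europe
print(resource_name("vm", "web", "prod", "weu", 1))  # vm-web-prod-weu-01
```

The point is that the name itself tells you what the resource is and where it lives, which scales far beyond the days when you could remember every server by heart.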
I promised that I would write the last part of this series, and better late than never. After the December holidays I was occupied with some community work that will hopefully see the light of day in the coming months. Due to those community duties I was not able to write the last part sooner.
In this last part we will cover the Azure Alerts common schema. I will try not to repeat things that are already in the official documentation, but I do want to mention a few important points. If you haven't checked the documentation, please do so before reading the rest of this blog post.
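As a taste of why the common schema matters: every alert type delivers the same `data.essentials` section, so one handler can serve metric, log, and activity log alerts alike. The payload below is a trimmed, made-up example following the documented layout:

```python
import json

# Made-up payload shaped like the common alert schema; every alert type
# carries the same data.essentials section regardless of its source.
payload = json.loads("""
{
  "schemaId": "azureMonitorCommonAlertSchema",
  "data": {
    "essentials": {
      "alertRule": "cpu-over-90",
      "severity": "Sev2",
      "monitorCondition": "Fired",
      "signalType": "Metric"
    },
    "alertContext": {}
  }
}
""")

def handle_alert(alert: dict) -> str:
    # One code path for all alert types: read only the shared essentials.
    essentials = alert["data"]["essentials"]
    return f'{essentials["alertRule"]} is {essentials["monitorCondition"]} ({essentials["severity"]})'

print(handle_alert(payload))  # cpu-over-90 is Fired (Sev2)
```

Only `alertContext` differs per alert type, which is exactly where type-specific logic would branch if you need it.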
With the recent capability to set the retention period for Log Analytics data per table, a lot of new possibilities for managing and retaining your data open up. A common scenario is that you have a lot of performance data, logged every minute or even every 10 seconds. You need that data at such short intervals in your Log Analytics workspace only for the past month or so, but you do not need such granularity for older data. At the same time, it is good to keep some summarization (aggregation) of that data for a longer period for compliance, analysis, and so on, but there is a cost associated with retaining a lot of data for a long time. By combining serverless components with the new per-table retention capability, you can achieve this and save cost. In this blog I will show you how with a simple example.
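The summarization step itself is just an aggregation. In practice you would run a KQL `summarize` query from a serverless function, but the idea can be sketched in plain Python with made-up samples:

```python
from collections import defaultdict
from datetime import datetime

# Made-up per-minute CPU samples: (timestamp, percent)
samples = [
    (datetime(2020, 4, 1, 10, 0), 40.0),
    (datetime(2020, 4, 1, 10, 1), 60.0),
    (datetime(2020, 4, 1, 11, 0), 80.0),
    (datetime(2020, 4, 1, 11, 30), 100.0),
]

def summarize_hourly(rows):
    """Collapse fine-grained samples into one average per hour."""
    buckets = defaultdict(list)
    for ts, value in rows:
        # Truncate each timestamp to the start of its hour.
        buckets[ts.replace(minute=0, second=0, microsecond=0)].append(value)
    return {hour: sum(vals) / len(vals) for hour, vals in buckets.items()}

for hour, avg in sorted(summarize_hourly(samples).items()):
    print(hour, avg)  # two hourly rows instead of four raw samples
```

The hourly rows land in a separate table with a long retention period, while the raw table keeps its short one.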
Right before Ignite, Microsoft released a new SKU for Log Analytics. With that SKU the usage model does not change; rather, you get a discount for committing to a certain usage in your Log Analytics workspace. To me it is similar to reserved instances, but on a monthly basis. This SKU is also relevant to Azure Sentinel, as it is the recommended SKU when you have onboarded a Log Analytics workspace to Azure Sentinel.
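The effect of such a commitment discount is easy to model. The prices and billing rules below are purely hypothetical, not Microsoft's actual rates, but they show why committing pays off once your ingestion is steady:

```python
def monthly_cost(ingest_gb_per_day: float, payg_price: float,
                 tier_gb_per_day: float, tier_price: float):
    """Compare pay-as-you-go against a hypothetical commitment tier over 30 days.

    In this toy model, usage above the committed amount is billed at the
    pay-as-you-go rate on top of the flat tier price.
    """
    monthly_gb = ingest_gb_per_day * 30
    payg = monthly_gb * payg_price
    overage_gb = max(0.0, monthly_gb - tier_gb_per_day * 30)
    committed = tier_price + overage_gb * payg_price
    return payg, committed

# Illustrative numbers only: 120 GB/day ingested, $2.30/GB pay-as-you-go,
# a 100 GB/day tier at a $5000 flat monthly price.
payg, committed = monthly_cost(ingest_gb_per_day=120, payg_price=2.30,
                               tier_gb_per_day=100, tier_price=5000)
print(payg, committed)  # 8280.0 6380.0
```

With these made-up numbers the commitment tier saves roughly $1900 a month even with 20 GB/day of overage, which is the reserved-instances-like trade-off described above.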