Windows Azure IaaS White Paper


















A Web role instance should be stateless: any client-specific state should be written to Windows Azure storage or passed back to the client after each request. A Worker role instance, by contrast, doesn't accept incoming requests from the outside world. Instead, a Worker role instance initiates its own requests for input. It can read messages from a queue, for instance, as described later, and it can open connections with the outside world. Given this more self-directed nature, Worker role instances can be viewed as akin to a batch job or a Windows service. A developer can use only Web role instances, only Worker role instances, or a combination of the two to create a Windows Azure application.

If an application's load grows, its owner can request more running instances; if the load decreases, he can reduce the number of running instances. Each VM also contains a Windows Azure agent. This agent exposes a relatively simple API that lets an instance interact with the Windows Azure fabric. For example, an instance can use the agent to write to a Windows Azure-maintained log, send alerts to its owner via the Windows Azure fabric, and do a few more things. To create Windows Azure applications, a developer uses the same languages and tools as for any Windows application.

She might write a Web role using ASP.NET, for instance. Similarly, she might create a Worker role in C# or another language that supports the .NET Framework. Developers aren't limited to Visual Studio, either: a developer who has installed PHP, for example, might choose to use another tool to write applications. Applications also need to store data, often in several different forms. Meeting this need is the goal of the Windows Azure Storage service, described next. Accordingly, the Windows Azure Storage service provides several options. Figure 4: Windows Azure Storage provides blobs, tables, and queues.

The simplest way to store data in Windows Azure storage is to use blobs. Blobs can be big—up to 50 gigabytes each—and they can also have associated metadata, such as information about where a JPEG photograph was taken or who the singer is for an MP3 file. To let applications work with data in a more fine-grained way, Windows Azure storage provides tables. These aren't relational tables, however: data in them is accessed using the conventions defined by ADO.NET Data Services rather than SQL. The reason for this apparently idiosyncratic approach is that it allows scale-out storage—scaling by spreading data across many machines—much more effectively than would a standard relational database.

In fact, a single Windows Azure table can contain billions of entities holding terabytes of data. Blobs and tables are both focused on storing and accessing data. The third option in Windows Azure storage, queues, has a quite different purpose. A primary function of queues is to provide a way for Web role instances to communicate with Worker role instances.

For example, a user might submit a request to perform some compute-intensive task via a Web page implemented by a Windows Azure Web role. The Web role instance that receives this request can write a message into a queue describing the work to be done, and a Worker role instance waiting on this queue can then read the message and carry out the task, as sketched below.
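Windows Azure queues live on today as Azure Storage queues, so this pattern can be sketched with the current Azure CLI; the account name, key, queue name, and message content below are illustrative placeholders, not part of the original paper:

    # Web role side: create the queue once, then enqueue a work item
    az storage queue create --name work-items --account-name myaccount --account-key $STORAGE_KEY
    az storage message put --queue-name work-items --content "task-42" \
        --account-name myaccount --account-key $STORAGE_KEY

    # Worker role side: poll the queue and read the next work item
    az storage message get --queue-name work-items --account-name myaccount --account-key $STORAGE_KEY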

Any results can be returned via another queue or handled in some other way. Regardless of how data is stored—in blobs, tables, or queues—all information held in Windows Azure storage is replicated three times.

Windows Azure storage can be accessed by a Windows Azure application, by an application running on-premises within some organization, or by an application running at a hoster. In all of these cases, all three Windows Azure storage styles use the conventions of REST to identify and expose data, as Figure 4 suggests. Windows Azure applications and data physically reside in a Microsoft data center; within that data center, the set of machines dedicated to Windows Azure is organized into a fabric.

Figure 5 shows how this looks. Figure 5: The fabric controller interacts with Windows Azure applications via the fabric agent. As the figure shows, the Windows Azure Fabric consists of a large group of machines, all of which are managed by software called the fabric controller. The fabric controller is replicated across a group of five to seven machines, and it owns all of the resources in the fabric: computers, switches, load balancers, and more.

This broad knowledge lets the fabric controller do many useful things. It manages operating systems, taking care of things like patching the version of Windows Server that runs in Windows Azure VMs. It also decides where new applications should run, choosing physical servers to optimize hardware utilization.

To do this, the fabric controller depends on a configuration file that is uploaded with each Windows Azure application. This file provides an XML-based description of what the application needs: how many Web role instances, how many Worker role instances, and more. When the fabric controller receives this new application, it uses this configuration file to determine how many Web role and Worker role VMs to create.

If an application requires five Web role instances and one of them dies, for example, the fabric controller will automatically start a new one. Similarly, if the machine a VM is running on dies, the fabric controller will start a new instance of the Web or Worker role in a new VM on another machine, resetting the load balancer as necessary to point to this new machine. While this might change over time, the fabric controller in the Windows Azure CTP maintains a one-to-one relationship between a VM and a physical processor core.

Because of this, performance is predictable—each application instance has its own dedicated processor core. A Web role instance, for example, can take as long as it needs to handle a request from a user, while a Worker role instance can compute the value of pi to a million digits if necessary.

Developers are free to do what they think is best. The best way to get a feeling for this platform is to walk through examples of how it can be used. Accordingly, this section looks at four core scenarios for using Windows Azure: creating a scalable Web application, creating a parallel processing application, creating a Web application with background processing, and using cloud storage from an on-premises or hosted application.

Suppose an organization wants to build an Internet-accessible Web application. The usual choice today is to run that application in a data center within the organization or at a hoster. In many cases, however, a cloud platform such as Windows Azure is a better choice. For example, if the application needs to handle a large number of simultaneous users, building it on a platform expressly designed to support this makes sense. The intrinsic support for scale-out applications and scale-out data that Windows Azure provides can handle much larger loads than more conventional Web technologies.

Or think about an application whose load varies significantly, with occasional peaks above a much lower average. An online ticketing site might display this pattern, for example, as might news video sites with occasional hot stories, sites that are used mostly at certain times of day, and others. Running this kind of application in a conventional data center requires always having enough machines on hand to handle the peaks, even though most of those systems go unused most of the time. If the application instead runs on Windows Azure, the organization can expand and shrink the number of instances it uses to match the load. Since Windows Azure charging is usage-based, this is likely to be cheaper than maintaining lots of mostly unused machines.

To create a scalable Web application on Windows Azure, a developer can use Web roles and tables. Figure 6 shows a simple illustration of how this looks. Figure 6: A scalable Web application can use Web role instances and tables.

In the example shown here, the clients are browsers, and so the application logic might be implemented using ASP.NET or another Web technology. In either case, the developer specifies how many instances of the application should run, and the Windows Azure fabric controller creates this number of VMs.

As described earlier, the fabric controller also monitors these instances, making sure that the requested number is always available.

For data storage, the application uses Windows Azure Storage tables, which provide scale-out storage capable of handling very large amounts of data. Next, think about an organization that occasionally needs lots of computing power for a parallel processing application. There are plenty of examples of this: rendering at a film special effects house, new drug development in a pharmaceutical company, financial modeling at a bank, and more. Acquiring and maintaining enough machines to cover these occasional peaks is expensive; Windows Azure can instead provide these resources as needed, offering something like an on-demand supercomputer.

A developer can use Worker roles to create this kind of application. The data the computation works on needs a scalable home as well; in Windows Azure, this means using blobs. Figure 7 shows a simple illustration of how this kind of application might look. Figure 7: A parallel processing application might use a Web role instance, many Worker role instances, queues, and blobs. In the scenario shown here, the parallel work is done by some number of Worker role instances running simultaneously, each using blob data.

Since Windows Azure imposes no limit on how long an instance can run, each one can perform an arbitrary amount of work. To interact with the application, the user relies on a single Web role instance.

Through this interface, the user might determine how many Worker instances should run, start and stop those instances, get results, and more. Communication between the Web role instance and the Worker role instances relies on Windows Azure Storage queues. Those queues can also be accessed directly by an on-premises application.

Rather than relying on a Web role instance running on Windows Azure, the user might instead interact with the Worker role instances via an on-premises application. Figure 8 shows this situation. Figure 8: A parallel processing application can communicate with an on-premises application through queues. In this example, the parallel work is accomplished just as before: Multiple Worker role instances run simultaneously, each interacting with the outside world via queues.

Here, however, work is put into those queues directly by an on-premises application. The next scenario combines the previous two: a Web application that also needs background processing. For example, think about a Web application for video sharing. It needs to accept browser requests, perhaps from a large number of simultaneous users.

Some of those requests will upload new videos, each of which must be processed and stored for later access. Making the user wait while this processing happens isn't acceptable. Instead, the part of the application that accepts browser requests should be able to initiate a background task that carries out this work.

Windows Azure Web roles and Worker roles can be used together to address this scenario. Figure 9 shows how this kind of application might look. Figure 9: A scalable Web application with background processing might use all of Windows Azure's capabilities. Like the scalable Web application shown earlier, this application uses some number of Web role instances to handle user requests. To support a large number of simultaneous users, it also uses tables to store information. For background processing, it relies on Worker role instances, passing them tasks via queues.

In this example, those Worker instances work on blob data, but other approaches are also possible. This example shows how an application might use all of the basic capabilities that Windows Azure exposes: Web role instances, Worker role instances, blobs, tables, and queues. While not every application needs all of these, having them all available is essential to support more complex scenarios like this one.

For example, think about an on-premises or hosted application that needs to store a significant amount of data. An enterprise might wish to archive old email, for example, saving money on storage while still keeping the mail accessible. Similarly, a news Web site running at a hoster might need a globally accessible, scalable place to store large amounts of text, graphics, video, and profile information about its users.

A photo sharing site might want to offload the challenges of storing its information onto a reliable third party. All of these situations can be addressed by Windows Azure Storage. Figure 10 illustrates this idea. Figure 10: An on-premises or hosted application can use Windows Azure blobs and tables to store its data in the cloud. Using cloud storage does mean that every access to the data crosses the Internet, with the latency that implies. For some applications, this tradeoff is definitely worth making. Supporting the four scenarios described in this section—scalable Web applications, parallel processing applications, scalable Web applications with background processing, and non-cloud applications accessing cloud storage—is a fundamental goal for the Windows Azure CTP.

As this cloud platform grows, however, expect the range of problems it addresses to expand as well. As described earlier, the platform supports both .NET applications and applications built using unmanaged code, so a developer can use whatever best fits her problem.

To make life easier, Windows Azure provides Visual Studio project templates for creating Web roles, Worker roles, and applications that combine the two. Cloud applications run somewhere other than the developer's machine, and this difference has the potential to make development more challenging and more expensive. To address it, Windows Azure provides a local development fabric. Figure 11 shows how this looks.

Figure 11: The development fabric provides a local facsimile of Windows Azure for developers. The development fabric runs on a single machine running either Windows Server or Windows Vista. It emulates the functionality of Windows Azure in the cloud, complete with Web roles, Worker roles, and all three Windows Azure storage options. A developer can build a Windows Azure application, deploy it to the development fabric, and run it in much the same way as with the real thing.

Once the application has been developed and tested locally, the developer can upload the code and its configuration file via the Windows Azure portal, then run it. Still, some things are different in the cloud. Debugging an application running in the cloud isn't really possible, for example, which makes logging the primary diagnostic tool. Yet even logging could be problematic: several instances of a Windows Azure application are typically running simultaneously, and life would be simpler if they could write to a common log file. Fortunately, they can: As mentioned earlier, this is a service provided by the Windows Azure agent.

By calling an agent API, all writes to a log by all instances of a Windows Azure application can be written to a single log file. Windows Azure also provides other services for developers. For example, a Windows Azure application can send an alert string through the Windows Azure agent, and the platform will forward that alert via email, instant messaging, or some other mechanism to its recipient.

If desired, the Windows Azure fabric can itself detect an application failure and send an alert. Another question is where an application runs; in many cases, it doesn't matter which data center hosts it. In other situations, however, you might need more control. Suppose your data needs to remain within the European Union for legal reasons, for example, or maybe most of your customers are in North America. In situations like these, you want to be able to specify the data centers in which your application runs and stores its data.

To allow this, Windows Azure lets a developer indicate which data center an application should run in and where its data should be stored. Microsoft is initially providing Windows Azure data centers only in the United States, but a European data center will also be available in the not-too-distant future. Wherever it runs, a Windows Azure application is installed and made available to its users in a two-step process. The application is first deployed to a staging area, where it can be tested. When the developer is ready to make the application live, she uses the Windows Azure portal to request that it be put into production, which swaps the virtual IP (VIP) addresses of the staging and production versions.

A couple of things about this process are worth pointing out. First, because the VIP swap is atomic, a running application can be upgraded to a new version with no downtime. This is important for many kinds of cloud services. Second, notice that throughout this process, the actual IP addresses of the Windows Azure VMs—and the physical machines those VMs run on—are never exposed.

Once the application is accessible from the outside world, its users are likely to need some way to identify themselves. An application might use a membership provider to store its own user ID and password, for example, just like any other ASP.NET application. Windows Azure storage has its own access control: all of the storage an application uses is grouped into a storage account, and to control access to the information in this account, Windows Azure gives its creator a secret key.

Each request an application makes to information in this storage account—blobs, tables, and queues—carries a signature created with this secret key. In other words, authorization is at the account level.

Blobs

Binary large objects—blobs—are often just what an application needs. Whether they hold video, audio, archived email messages, or anything else, they let applications store and access data in a very general way.

To use blobs, a developer first creates one or more containers in some storage account. Each of these containers can then hold one or more blobs. Recall that blobs can be large—up to 50 gigabytes—and so to make transferring them more efficient, each blob can be subdivided into blocks. If a failure occurs, retransmission can resume with the most recent block rather than sending the entire blob again.
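The same flow can be sketched with today's Azure CLI against the storage service; note that each request is authorized with the account's secret key, matching the account-level authorization described above (account, container, and file names are placeholders):

    # Create a container in the storage account, then upload a blob into it;
    # large files are transferred in blocks automatically
    az storage container create --name media --account-name myaccount --account-key $STORAGE_KEY
    az storage blob upload --container-name media --name song.mp3 --file ./song.mp3 \
        --account-name myaccount --account-key $STORAGE_KEY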

Beyond the selection of purely supported VM types, you also need to check whether those VM types are available in a specific region, based on the page Products available by region. More important, you need to evaluate whether the CPU, memory, storage, and network capabilities of a VM type fit your workload; most of that data can be found in the VM-type documentation (separate pages exist for Linux and Windows). Pricing for the different offers, operating systems, and regions is available on the Linux Virtual Machines Pricing and Windows Virtual Machines Pricing pages. For details on the flexibility of one-year and three-year reserved instances, check the related articles.
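As a hedged example, both regional availability and restrictions for a VM family can be checked with the Azure CLI before committing to a region (the region and size filter are placeholders):

    # List M-series SKUs offered in West Europe, including any zone or subscription restrictions
    az vm list-skus --location westeurope --size Standard_M --output table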

For more information on spot pricing, read the article Azure Spot Virtual Machines. Pricing of the same VM type can also differ between Azure regions; for some customers, it was worthwhile to deploy into a less expensive Azure region. Additionally, Azure offers the concept of a dedicated host. The dedicated host concept gives you more control over the patching cycles that are done by Azure.

You can time the patching according to your own schedule. This offer specifically targets customers whose workloads don't follow the normal maintenance cycles of Azure. To read up on the concepts of Azure dedicated host offers, read the article Azure Dedicated Host.

Using this offer is supported for SAP workload and is used by several SAP customers who want more control over the patching of infrastructure and the eventual maintenance plans of Microsoft.
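A minimal sketch of such a deployment with the Azure CLI, assuming hypothetical resource names and the DSv3-Type1 host SKU:

    # Create a host group and a dedicated host, then place a VM on that host
    az vm host group create --resource-group myRG --name myHostGroup \
        --location westeurope --platform-fault-domain-count 1
    az vm host create --resource-group myRG --host-group myHostGroup \
        --name myHost --sku DSv3-Type1 --platform-fault-domain 0
    az vm create --resource-group myRG --name mySapVM --size Standard_D16s_v3 \
        --image Ubuntu2204 \
        --host $(az vm host show --resource-group myRG --host-group myHostGroup \
                 --name myHost --query id -o tsv)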

For more information on how Microsoft maintains and patches the Azure infrastructure that hosts virtual machines, read the article Maintenance for virtual machines in Azure. Microsoft's hypervisor is able to handle two different generations of virtual machines.

Those formats are called Generation 1 and Generation 2. Generation 2 was introduced with the Hyper-V hypervisor in Windows Server 2012 R2. Azure started out using Generation 1 virtual machines. As you deploy Azure virtual machines, the default is still the Generation 1 format, but meanwhile you can deploy Generation 2 VMs as well.

The article Support for generation 2 VMs on Azure lists the important functional differences of Generation 2 virtual machines as they run on Hyper-V private clouds and on Azure, as well as the functional differences between Generation 1 and Generation 2 VMs within Azure.
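Deploying a Generation 2 VM mostly comes down to choosing a Generation 2 image and a VM size that supports it; a sketch with the Azure CLI (names and the image URN are examples):

    # Create a VM from a Generation 2 Windows Server image ("gensecond" SKU)
    az vm create --resource-group myRG --name myGen2VM --size Standard_E4s_v3 \
        --image MicrosoftWindowsServer:WindowsServer:2019-datacenter-gensecond:latest \
        --admin-username azureuser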

Moving an existing VM from one generation to the other is not possible. To change the virtual machine generation, you need to deploy a new VM of the generation you desire and re-install the software that you are running in the existing virtual machine. Because the Mv2 family supports only Generation 2, deploying your Mv1 VMs in the Generation 2 format makes seamless up- and downsizing between the Mv1 and Mv2 family VMs possible. Microsoft Azure Virtual Machines utilize different storage types.

When implementing SAP on Azure Virtual Machine Services, it is important to understand the differences between the two main types of storage: non-persistent and persistent disks. Azure VMs offer non-persistent disks after a VM is deployed; after events like a redeployment of the VM, all content on those drives can be wiped out. There might be exceptions for some of the databases, where these non-persisted drives could be suitable for tempdb and temp tablespaces. However, avoid using those drives for A-Series VMs since those non-persisted drives are limited in throughput with that VM family.

By "any changes", like files stored, directories created, applications installed, etc. By any changes, like files stored, directories created, applications installed, etc.

Azure storage accounts have limitations in IOPS, throughput, or the sizes they can contain. In the past, these limitations meant it was on you to manage the number of persisted disks within a storage account: you needed to manage the storage accounts and eventually create new storage accounts to create more persisted disks.

In recent years, the introduction of Azure managed disks relieved you of those tasks. The recommendation for SAP deployments is to leverage Azure managed disks instead of managing Azure storage accounts yourself.

Azure managed disks distribute disks across different storage accounts so that the limits of the individual storage accounts are not exceeded. Within a storage account, you have a folder-like concept called 'containers' that can be used to group certain disks into specific containers.
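With managed disks, adding a data disk to a VM involves no storage account handling at all; a minimal sketch with the Azure CLI (names, size, and SKU are examples):

    # Create and attach a new, empty 512-GiB Premium SSD data disk in a single step
    az vm disk attach --resource-group myRG --vm-name mySapVM \
        --name sapdata01 --new --size-gb 512 --sku Premium_LRS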

Azure offers a variety of persisted storage options that can be used for SAP workload and specific SAP stack components. For more details, read the document Azure storage for SAP workloads. Microsoft Azure provides a network infrastructure that allows the mapping of all scenarios that we want to realize with SAP software.

There are many different possibilities to configure name and IP resolution in Azure; more information can be found in this article and on this page. Because networking and name resolution are a vital part of the database deployment for an SAP system, this concept is discussed in more detail in the DBMS Deployment Guide. In cross-premises scenarios, Domain Name resolution is done on-premises, assuming that the VMs are part of an on-premises domain, and hence the VMs can resolve addresses beyond different Azure Cloud Services.

More details can be found in this article and on this page. By default, once a VM is deployed, you cannot change the Virtual Network configuration. The default behavior is dynamic IP assignment.

Running the VMs in an Azure Virtual Network opens the possibility to leverage static IP assignment if needed or required for some scenarios.
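If a VM needs a stable address, its NIC's IP configuration can be switched to a static private IP; a hedged Azure CLI sketch (names are examples, and the address must lie within the subnet's range):

    # Pin the primary IP configuration of the VM's NIC to a static private address
    az network nic ip-config update --resource-group myRG --nic-name mySapVMNic \
        --name ipconfig1 --private-ip-address 10.1.0.10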

The IP assignment remains valid throughout the existence of the VM, independent of whether the VM is running or shut down. As a result, you need to take the overall number of running and stopped VMs into account when defining the range of IP addresses for the Virtual Network. For more information, read this article. See also the document Troubleshoot Azure virtual machine backup.

See also SAP's note on general guidance for using virtual host names. With the ability to have multiple vNICs, you can start to set up network traffic separation where, for example, client traffic is routed through one vNIC and backend traffic is routed through a second vNIC, as sketched below.
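A sketch of such a two-NIC setup with the Azure CLI, assuming a virtual network with 'client' and 'backend' subnets already exists (all names are examples; the VM size must support multiple NICs):

    # Create one NIC per subnet, then deploy the VM with both NICs
    az network nic create --resource-group myRG --name nic-client \
        --vnet-name sap-vnet --subnet client
    az network nic create --resource-group myRG --name nic-backend \
        --vnet-name sap-vnet --subnet backend
    az vm create --resource-group myRG --name mySapVM --size Standard_E8s_v3 \
        --image Ubuntu2204 --nics nic-client nic-backend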

Exact details, functionality, and restrictions can be found in the related articles. Cross-premises deployment is expected to become the most common SAP deployment pattern in Azure. The assumption is that operational procedures and processes with SAP instances in Azure should work transparently.

This means you should be able to print out of these systems as well as use the SAP Transport Management System (TMS) to transport changes from a development system in Azure to a test system that is deployed on-premises. More documentation around site-to-site connectivity can be found in this article. In order to create a site-to-site connection (on-premises data center to Azure data center), you need to either obtain and configure a VPN device or use the Routing and Remote Access Service (RRAS), which was introduced as a software component with Windows Server 2012. The connectivity from the on-premises network to Azure is established via VPN.

For the SAP scenarios we are looking at, point-to-site connectivity is not practical. Therefore, no further references are given to point-to-site VPN connectivity. Previously, a single subscription was limited to one site-to-site VPN connection; this restriction no longer applies, which makes it possible to leverage more than one Azure region for a specific subscription through cross-premises configurations. For more documentation, see this article. However, you often have the requirement that the software components in the different regions should communicate with each other.

Ideally, this communication should not be routed from one Azure region to on-premises and from there to the other Azure region. As a shortcut, Azure offers the possibility to configure a connection from one Azure Virtual Network in one region to another Azure Virtual Network hosted in another region. This functionality is called VNet-to-VNet connection.
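Assuming a VPN gateway has already been deployed in each of the two virtual networks, the VNet-to-VNet connection itself can be sketched like this (gateway names and the shared key are placeholders; a mirror-image connection is needed in the opposite direction):

    # Connect the gateway in region 1 to the gateway in region 2
    az network vpn-connection create --resource-group myRG --name vnet1-to-vnet2 \
        --vnet-gateway1 gw-westeurope --vnet-gateway2 gw-northeurope \
        --shared-key "replace-with-a-strong-key"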

Microsoft Azure ExpressRoute allows the creation of private connections between Azure data centers and either the customer's on-premises infrastructure or infrastructure in a co-location environment. ExpressRoute connections do not go over the public Internet. They offer higher security, more reliability through multiple parallel circuits, faster speeds, and lower latencies than typical connections over the Internet.

ExpressRoute enables multiple Azure subscriptions through one ExpressRoute circuit, as documented here. For VMs joining on-premises domains through site-to-site, point-to-site, or ExpressRoute connectivity, you need to make sure that the Internet proxy settings get deployed for all the users in those VMs as well.

By default, software running in those VMs or users using a browser to access the Internet would not go through the company proxy, but would connect straight through Azure to the Internet. Even a deployed proxy setting is not enforced: if software running in the VM ignores it, or an administrator manipulates the settings, traffic to the Internet can again be detoured directly through Azure. In order to avoid such direct Internet connectivity, you can configure forced tunneling with site-to-site connectivity between on-premises and Azure.
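One way to sketch forced tunneling is a user-defined route that sends all Internet-bound traffic to the virtual network gateway instead of straight out of Azure (names are examples; the VPN gateway must also be configured with a default site on-premises):

    # Route 0.0.0.0/0 to the VPN gateway and bind the route table to the SAP subnet
    az network route-table create --resource-group myRG --name rt-forced-tunneling
    az network route-table route create --resource-group myRG \
        --route-table-name rt-forced-tunneling --name default-to-onprem \
        --address-prefix 0.0.0.0/0 --next-hop-type VirtualNetworkGateway
    az network vnet subnet update --resource-group myRG --vnet-name sap-vnet \
        --name sap-subnet --route-table rt-forced-tunneling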

This chapter contained many important points about Azure networking; the main one is worth repeating. We need to be clear about the fact that the storage and network infrastructure is shared between VMs running a variety of services in the Azure infrastructure.

As in the customer's own data centers, over-provisioning of some of the infrastructure resources does take place to a degree. The Microsoft Azure platform uses disk, CPU, network, and other quotas to limit the resource consumption and to preserve consistent and deterministic performance. The different VM types (A5, A6, and so on) have different quotas assigned to them. This means that once the VM is deployed, the resources on the host are available as defined by the VM type.

When planning and sizing SAP on Azure solutions, the quotas for each virtual machine size must be considered. The VM quotas are described in the VM sizes documentation (separate pages exist for Linux and Windows).

The quotas described represent the theoretical maximum values. The IOPS limit is enforced at the granularity of a single disk. As a rough guide to decide whether an SAP system fits into Azure Virtual Machine Services and its capabilities, or whether an existing system needs to be configured differently in order to deploy the system on Azure, a decision tree can be used. The Azure portal is one of three interfaces to manage Azure VM deployments.

The basic management tasks, like deploying VMs from images, can be done through the Azure portal. In addition, creating Storage Accounts, Virtual Networks, and other Azure components is also something the Azure portal handles well. Administration and configuration tasks for the Virtual Machine instance are possible from within the Azure portal.

Besides restarting and shutting down a Virtual Machine, you can also attach, detach, and create data disks for the Virtual Machine instance, capture the instance for image preparation, and configure the size of the Virtual Machine instance. The Azure portal provides basic functionality to deploy and configure VMs and many other Azure services.

However, not all available functionality is covered by the Azure portal, and some tasks cannot be performed there at all. Windows PowerShell is a powerful and extensible framework that has been widely adopted by customers deploying larger numbers of systems in Azure.

After the installation of PowerShell cmdlets on a desktop, laptop or dedicated management station, the PowerShell cmdlets can be run remotely. More detailed steps on how to install, update, and configure the Azure PowerShell cmdlets can also be found in Install the Azure PowerShell module. Customer experience so far has been that PowerShell is certainly the more powerful tool to deploy VMs and to create custom steps in the deployment of VMs. All of the customers running SAP instances in Azure are using PowerShell cmdlets to supplement management tasks they do in the Azure portal or are even using PowerShell cmdlets exclusively to manage their deployments in Azure.

Since the Azure-specific cmdlets share the same naming convention as the many Windows-related cmdlets, it is an easy task for Windows administrators to leverage those cmdlets. As Azure provides more functionality, new PowerShell cmdlets are added, which requires keeping the cmdlets up to date; the new version is installed on top of the older version. For customers who use Linux and want to manage Azure resources, PowerShell might not be an option.

Microsoft offers Azure CLI as an alternative; for information about installation and configuration, and about how to use CLI commands to accomplish Azure tasks, see the Azure CLI documentation. The first step, which can be time consuming but is most important, is to work with the compliance and security teams in your company on the boundary conditions for deploying which type of SAP workload or business process into a public cloud.

If your company deployed other software into Azure before, the process can be easy. If your company is more at the beginning of the journey, larger discussions might be necessary in order to figure out the boundary and security conditions that allow certain SAP data and SAP business processes to be hosted in a public cloud.

As useful help, you can point to Microsoft compliance offerings for a list of the compliance offers Microsoft can provide. Other areas of concern, like encryption of data at rest or other encryption in Azure services, are documented in the Azure encryption overview. Don't underestimate this phase of the project in your planning. Only when you have agreement and rules around this topic should you move to the next step, which is planning the network architecture that you deploy in Azure.

In this chapter, you learn the different ways to deploy a VM in Azure. Microsoft Azure offers multiple ways to deploy VMs and associated disks, and it is important to understand the differences, since preparations of the VMs might differ depending on the method of deployment. In one common scenario, you plan to move a specific SAP system from on-premises to Azure; in this case, generalizing the image is not necessary. See the chapter Preparation for moving a VM from on-premises to Azure with a non-generalized disk of this document for on-premises preparation steps and the upload of non-generalized VMs or VHDs to Azure.

For more information, see the chapter Protect SAP of the About disaster recovery for on-premises apps guide. To prepare such a private image for duplication, several items have to be considered, with small differences depending on the deployment method that is used. In the move-without-generalization case, the guest OS of the VM should not be generalized for multiple deployments. If the on-premises network got extended into Azure, then even the same domain accounts can be used within the VM as were used before on-premises.

In this scenario, no generalization (sysprep) of a Windows VM is required to upload and deploy the VM on Azure. Set disk automount for attached disks as described in the chapter Setting automount for attached disks in this document. Likewise, for a Linux VM in this scenario, no generalization (waagent -deprovision) of the VM is required to upload and deploy the VM on Azure.

For the OS disk, make sure that the bootloader entry also reflects the uuid-based mount. For a Windows VM that should be generalized, the last step is to sign in to the VM with an Administrator account, open a Windows command window as administrator, change to the directory %windir%\system32\sysprep, and execute sysprep.exe. A small window will appear; it is important to check the Generalize option (the default is unchecked) and change the Shutdown Option from its default of 'Reboot' to 'Shutdown'. If you want to perform the procedure with a VM already running in Azure, follow the steps described in this article.
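The same generalization can be run non-interactively; the documented command line for this is:

    rem Generalize the Windows installation and shut the VM down when done
    %windir%\system32\sysprep\sysprep.exe /generalize /oobe /shutdown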

For Linux, see the article How to capture a Linux virtual machine to use as a Resource Manager template. Another possibility is the use of the tool AzCopy, which can copy VHDs between on-premises and Azure in both directions; consult its documentation for download and usage details. A third alternative would be to use various third-party GUI-oriented tools; however, make sure that these tools support Azure page blobs. For our purposes, we need to use the Azure page blob store (the differences are described in Understanding block blobs, append blobs, and page blobs).
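With the current AzCopy (v10) syntax, an upload that preserves the page blob type required for VHDs might look like the following; the account, container, and SAS token are placeholders:

    # Upload a VHD as a page blob; <SAS> stands for a valid shared access signature
    azcopy copy "./sapvm-os.vhd" \
        "https://myaccount.blob.core.windows.net/vhds/sapvm-os.vhd?<SAS>" \
        --blob-type PageBlob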

This efficiency matters because it reduces the upload time, which varies anyway depending on the upload link to the Internet from the on-premises facility and the Azure deployment region targeted. Such a VM does NOT need to be generalized and can be uploaded in the state and shape it has after shutdown on the on-premises side. The same is true for additional VHDs that don't contain any operating system. You can move SAP systems from Azure back into the on-premises world as well; this is a multi-step process.

Even when downloading disks that are mounted to VMs, the VM needs to be shut down and deallocated. If you only want to download the database content, which should then be used to set up a new system on-premises, and if it is acceptable that the system in Azure can still be operational during the download and the setup of the new system, you can avoid a long downtime by performing a compressed database backup onto a disk and downloading just that disk instead of also downloading the OS base VM.

Then you can copy the underlying blob to a new storage account and download the blob from this storage account. In order to do that, you need the name and the container of the VHD, which you can find in the Storage section of the Azure portal (navigate to the Storage Account and the storage container where the VHD was created), and you need to know where the VHD should be copied to. The commands could look like the following example.
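A hedged sketch with the Azure CLI (account names, container, blob, and paths are placeholders):

    # Copy the VHD blob into a separate storage account, then download it locally
    az storage blob copy start \
        --source-uri "https://prodaccount.blob.core.windows.net/vhds/sapvm-os.vhd" \
        --destination-container downloads --destination-blob sapvm-os.vhd \
        --account-name exportaccount --account-key $STORAGE_KEY
    az storage blob download --container-name downloads --name sapvm-os.vhd \
        --file ./sapvm-os.vhd --account-name exportaccount --account-key $STORAGE_KEY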

You can leverage the download command by defining, as parameters, the blob and container of the VHD to download and the destination as the physical target location of the VHD, including its name. Neither the Azure functionality of copying disks nor the Azure functionality of saving disks to a local disk has a synchronization mechanism that snapshots multiple disks in a consistent manner.

Therefore, the state of the copied or saved disks, even if those are mounted against the same VM, would be different. This means that in the concrete case of having different data and log files contained in the different disks, the database in the end would be inconsistent.

To get a consistent set of disks, shut the system down first; only then can you copy or download the set of disks to either create a copy of the SAP system in Azure or on-premises. Data disks can be stored as VHD files in an Azure Storage Account and can be directly attached to a virtual machine or be used as an image. In this case, the VHD is copied to another location before being attached to the virtual machine. As mentioned earlier, the name is a three-part name that looks like: http://<storage account name>.blob.core.windows.net/<container name>/<vhd name>. Data disks can also be Managed Disks.

In this case, the Managed Disk is used to create a new Managed Disk before being attached to the virtual machine. The name of the Managed Disk must be unique within a resource group. To create a new Managed Disk, use az disk create as shown in the following example.
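A hedged example (the resource group and disk names are placeholders):

    # Create a new managed disk as a copy of an existing managed disk in the same resource group
    az disk create --resource-group myRG --name sapdata01-copy --source sapdata01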

The copy of a VHD itself within a storage account is a process that takes only a few seconds, similar to SAN hardware creating snapshots with lazy copy and copy-on-write. After you have a copy of the VHD file, you can attach it to a virtual machine or use it as an image to attach copies of the VHD to virtual machines. This task cannot be performed on the Azure portal. The PowerShell cmdlets or CLI commands can create and manage blobs, including the ability to asynchronously copy blobs across Storage Accounts and across regions within the Azure subscription. You can also copy VHDs between subscriptions; for examples, see this article.

Ideally, the handling of the structure of a VM and the associated disks should be simple. In on-premises installations, customers developed many ways of structuring a server installation. There were various reasons for this, but when we go back to the root, it usually was that drives were small and OS upgrades needed additional space years ago. Both conditions do not apply that often anymore these days. In order to keep deployments simple in their structure, it is recommended to follow the following deployment pattern for SAP NetWeaver systems in Azure.

Add or change the settings as recommended. Read the relevant SAP Note for more details on the recommended swap file size. Exact quotas are described in this article (Linux) and this article (Windows). Experience from SAP deployments over the last two years taught us some lessons, which can be summarized as follows.

The OS disk now can have up to 1 TB in size. This should be enough space to keep all the necessary files including, for example, SAP batch job logs. In most scenarios, you need to create additional disks in order to deploy the SAP database into the VM. The Azure portal allows you to attach and detach disks once a base VM is deployed. When attaching a disk, the Azure portal offers to attach an empty disk or an existing disk that at this point in time is not attached to another VM.

During the deployment of a new virtual machine, you can decide whether you want to use Managed Disks or place your disks on Azure Storage Accounts. If you want to use Premium Storage, we recommend using Managed Disks. Next, you need to decide whether you want to create a new and empty disk or whether you want to select an existing disk that was uploaded earlier and should be attached to the VM now.

In the case of database transaction log files, no caching is recommended. The article How to attach a data disk in the Azure portal describes the portal flow. If automount is not enabled as recommended in the chapter Setting automount for attached disks, the newly attached volume needs to be taken online and initialized. If disks are attached, you need to sign in to the VM and initialize the disks as described in this article. If the new disk is an empty disk, you need to format the disk as well.
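The attach operation can also be scripted; note the host caching parameter, which is set to None here as recommended for transaction log disks (names and size are examples):

    # Attach a new data disk with host caching disabled, suitable for DBMS log files
    az vm disk attach --resource-group myRG --vm-name mySapDbVM \
        --name saplog01 --new --size-gb 256 --sku Premium_LRS --caching None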

Otherwise, you need to figure out how to balance these VMs between different Storage Accounts without hitting the limit of each single Storage Account. These restrictions do not apply if you use Managed Disks, and if you plan to use Premium Storage, we recommend using Managed Disks. Geo-replication is enabled or disabled at the Storage Account level, not at the VM level. If geo-replication is enabled, the VHDs within the Storage Account are replicated into another Azure data center within the same geography.

Before deciding on this, you should think about the following restriction: geo-replication replicates each VHD independently. This means there is no synchronization between the changes in the different VHDs.

In addition to the DBMS, there also might be other applications where processes write or manipulate data in different VHDs and where it is important to keep the order of changes. If that is a requirement, geo-replication in Azure should not be enabled. Depending on whether you need or want geo-replication for one set of VMs but not for another, you can categorize VMs and their related VHDs into different Storage Accounts that have geo-replication enabled or disabled.

For VMs that are created from your own images or disks, it is necessary to check and possibly set the automount parameter. The parameter is already set for the images provided by Microsoft in the Azure Marketplace. In order to set the automount, check the documentation of the command-line executable diskpart; a short example follows.
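For reference, querying and enabling automount in diskpart looks like this (run from an elevated command prompt):

    C:\> diskpart
    DISKPART> automount
    DISKPART> automount enable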

You need to initialize a newly attached empty disk as described in this article. Later in the document, we discuss the other major scenario: connecting to SAP systems in cross-premises deployments that have a site-to-site connection (VPN tunnel) or an Azure ExpressRoute connection between the on-premises systems and Azure.

With Azure Resource Manager, there are no default endpoints anymore as in the former classic model. See the architecture differences between the classic model and ARM as described in this article.


