Security Measures Protecting Files in Microsoft 365

Microsoft 365 employs a multi-layered, “defense-in-depth” security architecture to ensure that files stored in the cloud are safe from unauthorized access or loss. Many of these protections operate behind the scenes – invisible to end users and administrators – yet they are critical to safeguarding data. This comprehensive report details those security measures, from the physical defenses at Microsoft’s datacenters to the encryption, access controls, and monitoring systems that protect customer files in Microsoft 365. The focus is on the stringent, built-in security mechanisms that end users and admins typically do not see, illustrating how Microsoft protects your data every step of the way.


Physical Security in Microsoft Datacenters

Microsoft’s datacenters are secured with robust physical protections that most users never witness. The facilities housing Microsoft 365 data are designed, built, and operated to strictly limit physical access to hardware that stores customer files[1]. Microsoft follows the defense-in-depth principle for physical security, meaning there are multiple layers of checks and barriers from the outer perimeter all the way to the server racks[1]. These layers include:

  • Perimeter Defenses: Microsoft datacenters are typically nondescript buildings with high steel-and-concrete fencing and 24/7 exterior lighting[1]. Access to the campus is only through a secure gate, monitored by cameras and security guards. Bollards and other barriers protect the building from vehicle intrusion[1]. This exterior layer helps deter and prevent unauthorized entry long before anyone gets near your data.

  • Secured Entrances: At the building entrance, trained security officers with background checks control access[1]. Two-factor authentication with biometrics (such as fingerprint or iris scan) is required to enter the datacenter interior[1]. Only pre-approved personnel with a valid business justification can pass this checkpoint, and their access is limited to specific areas and time frames[2][1]. Visitors and contractors must be escorted by authorized staff at all times and wear badges indicating escort-only status[2]. Every entrance and exit is logged and tracked.

  • Datacenter Floor Controls: Gaining access to the server room (the datacenter floor where customer data resides) requires additional approvals and security steps. Before entering the server area, individuals undergo a full-body metal detector screening to prevent any unauthorized devices or objects from being brought in[1]. Only approved devices are allowed on the datacenter floor to reduce risks of data theft (for example, you can’t simply plug in an unapproved USB drive)[1]. Video cameras monitor every server rack row from front and back, and all movements are recorded[1]. When leaving, personnel pass through another metal detector to ensure nothing is removed improperly[1].

  • Strict Access Management: Physical access is strictly role-based and time-limited. Even Microsoft employees cannot roam freely – they must have a justified need for each visit and are only allowed into the areas necessary for their task[2][1]. Access requests are managed via a ticketing system and must be approved by the Datacenter Management team[1]. Temporary access badges are issued for specific durations and automatically expire afterward[2][1]. All badges and keys are secured within the on-site Security Operations Center and are collected upon exit (visitor badges are disabled and recycled only after their permissions are wiped)[2][1]. The principle of least privilege is enforced – people get no more access than absolutely necessary[1].

  • On-Site Security Monitoring: Dedicated security personnel and systems provide continuous surveillance of the facilities. The Security Operations Center at each datacenter monitors live camera feeds covering the perimeter, entrances, corridors, server rooms, and other sensitive areas[3]. If an alarm is triggered or an unauthorized entry is attempted, guards are dispatched immediately[3]. Security staff also conduct regular patrols and inspections of the premises to catch any irregularities[1]. These measures ensure that only authorized, vetted individuals ever get near the servers storing customer files.

In short, Microsoft’s physical datacenter security is extremely strict and effectively invisible to customers. By the time your data is stored in the cloud, it’s inside a fortress of concrete, biometrics, cameras, and guards – adding a formidable first line of defense that end users and admins typically don’t even think about.


Data Encryption and Protection (At Rest and In Transit)

Once your files are in Microsoft 365, multiple layers of encryption and data protection kick in, which are also largely transparent to the user. Microsoft 365 automatically encrypts customer data both when it’s stored (“at rest”) and when it’s transmitted (“in transit”), using strong encryption technologies that meet or exceed industry standards[4][5]. These encryption measures ensure that even if someone were to intercept your files or get unauthorized access to storage, they could not read or make sense of the data.

  • Encryption in Transit: Whenever data moves between a user’s device and Microsoft 365, or between Microsoft datacenters, it is safeguarded with encryption. Microsoft 365 uses TLS/SSL (Transport Layer Security) with at least 2048-bit keys for all client-to-server data exchanges[5]. For example, if you upload a document to SharePoint or OneDrive, that connection is encrypted so that no one can eavesdrop on it. Even data traveling between Microsoft’s own servers (such as replication between datacenters) is protected – though such traffic travels over private secure networks, it is further encrypted using industry-standard protocols like IPsec to add another layer of defense[5]. This in-transit encryption covers emails, chats, file transfers – essentially any communication involving Microsoft 365 servers – ensuring data cannot be read or altered in transit by outside parties.

  • Encryption at Rest: All files and data stored in Microsoft 365 are encrypted at rest on Microsoft’s servers. Microsoft uses a combination of BitLocker and per-file encryption to protect data on disk[5]. BitLocker is a full-disk encryption technology that encrypts entire drives in the datacenter, so if someone somehow obtained a hard drive, the data on it would be unreadable without the proper keys[5]. In addition, Microsoft 365 uses file-level encryption with unique keys for each file (and even each piece or version of a file) as an extra safeguard[5]. This means that two different files on the same disk have different encryption keys, and every single update to a file gets its own new encryption key as well[5]. Microsoft employs strong ciphers – generally AES (Advanced Encryption Standard) with 256-bit keys – for all of this encryption, which is compliant with strict security standards like FIPS 140-2 (required for U.S. government use)[5].

  • Separation of Data and Keys: A critical behind-the-scenes protection is how Microsoft handles encryption keys. The keys used to encrypt your files are stored in a physically and logically separate location from the actual file content[5]. In practice, this means that if someone were to access the raw stored files, they still wouldn’t have the keys needed to decrypt them. For SharePoint and OneDrive, Microsoft stores file content in its blob storage system, while the encryption keys for each file (or chunk of a file) are kept in a secure key store/database separate from the content[5]. The file content itself holds no clues for decryption. Only the combination of the encrypted content plus the corresponding keys (managed by the system) can unlock the data[5], and those two pieces are never stored together.

  • Per-File (Chunk) Encryption Architecture: Microsoft 365 takes the unusual step of encrypting data at a granular, per-chunk level for SharePoint Online and OneDrive for Business, which is a security measure completely hidden from end users. When you save a file in these services, the file is actually split into multiple chunks, and each chunk is encrypted with its own unique AES-256 key[5]. For instance, a 5 MB document might be broken into, say, five pieces, each piece encrypted separately. Even the deltas (changes) in an edited document are treated as new chunks with their own keys[5]. These encrypted chunks are then randomly distributed across different storage containers within the datacenter for storage efficiency and security[5]. A Content Database keeps a map of which chunks belong to which file and how to reassemble them, and it also stores the encrypted keys for those chunks[5]. The actual key to decrypt each chunk is stored in a separate Key Store service[5]. This means there are three distinct repositories involved in storing your file: one for the content blobs, one for the chunk-key mappings, and one for the encryption keys – and each is isolated from the others[5]. No single system or person can get all the pieces to decrypt a file by accident. An attacker would need to penetrate all three stores and combine information – an almost impossibly high bar – to reconstruct your data[5]. This multi-repository design provides an “unprecedented level of security” for data at rest[5], since compromise of any one component (say, the storage server) is insufficient to reveal usable information.

  • Encryption Key Management: The entire process of encryption and key management is automated and managed by Microsoft’s systems. Keys are regularly rotated or refreshed, adding another layer of security (a key that might somehow be obtained illicitly will soon become obsolete)[5]. Administrators don’t have to manage these particular keys – they are handled by Microsoft’s behind-the-scenes key management services. However, for organizations with extreme security needs, Microsoft 365 also offers options like Customer Key (where the organization can provide and control the root encryption keys for certain services) and Double Key Encryption (where two keys are required to open content – one held by Microsoft and one held by the customer)[4]. These are advanced capabilities visible to administrators, but it’s important to note that even without them, Microsoft’s default encryption is very robust. Every file stored in SharePoint, OneDrive, Exchange, or Teams is automatically encrypted without any user intervention, using some of the strongest cipher implementations available[4].
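
The chunk-and-key architecture described above can be sketched in miniature. This is an illustrative model only – the store names (`blob_store`, `content_map`, `key_store`) are hypothetical, and a SHA-256 keystream XOR stands in for AES-256 purely to keep the example standard-library only; it is not a production cipher.

```python
import hashlib
import secrets

CHUNK_SIZE = 4  # tiny for demonstration; real chunks are far larger

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a SHA-256-derived keystream.
    # (Stand-in for AES-256; do not use for real encryption.)
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

blob_store = {}   # encrypted content chunks ("blob storage")
content_map = {}  # which chunks make up which file ("Content Database")
key_store = {}    # per-chunk keys, held separately ("Key Store")

def save_file(name: str, data: bytes) -> None:
    chunk_ids = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        chunk_id = secrets.token_hex(8)
        key = secrets.token_bytes(32)      # unique 256-bit key per chunk
        blob_store[chunk_id] = _keystream_xor(key, chunk)
        key_store[chunk_id] = key
        chunk_ids.append(chunk_id)
    content_map[name] = chunk_ids

def read_file(name: str) -> bytes:
    # Reassembly requires all three stores: the map, the blobs, and the keys.
    return b"".join(
        _keystream_xor(key_store[cid], blob_store[cid])
        for cid in content_map[name]
    )

save_file("report.docx", b"confidential budget")
assert read_file("report.docx") == b"confidential budget"
```

The point of the sketch is the failure mode: an attacker holding only `blob_store` sees ciphertext, holding only `key_store` has keys to nothing, and holding only `content_map` has a map of opaque identifiers.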

In summary, encryption is a fundamental unseen safeguard protecting files in Microsoft 365. Data is scrambled with high-grade encryption at every stage – in transit, at rest on disk, and even within the storage architecture itself. The encryption and key separation ensure that even if an outsider gained physical access to drives or intercepted network traffic, they would only see indecipherable ciphertext[4][4]. Only authorized users (through the proper Microsoft 365 apps and services) can decrypt and see the content, and that decryption happens transparently when you access your files. This all happens behind the scenes, giving users strong data protection without any effort on their part.
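
The in-transit guarantees described in this section are negotiated by the service, but a client can insist on them too. A minimal sketch, assuming nothing beyond the Python standard library: build a TLS context that refuses legacy protocol versions and always verifies the server certificate before any connection to a cloud endpoint is attempted.

```python
import ssl

def make_strict_tls_context() -> ssl.SSLContext:
    # create_default_context() enables certificate verification and
    # hostname checking by default; we additionally refuse anything
    # older than TLS 1.2.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = make_strict_tls_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED  # certificates must validate
assert ctx.check_hostname                    # names must match the cert
```

A context like this would then be passed to the HTTP client opening the connection; any server (or interceptor) unable to complete a verified TLS 1.2+ handshake is simply refused.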


Strict Identity and Access Controls

Beyond encrypting data, Microsoft 365 rigorously controls who and what can access customer data. This includes not only customer-side access (your users and admins) but also internal access at Microsoft. Many of these controls are invisible to the customer, but they dramatically reduce the risk of unauthorized access.

  • Tenant Isolation & Customer Access: Microsoft 365 is a multi-tenant cloud, meaning many organizations’ data reside in the same cloud environment. However, each organization’s data is logically isolated. Customer accounts can only access data within their own organization’s tenant – they cannot access any other customer’s data[6]. The cloud’s identity management ensures that when your users log in with their Azure Active Directory (Entra ID) credentials, their access is strictly confined to your tenant’s data. Azure AD handles user authentication with strong methods (password hash verification, optional multi-factor authentication, conditional access policies set by your admin, etc.), which is a part the admin can see. But the underlying guarantee is that no matter what, identities from outside your organization cannot cross over into your data, and vice versa[6]. This tenant isolation is enforced at all levels of the service’s architecture.

  • Role-Based Access & Least Privilege (Customer Side): Within your tenant, Microsoft 365 provides granular role-based access controls. While this is partially visible to admins (who can assign roles like SharePoint Site Owner, Exchange Administrator, etc.), the underlying principle is least privilege – users and admins should only have the minimum access necessary for their job. For example, an admin with Exchange management rights doesn’t automatically get SharePoint rights. On the platform side, Microsoft 365’s code is designed so that even if a user tries to escalate privileges, they cannot exceed what Azure AD and role definitions permit. Regular users cannot suddenly gain admin access, and one organization’s global admin cannot affect another organization. These logical access controls are deeply baked into the service.

  • Behind-the-Scenes Service Accounts: Microsoft 365 is made up of various services (Exchange Online, SharePoint Online, etc.) that talk to each other and to databases. Internally, service accounts (identities used by the services themselves) are also restricted. Microsoft follows the same least privilege approach for service and machine accounts as for human users[6]. Each micro-service or component in the cloud only gets the permissions it absolutely needs to function – no more. This containment is invisible to customers but prevents any single component from inappropriately accessing data. If one part of the service were compromised, strict role separation limits what else it could do.

  • Zero Standing Access for Microsoft Engineers: Perhaps one of the most stringent (yet unseen) security measures is Microsoft’s internal policy of Zero Standing Access (ZSA). In Microsoft 365, Microsoft’s own engineers and technical staff have no default administrative access to the production environment or to customer data[6][7]. In other words, Microsoft runs the service with the assumption that even its employees are potential threats, and no engineer can just log in to a server or open a customer’s mailbox on a whim. By default, they have zero access. This is achieved through heavy automation of operations and strict controls on human privileges[6] – “Humans govern the service, and software operates the service,” as Microsoft describes it[6]. Routine maintenance, updates, and troubleshooting are largely automated or done with limited scopes, so most of the time, no human access to customer data is needed.

  • Just-In-Time Access via Lockbox: If a Microsoft service engineer does need to access the system for a valid reason (say to investigate a complex issue or to upgrade some backend component), they must go through an approval workflow called Lockbox. Lockbox is an internal access control system that grants engineers temporary, scoped access only after multiple checks and approvals[7]. The engineer must submit a request specifying exactly what access is needed and why[7]. The request must meet strict criteria – for example, the engineer must already be part of a pre-approved role group for that type of task (enforcing segregation of duties), the access requested must be the least amount needed, and a manager must approve the request[7]. If those checks pass, the Lockbox system grants just-in-time access that lasts only for a short, fixed duration[7]. When the time window expires, access is automatically revoked[7]. This process is usually invisible and automatic (taking just minutes), but it’s mandatory. Every single administrative action that touches customer content goes through this gate.

  • Customer Lockbox for Data Access: For certain sensitive operations involving customer content, Microsoft even provides a feature called Customer Lockbox. If a Microsoft engineer ever needs to access actual customer data as part of support (which is rare), and if Customer Lockbox is enabled for your organization, your administrator will get a request and must explicitly approve that access[7]. Microsoft cannot access the data until the customer’s own admin grants the approval in the Customer Lockbox workflow[7]. This gives organizations direct control in those extraordinary scenarios. Even without Customer Lockbox enabled, Microsoft’s policy is that access to customer content is only allowed for legitimate purposes and is logged and audited (see below). Customer Lockbox just adds an extra customer-side approval step.

  • Secure Engineer Workstations: When an engineer’s access request is approved, Microsoft also controls how they access the system. They must use Secure Admin Workstations (SAWs) – specially hardened laptops with no email, no internet browsing, and with all unauthorized peripherals (like USB ports) disabled[7]. These SAWs connect through isolated, monitored management interfaces (Remote Desktop through a secure gateway, or PowerShell with limited commands)[7]. The engineers can only run pre-approved administrative tools – they cannot arbitrarily explore the system. Software policies ensure they can’t run rogue commands outside the scope of their Lockbox-approved task[7]. This means even with temporary access, there are technical guardrails on what an engineer can do.

  • Comprehensive Logging and Auditing: All these access control measures are complemented by extensive logging. Every privileged action in Microsoft 365 – whether performed by an automated system or a support engineer via Lockbox – is recorded in audit logs[7]. These logs are available to Microsoft’s internal security teams and to customers (through the Office 365 Management Activity API and Compliance Center) for transparency[7]. In effect, there’s a tamper-evident record of every time someone accesses customer data. Unusual or policy-violating access attempts can thus be detected and investigated. This level of auditing is something admins might glimpse in their Security & Compliance Center, but the vast majority of internal log data and alerting is handled by Microsoft’s systems quietly in the background.
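
The “tamper-evident record” idea behind such audit logs can be illustrated with a hash chain: each entry embeds the hash of its predecessor, so silently editing an old entry breaks every later link. This is a conceptual sketch with invented field names, not a description of Microsoft’s actual log format.

```python
import hashlib
import json

def append_record(log: list, actor: str, action: str) -> None:
    # Each record's hash covers its content AND the previous record's hash.
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"actor": actor, "action": action, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log: list) -> bool:
    # Recompute every link; any edit to an earlier record breaks the chain.
    prev = "0" * 64
    for rec in log:
        body = {"actor": rec["actor"], "action": rec["action"], "prev": prev}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append_record(log, "engineer-42", "read mailbox metadata")
append_record(log, "engineer-42", "close ticket")
assert verify_chain(log)

log[0]["action"] = "read mailbox content"   # tampering with history...
assert not verify_chain(log)                # ...is immediately detectable
```

Real audit pipelines add signing, replication, and write-once storage on top of this idea, but the core property is the same: history cannot be quietly rewritten.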

In summary, Microsoft 365’s access control philosophy treats everyone, including Microsoft personnel, as untrusted by default. Only tightly controlled, need-based access is allowed, and even then it’s temporary and closely watched. For end users and admins, this yields high assurance: no one at Microsoft can casually browse your files, and even malicious actors would find it extremely hard to bypass identity controls. Your admin sees some tools to manage your own users’ access, but the deeper platform enforcement – tenant isolation, service-level restrictions, and Microsoft’s internal zero-access policies – operate silently to protect your data[6][7].
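
The temporary, need-based access model described in this section can be reduced to a small sketch: grants require a pre-approved role, are scoped to one task, and expire on their own. Role names, scopes, and durations below are invented for illustration; the real Lockbox workflow also involves human approval and far richer policy checks.

```python
# Who is pre-approved for which task types (segregation of duties).
APPROVED_ROLES = {"engineer-42": {"exchange-debug"}}

grants = {}  # (who, scope) -> expiry timestamp

def request_access(who: str, scope: str, duration_s: float, now: float) -> bool:
    if scope not in APPROVED_ROLES.get(who, set()):
        return False                       # not eligible for this task type
    grants[(who, scope)] = now + duration_s
    return True

def has_access(who: str, scope: str, now: float) -> bool:
    expiry = grants.get((who, scope))
    return expiry is not None and now < expiry  # auto-revoked after the window

t0 = 1000.0
assert request_access("engineer-42", "exchange-debug", 240.0, now=t0)
assert has_access("engineer-42", "exchange-debug", now=t0 + 60)
assert not has_access("engineer-42", "exchange-debug", now=t0 + 300)  # expired
assert not request_access("intruder", "exchange-debug", 240.0, now=t0)
```

The key design point is that there is no code path to a permanent grant: every access carries an expiry from the moment it is created.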


Continuous Monitoring and Threat Detection

Security measures in Microsoft 365 don’t stop at setting up defenses – Microsoft also maintains vigilant round-the-clock monitoring and intelligent threat detection to quickly spot and respond to any potential security issues. Much of this monitoring is behind the scenes, but it’s a crucial part of protecting data in the cloud.

  • 24/7 Physical Surveillance: Physically, as noted, each datacenter has a Security Operations Center that continuously monitors cameras, door sensors, and alarms throughout the facility[3]. If, for example, someone tried to enter a restricted area without authorization or if an environmental alarm (fire, flood) triggers, operators are alerted immediately. There are always security personnel on duty to respond to incidents at any hour[1]. This on-site monitoring ensures the physical integrity of the datacenter and by extension the servers and drives containing customer data.

  • Automated Digital Monitoring: On the digital side, Microsoft 365 is instrumented with extensive logging and automated monitoring systems. Every network device, server, and service in the datacenter produces logs of events and security signals. Microsoft aggregates and analyzes these logs using advanced systems (part of Azure Monitor, Microsoft Defender for Cloud, etc.) to detect abnormal patterns or known signs of intrusion. For example, unusual authentication attempts, atypical administrator activities, or strange network traffic patterns are flagged automatically. Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) are deployed at network boundaries to catch common attack techniques (like port scanning or malware signatures). Many of these defenses are inherited from Azure’s infrastructure, which uses standard methods such as anomaly detection on network flow data and threat intelligence feeds to identify attacks in progress[3].

  • Identity Threat Detection (AI-Based): Because identity (user accounts) is a key entry point, Microsoft uses artificial intelligence to monitor login attempts and user behavior for threats. Azure AD (Microsoft Entra ID) has built-in Identity Protection features that leverage adaptive machine learning algorithms to detect risky sign-ins or compromised accounts in real time[8]. For instance, if a user’s account suddenly tries to sign in from a new country or a known malicious IP address, the system’s AI can flag that as a high-risk sign-in. These systems can automatically enforce protective actions – like requiring additional authentication, or blocking the login – before an incident occurs[8]. This all happens behind the scenes; an admin might later see a report of “risky sign-ins” in their dashboard, but by then the AI has already done the monitoring and initial response. Essentially, Microsoft employs AI-driven analytics over the immense volume of authentication and activity data in the cloud to spot anomalies that humans might miss.

  • Email and Malware Protection: Another largely hidden layer is the filtering of content for malicious files or links. Microsoft 365’s email and file services (Exchange Online, OneDrive, SharePoint, Teams) integrate with Microsoft Defender technologies that scan for malware, phishing, and viruses. Email attachments are automatically scanned in transit; files uploaded to OneDrive/SharePoint can be checked by antivirus engines. Suspicious content might be quarantined or blocked without the user ever realizing it – they simply never receive the malicious email, for example. While admins do have security dashboards where they can see malware detections, the day-to-day operation of these defenses (signature updates, heuristic AI scans for zero-day malware, etc.) is fully managed by Microsoft in the background.

  • Distributed Denial-of-Service (DDoS) Defense: Microsoft also shields Microsoft 365 services from large-scale network attacks like DDoS. This is not visible to customers, but it’s critical for keeping the service available during attempted attacks. Microsoft’s massive global network presence allows it to absorb and deflect traffic floods across the globe. Microsoft has multi-tiered DDoS detection systems at the regional datacenters and global mitigation at edge networks[3]. If one of the Microsoft 365 endpoints is targeted by a flood of traffic, Azure’s network can distribute the load and drop malicious traffic at the edge (using specialized firewall and filtering appliances) before it ever reaches the core of the service[3]. Microsoft uses techniques like traffic scrubbing, rate limiting, and packet inspection (e.g., SYN cookie challenges) to distinguish legitimate traffic from attack traffic[3]. These defenses are automatically engaged whenever an attack is sensed, and Microsoft continuously updates them as attackers evolve. Additionally, Microsoft’s global threat intelligence – knowledge gained from monitoring many attacks across many services – feeds into improving these DDoS defenses over time[3]. The result is that even very large attacks are mitigated without customers needing to do anything. Users typically won’t even notice that an attack was attempted, because the service remains up. (For example, if one region is attacked, traffic can be routed through other regions, so end users may just see a slight network reroute with no interruption[3].)

  • Threat Intelligence and the Digital Crimes Unit: Microsoft also takes a proactive stance by employing teams like the Microsoft Digital Crimes Unit (DCU) and security researchers who actively track global threats (botnets, hacker groups, new vulnerabilities). They use this intelligence to preempt threats to Microsoft 365. For instance, the DCU works to dismantle botnets that could be used to attack the infrastructure[3]. Additionally, Microsoft runs regular penetration testing (“red teaming”) and security drills against its own systems to find and fix weaknesses before attackers can exploit them. All of these activities are behind the curtain, but they elevate the overall security posture of the service.

  • Security Incident Monitoring: Any time a potential security incident is detected, Microsoft’s internal security operations teams are alerted. They have 24/7 staffing of cybersecurity professionals who investigate alerts. Microsoft, being a cloud provider at huge scale, has dedicated Cyber Defense Operations Centers that work around the clock. They use sophisticated tools, many built on AI, to correlate alerts and quickly determine if something meaningful is happening. This continuous monitoring and quick response capability helps ensure that if any part of the Microsoft 365 environment shows signs of compromise, it can be addressed swiftly, often before it becomes a larger issue.
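
The risky sign-in detection described above can be caricatured as anomaly scoring against an account’s history. Real systems use trained models over many signals; the features, weights, and threshold below are entirely invented for illustration.

```python
def risk_score(history: list, attempt: dict) -> float:
    # Compare a new sign-in against the account's prior behavior.
    score = 0.0
    if attempt["country"] not in {h["country"] for h in history}:
        score += 0.5          # never seen this geography before
    if attempt["ip"] not in {h["ip"] for h in history}:
        score += 0.2          # unfamiliar source address
    if attempt["hour"] not in {h["hour"] for h in history}:
        score += 0.2          # unusual time of day
    return score

history = [{"country": "US", "ip": "203.0.113.5", "hour": 9},
           {"country": "US", "ip": "203.0.113.5", "hour": 14}]

familiar = {"country": "US", "ip": "203.0.113.5", "hour": 9}
suspect = {"country": "KP", "ip": "198.51.100.7", "hour": 3}

assert risk_score(history, familiar) == 0.0
assert risk_score(history, suspect) >= 0.7  # would trigger MFA or a block
```

A production system would map the score to a graded response – step-up authentication at moderate risk, outright block at high risk – which matches the “additional authentication or blocking” behavior the section describes.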

In essence, Microsoft is constantly watching over the Microsoft 365 environment – both the physical facilities and the digital services – to detect threats or anomalies in real time. This is a layer of security most users never see, but it dramatically reduces risk. Threats can be stopped or mitigated before they impact customers. Combined with the preventative measures (encryption, access control), this proactive monitoring means Microsoft is not just locking the doors, but also patrolling the hallways, so to speak, to catch issues early.
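
Of the DDoS techniques mentioned above, rate limiting is the easiest to show concretely. A common implementation is a token bucket: sustained floods exhaust the bucket and get dropped at the edge, while normal traffic passes untouched. The parameters here are illustrative, not anything Microsoft documents.

```python
class TokenBucket:
    def __init__(self, rate: float, burst: float):
        self.rate = rate     # tokens refilled per second
        self.burst = burst   # maximum bucket size
        self.tokens = burst
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill according to elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False         # over the limit: drop before it reaches the core

bucket = TokenBucket(rate=10.0, burst=5.0)
# A flood of 100 requests in the same instant: only the burst gets through.
allowed = sum(bucket.allow(now=0.0) for _ in range(100))
assert allowed == 5
assert bucket.allow(now=1.0)  # a second later, refill admits traffic again
```

Edge appliances apply this kind of policy per source or per flow, so an attacker’s traffic is throttled while everyone else’s requests continue to be served.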


Data Integrity, Resiliency, and Disaster Recovery

Protecting data isn’t only about keeping outsiders out – it’s also about ensuring that your files remain intact, available, and recoverable no matter what happens. Microsoft 365 has extensive behind-the-scenes mechanisms to prevent data loss or corruption, which end users might not be aware of but benefit from every day.

Microsoft 365 is built with the assumption that hardware can fail or accidents can happen, and it implements numerous safeguards so that customer files remain safe and accessible in such events. Here are some of the key resiliency and integrity measures:

  • Geo-Redundant Storage of Files: When you save a file to OneDrive or SharePoint (which underpins files in Teams as well), Microsoft 365 immediately stores that file in two separate datacenters in different geographic regions (for example, one on the East Coast and one on the West Coast of the U.S., if that’s your chosen data region)[9]. This is a form of geographic redundancy that protects against a catastrophic outage or disaster in one location. The file data is written near-simultaneously to both the primary and secondary location over Microsoft’s private network[9]. In fact, SharePoint’s system is set up such that if the write to either the primary or secondary fails, the entire save operation is aborted[9]. This guarantees that a file is only considered successfully saved if it exists in at least two datacenters. Should one datacenter go offline due to an issue (power failure, natural disaster, etc.), your data is still safely stored in the other and can be served from there. This replication is automatic and continuous, and end users don’t see it happening – they just know their file saved successfully.

  • Local Redundancy and Durable Storage: Within each datacenter, data is also stored redundantly. Azure Storage (which SharePoint/OneDrive uses for the actual file blobs) uses something called Locally Redundant Storage (LRS), meaning multiple copies of the data are kept within the datacenter (typically by writing it to three separate disks or nodes)[9]. So even if a disk or server in the primary datacenter fails, other copies in that same location can serve the data. Combined with the geo-redundancy above, this means typically there are multiple copies of your file in one region and multiple in another. The chance of losing all copies is astronomically low.

  • Data Integrity Checks (Checksums): When file data is written to storage, Microsoft 365 computes and stores a checksum for each portion of the file[9]. A checksum is like a digital fingerprint of the data. Whenever the file is later read, the system can compare the stored checksum with a freshly computed checksum of the retrieved data. If there’s any mismatch (which would indicate data corruption or tampering), the system knows something is wrong[9]. This allows Microsoft to detect any corruption of data at rest. In practice, if corruption is detected on the primary copy, the system can pull the secondary copy (since it has those near-real-time duplicates) or vice versa, thereby preventing corrupted data from ever reaching the user[9]. These integrity checks are an invisible safety net ensuring the file you download is exactly the one you uploaded.

  • Append-Only Storage and Versioning: SharePoint’s architecture for file storage is largely append-only. This means once a file chunk is stored (as described in the encryption section), it isn’t modified in place — if you edit a file, new chunks are created rather than altering the old ones[9]. This design has a side benefit: it’s very hard for an attacker (or even a software bug) to maliciously alter or corrupt existing data, because the system doesn’t permit random edits to stored blobs. Instead, it adds new data. Previous versions of a file remain as they were until they’re cleaned up by retention policies or manual deletion. SharePoint and OneDrive also offer version history for files, so users can retrieve earlier versions if needed. From a back-end perspective, every version is a separate set of blobs. This append-only, versioned approach protects the integrity of files by ensuring there’s always a known-good earlier state to fall back to[9]. It also means that if an attacker somehow got write access, they couldn’t secretly alter your file without creating a mismatch in the stored hashes or new version entries – thus any such tampering would be evident or recoverable.

  • Automated Failover and High Availability: Microsoft 365 services are designed to be highly available. In the event that one datacenter or region becomes unavailable (due to a major outage), Microsoft can fail over service to the secondary region relatively quickly[9]. For example, if a SharePoint datacenter on the East Coast loses functionality, Microsoft can route users to the West Coast replica. The architecture is often active/active – meaning both regions can serve read requests – so failover might simply be a matter of directing all new writes to the surviving region. This is handled by automation and the Azure traffic management systems (like Azure Front Door)[9]. Users might notice a brief delay or some read-only period, but full access to data continues. All of this is part of the disaster recovery planning that Microsoft continuously refines and tests. It’s invisible to the end user aside from maybe a status notice, but it ensures that even widespread issues don’t result in data loss.

  • Point-in-Time Restore & Backups: In addition to live replication, Microsoft 365 also leverages backups for certain data stores. For instance, the SharePoint content databases (which hold file metadata and the keys) are backed up via Azure SQL’s automated backups, allowing Point-In-Time Restore (PITR) for up to 14 days[9]. Exchange Online (for email) and other services have their own backup and redundancy strategies (Exchange keeps multiple mailbox database copies in a DAG configuration across datacenters). The key point is that beyond the multiple live copies, there are also snapshots and backups that can be used to restore data in rare scenarios (like severe data corruption or customer-requested recovery). Microsoft is mindful that things can go wrong and designs for failure rather than assuming everything will always work. If needed, they can restore data to a previous point in time to recover from unforeseen issues[9].

  • Protection Against Accidental Deletion: Microsoft 365 also provides behind-the-scenes protection for when users accidentally delete data. Services like OneDrive and Exchange have recycle bins or retention periods where deleted items can still be recovered for a time. Administrators can even enable retention policies that keep backups of files or emails for a set duration, even if users delete them. While not entirely invisible (end users see a recycle bin), these are part of the service’s built-in resilience. Furthermore, in SharePoint/OneDrive, if a large deletion occurs or a lot of files are encrypted by ransomware, Microsoft has a feature to restore an entire OneDrive or site to an earlier date. This leverages the versioning and backup capabilities under the hood to reconstruct the state. So even in worst-case scenarios on the user side, Microsoft 365 has mechanisms to help recover data.
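As an illustration of how an organization might make this protection explicit, a retention policy can be created with the Security & Compliance PowerShell cmdlets. The following is a sketch under stated assumptions: it assumes a connected Security & Compliance PowerShell session, and the policy name, locations, and seven-year duration are illustrative examples, not service defaults.

```powershell
# Sketch only - assumes a connection to Security & Compliance PowerShell
# (Connect-IPPSSession). Policy name, locations and duration are examples.
New-RetentionCompliancePolicy -Name "Example - keep files 7 years" `
    -SharePointLocation All -OneDriveLocation All

# Retain content for 7 years (duration is in days), even if a user deletes it
New-RetentionComplianceRule -Policy "Example - keep files 7 years" `
    -RetentionDuration 2555 -RetentionComplianceAction Keep
```

With a policy like this in place, deleted items are preserved behind the scenes for the retention period, independent of the user-visible recycle bin.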

All these resiliency measures operate without user intervention – files are quietly duplicated, hashed for integrity, and distributed across zones by Microsoft’s systems. The result is an extremely durable storage setup: Microsoft 365’s core storage achieves 99.999%+ durability, meaning the likelihood of losing data is infinitesimally small. End users and admins typically are not aware of these redundant copies or integrity checks, but they provide confidence that your files won’t just vanish or silently corrupt. Even in the face of natural disasters or hardware failures, Microsoft has your data safe in another location, ready to go.


Compliance with Global Security Standards and Regulations

To further assure customers of the security (and privacy) of their data, Microsoft 365’s security measures are aligned with numerous industry standards and are regularly audited by third parties. While compliance certifications are not exactly a “security measure,” they reflect a lot of unseen security practices and controls that Microsoft has implemented to meet rigorous criteria. End users might never think about ISO certifications or SOC reports, but these show that Microsoft’s security isn’t just robust – it’s independently verified and holds up to external scrutiny.

  • Broad Set of Certifications: Microsoft 365 complies with more certifications and regulations than nearly any other cloud service[3]. This includes well-known international standards like ISO/IEC 27001 (information security management) and ISO 27018 (cloud privacy), SOC 1 / SOC 2 / SOC 3 (service organization controls reports), FedRAMP High (for U.S. government data), HIPAA/HITECH (for healthcare data in the U.S.), GDPR (EU data protection rules), and many others[3]. It also includes region- or country-specific standards such as IRAP in Australia, MTCS in Singapore, G-Cloud in the UK, and more[3]. Meeting these standards means Microsoft 365 has implemented specific security controls – often beyond what an ordinary company might do – to protect data. For example, ISO 27001 requires a very comprehensive security management program, and SOC 2 requires strong controls in categories like security, availability, and confidentiality.

  • Regular Third-Party Audits: Compliance isn’t a one-time thing; Microsoft undergoes regular independent audits to maintain these certifications[3]. Auditors come in (usually annually or more frequently) to review Microsoft’s processes, examine technical configurations, and test whether the security controls are operating effectively. This includes verifying physical security, reviewing how encryption and key management are done, checking access logs, incident response processes, etc. Rigorous, third-party audits verify that Microsoft’s stated security measures are actually in place and functioning[3]. The fact that Microsoft 365 continually passes these audits provides strong assurance that the behind-the-scenes security is not just claimed, but proven.

  • Service Trust Portal & Documentation: Microsoft provides customers with documentation about these audits and controls through the Microsoft Service Trust Portal. Customers (particularly enterprise and compliance officers) can access detailed audit reports, like SOC 2 reports, ISO audit certificates, penetration test summaries, and so on[3]. While an end user wouldn’t use this, organizational admins can use these reports to perform their due diligence. The availability of this information means Microsoft is transparent about its security measures and allows others to verify them.

  • Meeting Strict Data Protection Laws: Microsoft has to adhere to strict data protection laws globally. For example, under the European GDPR, if Microsoft (as a data processor) experienced a breach of personal data, they are legally obligated to notify customers within a certain timeframe. Microsoft also signs Data Protection Agreements with customers, binding them to specific security commitments. Although legal compliance isn’t directly a “technical measure,” it drives Microsoft to maintain very high security standards internally (the fines and consequences of failure are strong motivators). Microsoft regularly updates its services to meet new regulations (for instance, the EU’s evolving cloud requirements, or new cybersecurity laws in various countries). This means the security baseline is continuously evolving to remain compliant worldwide.

  • Trust and Reputation: It’s worth noting that some of the world’s most sensitive organizations (banks, healthcare providers, governments, etc.) use Microsoft 365, which is only possible because of the stringent security and compliance measures in place. These organizations often conduct their own assessments of Microsoft’s datacenters and operations (sometimes even on-site inspections). Microsoft’s willingness to comply with such assessments, and its track record of successfully completing them, is another indicator of robust behind-the-scenes security.

In summary, Microsoft 365’s behind-the-scenes security measures aren’t just internally verified – they’re validated by independent auditors and meet the high bar set by global security standards[3]. While an end user may not know about ISO or SOC, they benefit from the fact that Microsoft must maintain strict controls to keep those certifications. This layer of oversight and certification ensures no corner is cut in securing your data.


Incident Response and Security Incident Management

Even with the best preventative measures, security incidents can happen. Microsoft has a mature security incident response program for its cloud services. While end users and admins might only hear about this if an incident affects them, it’s important to know that Microsoft is prepared behind the scenes to swiftly handle any security breaches or threats. Key aspects include:

  • Dedicated Incident Response Teams: Microsoft maintains dedicated teams of cybersecurity experts whose job is to respond to security incidents in the cloud. These teams practice the “prepare, detect, analyze, contain, eradicate, recover” cycle of incident response continually. They have playbooks for various scenarios (like how to handle a detected intrusion, or a stolen credential, etc.) and they rehearse these through drills. Microsoft also runs live site exercises (similar to fire drills) to simulate major outages or breaches and ensure the teams can respond quickly and effectively. This means that if something abnormal is detected by the monitoring systems – say, an unusual data access pattern or a piece of malware on a server – the incident response team is on standby to jump in, investigate, and mitigate.

  • Cutting Off Attacks: In the event of a confirmed breach or attack, Microsoft can isolate affected systems very quickly. For example, they might remove a compromised server from the network, fail over services to a safe environment, or revoke certain access tokens system-wide. Because Microsoft controls the infrastructure, they have the ability to implement mitigation steps globally at cloud scale – sometimes within minutes. An example scenario: if a vulnerability is discovered in one of the services, Microsoft can rapidly deploy a security patch across all servers or even roll out a configuration change that shields the flaw (such as blocking a certain type of request at the network level) while a patch is being readied.

  • Customer Notification and Support: If a security incident does result in any customer data being exposed or affected, Microsoft follows a formal process to inform the customer and provide remediation guidance. Under many regulatory regimes (and Microsoft’s contractual commitments), Microsoft must notify customers within a specified period if their data has been breached. While we hope such an event never occurs, Microsoft’s policies ensure transparency. They would typically provide details on what happened, what data (if any) was impacted, and what steps have been or will be taken to resolve the issue and prevent a recurrence. Microsoft 365 admins might receive an incident report or see something in the Message Center if it’s a widespread issue.

  • Learning and Improvement: After any incident, Microsoft’s security teams perform a post-mortem analysis to understand how it happened and then improve systems to prevent it in the future. This could lead to new detection patterns being added to their monitoring, coding changes in the service, or even process changes (for example, adjusting a procedure that might have been exploited socially). These continuous improvements mean the security posture gets stronger over time, learning from any incidents globally. Many of these details are internal and not visible to customers, but customers benefit because lessons learned prevent similar incidents from recurring.

  • Shared Responsibility & Guidance: Microsoft also recognizes that security is a shared responsibility between them and the customer. While Microsoft secures the infrastructure and cloud service, customers need to use the security features available (like setting strong passwords, using multi-factor auth, managing user access properly). Microsoft’s incident response extends to helping customers when a security issue is on the customer side too. For instance, if a tenant admin account is compromised (due to phishing, etc.), Microsoft might detect unusual admin activities and reach out or even temporarily restrict certain actions to prevent damage. They provide extensive guidance to admins (through the Secure Score tool, documentation, and support) on how to configure Microsoft 365 securely. So while this crosses into the admin’s realm, it’s part of the holistic approach to keep the entire ecosystem secure.

In essence, Microsoft has a plan and team for the worst-case scenarios, much of which an end user would never see unless an incident occurred. This preparedness is like an insurance policy for your data – it means that if ever there’s a breach or attack, professionals are on it immediately, and there’s a clear process to mitigate damage and inform those affected. The strict preventive measures we’ve discussed make incidents unlikely, but Microsoft still plans as if they will happen so that your data has that extra safety net.


Continuous Improvement and Future Security Enhancements

Security threats continually evolve, and Microsoft knows it must continuously improve its defenses. Many of the measures described above have been progressively enhanced over the years, and Microsoft is actively working on future innovations. Although end users might not notice these changes explicitly, the service is getting more secure behind the scenes over time.

  • Massive Security Investment: Microsoft invests heavily in security R&D – over $1 billion USD each year by recent accounts – which funds not only Microsoft 365 security improvements but also the teams and infrastructure that protect the cloud. Thousands of security engineers, researchers, and threat analysts are employed to keep Microsoft ahead of attackers. This means new security features and updates are constantly in development. For example, improvements in encryption (like adopting new encryption algorithms or longer key lengths) are rolled out as standards advance. In late 2023, Microsoft 365 upgraded its Office document encryption to use a stronger cipher mode (AES-256-CBC) by default[4], reflecting such continuous enhancements.

  • Innovation in Encryption and Privacy: Microsoft is working on advanced encryption techniques to prepare for the future. Post-quantum cryptography (encryption that will resist quantum computer attacks) is an area of active research, to ensure that even in the future Microsoft 365 can protect data against next-generation threats. Microsoft has also introduced things like Double Key Encryption, which we mentioned, allowing customers to hold a key such that Microsoft cannot decrypt certain data without it – even if compelled. This feature is an example of giving customers more control and ensuring even more privacy from the service side. As these technologies mature, Microsoft integrates them into the service for those who need them.

  • Enhancing Identity Security: Looking forward, Microsoft continues to refine identity protection. Features like passwordless authentication (using biometrics or hardware tokens instead of passwords) are being encouraged to eliminate phishing risks. Azure AD’s Conditional Access and anomaly detection are getting smarter through AI, meaning the system will get even better at blocking suspicious logins automatically. Microsoft is also likely to incorporate more behavioral analytics – for instance, learning a user’s typical access patterns and alerting or challenging when something deviates strongly from the norm.

  • Artificial Intelligence and Machine Learning: AI is playing an ever-growing role in security, and Microsoft is leveraging it across the board. The future will bring even more AI-driven features, such as intelligent email filters that catch phishing attempts by understanding language patterns, or AI that can automatically investigate and remediate simple security incidents (auto-isolate a compromised account, etc.). Microsoft’s huge datasets (activity logs, threat intelligence) feed these AI models. The goal is a sort of self-healing, self-improving security system that can handle threats at cloud speed. While admins might see the outcomes (like an alert or a prevented attack), the heavy lifting is done by AI behind the scenes.

  • Transparency and Customer Control: Interestingly, one future direction is giving customers more visibility into the security of their data. Microsoft has been adding features like Compliance Manager, Secure Score, Activity logs, etc., which pull back the curtain a bit on what’s happening with security. In the future, customers might get even more real-time insights or control levers (within safe bounds) for their data’s security. However, the baseline will remain that Microsoft implements strong default protections so that even customers who do nothing will be very secure.

  • Regulatory Initiatives (Data Boundaries): Microsoft is also responding to customer and government concerns through initiatives like the EU Data Boundary (ensuring EU customer data stays within EU datacenters and is handled by EU-based staff), expected by 2024. This involves additional behind-the-scenes controls on where data flows and who can touch it, adding another layer of data protection that isn’t directly visible but raises the bar on security and privacy.

Overall, Microsoft’s mindset is that security is an ongoing journey, not a destination. The company continually updates Microsoft 365 to address new threats and incorporate new safeguards. As a user of Microsoft 365, you benefit from these improvements automatically – often without even realizing they occurred. The strict security in place today (as described in this report) will only get stronger with time, as Microsoft continues to adapt and innovate.


Conclusion

Files stored in Microsoft 365 are protected by a comprehensive set of security measures that go far beyond what the end user or administrator sees day-to-day. From the concrete and biometric protections at the datacenter, to the multi-layer encryption and data fragmentation that safeguard the files themselves, to the stringent internal policies preventing anyone at Microsoft from improper access – every layer of the service is built with security in mind. These measures operate silently in the background, so users can simply enjoy the productivity of cloud storage without worrying about the safety of their data.

Importantly, these behind-the-scenes defenses work in tandem: if one layer is bypassed, the next one stands in the way. It’s extremely unlikely for all layers to fail – which is why breaches of Microsoft’s cloud services are exceedingly rare. Your data is encrypted with strong keys (and spread in pieces), stored in fortified datacenters, guarded by strict access controls, and watched over by intelligent systems and experts. In addition, regular audits and compliance certifications verify that Microsoft maintains these promises, giving an extra layer of trust.

In short, Microsoft 365 employs some of the industry’s most advanced and rigorous security measures to protect customer files[4]. Many of these measures are invisible to customers, but together they form a powerful shield around your data in the Microsoft cloud. This allows organizations and users to confidently use Microsoft 365, knowing that there is a deep and strict security apparatus – physical, technical, and procedural – working continuously to keep their information safe inside Microsoft’s datacenters. [4][3]

References

[1] Datacenter physical access security – Microsoft Service Assurance

[2] Physical security of Azure datacenters – Microsoft Azure

[3] Microsoft denial-of-service defense strategy

[4] Encryption in Microsoft 365 | Microsoft Learn

[5] Data encryption in OneDrive and SharePoint | Microsoft Learn

[6] Account management in Microsoft 365 – Microsoft Service Assurance

[7] Microsoft 365 service engineer access control

[8] Azure threat protection | Microsoft Learn

[9] SharePoint and OneDrive data resiliency in Microsoft 365

Find the largest files in OneDrive for Business

image

If you navigate to your OneDrive for Business, select My files, and then scroll down to the bottom of the left navigation pane, you will find the following option.

image

This option gives you an indication of the space consumed by your OneDrive for Business.

You might also notice that the used storage is hyperlinked.

image

If you select that hyperlink, you will see a list of the largest files you have saved in your OneDrive for Business, in descending order of size, as shown above.

This option makes it easy to quickly see your total OneDrive for Business storage usage and to find the largest files you have saved. Very handy!

Setting Archive Tier on Azure storage

In my article

Moving to the Cloud – Part 2

I spoke about using Azure Archive storage as a good location for long-term data retention. The way you configure this is to set up a storage account as usual and initially configure it as ‘Cool’ storage (since you can’t create the account on the Archive tier directly). You then upload files there (typically using Azure Storage Explorer). The final piece of the puzzle is to change the access tier from ‘Cool’ to ‘Archive’ by right-clicking on the item.

image

You can do the same using Azure Storage Explorer.

The challenge comes when you want to change more than a single file at a time.

image

You’ll see that once you have two or more items selected, you no longer get the option to set a tier. The same happens with Azure Storage Explorer.

Thanks to Marc Kean, who pointed me in the right direction: the solution lies in changing the tier programmatically. Marc has a script on his site and I found another on GitHub as well, but I decided to write my own anyway, which you’ll find here:

https://github.com/directorcia/Azure/blob/master/az-blob-tierset.ps1

With mine, you’ll need to set the following variables first at the top of the script:

$storageaccountname = "<your storage account name here>"

$storageresourcegroup = "<your storage account resource group name here>"

$storagetier = "<your desired storage tier level here>" # Hot, Cool or Archive

You’ll also need to connect to your Azure account beforehand, which you can do with this script of mine:

https://github.com/directorcia/Azure/blob/master/az-connect.ps1

My script will first get the storage account via:

$storageaccount = Get-AzStorageAccount -name $storageaccountname -ResourceGroupName $storageresourcegroup

get the access key for that account via:

$key = (Get-AzStorageAccountKey -ResourceGroupName $storageaccount.ResourceGroupName -Name $storageaccount.StorageAccountName).Value[0]

get the storage context via:

$context = New-AzStorageContext -StorageAccountName $storageaccount.StorageAccountName -StorageAccountKey $key

and get the containers in that account via:

$storagecontainers = Get-AzStorageContainer -Context $context

It will then build an array of all the blobs in each container and cycle through those items, changing their tier level via:

$blob.ICloudBlob.SetStandardBlobTier($storagetier)

This effectively changes all the items in the container to the tier level you select, which is why I like to set up containers for specific tiers rather than intermingling them.
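Putting those pieces together, the core of such a script might look like the following. This is a sketch based on the steps described above rather than the published script itself; it assumes the Az.Storage module is installed and you have already connected to Azure.

```powershell
# Sketch only - assumes the Az.Storage module and an existing login (Connect-AzAccount)
$storageaccountname   = "<your storage account name here>"
$storageresourcegroup = "<your storage account resource group name here>"
$storagetier          = "Archive"   # Hot, Cool or Archive

# Get the storage account, an access key and a storage context
$storageaccount = Get-AzStorageAccount -Name $storageaccountname -ResourceGroupName $storageresourcegroup
$key = (Get-AzStorageAccountKey -ResourceGroupName $storageaccount.ResourceGroupName -Name $storageaccount.StorageAccountName).Value[0]
$context = New-AzStorageContext -StorageAccountName $storageaccount.StorageAccountName -StorageAccountKey $key

# Cycle through every blob in every container, setting the tier on each
$storagecontainers = Get-AzStorageContainer -Context $context
foreach ($container in $storagecontainers) {
    $blobs = Get-AzStorageBlob -Container $container.Name -Context $context
    foreach ($blob in $blobs) {
        $blob.ICloudBlob.SetStandardBlobTier($storagetier)   # block blobs only
    }
}
```

Because it tiers everything in every container it finds, this approach works best when each container is dedicated to a single tier.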

Just remember to run this script AFTER you upload your files to swap them to the cheaper Archive tier. You could also use it to swap them back at a later stage if you need to.

Moving to the Cloud – Part 2

This is part of a multi part examination of the options of moving to the Microsoft cloud. If you missed the first episode, you’ll find it here:

Moving to the Cloud – Part 1

which covered setting up a site to site VPN to Azure.

The next piece of the puzzle that we’ll add here is storage.

Storage in the Microsoft cloud comes in many forms: SharePoint, Teams, OneDrive for Business and Azure. We’ll get to stuff in Microsoft 365 like SharePoint, Teams and OneDrive later, but to start off with we want to take advantage of the site to site VPN that was set up in Part 1.

In Azure there are three different access tiers of storage: hot, cool and archive. They all vary by access speed and cost: the slower the access, the cheaper it is. Hot is the fastest access, followed by cool, then archive. You can read more about this here:

Azure Blob storage: hot, cool, and archive access tiers

The other variable here with Azure storage is the performance tier; standard or premium. You can read more here:

Introduction to Azure storage

Basically, standard performance tier uses HDD while Premium uses SSD. Apart from performance, the major difference is how the storage cost is actually calculated. With the standard tier, you are only billed for the space you consume BUT you are also billed for access (read, write, delete) operations. With premium, you are billed for the total capacity of the storage you allocate immediately BUT, you are not billed for any access operations.

So the key metrics to keep in mind when designing a storage solution in Azure are: the access tier (hot, cool or archive), the performance tier (standard or premium), and the capacity you desire for each. You may find some combinations are unavailable, so check out the document linked above for more details on what is available with all these options.
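To make the two billing models concrete, here is a rough cost model in PowerShell. The per-GB and per-operation rates below are made-up placeholders for illustration only; check the Azure pricing calculator for real rates.

```powershell
# Illustrative cost model only - the per-GB and per-operation rates are
# placeholder numbers, NOT real Azure prices.
function Get-StandardTierCost ($usedGB, $operations, $ratePerGB, $ratePerOp) {
    # Standard: billed for consumed space PLUS every read/write/delete operation
    ($usedGB * $ratePerGB) + ($operations * $ratePerOp)
}

function Get-PremiumTierCost ($allocatedGB, $ratePerGB) {
    # Premium: billed for the full allocated capacity; operations are free
    $allocatedGB * $ratePerGB
}

Get-StandardTierCost -usedGB 100 -operations 50000 -ratePerGB 0.02 -ratePerOp 0.0004
Get-PremiumTierCost  -allocatedGB 1024 -ratePerGB 0.20
```

The key takeaway from the model: on standard, a small amount of data with heavy access can still be expensive, while on premium you pay for allocated capacity whether you use it or not.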

The easiest approach to Azure storage is to create an Azure SMB share and map it directly on a workstation, which I have previously detailed here:

Creating an Azure SMB Share

as well as an overview on pricing:

Clarification on Azure SMB file share transactions

Azure SMB files currently only supports the hot and cool tiers. You can use archive storage, but only via blob access, not SMB files. So what good are all of these, you may ask? Well, if you read my article:

Data discovery done right

You’ll find that I recommend dividing up your data into items to be deleted, archived and to be migrated.

So we need to ask ourselves the question, what data makes sense where?

Let’s start with Azure archive storage. What makes sense in here, given that Azure archive storage is aimed at replacing traditional long-term storage (think tape drives)? Into this, you want to put data that you aren’t going to access very often, and that doesn’t make sense going into Teams, SharePoint and OneDrive. What sort of data doesn’t make sense going into SharePoint? Data that can’t be indexed, such as large image files without text, Outlook PST backups, and custom file types SharePoint indexing doesn’t support (think some types of CAD files and other third-party file types). In my case, Azure archive storage is a great repository for those PST backups I’ve accumulated over the years.

Here is the guidance from Microsoft:

  • Hot – Optimized for storing data that is accessed frequently.

  • Cool – Optimized for storing data that is infrequently accessed and stored for at least 30 days.

  • Archive – Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements (on the order of hours).
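For reference, this is roughly how you would create a storage account on a given access tier with Azure PowerShell. The resource group, account name and location below are examples only; note that New-AzStorageAccount only accepts Hot or Cool for -AccessTier, which is why the archive tier has to be applied per blob after upload.

```powershell
# Sketch with example names - assumes the Az module and an existing login
New-AzStorageAccount -ResourceGroupName "example-rg" `
    -Name "examplearchivestore" `
    -Location "australiaeast" `
    -SkuName Standard_LRS `
    -Kind StorageV2 `
    -AccessTier Cool   # Hot or Cool only; Archive is set per blob after upload
```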

We now repeat this exercise with the cool tier; remember that this tier is directly supported by Azure SMB files. So, what makes sense here? There is obviously no hard and fast rule, but again: what doesn’t make sense going into SharePoint? Stuff that can’t be indexed, is typically large, is not accessed that often (but more often than archive storage), AND that you also want accessible via a mapped drive letter. In my case, the data that springs to mind is my desktop utility apps (like robocopy), ISO images (of old versions of SharePoint Server I keep in case I need to do a migration) and copies of my podcast recordings in MP3 format.

We repeat this again for the hot tier, which is the fastest and most expensive storage option. Initially I’m going to place user profile data here when I get around to configuring Windows Virtual Desktop (WVD) in this environment. That needs to be quick; however, most other current data files I have will go into Microsoft 365. Being the most expensive tier of storage, I want to keep this as small as possible and only REALLY put data on here that makes sense.

You don’t have to use all three tiers as I do. You can always add more storage later if you need to, but I’d recommend you work out what capacity you want for each tier and then implement it. For me, I’m going for 100GB archive, 100GB cool and 50GB hot as a starting point. Your capacities will obviously vary depending on how much data you plan to put in each location. That’s why you need to have some idea of where all your data is going to go BEFORE you set all this stuff up. Some will go to Azure, some will go to Microsoft 365, some will be deleted and so on.

As for performance tiers, I’m going to stick with standard across all storage accounts for now to keep costs down and only pay for the capacity I actually use.

Let’s now look at some costs by using the Azure pricing calculator:

image

I’ll firstly work out the price for each based on 1TB total storage, for comparison between the tiers and with SharePoint and OneDrive for Business.

All the storage calculations are in AU$, out of the Australian East data center, on the standard performance tier and locally redundant unless otherwise stated.

You can see that 1TB of archive storage is only AU$2.05, but it ain’t that simple.

image

There are other operations, as you can see above, that need to be taken into account. I have adjusted these to what I believe makes sense for this example, but as you can see, variations here can significantly alter the price (especially the read operations).

The estimated total for 1TB of archive storage on the standard performance tier = AU$27.05 per month.

Now, as a comparison, if I change the performance tier to Premium I get:

image

The price of the storage goes way up, while the price of operations goes way down. So, if you want to minimise costs, the standard tier is your best option unless you perform a very large number of operations on your storage.

The estimated total for 1TB of archive storage on the premium performance tier = AU$224.22 per month.

Basically 10x the cost of the standard tier.

In my case, I don’t need 1TB of storage, I only want 100GB of storage.

image

When I now do the estimation for 100GB of archive storage, the cost of just the storage falls by 10x (as expected) to AU$0.20. Don’t forget, however, about the storage operations, which remain the same. So, my storage cost went down but my operation costs remained the same. Thus,

The estimated total for my 100GB of archive storage on the standard performance tier = AU$25.95 per month.

While premium is:

image

The estimated total for my 100GB of archive storage on the premium performance tier = AU$22.78 per month.

As outlined before, as a general rule of thumb with archive storage, the premium performance tier is better value for low storage capacity and low data operations. Once the capacity increases with premium performance, the price ramps up.

So why would I recommend staying with the standard performance tier? Although I ‘estimate’ that my archive will be small, I want the flexibility to grow the capacity if I need it. Remember that we don’t set a storage capacity quota for blob storage; it can just grow as needed, and the bigger the storage capacity, the more it will cost me if I go premium. Given that storage capacity here is more important than working with the data, I want the cheapest storage cost I can get as the data capacity increases. Thus, I’ll stick with the standard performance tier. Also, remember that I’m estimating: when my storage reaches 100GB I’ll be billed AU$25.95 per month, but until I reach that capacity (and the fewer operations I do on files there), the cheaper this storage will be. I therefore expect my ‘real world’ costs to be much less than this AU$25.95 figure over time.

Let’s now look at the next two storage locations, which will be Azure SMB file shares.

Unfortunately, the pricing calculator doesn’t allow us to easily calculate the price for an SMB share on the cool access tier (Azure SMB files doesn’t currently support the archive tier). However, the pricing is only an estimate, and I know that placing it on the cool access tier will make it cheaper anyway, so I’m going to keep it simple.

image

Thus, for reference:

The estimated total for 1TB of SMB file storage on the standard performance tier = AU$106.58 per month.

remembering that for the standard tier we need to take into account the cost of operations as shown.

and for Premium:

image

The estimated total for 1TB of SMB file storage on the premium performance tier = AU$348.00 per month.

With premium storage, you don’t need to worry about operations; however, don’t forget that if you go premium you’ll be paying for the total allocated capacity no matter how much you are actually using. Thus, I’ll again be sticking with standard storage.

So, for my 50GB Azure SMB files hot tier I calculate the following:

image

The estimated total for my 50GB of hot SMB file storage on the standard performance tier = AU$32.40 per month.

Now, how can I get an idea of what the cool SMB file price will be? Although it isn’t really this simple, I’m going to use a ratio derived from:

Azure Block blob pricing

image

So, by my super rough rule of thumb maths I get:

cool/hot = 0.0206/0.0275 ≈ 0.75

Thus, cool storage is roughly 75% of the cost of hot storage.

The estimated total for my 100GB of cool SMB file storage on the standard performance tier = AU$32.40 per month x 2 x 0.75 = AU$48.60 per month

The factor of 2 is because the hot price I have is only for 50GB and I want 100GB of cool storage.
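Written out as arithmetic, using the block blob rates quoted above and my 50GB hot SMB figure:

```python
# Rough cool-tier estimate from the hot-tier quote, as described above.
hot_per_gb = 0.0275   # block blob hot AU$/GB (from the pricing page above)
cool_per_gb = 0.0206  # block blob cool AU$/GB (from the pricing page above)
ratio = cool_per_gb / hot_per_gb   # ~0.75

hot_50gb_quote = 32.40  # AU$/month quote for 50GB of hot SMB file storage
# x2 scales the 50GB quote up to 100GB, then the cool/hot ratio is applied:
cool_100gb_estimate = hot_50gb_quote * 2 * 0.75

print(round(ratio, 2))                 # ~0.75
print(round(cool_100gb_estimate, 2))   # AU$48.60 per month
```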

In summary then, I will create 3 x storage repositories for my data:

– 100GB blob archive storage = AU$25.95 per month

– 100GB SMB file cool storage = AU$48.60 per month

– 50GB SMB file hot storage = AU$32.40 per month

250GB total storage estimated cost = AU$106.95 per month

Again, remember that this is my estimated MAXIMUM cost; I expect it to be much lower until the data capacities actually reach these levels.

Now that I have the costs, how do I actually go about using these storage locations?

Because archive storage is blob storage, I’ll need to access it via something like Azure Storage Explorer, as I can’t easily use Windows Explorer. I’m not expecting all users to work with this data, so Azure Storage Explorer will work fine for the select few who need to upload and manipulate data.

As for the SMB file cool and hot storage I’m going to map these to two drives across my VPN as I have detailed previously:

Azure file storage private endpoints

This means they’ll just appear as drive letters on workstations, and I can copy data up there from anything local, like a file server. The great thing is that these Azure SMB file shares are only available across the VPN and not directly from elsewhere, as the article shows. That can be changed if desired, but for now that’s the way I’ll leave it. I can also get to these locations via Azure Storage Explorer if I need to. The flexibility of the cloud.

So far we now have:

– Site to Site VPN to Azure (<5GB egress from / unlimited ingress to Azure) = AU$36.08 per month

– 100GB blob archive storage = AU$25.95 per month

– 100GB SMB file cool storage (mapped to Y: Drive) = AU$48.60 per month

– 50GB SMB file hot storage (mapped to Z: Drive) = AU$32.40 per month

Total maximum infrastructure cost to date = AU$143.03 per month
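A quick sanity check of that running total:

```python
# Running total of the monthly estimates listed above (AU$).
costs = {
    "Site to Site VPN to Azure": 36.08,
    "100GB blob archive storage": 25.95,
    "100GB SMB file cool storage (Y:)": 48.60,
    "50GB SMB file hot storage (Z:)": 32.40,
}
total = round(sum(costs.values()), 2)
print(total)   # AU$143.03 per month
```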

So we now have in place the ability to start shifting data that doesn’t make sense going into Microsoft 365 SharePoint, Teams and OneDrive for Business. Each of the three new storage locations has its advantages and disadvantages. That is why I created them all: to give me the maximum flexibility at the minimum cost.

We continue to build from here in upcoming articles. Stay tuned.

Azure file storage private endpoints

I’ve previously detailed how to create an Azure SMB File Share:

Creating an Azure SMB file share

as a way to create a ‘cloud USB’ drive that you can map to just about any desktop quickly and easily. All of this is accomplished securely, but many remain hesitant to do this directly across the Internet. Luckily, there is now an option to map this SMB share to an IP address inside an Azure VNet to restrict access if desired.

image

Before you set this up you will need an existing Azure VNet as well as a paid Azure subscription. You can add a Private Endpoint to an existing Azure Storage account or create one at the same time you create a new Azure Storage account. In this case, I’m going to add one to an existing account.

In the Azure portal search for “private link”, which should then take you to the Private Link Center as shown above. Select the Add button on the right.

image

You’ll need to select a Resource Group as well as a Name as shown above.

image

You’ll then need to select the Azure Storage account and the file option to connect to an existing SMB file share, as shown above.

image

Next, you’ll need to connect to an existing VNet. If you want to access the resource privately by name, you’ll also need to integrate it with a private DNS zone, which will be set up for you as part of this process.

image

You can then add tags. Note: when I created mine, assigning tags here prevented me from creating the Private Endpoint, which appears to be a bug. So, if you hit the same issue, create the Private Endpoint without tags and add them later.

With all that done, select the Create button on the Review + Create page to finish the configuration.

image

When the setup process is complete you’ll see your endpoint, as shown above, with an allocated IP address on the VNet you selected.

image

If you then look at your VNet, as shown above, you will see the Storage Account listed as a connected device.

image

If you now visit the Storage Account and select Firewalls and virtual networks, as shown above, you can configure which networks can access this new Private Endpoint.

Leaving the option set to All networks means that you can still map to that SMB share directly across the Internet, which you may want.

image

However, in the above case, I have selected to restrict the access to the Vnet only.

image

Doing so means that the ONLY way I can now access that SMB share is via the selected VNet. I can’t get to it using the Azure portal on my remote desktop machine, as shown above.

image

If I wanted to access this from a remote location, outside the Vnet across the Internet, I could add those details below. However, I have chosen not to do this.

My Azure SMB file share now has a dedicated IP address restricted to access via an Azure VNet, so how do I work with this share directly on premises? Easy. I set up an Azure Site to Site VPN to that same VNet, and I can then access that Azure SMB file share from my local machines by mapping a drive to the private IP address.
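Once the VPN is up, mapping the share on a Windows workstation comes down to a standard net use against the private endpoint IP. The helper below is just an illustrative sketch, and the IP address, share name, account name and key are all placeholder values you would substitute with your own:

```python
import subprocess

def map_azure_share(drive, private_ip, share, account, key, run=False):
    """Build (and optionally run) the Windows 'net use' command that maps
    an Azure SMB file share through its private endpoint IP.
    All example argument values are placeholders."""
    cmd = [
        "net", "use", drive,
        rf"\\{private_ip}\{share}",     # UNC path via the private IP
        f"/user:Azure\\{account}", key,  # storage account name + access key
        "/persistent:yes",
    ]
    if run:  # only meaningful on a Windows machine connected to the VPN
        subprocess.run(cmd, check=True)
    return cmd

# Placeholder endpoint IP, share and storage account name:
print(map_azure_share("Z:", "10.0.0.4", "myshare", "mystorageacct", "<storage-key>"))
```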

image

Thus, the only way that Azure SMB file share can be accessed is across a Site to Site VPN, making it even more secure.

image

Private Endpoints support connections to a number of Azure PaaS services, as shown above. This is handy as it allows you to connect your Azure IaaS services (like VMs) directly to Azure PaaS services (like storage) quickly and easily. What’s the benefit? Remember, IaaS is typically billed on time used, while PaaS is billed on resource consumption. So why should I pay for a VM to store my data, paying for the time it runs (typically 24/7) plus disk storage, when I could use Azure Storage and be billed just for the data capacity?
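To make that IaaS-versus-PaaS point concrete, here is a back-of-envelope comparison. Every rate below is an assumed placeholder, not a real Azure price:

```python
# Back-of-envelope IaaS vs PaaS storage cost comparison.
# All rates are illustrative placeholders, NOT real Azure prices.
VM_PER_HOUR = 0.10      # assumed AU$/hour for a small file-server VM
VM_DISK_PER_GB = 0.15   # assumed AU$/GB/month for its managed disk
PAAS_PER_GB = 0.03      # assumed AU$/GB/month for Azure Storage

def iaas_monthly(gb, hours=730):
    # IaaS: pay for compute time (typically running 24/7) plus disk capacity.
    return hours * VM_PER_HOUR + gb * VM_DISK_PER_GB

def paas_monthly(gb):
    # PaaS: pay only for the data capacity actually consumed.
    return gb * PAAS_PER_GB

print(round(iaas_monthly(250), 2))   # VM runtime dominates the bill
print(round(paas_monthly(250), 2))   # capacity-only billing
```

Under these assumed rates the VM’s always-on runtime dwarfs the storage charge, which is exactly the consumption-billing advantage the paragraph above describes.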

PaaS is the future and has many benefits over IaaS. You should be looking to shift as much of your infrastructure as possible to PaaS to take advantage of things like reduced maintenance, cost savings, etc. Private Endpoints are an easy way to start doing just that. For more information on Azure Private Endpoints visit:

What is Azure Private Endpoint?