Windows Server and the future of file servers in the cloud computing world
Source: techrepublic.com
We still run our businesses on files. How is Microsoft upgrading Windows Server to use files in a hybrid world?
We do a lot with servers today — much more than the age-old file and print services that once formed the backbone of business. Now servers run line-of-business applications, host virtual machines, support collaboration, provide telephony services, manage internet presence. It’s a list that goes on and on — and too often we forget that they’re still managing and hosting files.
There are occasional reminders of Windows as a file server, with Microsoft finally deprecating the aging SMB 1 file-sharing protocol and turning it off by default in Windows 10. It was a change that forced system administrators to confront insecure connections and the applications that were still using them. There's an added problem: many legacy file servers are still running the now-unsupported Windows Server 2008 R2.
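If you want a quick, scriptable way to see whether a server has had SMB 1 explicitly switched off, the setting is exposed in the registry. Below is a minimal sketch in Python, assuming a Windows host and the documented LanmanServer "SMB1" value; Windows Server's own SMB configuration tooling remains the authoritative check.

```python
# Minimal sketch: report whether the SMB1 server component has been explicitly
# disabled in the registry on a Windows machine. If the value is absent, the
# operating system's default for that Windows version applies.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

def smb1_registry_state() -> str:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "SMB1")
            return "disabled" if value == 0 else "enabled"
    except FileNotFoundError:
        return "not configured (OS default applies)"

if __name__ == "__main__":
    print(f"SMB1 server: {smb1_registry_state()}")
```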
Files aren’t going away
Microsoft hasn’t forgotten the Windows File Server and the services that support it. There’s still a lot of work going into the platform, using it as a bridge between on-premise storage and the growing importance of cloud-scale storage in platforms like Azure. New hardware is having an effect, with technologies like Optane blurring the distinction between storage and memory and providing a new fast layer of storage that outperforms flash.
As much as organizations use tools like Teams and Slack, and host documents in services like SharePoint and OneDrive, we still run our businesses on files. We might not be using a common shared drive for all those files anymore, but we’re still using those files and we still need servers to help manage them. Windows Server’s recent updates have added features intended to help modernize your storage systems, building on key technologies including Storage Replica and new tools to build and run scale-out file servers.
Much of Microsoft’s thinking around modern file systems is focused on hybrid storage scenarios, bridging on-premise and cloud services. It’s a pragmatic choice: on-premise storage can benefit from cloud lessons, while techniques developed for new storage hardware on-premise can be used in the cloud as new hardware rolls out. That leads to a simple process for modernizing file systems, giving you a set of steps to follow when updating Windows Server and rolling out new storage hardware. In a presentation at Ignite 2019, Ned Pyle, principal program manager on the Windows Server team, breaks it down into four steps: Learn/Inventory, Migrate/Deploy, Secure, and Future.
[Image: Managing multiple server migrations (to newer hardware or VMs) from the Windows Admin Center interface. Credit: Microsoft]
Building a modern file system
The latest version of SMB, SMB 3.1.1, adds new security features to reduce the risks to your files: it improves encryption and adds protection against man-in-the-middle attacks. It's a good idea to migrate as much of your file traffic as possible to it, removing NTLM and SMB 1 from your network.
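To see what an encrypted SMB session looks like from the client side, here's a minimal sketch using the third-party smbprotocol Python package; the server name, share and credentials are placeholders, and the encrypt flag simply refuses any session the server won't encrypt.

```python
# Minimal sketch: read from a file share over SMB 3.x with encryption required,
# using the smbprotocol package (pip install smbprotocol). Server, share and
# credentials below are placeholders.
import smbclient

# Register a session that requires SMB encryption; the library negotiates the
# highest dialect the server offers (SMB 3.1.1 on current Windows Server).
smbclient.register_session(
    "fileserver01",                   # hypothetical server
    username="CONTOSO\\svc-files",    # hypothetical account
    password="example-password",
    encrypt=True,                     # refuse unencrypted sessions
)

# List and read files much as you would a local path.
for name in smbclient.listdir(r"\\fileserver01\finance"):
    print(name)

with smbclient.open_file(r"\\fileserver01\finance\budget.txt", mode="r") as fd:
    print(fd.read())
```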
You shouldn't forget Microsoft's alternative file system, ReFS. Designed for resilience at large scale, it can use its integrity streams option to validate data, and it now supports file-system-level data deduplication. Used as part of Windows Server's Storage Spaces, that can deliver significant space savings.
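If you're auditing which data volumes are already on ReFS, one quick check is the file system name that fsutil reports. This is a sketch only, assuming a Windows host and fsutil's English-language output; the drive letter is a placeholder.

```python
# Minimal sketch: report the file system of a Windows volume by parsing the
# output of "fsutil fsinfo volumeinfo", e.g. to confirm a data volume is ReFS.
# Usually needs an elevated prompt; the drive letter is a placeholder.
import subprocess

def volume_filesystem(drive: str) -> str:
    out = subprocess.run(
        ["fsutil", "fsinfo", "volumeinfo", drive],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "File System Name" in line:
            return line.split(":", 1)[1].strip()
    return "unknown"

if __name__ == "__main__":
    print(volume_filesystem("D:"))
```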
Microsoft now offers a Storage Migration Service to help manage server upgrades. As well as supporting migrations from on-premise to Azure, it can help bring files from older Windows Server versions to Windows Server 2019 and its newer file system tools and services. It will inventory storage and networking, copy data and preserve file security settings before cutting over: the new server takes on the old server's identity and the old endpoints are renamed out of the way, so users and applications don't need to be repointed.
Part of the future for Windows Server's file protocols is an implementation of SMB over the QUIC transport, which runs over UDP port 443 and is secured with TLS 1.3, making it resistant to spoofing. Microsoft is also working on adding SMB compression to file traffic, reducing payload sizes and offering improved performance on congested networks and over low-bandwidth connections.
Using the cloud with Azure Files
One option for building a hybrid file system is using Azure Files. On-premise systems can use VPN connections with either NFS or SMB 3.0 to Azure to work with what looks like a familiar share, except that it's hosted on Azure. If you're not using a VPN you still have secure connectivity options: SMB 3.0 over port 445, or the Azure Files REST API over HTTPS. All you need is the Windows network name of the share, and you use it the same way you'd use any Windows Server share locally.
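As an illustration of the SMB path, this is roughly what mounting an Azure file share over port 445 looks like when scripted. The storage account, share name and key are placeholders, and in practice the key should come from a secrets store rather than a script.

```python
# Minimal sketch: mount an Azure file share over SMB 3.x (TCP port 445) as a
# drive letter on a Windows machine. Storage account, share and key are
# placeholders; fetch the real key from a secrets store, not source code.
import subprocess

ACCOUNT = "contosostorage"    # hypothetical storage account
SHARE = "teamfiles"           # hypothetical share
UNC = rf"\\{ACCOUNT}.file.core.windows.net\{SHARE}"

subprocess.run(
    [
        "net", "use", "Z:", UNC,
        rf"/user:Azure\{ACCOUNT}",
        "STORAGE_ACCOUNT_KEY_GOES_HERE",
        "/persistent:yes",
    ],
    check=True,
)
print(f"Mounted {UNC} as Z:")
```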
Those Azure file shares aren’t only for on-premise data; they’re accessible using the same protocols inside Azure. With data now a hybrid resource, you can use Azure for scalable compute and analytics, or for gathering and sharing IoT analytics with on-premise applications, or as a disaster recovery location that’s accessible from anywhere in the world. There’s no change to your servers, or the way you work, only to where that data is stored. With Azure storage able to take advantage of its economies of scale, you can expand those shares as needed, without having to invest in physical storage infrastructure.
There's certainly a lot of capacity in Azure file shares: up to 100TB of storage per share, with 10,000 IOPS on standard shares (and roughly ten times that if you pay for premium). There's support for Azure Active Directory, so you can apply the same access control rules as in your on-premise systems. Ignite 2019 saw Microsoft add support for NFS shares, as well as increase the maximum file size to 4TB and add support for Azure Backup. To simplify things further, Azure file shares can be managed through Windows Admin Center.
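Provisioning those shares can be scripted against the Azure Files REST API. The sketch below uses the azure-storage-file-share Python SDK with a placeholder connection string; the share name, quota and file paths are purely illustrative.

```python
# Minimal sketch: create an Azure file share with a quota and upload a file
# through the Azure Files REST API, using the azure-storage-file-share SDK
# (pip install azure-storage-file-share). The connection string is a placeholder.
from azure.storage.fileshare import ShareClient

CONN_STR = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"

share = ShareClient.from_connection_string(CONN_STR, share_name="teamfiles")
share.create_share(quota=5120)   # quota in GiB; large file shares scale to the 100TB range

# Azure Files needs parent directories to exist before files go into them.
share.create_directory("reports")
file_client = share.get_file_client("reports/q3-summary.docx")
with open("q3-summary.docx", "rb") as data:   # illustrative local file
    file_client.upload_file(data)
```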
Azure Files and Windows Server: working together
Perhaps the most important recent change is the shift to workload-optimized service tiers. By picking the plan that's closest to your needs, you can be sure you're not paying for features you don't want. At one end of the scale is high I/O and throughput, with Premium storage on SSDs; at the other, archival storage on Cool disks, with slower startup times, keeps costs to a minimum.
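Tier selection can be made when a share is created. The sketch below assumes the access_tier option found in recent versions of the azure-storage-file-share SDK; the tier names and share names shown are illustrative, so check them against the current documentation.

```python
# Minimal sketch: pick a workload-optimized tier when creating standard Azure
# file shares. The access_tier keyword and tier names reflect recent versions
# of the azure-storage-file-share SDK; treat them as assumptions to verify.
from azure.storage.fileshare import ShareClient

CONN_STR = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"

# High-churn project data: optimize for transactions rather than storage cost.
ShareClient.from_connection_string(CONN_STR, "active-projects").create_share(
    access_tier="TransactionOptimized"
)

# Rarely touched archives: keep storage costs down on the cooler tier.
ShareClient.from_connection_string(CONN_STR, "yearly-archives").create_share(
    access_tier="Cool"
)
```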
Users will be able to access these Azure-hosted file shares as if they were a Windows Server file share, allowing you to begin phasing out local file servers and reducing the attack surface of your local systems. Attackers won't be able to use the file system as a route into line-of-business servers, or as a vector for privilege escalation. Domain-joined Azure file shares will be accessible via SMB 3.0 over VPN connections or over ExpressRoute's high-speed dedicated links to Azure.
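Because outbound TCP port 445 is blocked on many networks, it's worth confirming that the SMB path to the Azure Files endpoint is actually reachable before pointing users at a share; that's exactly where a VPN or ExpressRoute connection earns its keep. A minimal sketch, with a placeholder storage account name:

```python
# Minimal sketch: check whether SMB traffic (TCP 445) can reach an Azure Files
# endpoint from this machine. The storage account name is a placeholder.
import socket

ENDPOINT = "contosostorage.file.core.windows.net"   # hypothetical account

try:
    with socket.create_connection((ENDPOINT, 445), timeout=5):
        print(f"TCP 445 reachable: {ENDPOINT} can be mounted over SMB")
except OSError as exc:
    print(f"TCP 445 blocked or unreachable ({exc}); route via VPN or ExpressRoute")
```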
A modern file server architecture will mix on-premise and cloud. Tiering to Azure makes sense, as it gives you business continuity as well as providing an extensible file system that no longer depends on having physical hardware in your data center. You’re not constrained by space or power and can take advantage of it when it’s needed.
Similarly, moving traffic to SMB 3.1.1 and using Windows Admin Center will improve performance and give you a future-proof management console that will work for both on-premise and in-cloud storage resources. Putting it all together, Microsoft is delivering a hybrid filesystem solution that you really should be investigating.