
Diskkeeper 2010 on server 2008






In the first part of this article series, I explained that when you create a virtual hard drive in Hyper-V, a dynamically expanding virtual hard drive is created by default. Dynamically expanding virtual hard drives initially start out much smaller than the amount of space you have allocated to them, but automatically expand as data is written to them. Unless an organization carefully manages its storage resources, this dynamic expansion makes it easy to accidentally overcommit storage. The nature of dynamically expanding hard drives also makes it easy to waste a lot of storage space, because the virtual hard drive files do not automatically shrink when data is removed from them. In this article, I want to explore some techniques for resolving these issues. The first option for reclaiming lost hard disk space is to manually shrink the virtual hard disk file. Although there is nothing difficult about shrinking a virtual hard disk file using the method I am about to show you, it can be time consuming, and it does require you to take the corresponding virtual server offline.
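The manual shrink described above can also be scripted on the Hyper-V host. Here is a minimal sketch that generates a diskpart script; the VHD path is hypothetical, and it assumes Windows Server 2008 R2 or later, where diskpart gained the `vdisk` subcommands (on the original Server 2008, the equivalent compaction is done through Hyper-V Manager's Edit Disk wizard). The VM must be shut down first.

```shell
# Hedged sketch: build a diskpart script that compacts a dynamically
# expanding VHD. The file path below is a placeholder; adjust it for
# your own host before use.
cat > compact-vhd.txt <<'EOF'
select vdisk file="D:\VMs\server1.vhd"
attach vdisk readonly
compact vdisk
detach vdisk
EOF

# On the Hyper-V host, with the VM powered off, you would then run:
#   diskpart /s compact-vhd.txt
cat compact-vhd.txt
```

Note that compacting only releases space that the guest OS has actually zeroed or freed at the filesystem level, which is why defragmenting and zeroing free space inside the guest is sometimes done beforehand.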


If you would like to read the first part in this article series, please go to Reclaiming Lost Hard Drive Space on Hyper-V Host Servers (Part 1). In my experience, there's never a magic bullet. On the flipside, just to give the idea the benefit of a very large doubt: even if there were some performance bump from defragging VMs, is it worth the relatively large I/O spike on the storage system that something like a scheduled preventative defrag of all VMs would cause? It doesn't make sense to me. (The only time we run defrag is when creating a new server image from scratch that will be used to deploy multiple VMs. After cleanups and configuration, we run it on the base image before it is templated.)


Our practical thinking is that we don't generally see the kinds of poor performance and degradation that call for a "standard operating procedure" of "preventative maintenance" such as defragging. If *a* server is having performance problems, we look into it: configs, OS, apps, hardware, etc.


We have also heard claims (from defrag software vendors) of performance benefits, and there is even a "whitepaper" or two floating around making such claims, but I don't put much stock in them.


We avoid defrag of VMs and have standing requests to have it turned off via GPO for our Windows servers.


Just chiming in, since this has been a topic in our environment, too. I've asked this question a couple of times of VMware professionals and always got a similar response to the above. Any sort of simple filesystem reallocation on the VM will have little actual benefit; a filesystem defrag will have no benefit, and may even further fragment a large database file. In fact, because it reallocates all the data, it will probably have a detrimental effect on the NetApp storage system, the snapshots, and any replication you have. Running a LUN reallocate directly on the NetApp, however, may have some real performance benefits, as this optimises the data layout so that read patterns can be more efficient and use more contiguous blocks for the corresponding LUN. You'd need specific use cases to verify the benefit of doing any defragmentation.
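The LUN reallocate mentioned above runs on the filer itself, not in the guest. A hedged sketch in Data ONTAP 7-Mode console syntax follows; the LUN path is hypothetical, and flags vary by ONTAP version, so check the command's built-in help before running it against production storage.

```
filer> reallocate measure /vol/vmvol/lun0
# reports how optimized the current block layout is

filer> reallocate start -o -p /vol/vmvol/lun0
# -o runs a single one-off optimization pass rather than a schedule;
# -p reallocates physical blocks, which helps keep existing
#    snapshots from ballooning in size
```

Measuring first is the sensible order: if the layout is already well optimized, the reallocation pass is pointless I/O.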


What areas of the filesystem are you looking to benefit? If you're talking Exchange or SQL, then arguably the data wouldn't be in a VMDK anyway, but a database defrag from the application itself may help, as it also rebuilds the indexes. WAFL (the underlying NetApp filesystem) does not arrange data sequentially on disk like most filesystems; it essentially writes all data, including rewritten data, to the end of the aggregate, so even after running the "defrag" utility the data on the physical disks would still be "fragmented". The read-ahead algorithms and techniques of NetApp WAFL make the benefit of filesystem defragmentation really minimal. The simple fact of the matter is that the VM's filesystem will 99.9999% of the time not correspond to the underlying disk arrangement; you'll simply be wasting I/O and CPU resources running Diskeeper. You also have dedupe to consider: if you run a defrag on your VM disks, all the data that Diskeeper moves around will become re-duplicated until the dedupe process runs again (which will essentially un-defragment the data). So if you're worried about fragmentation affecting performance, I wouldn't address it from the VM level. If you're still worried, take a look at the NetApp reallocate command. (Andrew Miller's response is much better than mine, lol)
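The dedupe interaction described above is why a guest-side defrag is usually followed by another dedupe pass on the filer. On 7-Mode ONTAP the post-defrag rescan would look roughly like this; the volume name is hypothetical, and the syntax differs on clustered ONTAP.

```
filer> sis status /vol/vmvol
# confirm deduplication is enabled on the volume

filer> sis start -s /vol/vmvol
# -s rescans the existing data on the volume, re-deduplicating the
#    blocks that the guest-side defrag rewrote
```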


I would say this is a *bad* idea if you're using a netapp filer as storage for your VMs.






